PACS AND IMAGING INFORMATICS
BASIC PRINCIPLES AND APPLICATIONS
H. K. Huang, D.Sc., FRCR (Hon.)
Professor of Radiology and Biomedical Engineering, University of Southern California, Los Angeles
Chair Professor of Medical Informatics, The Hong Kong Polytechnic University
Honorary Professor, Shanghai Institute of Technical Physics, The Chinese Academy of Sciences
A JOHN WILEY & SONS, INC., PUBLICATION
Copyright © 2004 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data:

Huang, H. K., 1939–
PACS and imaging informatics : basic principles and applications / H. K. Huang.
p. ; cm.
Includes bibliographical references and index.
ISBN 0-471-25123-2 (alk. paper)
1. Picture archiving and communication systems in medicine. 2. Imaging systems in medicine.
[DNLM: 1. Radiology Information Systems. 2. Diagnostic Imaging. 3. Medical Records Systems, Computerized. WN 26.5 H874pg 2004] I. Title: Picture archiving and communication systems and imaging informatics. II. Huang, H. K., 1939– PACS. III. Title.
R857.P52 H82 2004
616.07′54—dc22
2003021220

Printed in the United States of America.

10 9 8 7 6 5 4 3 2 1
To my wife, Fong, for her support and understanding and my daughter, Cammy, for her young wisdom and forever challenging spirit.
[Book organization chart: the five parts of the book and their chapters]

Part 1, Imaging Principles: Introduction (Chapter 1); Imaging Basics (Chapter 2); Digital Radiography (Chapter 3); CT/MR/US/NM and Light Imaging (Chapter 4); Compression (Chapter 5)
Part 2, PACS Fundamentals: PACS Fundamentals (Chapter 6); Standards & Protocols (Chapter 7); Image/Data Acquisition Gateway (Chapter 8); PACS Controller and Image Archive Server (Chapter 10); Display Workstation (Chapter 11); Hospital/Radiology Information System (Chapters 7 & 12); HIS/RIS/PACS Integration (Chapter 12)
Part 3, PACS Operations: Management and Web-Based PACS (Chapter 13); Telemedicine/Teleradiology (Chapter 14); Fault-Tolerant PACS (Chapter 15); Image/Data Security (Chapter 16); Implementation/Evaluation (Chapter 17); Clinical Experience/Pitfalls/Bottlenecks (Chapter 18)
Part 4, PACS-Based Imaging Informatics: Medical Imaging Informatics (Chapter 19); Decision Support Tools (Chapter 20); Application Servers (Chapter 21); Education/Learning (Chapter 22)
Part 5, Enterprise PACS: Enterprise PACS (Chapter 23)

C/N: Communication Networks (Chapter 9); WAN: Wide Area Network
CONTENTS IN BRIEF
FOREWORD xxxi
PREFACE xxxv
PREFACE TO THE FIRST EDITION xxxix
ACKNOWLEDGMENTS xli
LIST OF ACRONYMS xliii

PART I MEDICAL IMAGING PRINCIPLES 1
1. Introduction 3
2. Digital Medical Image Fundamentals 23
3. Digital Radiography 49
4. Computed Tomography, Magnetic Resonance, Ultrasound, Nuclear Medicine, and Light Imaging 79
5. Image Compression 119

PART II PACS FUNDAMENTALS 153
6. Picture Archiving and Communication System Components and Work Flow 155
7. Industrial Standards (HL7 and DICOM) and Work Flow Protocols (IHE) 171
8. Image Acquisition Gateway 195
9. Communications and Networking 219
10. PACS Controller and Image Archive Server 255
11. Display Workstation 277
12. Integration of HIS, RIS, PACS, and ePR 307

PART III PACS OPERATION 331
13. PACS Data Management and Web-Based Image Distribution 333
14. Telemedicine and Teleradiology 353
15. Fault-Tolerant PACS 381
16. Image/Data Security 409
17. PACS Clinical Implementation, Acceptance, Data Migration, and Evaluation 431
18. PACS Clinical Experience, Pitfalls, and Bottlenecks 463

PART IV PACS-BASED IMAGING INFORMATICS 485
19. PACS-Based Medical Imaging Informatics 487
20. PACS as a Decision Support Tool 509
21. ePR-Based PACS Application Server for Other Medical Specialties 539
22. New Directions in PACS Learning and PACS-Related Training 567

PART V ENTERPRISE PACS 589
23. Enterprise PACS 591

REFERENCES 611
GLOSSARY OF PACS CONCEPTS 633
INDEX 637
CONTENTS
FOREWORD xxxi
PREFACE xxxv
PREFACE TO THE FIRST EDITION xxxix
ACKNOWLEDGMENTS xli
LIST OF ACRONYMS xliii
PART I
MEDICAL IMAGING PRINCIPLES
1. Introduction 1.1 1.2
1.3
1.4
Introduction Some Remarks on Historical Picture Archiving and Communication Systems (PACS) 1.2.1 Concepts, Conferences, and Early Research Projects 1.2.1.1 Concepts and Conferences 1.2.1.2 Early Funded Research Projects by the U.S. Government 1.2.2 PACS Evolution 1.2.2.1 In the Beginning 1.2.2.2 Large-Scale Projects 1.2.3 Standards and Anchoring Technologies 1.2.3.1 Standards 1.2.3.2 Anchoring Technologies What is PACS? 1.3.1 PACS Design Concept 1.3.2 PACS Infrastructure Design PACS Implementation Strategies 1.4.1 Background 1.4.2 Five PACS Implementation Models 1.4.2.1 The Home-Grown Model 1.4.2.2 The Two-Team Effort Model 1.4.2.3 The Turnkey Model 1.4.2.4 The Partnership Model 1.4.2.5 The Application Service Provider (ASP) Model
1.5
1.6
1.7
A Global View of PACS Development 1.5.1 The United States 1.5.2 Europe 1.5.3 Asia Examples of Some Early Successful PACS Implementation 1.6.1 Baltimore VA Medical Center 1.6.2 Hammersmith Hospital 1.6.3 Samsung Medical Center Organization of This Book
2. Digital Medical Image Fundamentals 2.1 2.2 2.3 2.4
2.5
Terminology Density Resolution, Spatial Resolution, and Signal-To-Noise Ratio Radiological Test Objects and Patterns Image in the Spatial Domain and the Frequency Domain 2.4.1 Frequency Components of an Image 2.4.2 The Fourier Transform Pair 2.4.3 The Discrete Fourier Transform Measurement of Image Quality 2.5.1 Measurement of Sharpness 2.5.1.1 Point Spread Function (PSF) 2.5.1.2 Line Spread Function (LSF) 2.5.1.3 Edge Spread Function (ESF) 2.5.1.4 Modulation Transfer Function (MTF) 2.5.1.5 Relationship Between ESF, LSF, and MTF 2.5.1.6 Relationship Between the Input Image, the MTF, and the Output Image 2.5.2 Measurement of Noise
3. Digital Radiography 3.1
3.2
3.3
Principles of Conventional Projection Radiography 3.1.1 Radiology Work Flow 3.1.2 Standard Procedures Used in Conventional Projection Radiography 3.1.3 Analog Image Receptor 3.1.3.1 Image Intensifier Tube 3.1.3.2 Screen-Film Combination Digital Fluorography and Laser Film Scanner 3.2.1 Basic Principles 3.2.2 Video Scanner System and Digital Fluorography 3.2.3 Laser Film Scanner Imaging Plate Technology 3.3.1 Principle of the Laser-Stimulated Luminescence Phosphor Plate
3.3.2
3.4
3.5
Computed Radiography System Block Diagram and Its Principle of Operation 3.3.3 Operating Characteristics of the CR System 3.3.4 Background Removal 3.3.4.1 What is Background Removal? 3.3.4.2 Advantages of Background Removal in Digital Radiography Full-Field Direct Digital Mammography 3.4.1 Screen-Film and Digital Mammography 3.4.2 Full-Field Direct Digital Mammography: Slot-Scanning Method Digital Radiography 3.5.1 Some Disadvantages of the Computed Radiography System 3.5.2 Digital Radiography 3.5.3 Integration of Digital Radiography with PACS 3.5.4 Applications of DR in Clinical Environment
4. Computed Tomography, Magnetic Resonance, Ultrasound, Nuclear Medicine, and Light Imaging 4.1
4.2
4.3
4.4
Image Reconstruction from Projections 4.1.1 The Fourier Projection Theorem 4.1.2 The Algebraic Reconstruction Method 4.1.3 The Filtered (Convolution) Back-Projection Method 4.1.3.1 A Numerical Example 4.1.3.2 Mathematical Formulation Transmission X-Ray Computed Tomography (XCT) 4.2.1 Conventional XCT 4.2.2 Spiral (Helical) XCT 4.2.3 Cine XCT 4.2.4 Multislice XCT 4.2.4.1 Principles 4.2.4.2 Some Standard Terminology Used in Multislice XCT 4.2.5 Four-Dimensional (4-D) XCT 4.2.6 Components and Data Flow of an XCT Scanner 4.2.7 XCT Image Data 4.2.7.1 Slice Thickness 4.2.7.2 Image Data Size 4.2.7.3 Data Flow/Postprocessing Emission Computed Tomography 4.3.1 Single-Photon Emission CT (SPECT) 4.3.2 Positron Emission CT (PET) Advances in XCT and PET 4.4.1 PET/XCT Fusion Scanner 4.4.2 Micro Sectional Images
4.5
4.6
4.7
4.8
Nuclear Medicine 4.5.1 Principles of Nuclear Medicine Scanning 4.5.2 The Gamma Camera and Associated Imaging System Ultrasound Imaging 4.6.1 Principles of B-Mode Ultrasound Scanning 4.6.2 System Block Diagram and Operational Procedure 4.6.3 Sampling Modes and Image Display 4.6.4 Color Doppler Ultrasound Imaging 4.6.5 Cine Loop Ultrasound Imaging 4.6.6 Three-Dimensional US Magnetic Resonance Imaging 4.7.1 MR Imaging Basics 4.7.2 Magnetic Resonance Image Production 4.7.3 Steps in Producing an MR Image 4.7.4 MR Images (MRI) 4.7.5 Other Types of Images from MR Signals 4.7.5.1 MR Angiography (MRA) 4.7.5.2 Other Pulse Sequences Light Imaging 4.8.1 Microscopic Image 4.8.1.1 Instrumentation 4.8.1.2 Motorized Stage Assembly 4.8.1.3 Automatic Focusing Device 4.8.1.4 Resolution 4.8.1.5 Contrast 4.8.1.6 The Digital Chain 4.8.1.7 Color Image and Color Memory 4.8.2 Endoscopy
5. Image Compression 5.1 5.2 5.3
5.4
Terminology Background Error-Free Compression 5.3.1 Background Removal 5.3.2 Run-Length Coding 5.3.3 Huffman Coding Two-Dimensional Irreversible Image Compression 5.4.1 Background 5.4.2 Block Compression Technique 5.4.2.1 Two-Dimensional Forward Discrete Cosine Transform 5.4.2.2 Bit Allocation Table and Quantization 5.4.2.3 DCT Coding and Entropy Coding 5.4.2.4 Decoding and Inverse Transform 5.4.3 Full-Frame Compression
5.5
5.6
5.7
5.8
PART II
Measurement of the Difference Between the Original and the Reconstructed Image 5.5.1 Quantitative Parameters 5.5.1.1 Normalized Mean-Square Error 5.5.1.2 Peak Signal-to-Noise Ratio 5.5.2 Qualitative Measurement: Difference Image and Its Histogram 5.5.3 Acceptable Compression Ratio 5.5.4 Receiver Operating Characteristic Analysis Three-Dimensional Image Compression 5.6.1 Background 5.6.2 Basic Wavelet Theory and Multiresolution Analysis 5.6.3 One-, Two-, and Three-Dimensional Wavelet Transform 5.6.3.1 One-Dimensional 5.6.3.2 Two-Dimensional 5.6.3.3 Three-Dimensional 5.6.4 Three-Dimensional Image Compression with Wavelet Transform 5.6.4.1 The Block Diagram 5.6.4.2 Mathematical Formulation of the ThreeDimensional Wavelet Transform 5.6.4.3 Wavelet Filter Selection 5.6.4.4 Quantization 5.6.4.5 Entropy Coding 5.6.4.6 Some Results 5.6.4.7 Wavelet Compression in Teleradiology Color Image Compression 5.7.1 Examples of Color Image Used in Radiology 5.7.2 The Color Space 5.7.3 Compression of Color Ultrasound Images DICOM Standard and Food and Drug Administration (FDA) Guidelines 5.8.1 FDA 5.8.2 DICOM PACS FUNDAMENTALS
6. Picture Archiving and Communication System Components and Work Flow 6.1
PACS Components 6.1.1 Data and Image Acquisition Gateway 6.1.2 PACS Controller and Archive Server 6.1.3 Display Workstations 6.1.4 Application Servers 6.1.5 System Networks
6.2
6.3 6.4
6.5
6.6
PACS Infrastructure Design Concept 6.2.1 Industry Standards 6.2.2 Connectivity and Open Architecture 6.2.3 Reliability 6.2.4 Security A Generic PACS Work Flow Current PACS Architectures 6.4.1 Stand-Alone PACS Model 6.4.2 Client/Server Model 6.4.3 Web-Based Model PACS and Teleradiology 6.5.1 Pure Teleradiology Model 6.5.2 PACS and Teleradiology Combined Model Enterprise PACS and ePR with Images
7. Industrial Standards (HL7 and DICOM) and Work Flow Protocols (IHE) 7.1 7.2
7.3
7.4
Industrial Standards and Work Flow Protocol The Health Level 7 Standard 7.2.1 Health Level 7 7.2.2 An Example 7.2.3 New Trend in HL7 From ACR-NEMA to DICOM and DICOM Document 7.3.1 ACR-NEMA and DICOM 7.3.2 DICOM Document The DICOM 3.0 Standard 7.4.1 DICOM Data Format 7.4.1.1 DICOM Model of the Real World 7.4.1.2 DICOM File Format 7.4.2 Object Class and Service Class 7.4.2.1 Object Class 7.4.2.2 DICOM Services 7.4.3 DICOM Communication 7.4.4 DICOM Conformance 7.4.5 Examples of Using DICOM 7.4.5.1 Send and Receive 7.4.5.2 Query and Retrieve 7.4.6 New Features in DICOM 7.4.6.1 Visible Light (VL) Image 7.4.6.2 Structured Reporting (SR) Object 7.4.6.3 Content Mapping Resource 7.4.6.4 Mammography CAD (Computer-Aided Detection) 7.4.6.5 Waveform IOD
7.5
7.6
IHE (Integrating the Healthcare Enterprise) 7.5.1 What is IHE? 7.5.2 The IHE Technical Framework and Integration Profiles 7.5.3 IHE Profiles 7.5.4 The Future of IHE 7.5.4.1 Multidisciplinary Effort 7.5.4.2 International Expansion Other Standards 7.6.1 UNIX Operating System 7.6.2 Windows NT/XP Operating Systems 7.6.3 C and C++ Programming Languages 7.6.4 Structured Query Language 7.6.5 XML (Extensible Markup Language)
8. Image Acquisition Gateway 8.1 8.2
8.3
8.4
8.5
8.6
Background DICOM-Compliant Image Acquisition Gateway 8.2.1 Background 8.2.2 DICOM-Based PACS Image Acquisition Gateway 8.2.2.1 Gateway Computer Components and Database Management 8.2.2.2 Determination of the End of Image Series Automatic Image Recovery Scheme for DICOM Conformance Device 8.3.1 Missing Images 8.3.2 Automatic Image Recovery Scheme 8.3.2.1 Basis for the Image Recovery Scheme 8.3.2.2 The Image Recovery Algorithm 8.3.2.3 Results and the Extension of the Recovery Scheme Interface of a PACS Module with the Gateway Computer 8.4.1 PACS Modality Gateway and HI-PACS Gateway 8.4.2 Image Display at the PACS Modality Workstation DICOM Conformance PACS Broker 8.5.1 Concept of the PACS Broker 8.5.2 An Example of Implementation of a PACS Broker Image Preprocessing 8.6.1 Computed Radiography (CR) and Digital Radiography (DR) 8.6.1.1 Reformatting 8.6.1.2 Background Removal 8.6.1.3 Automatic Orientation 8.6.1.4 Lookup Table Generation 8.6.2 Digitized X-Ray Images
8.7
8.6.3 Digital Mammography 8.6.4 Sectional Images—CT, MR, and US An Example of a Gateway in a Clinical PACS Environment 8.7.1 Gateway in a Clinical PACS 8.7.2 Clinical Operation Conditions and Reliability: Weaknesses and Single Points of Failure
9. Communications and Networking 9.1
9.2
9.3
9.4
9.5
Background 9.1.1 Terminology 9.1.2 Network Standards 9.1.3 Network Technology 9.1.3.1 Ethernet and Gigabit Ethernet 9.1.3.2 ATM (Asynchronous Transfer Mode) 9.1.4 Network Components for Connectivity Cable Plan 9.2.1 Types of Networking Cables 9.2.2 The Hub Room 9.2.3 Cables for Input Sources 9.2.4 Cables for Image Distribution Digital Communication Networks 9.3.1 Background 9.3.2 Design Criteria 9.3.2.1 Speed of Transmission 9.3.2.2 Standardization 9.3.2.3 Fault Tolerance 9.3.2.4 Security 9.3.2.5 Costs PACS Network Design 9.4.1 External Networks 9.4.1.1 Manufacturer’s Image Acquisition Device Network 9.4.1.2 Hospital and Radiology Information Networks 9.4.1.3 Research and Other Networks 9.4.1.4 The Internet 9.4.1.5 Imaging Workstation Networks 9.4.2 Internal Networks Examples of PACS Networks 9.5.1 An Earlier PACS Network at UCSF 9.5.1.1 Wide Area Network 9.5.1.2 The Departmental Ethernet 9.5.1.3 Research Networks 9.5.1.4 Other PACS External Networks 9.5.1.5 PACS Internal Network
9.5.2
9.6
9.7
9.8
Network Architecture for Health Care IT and PACS 9.5.2.1 General Architecture 9.5.2.2 Network Architecture for Health Care IT 9.5.2.3 Network Architecture for PACS 9.5.2.4 UCLA PACS Network Architecture Internet 2 9.6.1 Image Data Communication 9.6.2 What is Internet 2 (I2)? 9.6.3 Current I2 Performance 9.6.4 Enterprise Teleradiology 9.6.5 Current Status Wireless Networks 9.7.1 Wireless LAN (WLAN) 9.7.1.1 The Technology 9.7.1.2 Performance 9.7.2 Wireless WAN (WWAN) 9.7.2.1 The Technology 9.7.2.2 Performance Self-Scaling Networks 9.8.1 Concept of Self-Scaling Networks 9.8.2 Design of the Self-Scaling Network in the Health Care Environment 9.8.2.1 Health Care Application 9.8.2.2 The Self-Scalar
10. PACS Controller and Image Archive Server 10.1
10.2
Image Management Design Concept 10.1.1 Local Storage Management via PACS Intercomponent Communication 10.1.2 PACS Controller System Configuration 10.1.2.1 The Archive Server 10.1.2.2 The Database System 10.1.2.3 The Archive Library 10.1.2.4 Backup Archive 10.1.2.5 Communication Networks PACS Controller and Archive Server Functions 10.2.1 Image Receiving 10.2.2 Image Stacking 10.2.3 Image Routing 10.2.4 Image Archiving 10.2.5 Study Grouping 10.2.6 RIS and HIS Interfacing 10.2.7 PACS Database Updates 10.2.8 Image Retrieving 10.2.9 Image Prefetching
10.3 10.4
10.5
10.6
PACS Archive Server System Operations DICOM-Compliant PACS Archive Server 10.4.1 Advantages of a DICOM-Compliant PACS Archive Server 10.4.2 DICOM Communications in PACS Environment 10.4.3 DICOM-Compliant Image Acquisition Gateways 10.4.3.1 Push Mode 10.4.3.2 Pull Mode DICOM PACS Archive Server Hardware and Software 10.5.1 Hardware Components 10.5.1.1 RAID 10.5.1.2 DLT 10.5.2 Archive Server Software 10.5.2.1 Image Receiving 10.5.2.2 Data Insert and PACS Database 10.5.2.3 Image Routing 10.5.2.4 Image Send 10.5.2.5 Image Query/Retrieve 10.5.2.6 Retrieve/Send 10.5.3 An Example Backup Archive Server 10.6.1 Backup Archive Using an Application Service Provider (ASP) Model 10.6.1.1 Concept of the Backup Archive Server 10.6.2 General Architecture 10.6.3 Recovery Procedure 10.6.4 Key Features 10.6.5 General Setup Procedures of the ASP Model
11. Display Workstation 11.1
11.2
11.3 11.4
Basics of a Display Workstation 11.1.1 Image Display Board 11.1.2 Display Monitor 11.1.3 Resolution 11.1.4 Luminance and Contrast 11.1.5 Human Perception 11.1.6 Color Display Ergonomics of Image Workstations 11.2.1 Glare 11.2.2 Ambient Illuminance 11.2.3 Acoustic Noise Due to Hardware Evolution of Medical Image Display Technologies Types of Image Workstation 11.4.1 Diagnostic Workstation 11.4.2 Review Workstation
11.5
11.6
11.7
11.4.3 Analysis Workstation 11.4.4 Digitizing and Printing Workstation 11.4.5 Interactive Teaching Workstation 11.4.6 Desktop Workstation Image Display and Measurement Functions 11.5.1 Zoom and Scroll 11.5.2 Window and Level 11.5.3 Histogram Modification 11.5.4 Image Reverse 11.5.5 Distance, Area, and Average Gray Level Measurements 11.5.6 Optimization of Image Perception in Soft Copy 11.5.6.1 Background Removal 11.5.6.2 Anatomical Regions of Interest 11.5.6.3 Gamma Curve Correction 11.5.7 Montage Workstation User Interface and Basic Display Functions 11.6.1 Basic Software Functions in a Display Workstation 11.6.2 Workstation User Interface DICOM PC-Based Display Workstation 11.7.1 Hardware Configuration 11.7.1.1 Host Computer 11.7.1.2 Display Devices 11.7.1.3 Networking Equipment 11.7.2 Software System 11.7.2.1 Software Architecture 11.7.2.2 Software Modules in the Application Interface Layer
12. Integration of HIS, RIS, PACS, and ePR 12.1 12.2 12.3
Hospital Information System Radiology Information System Interfacing PACS with HIS and RIS 12.3.1 Workstation Emulation 12.3.2 Database-to-Database Transfer 12.3.3 Interface Engine 12.3.4 Reasons for Interfacing PACS with HIS and RIS 12.3.4.1 Diagnostic Process 12.3.4.2 PACS Image Management 12.3.4.3 RIS Administration 12.3.4.4 Research and Training 12.3.5 Some Common Guidelines 12.3.6 Common Data in HIS, RIS, and PACS 12.3.7 Implementation of RIS-PACS Interface 12.3.7.1 Trigger Mechanism Between Two Databases 12.3.7.2 Query Protocol
12.3.8 12.4
12.5
PART III
An Example—The IHE Patient Information Reconciliation Profile Interfacing PACS with Other Medical Databases 12.4.1 Multimedia Medical Data 12.4.2 Multimedia in the Radiology Environment 12.4.3 An Example—The IHE Radiology Information Integration Profile 12.4.4 Integration of Heterogeneous Databases 12.4.4.1 Other Related Databases 12.4.4.2 Interfacing Digital Voice with PACS Electronic Patient Record (ePR) 12.5.1 Current Status of ePR 12.5.2 Integration of ePR with Images—An Example 12.5.2.1 The VistA Information System Architecture 12.5.2.2 VistA Imaging PACS OPERATION
13. PACS Data Management and Web-Based Image Distribution 13.1
13.2
13.3 13.4
13.5
PACS Data Management 13.1.1 Concept of the Patient Folder Manager 13.1.2 Online Radiology Reports Patient Folder Management 13.2.1 Archive Management 13.2.1.1 Event Triggering 13.2.1.2 Image Prefetching 13.2.1.3 Job Prioritization 13.2.1.4 Storage Allocation 13.2.2 Network Management 13.2.3 Display Server Management 13.2.3.1 IHE Presentation of Grouped Procedures (PGP) Profile 13.2.3.2 IHE Key Image Note Profile 13.2.3.3 IHE Simple Image and Numeric Report Profile Distributed Image File Server Web Server 13.4.1 Web Technology 13.4.2 Concept of the Web Server in PACS Environment Component-Based Web Server for Image Distribution and Display 13.5.1 Component Technologies 13.5.2 The Architecture of Component-Based Web Server 13.5.3 The Data Flow of the Component-Based Web Server
13.5.3.1
13.5.4 13.5.5
Query/Retrieve DICOM Image/Data Resided in the Web Server 13.5.3.2 Query/Retrieve DICOM Image/Data Resided in the PACS Archive Server Component-Based Architecture of Diagnostic Display Workstation Performance Evaluation
14. Telemedicine and Teleradiology 14.1 14.2 14.3
14.4
14.5
Introduction Telemedicine Teleradiology 14.3.1 Background 14.3.1.1 Why Do We Need Teleradiology? 14.3.1.2 What is Teleradiology? 14.3.1.3 Teleradiology and PACS 14.3.2 Teleradiology Components 14.3.2.1 Image Capture 14.3.2.2 Data Reformatting 14.3.2.3 Image Storage 14.3.2.4 Display Workstation 14.3.2.5 Communication Networking 14.3.2.6 User Friendliness 14.3.3 State-of-the-Art Technology 14.3.3.1 Wide Area Network—Asynchronous Transfer Mode (ATM), Broadband, DSL, Internet 2, and Wireless Technology 14.3.3.2 Display Workstation 14.3.3.3 Image Compression 14.3.3.4 Image/Data Privacy, Authenticity, and Integrity 14.3.4 Teleradiology Models 14.3.4.1 Off-Hour Reading 14.3.4.2 ASP Model 14.3.4.3 Web-Based Teleradiology 14.3.5 Some Important Issues in Teleradiology 14.3.5.1 Teleradiology Trade-Off Parameters 14.3.5.2 Medical-Legal Issues Telemammography 14.4.1 Why Do We Need Telemammography? 14.4.2 Concept of the Expert Center 14.4.3 Technical Issues Telemicroscopy 14.5.1 Telemicroscopy and Teleradiology 14.5.2 Telemicroscopy Applications
14.6
14.7
Real-Time Teleconsultation System 14.6.1 Background 14.6.2 System Requirements 14.6.3 Teleconsultation System Design 14.6.3.1 Image Display Work Flow 14.6.3.2 Communication Requirements During Teleconsultation 14.6.3.3 Hardware Configuration of the Teleconsultation System 14.6.3.4 Software Architecture 14.6.4 Teleconsultation Procedure and Protocol Trends in Telemedicine and Teleradiology
15. Fault-Tolerant PACS 15.1 15.2 15.3
15.4
15.5 15.6
15.7 15.8
Introduction Causes of a System Failure No Loss of Image Data 15.3.1 Redundant Storage at Component Levels 15.3.2 The Archive Library 15.3.3 The Database System No Interruption of PACS Data Flow 15.4.1 PACS Data Flow 15.4.2 Possible Failure Situations in PACS Data Flow 15.4.3 Methods to Protect Data Flow Continuity from Hardware Component Failure 15.4.3.1 Hardware Solutions and Drawbacks 15.4.3.2 Software Solutions and Drawbacks Current PACS Technology to Address Fault Tolerance Clinical Experiences with Archive Server Downtime: A Case Study 15.6.1 Background 15.6.2 PACS Server Downtime Experience 15.6.2.1 Hard Disk Failure 15.6.2.2 Motherboard Failure 15.6.3 Effects of Downtime 15.6.3.1 At the Management Level 15.6.3.2 At the Local Workstation Level 15.6.4 Downtime for Over 24 Hours 15.6.5 Impact of the Downtime on Clinical Operation Concept of Continuously Available PACS Design CA PACS Server Design and Implementation 15.8.1 Hardware Components and CA Design Criteria 15.8.1.1 Hardware Components in an Image Server 15.8.1.2 Design Criteria
15.8.2
15.8.3
15.8.4
15.8.5
Architecture of the CA Image Server 15.8.2.1 The Triple Modular Redundant Server 15.8.2.2 The Complete System Architecture System Evaluation 15.8.3.1 Testbed for System Evaluation 15.8.3.2 CA Image Server—Testing and Performance Measurement Applications of the CA Image Server 15.8.4.1 PACS and Teleradiology 15.8.4.2 Off-Site Backup Archive—Application Service Provider Model (ASP) Summary of Fault Tolerance and Failover 15.8.5.1 Fault Tolerance and Failover 15.8.5.2 Limitation of TMR Voting System 15.8.5.3 The Merit of the CA Image Server
16. Image/Data Security 16.1
16.2
16.3
16.4
16.5 16.6
Introduction and Background 16.1.1 Introduction 16.1.2 Background Image/Data Security Method 16.2.1 Four Steps in Digital Signature and Digital Envelope 16.2.2 General Methodology Digital Envelope 16.3.1 Image Signature, Envelope, Encryption, and Embedding 16.3.1.1 Image Preprocessing 16.3.1.2 Hashing (Image Digest) 16.3.1.3 Digital Signature 16.3.1.4 Digital Envelope 16.3.1.5 Data Embedding 16.3.2 Data Extraction and Decryption 16.3.3 Some Performance Measurements 16.3.3.1 The Robustness of the Hash Function 16.3.3.2 The Percentage of the Pixel Changed in Data Embedding 16.3.3.3 Time Required to Run the Complete Image/ Data Security Assurance 16.3.4 Limitation of the Method DICOM Security 16.4.1 Current DICOM Security Profiles 16.4.2 Some New DICOM Security on the Horizon HIPAA and Its Impacts on PACS Security PACS Security Server and Authority for Assuring Image Authenticity and Integrity
16.6.1
16.7
Comparison of Image-Embedded DE Method and DICOM Security 16.6.2 An Image Security System in a PACS Environment Significance of Image Security
17. PACS Clinical Implementation, Acceptance, Data Migration, and Evaluation 17.1
17.2
17.3 17.4
17.5
17.6
17.7
17.8
Planning to Install a PACS 17.1.1 Cost Analysis 17.1.2 Film-Based Operation 17.1.3 Digital-Based Operation 17.1.3.1 Planning a Digital-Based Operation 17.1.3.2 Checklist for Implementation of a PACS Manufacturer's Implementation Strategy 17.2.1 System Architecture 17.2.2 Implementation Strategy 17.2.3 Integration of an Existing In-House PACS with a New Manufactured PACS Template for PACS RFP PACS Implementation Strategy 17.4.1 Risk Assessment Analysis 17.4.2 Implementation Phase Development 17.4.3 Development of Workgroups 17.4.4 Implementation Management Implementation 17.5.1 Site Preparation 17.5.2 Defining Equipment 17.5.3 Work Flow Development 17.5.4 Training Development 17.5.5 On-Line: Live 17.5.6 Burn-In Period and System Management System Acceptance 17.6.1 Resources and Tools 17.6.2 Acceptance Criteria 17.6.2.1 Quality Assurance 17.6.2.2 Technical Testing 17.6.3 Acceptance Test Implementation Image/Data Migration 17.7.1 Migration Plan Development 17.7.1.1 Data Migration Schedule 17.7.1.2 Data Migration Tools 17.7.1.3 Data Migration Tuning 17.7.2 An Example—St. John's Health Center PACS System Evaluation
17.8.1
17.8.2
17.8.3
Subsystem Throughput Analysis 17.8.1.1 Residence Time 17.8.1.2 Prioritizing System Efficiency Analysis 17.8.2.1 Image Delivery Performance 17.8.2.2 System Availability 17.8.2.3 User Acceptance Image Quality Evaluation—ROC Method 17.8.3.1 Image Collection 17.8.3.2 Truth Determination 17.8.3.3 Observer Testing and Viewing Environment 17.8.3.4 Observer Viewing Sequences 17.8.3.5 Statistical Analysis
18. PACS Clinical Experience, Pitfalls, and Bottlenecks 18.1
18.2
18.3
18.4
Clinical Experience at Baltimore VA Medical Center 18.1.1 Benefits 18.1.2 Costs 18.1.3 Savings 18.1.3.1 Film Operation Costs 18.1.3.2 Space Costs 18.1.3.3 Personnel Costs 18.1.4 Cost-Benefit Analysis Clinical Experience at St. John’s Health Center 18.2.1 St. John’s Health Center’s PACS 18.2.2 Backup Archive 18.2.2.1 The Second Copy 18.2.2.2 The Third Copy 18.2.2.3 Off-Site Third Copy Archive Server 18.2.3 FT Backup Archive and Disaster Recovery 18.2.3.1 ASP Backup Archive 18.2.3.2 Disaster Recovery Procedure PACS Pitfalls 18.3.1 During Image Acquisition 18.3.1.1 Human Errors at Imaging Acquisition Devices 18.3.1.2 Procedure Errors at Imaging Acquisition Devices 18.3.2 At the Workstation 18.3.2.1 Human Error 18.3.2.2 System Deficiency PACS Bottlenecks 18.4.1 Network Contention 18.4.2 Slow Response at Workstation 18.4.3 Slow Response from Archive Server
18.5
PART IV
Pitfalls in DICOM Conformance 18.5.1 Incompatibility in DICOM Conformance Statement 18.5.2 Methods of Remedy PACS-BASED IMAGING INFORMATICS
19. PACS-Based Medical Imaging Informatics 19.1
19.2
19.3
19.4
Medical Imaging Informatics Infrastructure (MIII) 19.1.1 Concept of MIII 19.1.2 MIII Architecture and Components 19.1.3 PACS and Related Data 19.1.4 Image Processing 19.1.5 Database and Knowledge Base Management 19.1.6 Visualization and Graphic User Interface 19.1.7 Communication Networks 19.1.8 Security 19.1.9 System Integration PACS-Based Medical Imaging Informatics Medical Imaging Informatics Infrastructure 19.2.1 Background 19.2.2 Resources in the PACS-Based Medical Imaging Informatics Infrastructure 19.2.2.1 Content-Based Image Indexing 19.2.2.2 Three-Dimensional Rendering and Visualization 19.2.2.3 Distributed Computing 19.2.2.4 Grid Computing CAD in PACS Environment 19.3.1 Computer-Aided Detection and Diagnosis 19.3.2 Methods of Integrating CAD in PACS and MIII Environments 19.3.2.1 CAD without PACS 19.3.2.2 CAD with DICOM PACS Summary of MIII
20. PACS as a Decision Support Tool 20.1
Outcome Analysis of Lung Nodule with Temporal CT Image Database 20.1.1 Background 20.1.2 System Architecture 20.1.3 Graphic User Interface 20.1.4 An Example 20.1.4.1 Case History 20.1.4.2 Temporal Assessment 20.1.5 Temporal Image Database and the MIII
20.2
20.3
Image Matching with Data Mining 20.2.1 The Concept of Image Matching 20.2.2 Methodology 20.2.2.1 Data Collection of the Image Matching Database 20.2.2.2 Image Registration and Segmentation 20.2.2.3 An Example of MRI Brain Image Matching with Images in the Database 20.2.3 Image Matching as a Diagnostic Support Tool for Brain Diseases in Children 20.2.3.1 Methods 20.2.3.2 Submitting an Image for Matching 20.2.3.3 Evaluation 20.2.4 Summary of Image Matching Bone Age Assessment with a Digital Hand Atlas 20.3.1 Why Digital Hand Atlas? 20.3.2 PACS-Based and MIII-Driven CAD of Bone Age Assessment 20.3.3 Bone Age Assessment with Digital Hand Wrist Radiograph 20.3.4 Image Analysis 20.3.4.1 Segmentation Procedure 20.3.4.2 Wavelet Decomposition 20.3.4.3 Fuzzy Classifier 20.3.5 Image Database 20.3.6 Integration with Clinical PACS 20.3.6.1 Web-Enabled Hand Atlas Database 20.3.6.2 CAD Server and its Integration with PACS Workstation 20.3.6.3 Steps in Development of the Web-Based Digital Hand Atlas 20.3.6.4 Operation Procedure 20.3.7 Summary of PACS-Based and MIII-Driven CAD for Bone Age Assessment
21. ePR-Based PACS Application Server for Other Medical Specialties 21.1
21.2
PACS Application Server for Other Medical Specialties 21.1.1 ePR-Based PACS Application Server 21.1.2 Review of Electronic Patient Record (ePR) Image-Assisted Surgery System 21.2.1 Concept of the IASS 21.2.1.1 System Components 21.2.1.2 PACS Workflow and the Spinal Surgery Server 21.2.2 Minimally Invasive Spinal Surgery Work Flow 21.2.3 Definition and Functionality of the IASS 21.2.3.1 Definition
21.3
21.2.3.2 Functionality 21.2.4 The IASS Architecture 21.2.4.1 IASS Hardware Components 21.2.4.2 IASS Software Modules 21.2.4.3 Communication Network Connection 21.2.4.4 Industrial Standard 21.2.5 Summary and Current Status Radiation Therapy Server 21.3.1 Radiation Therapy, PACS, and Imaging Informatics 21.3.2 Work Flow of RT Images 21.3.3 DICOM Standard and RT DICOM 21.3.4 PACS and Imaging Informatics-Based RT Server and Dataflow 21.3.5 An Example of an Electronic Patient Record (ePR) in the PACS and Image Informatics-Based RT Server 21.3.6 RT Data Flow—From Input to the Server to the Display 21.3.7 Summary of Current Status
22. New Directions in PACS Learning and PACS-Related Training 22.1
22.2
New Direction in PACS Education 22.1.1 PACS as a Medical Image Information Technology System 22.1.2 Education and Training of PACS in the Past 22.1.2.1 PACS Training Ten to Five Years Ago 22.1.2.2 Five Years Ago to the Present 22.1.3 New Trend of Education and Training Using the Concept of a PACS Simulator 22.1.3.1 Problems with the Past and Current Training Methods 22.1.3.2 New Trend: Comprehensive Training to Include System Integration 22.1.3.3 New Tools—The PACS Simulator 22.1.4 Examples of PACS Simulator 22.1.4.1 PACS Simulator and Training Programs in the Hong Kong Polytechnic University 22.1.4.2 PACS Simulator and Training in the Image Processing and Informatics Laboratory University of Southern California 22.1.5 Future Perspective in PACS Training 22.1.6 PACS Training Summary Changing PACS Learning with New Interactive and Media-Rich Learning Environments 22.2.1 SimPHYSIO and Distance Learning 22.2.1.1 SimPHYSIO 22.2.1.2 Distance Learning
22.2.2 22.3
PART V
Perspectives of Using SimPHYSIO Technology for PACS Learning Interactive Digital Breast Imaging Teaching File 22.3.1 Background 22.3.2 Computer-Aided Instruction Model 22.3.3 The Teaching File Script and Data Collection 22.3.4 Graphic User Interface 22.3.5 Interactive Teaching File as a Training Tool
ENTERPRISE PACS
23. Enterprise PACS 23.1 23.2
23.3
23.4
23.5
Background Concept of Enterprise PACS and Early Models 23.2.1 Concept of Enterprise-Level PACS 23.2.2 Early Models of Enterprise PACS Design of Enterprise-Level PACS 23.3.1 Conceptual Design 23.3.1.1 IT Supports 23.3.1.2 Operation Options 23.3.1.3 Architectural Options 23.3.2 Financial and Business Models 23.3.3 The Hong Kong Hospital Authority Healthcare Enterprise—An Example An Example of an Enterprise-Level Chest TB Screening 23.4.1 Background 23.4.1.1 Chest Screening at the Department of Health, Hong Kong 23.4.1.2 Radio-Diagnostic Services, Department of Health 23.4.1.3 Analysis of the Current Problems 23.4.2 The Design of the MIACS 23.4.2.1 Descriptions of Components 23.4.2.2 Work Flow of the Digital MIACS 23.4.2.3 Current Status Summary of Enterprise PACS and Image Distribution
REFERENCES 611
PACS AND IMAGING INFORMATICS GLOSSARY 633
INDEX 639
FOREWORD
THE MANUFACTURER'S POINT OF VIEW
William M. Angus, M.D., Ph.D., FACR, h.c.
Senior Vice President
Philips Medical Systems North America Company
Bothell, Washington
Mailing address: 335 Crescent Dr., Palm Beach, FL 33480
561-833-0792

Seven years ago, based on years of experience as an educator, researcher and advisor to both the medical profession and the industries which serve it, Dr. H. K. Huang authored his first text, PACS: Picture Archiving and Communication Systems in Biomedical Imaging—a work primarily directed to the training of medical informatics scientists who were urgently needed by healthcare facilities and industry. The information in that text greatly enhanced the education of those who would ultimately and successfully deal with the problem of proper application of technology in the design of medical information systems.

Three years later, reacting promptly to rapid advances in technology and a growing number of clinically functioning PACS installations, Dr. Huang prepared an entirely new volume, PACS (Picture Archiving and Communication Systems): Basic Principles and Applications. This work, although it supplemented its predecessor by including conceptual and technological advances which had surfaced in the intervening years, was primarily intended for the training of those in industry and in the field of healthcare who dealt with the everyday practical realities of planning, operating, and maintaining PAC systems. That effort and its salutary effect on prevailing problems were greatly appreciated by the healthcare profession and industry alike.

Now, after another three years, Dr. Huang gives us a new text which summarizes and updates the information in both of its predecessors and directs it appropriately to continued development in the field of medical informatics. In addition, this volume deals with streamlining and expanding the PACS concept into every aspect of the healthcare enterprise. Finally, this work is enriched by contributions from Dr. Huang's students and colleagues, as well as by a summary of his own personal interests and experiences, which provide us with an historical sketch of the development of PACS and a portrait of a very remarkable and dedicated researcher, educator, and author, Bernie Huang—a friend to many and a gentle man.
THE ACADEMIC AND CARS (COMPUTER ASSISTED RADIOLOGY AND SURGERY) CONGRESS CHAIRMAN POINT OF VIEW
Professor Heinz U. Lemke, Ph.D.
Technical University of Berlin
Berlin, Germany
[email protected]

As in his previous PACS book, Bernie Huang's work and his related publications are a leading source of reference and learning. This new book provides an excellent state-of-the-art review of PACS and of the developing field of Imaging Informatics. It gives a lead on where PACS is moving.

There is much ground to be covered with PACS research and development and its applications. PACS is extending rapidly from the now well-established radiology applications into a hospital-wide PACS. New challenges are accompanying its spread into other clinical fields. Particularly important are the modeling and analysis of the workflow of the affected clinical disciplines as well as the interface issues with the image-connected electronic patient record. Although the awareness of these issues is increasing rapidly, equally important is the recognition in the professional community that a more rigorous scientific/systems engineering method is needed for system development. Imaging Informatics can provide an answer to structured design and implementation of PACS infrastructures and applications.

Image processing and display, knowledge management, and computer-aided diagnosis are a first set of Imaging Informatics applications. These will certainly be followed by a widespread expansion into therapeutic applications, e.g., radiation therapy and computer-assisted surgery. Special workflow requirements and the specification of therapy assist systems need to be considered in a corresponding PACS design. Pre- and intra-operative image acquisition rather than image archiving is becoming a central focus in image-assisted surgery systems. These new developments and possibilities are well represented in Bernie Huang's publication. It is comforting to know that the knowledge generated by Bernie Huang and others in the field of PACS is regularly documented in this series of now classic textbooks.
THE ACADEMIC AND HEALTH CENTER POINT OF VIEW
Edward V. Staab, M.D.
Professor, Department of Radiology
Wake Forest University School of Medicine
Medical Center Blvd., Winston-Salem, North Carolina 27157-1008
[email protected]
336-716-2466

This revision of Bernie Huang's book on PACS comes with a new title, PACS and Imaging Informatics. The change in titles with each edition seems very appropriate
considering the dynamic nature of this discipline along with changing conceptual horizons regarding imaging informatics. The prior two editions have been extensively used by those interested in the technical aspects of PACS and as a standard textbook for training purposes. In the foreword to the previous edition, I commented on the beginning widespread deployment of PACS, at least for partial solutions to image management. In the very short time interval from the previous edition we now see a nearly complete PACS introduction into most of the major medical centers. These digital formats have enabled the widespread use of teleradiology. Standards like DICOM have continued to evolve and become more robust and accepted. Today, all the major manufacturers of imaging equipment provide DICOM connectivity. The large film manufacturers are investing heavily in digital technology, and the validation and installation of PACS equipment and software has become much less onerous.

In 1998, when I wrote the previous foreword, many were predicting a change in the practice of radiology, in part brought about by the conversion to PACS. Few foresaw the incredible increase in the utilization of imaging procedures and the subsequent demands placed on imaging services that have transpired over the past few years. The sheer volume of data coming from cross-sectional imaging devices is challenging the current PACS technology. The use of imaging has escalated tremendously after a brief downturn in usage in the late 1990s. The value of imaging is now integral to the practice or research of almost every discipline in medicine, including non-traditional users of imaging like psychiatry and dermatology. There is a real workforce shortage in imaging readers, which requires more attention to improving efficiency in handling and analyzing the data. Thus, it is even more important for those dealing with digital medical images to understand some details of PACS technology. This book represents a real contribution towards meeting those needs.

During the past few years, advances in the linkage of imaging information in PACS with other medical information resident in other related medical information systems have been fostered through the IHE (Integrating the Healthcare Enterprise) activities of the RSNA and its electronic communications committee. IHE is an attempt to link many textual databases with PACS using HL7 standards to produce more robust information systems for physicians. Today, PACS is considered a component of the larger information system. Dr. Huang has addressed these changes throughout this edition.

Of course, as could be predicted, once PACS caught on, simple installations would no longer satisfy the appetites of the busy clinicians and radiologists. They now look for web-based solutions so that some of the work can be done at home and other locations. Telecommunications has allowed for transmittal of images to remote locations for more timely interpretation and additional support for taxing workflow issues. PACS installations have prompted a thorough examination of the workflow in large radiology departments. It has been recognized that the introduction of PACS must be accompanied by a thorough review of the workflow in a department and medical center or enterprise to gain the most out of the investment.

There remain a number of issues that must be addressed. Further integration of the different information systems will be necessary. The display stations will need to improve and become more ergonomic. Methods need to be developed to better display the large datasets, perhaps in 3D to start or with some interactive 2D/3D methods. Graphic datasets and comparisons of images and other graphic data over the course of a patient's care will be needed. What is really scary for the
information technology person is what is transpiring on the research fronts regarding new methods for imaging. Several new acquisition technologies are under investigation, and perhaps the most promising are those related to optical imaging. The development of molecular imaging to complement the new understanding of molecular medicine will truly tax the PACS environment. Except in a few very special circumstances, we currently are not able to image individual cells and molecules in patients, but there is an expanding ability to measure molecular events, particularly with new tracers in nuclear medicine. Today, PACS is not very friendly to nuclear medicine systems. The functional nature of nuclear medicine entails prolonged imaging, even several days of imaging within a single study, and is not handled well by PACS. Nuclear medicine also uses color displays and relies heavily on graphics. Another area of development that will challenge PACS is the advent of combined imaging with PET/CT, MRI/optical, SPECT/optical, and many more combinations in the future.

The beleaguered radiologist suffering under mountains of imaging data is beginning to recognize the need for computer-aided detection (CAD) and even computer diagnosis (CD). This will only become more important if the large-scale screening trials, such as the National Lung Screening Trial (NLST) and the Digital Mammography screening trial, result in positive outcomes. Other screening trials are contemplated. PACS provides an important element in the infrastructure for the development of CAD/CD tools.

Perhaps the most exciting aspect of the digital evolution is the foreseeable benefit from combining all these datasets for individual patient care and the added information that can be gained regarding population studies of normal patients and disease states. Dr. Huang has introduced these concepts and discusses some relevant current examples in Chapters 4, 19, 20, and 21 of this edition. Combinations of data lead to new research hypotheses not otherwise considered. Imaging is important because it documents the anatomic and physiologic phenotypic expression of normal and disease states and can be linked to genomics, proteomics, metabolomics, the physiome, and many other relevant databases. However, to make full use of these potential capabilities it will be necessary for the medical community to pay close attention to the structural ways it records data.

Dr. Huang has the perfect background for compiling this text. He has extensive experience, having installed PACS in several institutions and served as a consultant to numerous medical centers both in the United States and internationally. One of the best ways to learn is to teach, and he teaches the subject extensively and listens carefully for new ideas and concepts from his students and colleagues. So, Dr. Huang once again has produced a book that is useful for those interested in the current state of the art in PACS, and this one, along with the previous editions, will serve as a historical document regarding the evolution of digital imaging. The interested reader will find ideas for future challenges to PACS development and how this technology can be used to expand our medical knowledge.
PREFACE
In the early 1980s, members of the radiology community envisioned a future practice built around the concept of a picture archiving and communication system (PACS). Consisting of image acquisition devices, storage archive units, display workstations, computer processors, and databases, all integrated by a communications network and a data management system, the concept has proven to be much more than the sum of its parts. During the past twenty years, as technology has matured, PACS applications have gone beyond radiology to affect and improve the entire spectrum of health care delivery. PACS installations are now seen around the world—installations that support focused clinical applications as well as hospital-wide clinical processes are seen on every continent. The relevance of PACS has grown far beyond its initial conception; for example, these systems demonstrate their value by facilitating large-scale longitudinal and horizontal research and education in addition to clinical services. PACS-based imaging informatics is one such important manifestation.

Many important concepts, technological advances, and events have occurred in this field since the publication of my previous books, PACS in Biomedical Imaging, published by VCH in 1996, and PACS: Basic Principles and Applications, published by John Wiley & Sons in 1999. With this new effort, I hope to summarize these developments and place them in the appropriate context for the continued development of this exciting field. First, however, I would like to discuss two perspectives that led to the preparation of the current manuscript: the trend in PACS, and my personal interest and experience.
THE TREND

The overwhelming evidence of successful PACS installations, large and small, demonstrating improved care and streamlined administrative functions, has propelled the central discussion of PACS from purchase justification to best practices for installing and maintaining a PACS. Simply put, the question is no longer "Should we?" but "How should we?". Further spurring the value-added nature of PACS are the mature standards of Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL-7) for medical images and health data, respectively, which are accepted by all imaging and health care information manufacturers. The issue of component connectivity has gradually disappeared as DICOM conformance components have become a commodity. Armed with DICOM and HL-7, the Radiological Society of North America (RSNA) pioneered the concept of Integrating the Healthcare Enterprise (IHE), which develops hospital and radiology work flow profiles to ensure system and component compatibility with the goal of streamlining patient care. For these reasons, the readers will find
the presentation of the PACS operation in this book shifted from technological development, as in the last two books, to analysis of clinical work flow. Finally, the expansion and development of inexpensive Windows-based PC hardware and software with flat panel displays has made it much more feasible to place high-quality diagnostic workstations throughout the health care enterprise. These trends set the stage for Part 1: Medical Imaging Principles (Chapters 1–5) and Part 2: PACS Fundamentals (Chapters 6–12) of the book.
MY PERSONAL INTEREST AND EXPERIENCE

Over the decade since I developed PACS at UCLA (1991) and at UCSF (1995) from laboratory environments to daily clinical use, I have had the opportunity to serve in an advisory role, assisting with the planning, design, and clinical implementation of many large-scale PACS outside of my academic endeavors, at, to name several, Cornell Medical Center, New York Hospital; Kaohsiung VA Medical Center, Taiwan; and St. John's Hospital, Santa Monica, CA. I had the pleasure of engaging in these three projects from conception to daily clinical operation. There are many other implementations for which I have provided advice in planning or reviewed the design and work flow. I have trained many engineers and health care providers from different parts of the world, who visit our laboratory for short, intensive PACS training sessions and return to their respective institutes or countries to implement their own PACS. I have learned much through these interactions on a wide variety of issues, including image management, system fault tolerance, security, implementation and evaluation, and PACS training [Part 3: PACS Operation (Chapters 13–18)].

My recent three-year involvement in the massive Hong Kong Healthcare PACS and image distribution planning, involving 44 public hospitals and 92% of the health care marketplace, has prompted me to look more closely into questions of the balance between technology and cost in enterprise-level implementation. I am grateful for the fascinating access to the "backroom" PACS R&D of several manufacturers and research laboratories provided by these assignments. These glimpses have helped me to consolidate my thoughts on enterprise PACS (Part 5, Chapter 23), Web-based PACS (Chapter 13), and the electronic patient record (ePR) complete with images (Chapters 12, 21).

In the area of imaging informatics, my initial engagement was outcomes research in lung nodule detection with temporal CT images (Section 20.1). In the late 1990s, Dr. Michael Vannier, then Chairman of Radiology at the University of Iowa and Editor-in-Chief, IEEE Transactions on Medical Imaging, offered me the opportunity as a Visiting Professor to present some of my thoughts in imaging informatics; while there, I also worked on the IAIMS (Integrated Advanced Information Management Systems) project funded by the National Library of Medicine (NLM). This involvement, together with the opportunity to serve on the study section at NLM for four years, has taught me many of the basic concepts of medical informatics, thus leading to the formulation of PACS-Based Medical Imaging Informatics (Part 4: PACS-Based Imaging Informatics, Chapters 19–22) and the research in the large-scale imaging informatics project Bone Age Assessment with a Digital Atlas (Section 20.3). The latter project is especially valuable because it involves the complete spectrum of imaging informatics infrastructure from data collection and standardization, image
processing and parameter extraction, web-based image submission and bone age assessment result distribution, to system evaluation. Other opportunities have also allowed me to collect the information necessary to complete Chapters 21 and 22. The Hong Kong Polytechnic University (PolyU) has a formal training program in Radiography leading to certificates as well as B.Sc., M.S., and Ph.D. degrees. With the contribution of the University and my outstanding colleagues there, we have built a teaching-based clinical PACS Laboratory and hands-on training facility. Much of the data in Chapter 22 was collected in this one-of-a-kind resource. My daughter, Cammy, who is at Stanford University developing multimedia e-learning materials, helped me to expand the PACS learning concept with her multimedia technique to enrich the contents of Chapter 22. Working with colleagues outside of radiology has allowed me to develop the concepts of the PACS-based radiation therapy server and the neurosurgical server presented in Chapter 21. In 1991, Dr. Robert Ledley, my former mentor at Georgetown University and Editor-in-Chief of the Journal of Computerized Medical Imaging and Graphics, offered me an opportunity to act as Guest Editor of a Special Issue, Picture Archiving and Communication Systems, summarizing the ten-year progress of PACS. In 2002, Bob graciously offered me another opportunity to edit a Special Issue, PACS: Twenty Years Later, revisiting the progress in PACS in the ensuing ten years and looking into the future of PACS research and development trends. A total of 15 papers plus an editorial were collected; the senior authors are Drs. E. L. Siegel, Heinz Lemke, K. Inamura, Greg Mogel, Christopher Carr, Maria Y. Y. Law, Cammy Huang, Brent Liu, Minglin Li, Fei Cao, Jianguo Zhang, Osman Ratib, Ewa Pietka, and Stephan Erberich. Some materials used in this book were culled from these papers. I would be remiss in not acknowledging the debt of gratitude owed to many wonderful colleagues for this adventure in PACS, and now imaging informatics. I thank Drs. Edmund Anthony Franken, Gabriel Wilson, Robert Leslie Bennett, and Hooshang Kangarloo for their encouragement and past support. I can never forget my many students and postdoctoral fellows; I have learned much (and continue to learn even now) from their contributions. Over the past ten years, we have received support from many organizations with a great vision for the future: NCI, NLM, the National Institute of Biomedical Imaging and Bioengineering, other Institutes of the National Institutes of Health, the U.S. Army Medical Research and Materiel Command, the California Breast Research Program, the Federal Technology Transfer Program, and the private medical imaging industry. The support of these agencies and manufacturers has allowed us to go beyond the boundaries of current PACS and open new frontiers in research such as imaging informatics. In 2002, Mr. Shawn Morton, Vice President and Publisher, Medical Sciences, John Wiley, who was the editor of my last book, introduced me to Ms. Luna Han, Senior Editor, Life & Medical Sciences, at Wiley. I have had the pleasure of working with these fine individuals in developing the concepts and contents of this manuscript.
Selected portions of the book have been used as lecture materials in graduate courses, "Medical Imaging and Advanced Instrumentation" at UCLA, UCSF, and UC Berkeley; "Biomedical Engineering Lectures" in Taiwan, Republic of China and the People's Republic of China; and "PACS and Medical Imaging Informatics" at PolyU, Hong Kong; and will be used in the "Medical Imaging and Informatics" track
at the Department of Biomedical Engineering, School of Engineering, USC. It is our greatest hope that this book will not only provide guidelines for those contemplating a PACS installation but also inspire others to apply PACS-based imaging informatics as a tool toward a brighter future for health care delivery. Agoura Hills, CA and Hong Kong
H.K. (Bernie) Huang
PREFACE TO THE FIRST EDITION
Picture archiving and communication systems (PACS) is a concept that was perceived in the early 1980s by the radiology community as a future method of practicing radiology. PACS consists of image acquisition devices, storage archiving units, display workstations, computer processors, and databases. These components are integrated by a communications network and data management system. During the past ten years, technologies related to these components have matured, and their applications have gone beyond radiology to the entire health care delivery system. As a result, PACS for special clinical applications, as well as large-scale hospitalwide PACS, have been installed throughout the United States and the world. Since the last book PACS in Biomedical Imaging was published by VCH in 1996, several important events have occurred related to PACS. First, many successful PACS installations, large and small, have documented the positive outcome in patient care using the system. Second, the health care administrators, through several years of continued education in PACS, have realized the importance of using PACS in streamlining their operations. The justification for purchasing a PACS is no longer the issue. Instead, the issue is the best mechanism for installing the PACS. Third, the U.S. military, due to its relative success in the MDIS project, has started a DIN/PACS II project, with an allocation of up to 800 million dollars for the next few years. The injection of new funds into PACS stimulates the further growth of the field. Fourth, digital imaging and communication in medicine (DICOM) has become the de facto standard, and is accepted by all major imaging manufacturers. The issue of connectivity has disappeared, and DICOM conformance components have become a commodity. Fifth, the NT PC hardware and software platform has advanced to the level that it can be used for high quality diagnostic workstations. The relatively inexpensive NT PC allows for affordable workstation installation throughout the health care enterprise. Finally, in 1996, John Wiley & Sons took over the ownership of VCH and inherited the copyright of the last book. Mr. Shawn Morton, Senior Medical Editor, based on his experience in medical publishing, felt that a new book in PACS documenting the aforementioned changes in PACS was due. He suggested that I consider writing a new book reflecting the current trends in PACS and addressing a wider audience, rather than only engineers and medical physicists. Through my interest in the field of PACS, I owe a debt of gratitude to Dr. Edmund Anthony Franken, Jr., past Chairman of the Department of Radiology, University of Iowa; and Drs. Gabriel Wilson, Robert Leslie Bennett, and Hooshang Kangarloo, past Chairmen of the Department of Radiological Sciences at UCLA, for their encouragement and support. During the past three years at UCSF, we have continued receiving support from the National Library of Medicine, National Institutes of Health, U.S. Army Medical Research and Materiel Command, California Breast Research Program, and the
Federal Technology Transfer Program, for future PACS R&D. This funding allows us to go beyond the boundaries of current PACS and open a new frontier of research based on the PACS database. Examples are electronic patient records related to PACS, integration of multiple PACS, and large-scale longitudinal and horizontal basic and clinical research. From our experience during the past fifteen years, we can identify some major contributions in advancing PACS development. Technologically, the first laser film digitizers developed for clinical use by Konica and Lumisys; the development of computed radiography (CR) by Fuji and its introduction from Japan to the United States by Dr. William Angus of Philips Medical Systems; and the large-capacity optical disk storage developed by Kodak were all critical. Also highly significant were the redundant array of inexpensive disks (RAID); 2000-line and 72 Hz display monitors; the system integration methods developed by Siemens Gammasonics and Loral for large-scale PACS; the DICOM Committee's efforts to standardize (especially Professor Steve Horii's unselfish and tireless efforts in educating the public on its importance); and the asynchronous transfer mode (ATM) technology for merging local area network and wide area network communications. In terms of events, the annual SPIE PACS and Medical Imaging meeting and the EuroPACS meetings have been the continuous driving forces for PACS. In addition, the annual Computer Assisted Radiology (CAR) meeting organized by Professor Heinz Lemke, and the Image Management and Communication (IMAC) meeting organized every other year by Professor Seong K. Mun, provide a forum for international PACS discussion. The InfoRAD Section at the RSNA, organized since 1993 first by Dr. Laurens V. Ackerman and then by Dr. C. Carl Jaffe, with its live demonstration of the DICOM interface, set the tone for industrial PACS open architecture. The many refresher courses in PACS during RSNA, organized first by Dr. C. Douglas Maynard and then by Dr. Edward V. Staab, provided further education in PACS to the radiology community. In 1992, Second Generation PACS, Professor Michel Osteaux's edited book, provided me with the inspiration that PACS was moving toward a hospital-integrated approach. When Dr. Roger A. Bauman became Editor-in-Chief of the then-new Journal of Digital Imaging, the consolidation of PACS research and development publications became possible. Colonel Fred Goeringer orchestrated the Army MDIS project, resulting in several large-scale PACS installations, which provided major stimuli and funding opportunities for the PACS industry. All these contributions have profoundly influenced my thoughts on the direction of PACS research and development, as well as the contents of this book. This book summarizes our experiences in developing the concept of PACS, and its relationship to the electronic patient record, and more importantly, the potential of using PACS for better health care delivery and future research. Selected portions of the book have been used as lecture materials in graduate courses entitled Medical Imaging and Advanced Instrumentation at UCLA, UCSF, and UC Berkeley, and in the Biomedical Engineering Lectures in Taiwan, Republic of China, and the People's Republic of China. We hope this book will provide a guideline for those contemplating a PACS installation, and inspire others to use PACS as a tool to improve the future of health care delivery. San Francisco and Agoura Hills, CA
H.K. (Bernie) Huang
ACKNOWLEDGMENTS
Many people have provided valuable assistance during the preparation of this book—in particular, many of my past graduate students, postdoctoral fellows, and colleagues, from whom I have learned the most. Part 1 Medical Imaging Principles (Chapters 2–5) resulted from substantial revision of my last book, PACS: Basic Principles and Applications, published in 1999. Some of the materials were originally contributed by K. S. Chuang, Ben Lo, Ricky Taira, Brent Stewart, Paul Cho, ShyhLiang Andrew Lou, Albert W. K. Wong, Jun Wang, Xiaoming Zhu, Stephan Wong, Johannes Stahl, Jianguo Zhang, Ewa Pietka, and Fei Cao. Part 2 PACS Fundamentals (Chapters 6–12) consists of materials based on revised standards, updated technologies, and earlier and current clinical experience of others and ourselves. Part 3 PACS Operation (Chapters 13–18) and Part 5 Enterprise PACS (Chapter 23) are mostly our group’s personal experience during the past four years in planning, designing, implementing, and operating large-scale PAC systems. Part 4 Imaging Informatics (Chapters 19–22) provides an overview of the current trend in medical imaging informatics research learned from my colleagues and other researchers. Specifically, I am thankful for contributions from Fei Cao (Sections 9.6, 15.8, 16.5, 16.6.2, 20.3, 22.1, 22.3), Greg Mogel (Sections 1.2.2, 23.3), Brent Liu (Sections 3.1.1, 3.1.2, 6.3, 6.4, 8.5, 10.6, 15.6, 15.8, 17.4, 17.5, 17.6, 17.7, 17.8, 18.2, 22.1), Michael Z. Zhou (Sections 7.4, 16.4, 22.1), Jianguo Zhang (Sections 3.3.4, 6.4.3, 10.5.3, 11.5.6, 13.5, 14.6, 22.1), Ewa Pietka (Sections 8.6.1.3, 19.1, 20.3), Maria Y. Y. Law (Sections 21.3, 22.1, 23.3, 23.4), Cammy Huang (Section 22.2), X. Q. Zhou (Chapter 16), F. Yu (Section 9.6), Lawrence Chan (Sections 8.2.2.1, 9.6, 21.3), Harold Rutherford and Ruth Dayhoff (Section 12.5.2), Minglin Li (Section 11.3), Michael F. McNitt-Gray (Section 4.2.4), Christopher Carr (Sections 7.5, 12.4), Eliot Siegel (Section 18.1), Heinz Lemke (Section 1.5.2), and Kiyonari Inamura (Section 1.5.3). Finally, a special thank you must go to Michael Z. Zhou, Lawrence Chan, Kenneth Wan, and Mary Hall, who contributed some creative art work; Michael Zhou, who read through all the chapters to extract the list of acronyms and index and to organize the references; and Maria Y. Y. Law, who edited the entire manuscript.
This book was written with the assistance of the following staff members and consultants of the Imaging Processing and Informatics Laboratory, USC:
Fei Cao, Ph.D., Assistant Professor
Brent J. Liu, Ph.D., Assistant Professor
Greg T. Mogel, M.D., Assistant Professor
Michael Z. Zhou, M.S., Ph.D. Candidate, Research Associate
Ewa Pietka, Ph.D., D.Sc., Consultant; Professor, University of Silesia, Katowice, Poland
Jianguo Zhang, Ph.D., Consultant; Professor, Shanghai Institute of Technical Physics, The Chinese Academy of Science
Maria Y. Y. Law, M.Phil., BRS, Teach Dip., Ph.D. Candidate, Consultant; Assistant Professor, Hong Kong Polytechnic University, and Associate Professor, Shanghai Institute of Technical Physics, The Chinese Academy of Science
LIST OF ACRONYMS
1-D 2-D 3-D 4-D ABF ACC ACR-NEMA ACSII A/D ADM ADT AE Al AMP AMS ANSI AP API ASI ASIC ASP AT ATL ATM AUI Az
one-dimensional two-dimensional three-dimensional four-dimensional air-blown fiber Ambulatory Care Center American College of Radiology–National Electrical Manufacturers Association American Standard Code for Information Interchange analog-to-digital converter an Acquisition and Display Module admission, discharge, transfer DICOM application entity aluminum amplifier an automatic PACS monitoring system American National Standards Institute anterior-posterior, access point Application Program Interface NATO Advanced Study Institute application-specific integrated circuit active server pages, application service provider acceptance testing active template library asynchronous transfer mode attachment unit interface area under the ROC curve
BAA bpp BDF BERKOM BME BMI BNC
bone age assessment bit per pixel building distribution center Berlin communication project biomedical engineering bone mass index a type of connector for 10 Base2 cables
CA CAD CAI
continuous available computer-aided detection, diagnosis computer-aided instruction
CalREN CARS CCD CCU CDDI CDRH CF CFR CHLA CIE CMS CNA COM CORBA CPRS CPU CR CRF CRT CSMA/CD CSU/DSU CT CTN
California Research and Education Network computer-assisted radiology and surgery charge-coupled device coronary care unit copper distributed data interface Center for Devices and Radiological Health computerized fluoroscopy contrast frequency response Childrens Hospital Los Angeles Commission Internationale de L’Eclairage clinical management system campus network authority component object model Common Object Request Broker Architecture clinical patient record system central processing unit computed radiography central retransmission facility—head end cathode ray tube carrier sense multiple access with collision detection channel service unit/data service unit computed tomography central test node
D/A DASM DB DBMS DC DCM DCT DDR DE DEC DECRAD DES DF DHHS DICOM DIFS DIMSE DIN/PACS DLT DMR DOD DOM DOT DP DQE
digital-to-analog data acquisition system manager a unit to measure signal loss database management system direct current DICOM discrete cosine transform digital reconstructed radiography digital envelope digital equipment corporation DEC radiology information system data encryption standard digital fluorography Department of Health and Human Services Digital Imaging and Communication in Medicine distributed image file server DICOM message service element Digital Imaging Network/PACS digital linear tape double modular redundancy Department of Defense distributed object manager directly observed treatment display and processing detector quantum efficiency
DR DRR DR11-W DS DSA DSC DSL DSP DSSS DTD DVA DVSA DVD DVH
digital radiography digital reconstructed radiographs a parallel interface protocol digital service digital subtraction angiography digital scan converter digital subscriber line digital signal processing chip direct sequence spread spectrum document type definition digital video angiography digital video subtraction angiography digital versatile disk dose volume histogram
ECG ECT EDGE EIA EMR EP EPI EPI EPR ESF EuroPACS
electrocardiogram emission computed tomography enhanced data rates for GSM Evolution Electronic Industry Association electronic medical record Electrophysiology echo-planar imaging electronic portal image electronic patient record edge spread function European Picture Archiving and Communication System Association
fMRI FCR FDA FDDI FFBA FFD FFDDM FFT FID FIFO FM FP FRS FT FT FTE FTP FWHM
functional MRI Fuji CR (U.S.) Food and Drug Administration fiber distributed data interface full-frame bit allocation compression algorithm focus to film distance full-field direct digital mammography fast Fourier transform free induction decay first-in-first-out folder manager false positive fast reconstruction system fault tolerance Fourier transform full-time equivalent file transfer protocol full width at half-maximum
GEMS GIF
General Electric Medical System graphics interchange format
G-P GPRS GSM GUI GW
Greulich and Pyle hand atlas general packet radio service global system for mobile communication graphic user interface gateway
HA H and D curve HIMSS HII HIPAA HI-PACS HIS HK HA HL7 HMOs HOI HP HPCC HTML HTTP Hz
high availability Hurter and Driffield characteristic curve Healthcare Information and Management System Society healthcare information infrastructure Health Insurance Portability and Accountability Act hospital-integrated PACS hospital information system Hong Kong Hospital Authority Health Level 7 health maintenance organizations Health Outcomes Institute Hewlett-Packard high-performance computing and communications hypertext markup language hypertext transfer protocol Hertz (cycle/s)
I2 IASS ICD ICMP ICU ID IDF IDNET IEC IFT IHE IIS IMAC
Internet 2 image-assisted surgery system International Classification of Diseases internet control message protocol intensive care unit identification intermediate distribution frame a GEMS imaging modality network International Electrotechnical Commission inverse Fourier transform Integrating the Healthcare Enterprise internet information server a meeting devoted to image management and communication intensity-modulated radiation therapy identifier names and codes radiology information exhibit at RSNA input/output DICOM information object definition imaging plate image processing and informatics Laboratory at USC integrated security layer v1.00 integrated service digital network international Standards Organization image self-scaling network
IMRT INC InfoRAD I/O IOD IP IPI ISCL ISDN ISO ISSN
IT IUPAC IVA
information Technology International Union of Pure and Applied Chemistry Intravenous Video Arteriography
JAMIT Java JND JPEG
Japan Association of Medical Imaging Technology just another vague acronym just-noticeable differences Joint Photographic Experts Group
kV(p)
kilovolt potential difference
LAN LCD LINAC L.L. LOINC lp LRI LSB LSF LUT
local area network liquid crystal display linear accelerator left lower Logical Observation, Inc. line pair Laboratory for Radiological Informatics least significant bit line spread function lookup table
mA MAC MAN Mbyte MDF MDIS MFC MGH MHS MIACS MIDS MIII MIME MIMP
milliampere media access control, message authentication code metropolitan area network megabyte message development framework medical diagnostic imaging support systems Microsoft Foundation Class Massachusetts General Hospital message header segment—a segment used in HL7 medical image archiving and communication system medical image database server medical image informatics infrastructure Multi-purpose internet mail extension mediware information message processor—a computer software language for HIS used by the IBM computer a nonprofit defense contractor magnetic optical disk modulator/demodulator multiprocessors milliRoentgen MR angiography magnetic resonance imaging MR spectroscopic Imaging mobile site module modulation transfer function mobile unit modules
MITRE MOD MODEM MP mR MRA MRI MRSI MSM MTF MUM
MUMPS MZH
Massachusetts General Hospital Utility Multi-Programming System—a computer software language Mount Zion Hospital
NA NATO ASI NDC NEC NEMA NFS NGI NIE NIH NINT NLM NM NMSE NPC NPRM NTSC NVRAM
Numerical Aperture North Atlantic Treaty Organization-Advanced Science Institutes national drug codes, network distribution center Nippon Electronic Corporation National Electrical Manufacturers’ Association network file system next generation Internet network interface equipment National Institutes of Health nearest integer function National Library of Medicine nuclear medicine normalized mean square error nasopharynx carcinoma notice of processed rule making national television system committee nonvolatile random access memory
OC OD ODBC OFDM OS OSI
optical carrier optical density open database connectivity orthogonal frequency division multiplexing operating system open system interconnection
PA PACS PBR PC PDA PET PGP PHD PICT PL PMS PMT PoP PPI ppm PRA PRF PSF PSL
posterior-anterior picture archiving and communication system pathology-bearing region personal computer personal digital assistant positron emission tomography presentation of grouped procedures personal health data Macintosh picture format plastic Philips Medical Systems photomultiplier tube point of presence parallel peripheral interface parts per million HL7 patient record architecture pulse repetition frequency point spread function photostimulable luminescence
PSNR PTD PVM
peak signal-to-noise ratio parallel transfer disk parallel virtual machine system
QA QC Q/R
quality assurance quality control DICOM Query/Retrieve
RAID RAM RETMA RF RFP RGB RIM RIS RLE PLUS ROC ROI RS RSNA RT
redundant array of inexpensive disks random access memory Radio-Electronics-Television Manufacturers Association Radio frequency request for proposal red, green, and blue in color HL7 reference information model radiology information system run-length encoding PACS local user support receiver operating characteristic region of interest receiving site Radiological Society of North America radiation therapy
S-bus SAN SCP SCSI SCU SD SEQUEL SFVAMC SITP SJHC SMIBAF SMPTE SMZO SNOMED SNR Solaris 2.x
a computer bus used by SPARC Storage Area Network DICOM Service Class Provider small computer systems interface DICOM Service Class User standard deviation structured English query language San Francisco VA Medical Center Shanghai Institute of Technical Physics St. John’s Health Center super medical image broker and archive facility Society of Motion Picture and Television Engineers Social and Medical Center East, Vienna systemized nomenclature of medicine signal-to-noise ratio a computer operating system version 2.x used in a SUN computer synchronous optical network Service object pair—a functional unit of DICOM a computer system manufactured by Sun Microsystems single-photon emission computed tomography International Society for Optical Engineering Single point of failure structured query language
SONET SOP SPARC SPECT SPIE SPOF SQL
SR SS SSL ST SUN OP
DICOM Structured Report sending site secure socket layer a special connector for optical fibers Sun computer operating system
T1 TC TCP/IP TDS TFS TGC TIFF TLS TMR TP TPS
DS-1 private line threshold contrast transport control protocol/internet protocol tube distribution system for optical fibers Teaching file script Time gain compensation tagged image file format transport layer security triple modular redundancy true positive treatment planning system
UCAID
The University Corporation for Advanced Internet Development University of California at Los Angeles University of California at San Francisco DICOM unique identifier University of Hawaii Universal Medical Device Nomenclature System Unified Medical Language System Universal Mobile Telecommunications Service a computer operating system software used by Sun and other computers uninterruptible power supply uniform resource locator ultrasound university of southern california United States Army Virtual Radiology Environment unshielded twisted pair
UCLA UCSF UID UH UMDNS UMLS UMTS UNIX UPS URL US USC USAVRE UTP VA VAMC VAX vBNS VL VM VME VMS VPN VR VRAM
Department of Veterans Affairs VA Medical Center a computer system manufactured by Digital Equipment Corporation very high-performance backbone network service visible light imaging a computer operating system software used by IBM computers a computer bus used by older Sun and other computers a computer operating system software used by DEC computers virtual private network DICOM value representation video RAM
VRE VS
virtual radiology environment virtual simulator
WAN WEP Wi-Fi WLAN WORM WS WSU WWAN WWW
wide area network wired equivalent privacy wireless fidelity wireless LAN write once read many workstation working storage unit wireless WAN world wide web
XCT XML
X-ray transmission computed tomography extensible markup language
YCbCr
luminance and two chrominance coordinates used in color digital imaging luminance—in-phase and quadrature chrominance color coordinates
YIQ
PART I
MEDICAL IMAGING PRINCIPLES
CHAPTER 1
Introduction
1.1 INTRODUCTION
PACS (picture archiving and communication system) based on digital, communication, display, and information technologies (IT) has revolutionized the practice of radiology, and in a sense, of medicine during the past 10 years. This textbook introduces its basic concept, terminology, technology development, implementation, its integration to clinical practice, and PACS-based applications.There are many advantages of introducing digital, communications, display and IT to the conventional paper- and film-based operation in radiology and medicine. For example, through digital imaging plate and detector technology and various energy source digital imaging modalities it is possible to improve the modality diagnostic value while at the same time reducing the radiation exposure to the patient, and through the computer and display, it is possible to manipulate a digital image for value-added diagnosis. Also, digital, communication, and IT technologies can be used to understand the health care delivery workflow resulting in speeding up health care delivery and reducing operation costs. With all these benefits, the digital, communication, and IT technologies are gradually changing the method of acquiring, storing, viewing, and communicating medical images and related information in the health care industry. One natural development along this line is the emergence of digital radiology departments and the digital health care delivery environment. A digital radiology department has two components: a radiology information management system (RIS) and a digital imaging system. The RIS is a subset of the hospital information system (HIS) or clinical management system (CMS). When these systems are combined with the electronic patient (or medical) record (ePR or eMR) system, which manages selected data of the patient, we are envisioning the arrival of the total filmless and paperless health care delivery system. The digital imaging system, sometimes referred to as a picture archiving and communication system (PACS) or image management and communication system (IMAC), involves image acquisition, archiving, communication, retrieval, processing, distribution, and display. A digital health care environment consists of the integration of HIS/CMS, ePR, PACS, and other digital clinical systems. The combination of HIS and PACS is sometime referred to as hospital integrated PACS (HI-PACS). The health care delivery system related to PACS
and IT is reaching one billion dollars per year (excluding imaging modalities) and is growing. Up-to-date information on these topics can be found in multidisciplinary literature, reports from research laboratories of university hospitals, and medical imaging manufacturers, but not in a coordinated way. Therefore, it is difficult for a radiologist, hospital administrator, medical imaging researcher, radiological technologist, trainee in diagnostic radiology, or student in engineering and computer science to collect and assimilate this information. The purpose of this book is to consolidate PACS-related topics and its integration with HIS and ePR into one self-contained text. Here the emphasis is on the basic principles, augmented with current technological developments and examples in these topics.
1.2 SOME REMARKS ON HISTORICAL PICTURE ARCHIVING AND COMMUNICATION SYSTEMS (PACS)
1.2.1 Concepts, Conferences, and Early Research Projects
1.2.1.1 Concepts and Conferences The concept of digital image communication and digital radiology was introduced in the late 1970s and early 1980s. Professor Heinz U. Lemke introduced the concept of digital image communication and display in his original paper in 1979 (Lemke, 1979). SPIE (International Society for Optical Engineering) sponsored a Conference on Digital Radiography held at the Stanford University Medical Center and chaired by Dr. William R. Brody (Brody, 1981). Dr. M. Paul Capp and colleagues introduced the idea of the “photoelectronic radiology department” and depicted a system block diagram of the demonstration facility at the University of Arizona Health Sciences Center (Capp, 1981). Professor S.J. Dwyer III predicted the cost of managing digital diagnostic images in a radiology department (Dwyer, 1982). However, the technology had not yet matured, and it was not until the First International Conference and Workshop on Picture Archiving and Communication Systems (PACS) held in Newport Beach, CA, January 1982 sponsored by SPIE (Fig. 1.1) (Duerinckx, 1982) that these concepts began to be recognized. During that meeting, the term “PACS” was coined. Thereafter, and to this day, the PACS and Medical Imaging Conferences have been combined into a joint SPIE meeting held each February in Southern California. In Asia and Europe, a similar timeline has taken place. The First International Symposium on PACS and PHD (personal health data), sponsored by the Japan Association of Medical Imaging Technology (JAMIT), was held in July 1982 (JAMIT, 1983). This conference, combined with the Medical Imaging Technology meeting, also became an annual event. In Europe, the EuroPACS (Picture Archiving and Communication Systems in Europe) has held annual meetings since 1983 (Niinimaki, 2003) and remains the driving force for European PACS information exchange. Notable among the many PACS-related meetings that occur regularly are two others: the CAR (computer-Assisted radiology) (Lemke, 2002) and IMAC (image management and communication) meetings (Mun, 1989). CAR is an annual event organized by Professor Lemke of the Technical University of Berlin since 1985 (CAR expanded its name to CARS in 1999, adding computer-assisted surgery to
Figure 1.1 The first set of PACS Conference Proceedings sponsored by SPIE in 1982.
the Congress). IMAC is a biannual conference started in 1989 and organized by Professor Seong K. Mun of Georgetown University. SPIE and CARS annual conferences have been most consistent in publishing conference proceedings. The proceedings provide fast information exchange for researchers working in this field, and many have benefited from such information sources. A meeting dedicated to PACS sponsored by NATO ASI (Advanced Study Institute) was PACS in Medicine held in Evian, France, October 12–24, 1990. Approximately 100 scientists from over 17 countries participated, and the ASI proceedings summarized international efforts in PACS research and development at that time (Huang et al., 1991). This meeting played a central role in the formation of a critical PACS project: the Medical Diagnostic Imaging Support System (MDIS) project sponsored by the U.S. Army Medical Research and Materiel Command, which has been responsible for large-scale military PACS installations in the United States (Mogel, 2003). The InfoRAD Section at the Radiological Society of North America (RSNA) Scientific Assembly has been instrumental in the continued development of PACS technology and its growing clinical acceptance. First organized in 1993 by Dr. Laurens V. Ackerman (and subsequently Dr. C. Carl Jaffe) and showcasing a ground-breaking live demonstration of a digital imaging and communication in medicine (DICOM) interface, InfoRAD has repeatedly set the tone for industrial PACS development. Many refresher courses in PACS during RSNA meetings organized first by Dr. C. Douglas Maynard and then by Dr. Edward V. Staab have provided continuing education in PACS to the radiology community. When Dr. Roger A. Bauman became Editor-in-Chief of the then-new Journal of Digital Imaging in 1998, the consolidation of PACS research and development peer-reviewed papers
in one representative journal became possible. He was followed by Dr. Steve Horii, and Dr. Janice Honeyman is currently the Editor-in-Chief of the journal.
1.2.1.2 Early Funded Research Projects by the U.S. Government One of the earliest research projects related to PACS in the United States was a teleradiology project sponsored by the U.S. Army in 1983. A follow-up project was the "Installation Site for Digital Imaging Network and Picture Archiving and Communication System (DIN/PACS)" funded by the U.S. Army and administered by the MITRE Corporation in 1986 (MITRE/ARMY, 1986). Two university sites were selected for the implementation: the University of Washington in Seattle and the Georgetown University-George Washington University Consortium in Washington, DC, with the participation of Philips Medical Systems and AT&T. The National Cancer Institute of the U.S. National Institutes of Health (NIH) funded UCLA for one of its first PACS-related research projects under the title of Multiple Viewing Stations for Diagnostic Radiology in 1985.
1.2.2 PACS Evolution
1.2.2.1 In the Beginning A PACS integrates many components related to medical imaging for clinical practice. Depending on the application, a PACS can be simple, consisting of a few components, or it can be a complex hospital-integrated system. For example, a PACS for an intensive care unit (ICU) may comprise no more than a scanner adjacent to the film developer for digitization of radiographs, a baseband communication system to transmit, and a video monitor in the ICU to receive and display images. Such a simple system was actually implemented by Dr. Richard J. Steckel (Steckel, 1972) as early as 1972. On the other hand, implementing a comprehensive hospital-integrated PACS is a major undertaking that requires careful planning and a million-dollar investment. Because of different operating conditions and environments, the PACS evolution is different in North America, Europe, and Asia. Initially, PACS research and development in North America was largely supported by government agencies and manufacturers. In the European countries, development was supported through a multinational consortium, a country, or a regional resource. European research teams tend to work with a single major manufacturer, and, because most PACS components were developed in the United States or Japan, they were not as readily available to the Europeans. European research teams emphasized PACS modeling and simulation, as well as the investigation of the image processing component of PACS. In Asia, Japan led the early PACS research and development and treated it as a national project. National resources were distributed to various manufacturers and university hospitals. A single manufacturer or a joint venture of several companies integrated a PACS system and installed it in a hospital for clinical evaluation. The manufacturer’s PACS specifications tended to be rigid and left little room for the hospital research teams to modify the technical specifications. During the fifth IMAC meeting in Seoul, South Korea, October 1997, three invited lecturers described the evolution of PACS in Europe, America, and Japan. It was apparently because of these lectures that these regions’ varied PACS research and development emphases gradually merged, which led to many successful PACS implementation around the world. Four major factors contributed to these interna-
tional accomplishments: (1) information exchange from the aforementioned conferences (SPIE, CAR, IMAC, RSNA), (2) introduction of the image and data format standards (DICOM) and their gradual acceptance by private industry, (3) globalization of the imaging manufacturers, and (4) development and sharing of solutions to several difficult technical and clinical problems in PACS.
1.2.2.2 Large-Scale Projects Roger Bauman, in his two 1996 papers in the Journal of Digital Imaging (Bauman, 1996), defined a large-scale PACS as one that satisfies the following four conditions:
1. In daily clinical operation
2. At least three or four imaging modalities connected to the system
3. Containing workstations inside and outside of the radiology department
4. Able to handle at least 20,000 procedures per year
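Purely as an illustration (and not something taken from the text or from any vendor checklist), these four conditions can be read as a simple test; the Python sketch below is hypothetical, and the function name and thresholds merely restate Bauman's criteria.

# Hypothetical restatement of Bauman's (1996) four conditions for a large-scale PACS
def is_large_scale_pacs(in_daily_clinical_operation: bool,
                        connected_modalities: int,
                        workstations_outside_radiology: bool,
                        procedures_per_year: int) -> bool:
    return (in_daily_clinical_operation
            and connected_modalities >= 3          # "at least three or four imaging modalities"
            and workstations_outside_radiology     # workstations inside and outside radiology
            and procedures_per_year >= 20000)      # at least 20,000 procedures per year

# Example: a hypothetical site with five modalities and 35,000 procedures per year
print(is_large_scale_pacs(True, 5, True, 35000))   # prints: True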
Such a definition loosely separated the "large" and the "small" PACS at that time. However, most PACS installed today meet these requirements. Colonel Fred Goeringer orchestrated the Army MDIS project, resulting in several large-scale PACS installations, which in turn provided a major stimulus for the PACS industry (Mogel, 2003). Dr. W. Hruby opened a completely digital radiology department in the Danube Hospital in Vienna in 1990, setting the tone for future total digital radiology departments (Hruby and Maltsidis, 2000). Figure 1.2A shows Dr. Hruby, and Fig. 1.2B depicts two medical imaging pioneers, Professor Lemke and Dr. Michael Vannier (then Editor-in-Chief, IEEE Transactions on Medical Imaging), at the Danube Hospital opening ceremony. These two projects set the stage for continuing PACS development.
Figure 1.2 A Professor Hruby (right, with dark suit) at the Danube Hospital, Vienna, during the Open House Ceremony in 1990.
Figure 1.2 B Professor Lemke (left) and Dr. Vannier (right) during the Danube Hospital Open House Ceremony. C Dr. Steve Horii in one of his DICOM lectures.
1.2.3 Standards and Anchoring Technologies
1.2.3.1 Standards The American College of Radiology—National Electrical Manufacturers’ Association (ACR-NEMA) and later DICOM Standards (DICOM, 1996) are the necessary requirements of system integration in PACS. The establishment of these Standards and their acceptance by the medical imaging community required the contributions of many people from both academia and industry.
Among the contributors from academia, special note should be made of Professor Steve Horii. His unselfish and tireless efforts in educating the public in the meaning and importance of these standards have been vital to the success of PACS (Fig. 1.2C).
1.2.3.2 Anchoring Technologies Many anchoring technologies that have developed during the past 20 years have contributed to the success of PACS operation. This section lists some of them by subject. Although many of these have been improved by later technologies, it is instructive for historical purposes to review them. Because these technologies are well known now, only a line of introduction for each is given. A picture of an early version of some technologies can be found in an article by Huang (2003). For more detailed discussions of these technologies, the reader is referred to other Huang references (1987, 1996, 1999). The anchoring technologies include:
• The first laser film digitizers developed for clinical use by Konica and Lumisys and the direct CR chest unit by Konica
• Computed radiography (CR) by Fuji and its introduction from Japan to the United States by Dr. William Angus of Philips Medical Systems (PMS)
• The first digital interface unit, using the DR11-W to transmit CR images outside of the device, designed and implemented by the UCLA PACS team
• Hierarchical storage: integration of a large-capacity optical disk jukebox by Kodak with the redundant array of inexpensive disks (RAID) using the AMASS software, designed by the UCLA PACS team
• Multiple display: six 512 monitors at UCLA
• Multiple display: three 1024 monitors at UCLA with hardware supported by Dr. Harold Rutherford of Gould DeAnza
• 2000-line and 72-Hz display CRT monitors by MegaScan
• System integration methods developed by Siemens Gammasonics and Loral for large-scale PACS in the MDIS project
• Asynchronous transfer mode (ATM) technology by Pacific Bell, merging local area network and high-speed wide area network communications for PACS application, by the Laboratory for Radiological Informatics, UCSF
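As a present-day illustration of the DICOM connectivity discussed in Section 1.2.3.1 (this example is not from the book), the short sketch below uses the open-source pydicom and pynetdicom libraries to send a single image to an archive with the DICOM C-STORE service. The host name, port, application entity (AE) titles, and file name are placeholders.

from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

# The sending application entity, e.g., an acquisition gateway (hypothetical AE title)
ae = AE(ae_title="ACQ_GATEWAY")
ae.add_requested_context(CTImageStorage)        # propose CT Image Storage during negotiation

ds = dcmread("ct_slice.dcm")                    # placeholder DICOM file on disk

# Placeholder address and AE title of the PACS archive server
assoc = ae.associate("pacs-archive.example.org", 104, ae_title="PACS_ARCHIVE")
if assoc.is_established:
    status = assoc.send_c_store(ds)             # issue the DICOM C-STORE request
    if status:
        print(f"C-STORE completed with status 0x{status.Status:04x}")
    assoc.release()
else:
    print("Association with the archive was rejected or could not be established")

Because both ends implement the same DICOM service classes, the sending device needs no knowledge of the archive's internal design; this is precisely the connectivity that turned conformance components into a commodity.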
1.3 WHAT IS PACS?
1.3.1 PACS Design Concept
A picture archiving and communication system consists of image and data acquisition, storage, and display subsystems integrated by digital networks and application software. It can be as simple as a film digitizer connected to a display workstation with a small image database or as complex as an enterprise image management system. The PACS developed in the late 1980s were designed mainly on an ad hoc basis, serving small subsets, called modules, of the total operation of a radiology department. Each of these PACS modules functioned as an independent island, unable to
communicate with other modules. Although they demonstrated the PACS concept and worked adequately for each of the different radiology and clinical services, the piecemeal approach did not address the intricacy of connectivity and cooperation between modules. This weakness surfaced as more PACS modules were added to hospital networks. Maintenance, routing decisions, coordination of machines, fault tolerance, and expandability of the system became increasingly difficult problems. The inadequacy of the early design concept was due partially to a lack of understanding by the implementers of the complexity of a large-scale PACS and to the unavailability at that time of many necessary PACS-related technologies. PACS design, as we now understand it, should emphasize system connectivity. A general multimedia data management system that is easily expandable, flexible, and versatile in its operation calls for both top-down management to integrate various hospital information systems and a bottom-up engineering approach to build a foundation (i.e., the PACS infrastructure). From the management point of view, a hospitalwide or enterprise PACS is attractive to administrators because it provides economic justification for implementing the system. Proponents of PACS are convinced that its ultimately favorable cost-benefit ratio should not be evaluated as the balance of the resources of the radiology department alone but should extend to the entire hospital or enterprise operation. This concept has gained momentum. Many hospitals, and some enterprise-level health care entities, around the world have implemented large-scale PACS and have provided solid evidence that PACS improves the efficiency of health care delivery and at the same time saves hospital operational costs. From the engineering point of view, the PACS infrastructure is the basic design concept to ensure that PACS includes features such as standardization, open architecture, expandability for future growth, connectivity, reliability, fault tolerance, and cost-effectiveness. This design philosophy can be constructed in a modular fashion with the infrastructure design described in Section 1.3.2.
1.3.2 PACS Infrastructure Design
The PACS infrastructure design provides the necessary framework for the integration of distributed and heterogeneous imaging devices and makes possible intelligent database management of all patient-related information. Moreover, it offers an efficient means of viewing, analyzing, and documenting study results and furnishes a method for effectively communicating study results to referring physicians. The PACS infrastructure consists of a basic skeleton of hardware components (imaging device interfaces, storage devices, host computers, communication networks, display systems) integrated by a standardized, flexible software system for communication, database management, storage management, job scheduling, interprocessor communication, error handling, and network monitoring. The infrastructure as a whole is versatile and can incorporate rules to reliably perform not only basic PACS management operations but also more complex research, clinical service, and education requests. The software modules of the infrastructure embody sufficient understanding and cooperation at a system level to permit the components to work together as a system rather than as individual networked computers.
[Figure 1.3 diagram: Generic PACS Components & Data Flow, showing the HIS database and reports, the database gateway, the imaging modalities, the acquisition gateway, the PACS controllers and archive server, the application servers, the workstations, and a Web server.]
Figure 1.3 PACS basic components and data flow. HIS: hospital information system; RIS: radiology information system. System integration and clinical implementation are two other necessary components during the implementation after the system is physically connected. The application servers connected to the PACS controller broaden the PACS infrastructure for research, education, and other clinical applications.
Hardware components include patient data servers, imaging modalities, data/modality interfaces, PACS controllers with database and archive, and display workstations connected by communication networks for handling the efficient data/image flow in the PACS. Images and data stored in the PACS can be extracted from the archive and transmitted to application servers for various uses. Figure 1.3 shows the PACS basic components and data flow. This diagram will be expanded to present additional details in later chapters. The PACS application server concept shown at the bottom of Fig. 1.3 broadens the role of PACS in the health care delivery system, contributing to the advance of this field during the past several years.
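To make the data flow of Figure 1.3 concrete, the following is a minimal, illustrative sketch in Python. It is not taken from any system described in this book; the class and method names are hypothetical, and a real PACS would use DICOM services, a database, and hierarchical storage rather than in-memory objects.

class AcquisitionGateway:
    """Receives images from imaging modalities and forwards them to the PACS controller."""
    def __init__(self, controller):
        self.controller = controller

    def receive_from_modality(self, image):
        # In practice: convert to DICOM, attach HIS/RIS patient data, then forward.
        self.controller.archive(image)


class PACSController:
    """Archives images and routes them to display workstations and application servers."""
    def __init__(self):
        self.archive_store = []       # stands in for the archive server
        self.workstations = []        # registered display workstations

    def archive(self, image):
        self.archive_store.append(image)
        for ws in self.workstations:  # routing/prefetching rules would refine this step
            ws.display(image)


class Workstation:
    def display(self, image):
        print(f"Displaying study {image['study_id']} for patient {image['patient_id']}")


# Wire the components together in the order shown in Figure 1.3
controller = PACSController()
controller.workstations.append(Workstation())
gateway = AcquisitionGateway(controller)
gateway.receive_from_modality({"patient_id": "P0001", "study_id": "CT-123"})

The point of the sketch is the separation the infrastructure design calls for: acquisition, archiving, and display are independent components that cooperate only through well-defined interfaces.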
1.4 PACS IMPLEMENTATION STRATEGIES
1.4.1 Background
A PACS integrates many components related to medical imaging for clinical practice. During the past 20 years, many hospitals and manufacturers in the United States and abroad have researched and developed PACS of varying complexity. Many of these systems are in daily clinical use. These systems can be loosely categorized into five models according to methods of implementation described in Section 1.4.2.
1.4.2 Five PACS Implementation Models
1.4.2.1 The Home-Grown Model Most early PACS implementation efforts were initiated by university hospitals and academic departments and by research laboratories of major imaging manufacturers. In this model, a multidisciplinary team with technical know-how is assembled by the radiology department or the hospital. The team becomes a system integrator, selecting PACS components from various manufacturers. The team develops system interfaces and writes the PACS software according to the clinical requirements of the hospital. This model allows the research team to continuously upgrade the system with state-of-the-art components. The system so designed is tailored to the clinical environment and can be upgraded without depending on the schedule of the manufacturer. On the other hand, a substantial commitment is required of the hospital to assemble a multidisciplinary team. In addition, because the system developed will be one of a kind, consisting of components from different manufacturers, service and maintenance will be difficult. Now that PACS technology has matured, very few institutions select this model for PACS implementation. However, the development of the PACS application server shown in Fig. 1.3 does require the concept of this model. 1.4.2.2 The Two-Team Effort Model A team of experts, from both outside and inside the hospital, is assembled to write detailed specifications for the PACS for specific clinical environment. A manufacturer is contracted to implement the system. This second model is a team effort between the hospital and manufacturers. This model was chosen by the U.S. military services when they initiated the Medical Diagnostic Imaging Support System (MDIS) concept in the late 1980s. The MDIS adopted the military procurement procedures in acquiring PACS for military hospitals and clinics. The primary advantage of the two-team model is that the PACS specifications are tailored to a specific clinical environment, yet the responsibility for implementing is delegated to the manufacturer. The hospital acts as a purchase agent and does not have to be concerned with the installation. The specifications written by the hospital team tend to be overambitious because they may underestimate the technical and operational difficulty in implementing certain clinical functions. The designated manufacturer, on the other hand, may lack clinical experience and can overestimate the performance of each component. As a result, the completed PACS may not meet the overall specifications. The cost of contracting the manufacturer to develop a specified PACS is also high because only one such system is built. This model has been gradually replaced by the partnership model. 1.4.2.3 The Turnkey Model The manufacturer develops a turnkey PACS and installs it in a department for clinical use. This model is market driven. Some manufacturers see potential profit in developing a specialized turnkey PACS to promote the sale of other imaging equipment. The advantage of this model is that the cost of delivering a generic system tends to be lower. One disadvantage is that if the manufacturer needs a couple of years to complete the production cycle, fast-moving computer and communication technologies may render the system obsolete. Also, it is doubtful whether a generalized
Ch01.qxd 2/12/04 4:59 PM Page 13
A GLOBAL VIEW OF PACS DEVELOPMENT
13
PACS can be used for every specialty in a single department and for every radiology department. 1.4.2.4 The Partnership Model The partnership model consists of the hospital and a manufacturer forming a partnership to share the responsibility. It is very suitable for large-scale PACS implementation. During the past five years, because of the availability of PACS clinical data, health centers have learned to take advantage of the good and discard the bad features of a PACS for their own clinical environments. As a result, the boundaries between the three aforementioned implementation models have gradually fused, resulting in the partnership model. In this model, the health center forms a partnership with a selected manufacturer or a system integrator, which is responsible for its PACS implementation, maintenance, service, training, and upgrading. The arrangement can be a long-term purchase with a maintenance contract or a lease of the system. A tightly coupled partnership can include the training provided by the manufacturer to the hospital in engineering, maintenance, and system upgrade and the sharing of financial responsibility. 1.4.2.5 The Application Service Provider (ASP) Model A system integrator provides all PACS-related service to a client, which can be the entire hospital or a practice group. No IT requirements are needed by the client. ASP is attractive for smaller subsets of the PACS implementation, for example, off-site archive, long-term image archive/retrieval or second-copy archive, DICOM-Web server development, and Web-based image database. For a large, comprehensive PACS implementation, the ASP model requires detailed investigation and a suitable system integrator must be identified. Each model has its advantages and disadvantages. Table 1.1 summarizes the advantages and disadvantages of these five models.
1.5 A GLOBAL VIEW OF PACS DEVELOPMENT
1.5.1 The United States
The PACS development in the United States has benefited from four factors:
1. Many research laboratories in universities and small companies in private industry entered the field in 1982. They have been supported by various government agencies, venture capital, and the IT industry.
2. Two major government agencies have been heavily supporting PACS implementation: the hospitals of the U.S. Department of Defense (Mogel, 2003) and the Department of Veterans Affairs (VA) Medical Center Enterprise (see Chapter 23 for more detail).
3. A major imaging equipment and PACS manufacturer is based in the U.S.
4. Fast-moving and successful small IT companies contribute innovative technologies to PACS development.
There are at least 200 large or small PAC systems in use now. Nearly every new hospital being built or designed has a PACS implementation plan attached to its architectural blueprints.
TABLE 1.1 Advantages and Disadvantages of Five PACS Implementation Models
1. Home-grown system
Advantages: Built to specifications; state-of-the-art technology; continuously upgrading; not dependent on a single manufacturer.
Disadvantages: Difficult to assemble a team; one-of-a-kind system; difficult to service and maintain.
2. Two-team effort
Advantages: Specifications written for a specific clinical environment; implementation delegated to the manufacturer.
Disadvantages: Specifications overambitious; underestimates technical and operational difficulty; manufacturer lacks clinical experience; expensive.
3. Turnkey model
Advantages: Lower cost; easier maintenance.
Disadvantages: Too general; not state-of-the-art technology.
4. Partnership
Advantages: System will keep up with technology advancement; health center does not have to worry about the system becoming obsolete; manufacturer has long-term contract to plan ahead.
Disadvantages: Expensive to the health center; manufacturer may not want to sign a partnership contract with a less prominent center.
5. ASP
Advantages: Minimizes initial capital; may accelerate potential return on investment; no risk in technology obsolescence; provides flexible growth; no space requirement in data center.
Disadvantages: More expensive over 2- to 4-year time frame compared with a capital purchase; customer has no ownership in equipment; center must consider the longevity and stability of the manufacturer.
Center must consider the longevity and stability of the manufacturer
Europe
PACS development in Europe has been favored by three factors:
1. European institutions entered into hospital information system- and PACS-related research and development in the early 1980s.
2. Three major PACS manufacturers are based in Europe.
3. Two major PACS-related annual conferences, EuroPACS and CARS, are based in Europe.
Many innovative PACS-related technologies were actually invented in Europe; however, there are far more working PACS installations in the United States than in Europe as of today. Lemke studied the possible factors that account for this phenomenon and came up with the results shown in Table 1.2 (Lemke, 2003).
TABLE 1.2 Nine Positive Factors (for the United States) and Hindering Factors (for Europe) for the Implementation of PACS

Favorable Factors in the US:
1. Flexible investment culture
2. Business infrastructure of health care
3. Calculated risk mindedness
4. Competitive environment
5. Technological leadership drive
6. Speed of service oriented
7. Including PACS expert consultants
8. "Trial and error" approach
9. Personal gain driven

Hindering Factors in Europe:
1. Preservation of workplace culture
2. Social service oriented health care
3. Security mindedness
4. Government and/or professional association control
5. No change "if it works manually"
6. Quality of service oriented
7. "Do it yourself" mentality
8. "Wait and see" approach
9. If it fails, "Find somebody to blame"

Source: Lemke (2003).
Over the past two years, European countries have recognized the importance of interhospital communications, which has led to the enterprise-level PACS concept and its development. Many countries, including Sweden, Norway, France, Italy, Austria, and Germany, are in the process of developing such large-scale, enterprise-level PACS implementation models.

1.5.3 Asia
Two major forces in PACS development are in Japan and South Korea. Japan first entered into PACS research, development, and implementation in 1982. According to a survey by Inamura (2003), as of 2002 there were a total of 1468 PACS in Japan:

• Small: 1174 (fewer than 4 display workstations)
• Medium: 203 (5–14 display workstations)
• Large: 91 (15–1300 display workstations)

Some of the large PACS are the result of interconnecting legacy PAC systems with newer PAC systems. Earlier Japanese PAC systems were not necessarily DICOM compliant or connected to the HIS. Recently, however, more and more PAC systems adhere to the DICOM standard and couple the HIS, RIS, and PACS. The second large-scale, countrywide PACS development is in South Korea. Its fast growth during the past five years has been almost miraculous, owing to three major factors: the lack of a domestic X-ray film industry, the economic crisis in 1997, and the National Health Insurance PACS Reimbursement Act (Huang, 2003).
1.6 EXAMPLES OF SOME EARLY SUCCESSFUL PACS IMPLEMENTATION
The three successful PACS installations described in this section use similar architectures and technologies.
1.6.1 Baltimore VA Medical Center
The Baltimore, Maryland VA Medical Center (VAMC), operating with approximately 200 beds, has been totally digital, except in mammography, since its opening in 1994. All examinations are 100% archived in the PACS, which has a bidirectional HIS/RIS interface. Currently, the system serves three other hospitals in addition to the VAMC: Ft. Howard Hospital (259 beds), Perry Point Hospital (677 beds), and the Baltimore Rehabilitation and Extended Care Facility. More connections are planned. Since the system began operation in 1994, surveys of clinicians have indicated that they prefer the filmless system over conventional films. An economic analysis also indicates that the savings from filmless operation are offset by equipment depreciation and maintenance costs. The general statistics are as follows: radiology department volumes increased 58%; lost examinations decreased from 8% to 1%; productivity increased 71%; examination decreased 60%; and image reading time decreased 15%. These results suggest that the VAMC and the networked hospitals as a whole increased health care efficiency and reduced operational cost as a result of PACS implementation. A more detailed description of the clinical experience at the Baltimore VAMC is given in Section 18.1. The ePR system with images for the entire VAMC enterprise is described in Section 12.5.2.

1.6.2 Hammersmith Hospital
When the Hammersmith Hospital in London, England built a new radiology department, a committee chaired by the hospital's director of finance and information planned a top-down, whole-hospital PACS project. The hypothesis of the project was that cost savings arising from PACS would be due to increased efficiency in the hospital. The Hammersmith Hospital facility included the Royal Postgraduate Medical School and the Institute of Obstetrics and Gynaecology. It consisted of 543 beds and served 100,000 people. Justification of the project was based on direct and indirect cost-saving components. The direct cost savings considered were archive material and film use, labor, maintenance, operation and supplies, space and capital equipment, and buildings. The indirect cost savings comprised junior medical staff time, reductions in unnecessary investigations, savings in radiologist, technologist, and clinician time, redesignation and change of use of a number of acute beds, and reduction in length of stay. The system had a 10-terabyte long-term archive and 256-gigabyte short-term storage serving 168 workstations; its storage capacity has since been overhauled. After the system had been operating for two years, it had improved hospital-wide efficiency: the number of filing clerks had been reduced from 8 to 1, 3.3 radiologist positions had been eliminated, physicist/information technology personnel had increased by 1.5, and no films were stored on site.

1.6.3 Samsung Medical Center
Samsung Medical Center in Seoul, South Korea, a 1100-bed general teaching hospital, started a PACS implementation plan with gradual expansion beginning in 1994. The medical center has over 4000 outpatient clinic visits per day and performs about 340,000 examinations per year. The PACS at Samsung serves the following
functions: primary and clinical diagnosis, conferences, slide making, generation of teaching material, and printing of hard copies for referring physicians. The system started with 35 workstations in 1994 and had expanded to 145 workstations supported by a 4.5-terabyte long-term archive and 256-gigabyte short-term storage by 1999. It fetched an average of 600–900 examinations per day. All image reviews were performed on workstations except mammography. Samsung has overhauled its PACS during the past several years, and its capability and capacity have improved substantially. These three successful PACS operations share some commonalities:
1. They are totally digital and use workstations as the primary diagnostic tool.
2. All of them were implemented with the second implementation strategy (two-team effort).
3. The medical center administration made the initial commitment to total PACS implementation and has continued its support for system upgrades.
4. All three systems have been under excellent leadership from the top down.
Because these three systems were implemented with the second (two-team) model, they now face problems in expanding the system, including upgrades and extension of services to affiliated hospitals. Data migration, backup archives, fault-tolerant operation, integration with legacy systems, and fast wide area networks have become major concerns. These issues are discussed in later chapters.
1.7 ORGANIZATION OF THIS BOOK
PACS and Imaging Informatics consists of 5 parts with 23 chapters. Chapter 1 provides some remarks on PACS history, design concepts, and implementation strategies, as well as some successful large-scale PACS implementations. Figure 1.3 presents the basic block diagram of PACS, including the concept of the application server. Figure 1.4 charts the organization of this book based on the five parts.

Part I: Imaging Principles is covered in Chapters 2–5. Chapter 2 describes the fundamentals of digital radiological imaging. It is assumed that the reader already has some basic knowledge of conventional radiographic physics. This chapter introduces the basic terminology used in digital radiological imaging with examples. Familiarizing oneself with this terminology will facilitate the reading of later chapters.

Chapters 3 and 4 discuss radiological and light image acquisition systems. Chapter 3 presents conventional projection radiography. Because radiography still accounts for 65–70% of current examinations in a typical radiology department, methods of obtaining digital output from radiographs are crucial for the success of implementing a PACS. For this reason, laser film scanner, digital fluorography, laser-stimulated luminescence phosphor imaging plate (computed radiography), and digital radiography (DR) technologies, including full-field direct digital mammography, are discussed.

Chapter 4 presents sectional and light imaging. The concept of image reconstruction from projections is first introduced, followed by basic knowledge of transmission and emission computed tomography (CT), ultrasound imaging, and
[Figure 1.4 Organization of this book: a block diagram grouping the chapters into Part 1, Imaging Principles (Chapters 2–5); Part 2, PACS Fundamentals (Chapters 6–12); Part 3, PACS Operations (Chapters 13–18); Part 4, PACS-Based Imaging Informatics (Chapters 19–22); and Part 5, Enterprise PACS (Chapter 23). C/N: communication networks (Chapter 9); WAN: wide area network.]
magnetic resonance imaging. Nuclear medicine imaging and light imaging, including microscopic and endoscopic imaging, are also discussed. Note that Chapters 3 and 4 are not a comprehensive treatise on projection and sectional imaging. Instead, they provide certain basic terminology of images encountered in diagnostic radiology and light imaging used in medicine, emphasizing the
digital aspects rather than the physics of these modalities. The reader should grasp the digital procedures of these imaging modalities to facilitate his/her PACS design and implementation plan. This digital imaging basis is essential for a thorough understanding of interfacing these imaging modalities to a PACS. The concepts of patient work flow and data work flow are also introduced.

Chapter 5 covers image compression. After an image has been captured in digital form from an acquisition device, it is transmitted to a storage device for long-term archiving. In general, a digital image file requires a large storage capacity; for example, an average two-view computed radiography study comprises about 20 Mbytes. Therefore, it is necessary to consider how to compress an image file into a compact form before storage or transmission. The concepts of reversible (lossless) and irreversible (lossy) compression are discussed in detail, followed by a description of cosine and wavelet transform compression methods. Techniques are also discussed for handling three-dimensional [(x, y, z) or (x, y, t)] and even four-dimensional (x, y, z, t) data sets, which occur often in sectional imaging, and color images from Doppler ultrasound and light imaging.

Part II: PACS Fundamentals is covered in Chapters 6–12. Chapter 6 introduces PACS fundamentals, including its components and architecture, work flow, and operation models, and the concept of image-based electronic patient records (ePR).

In Chapter 7, we introduce industry standards and protocols. For medical data, HL7 (Health Level 7) is reviewed. For image format and communication protocols, the former American College of Radiology-National Electrical Manufacturers Association (ACR-NEMA) standard is briefly mentioned, followed by a detailed discussion of the Digital Imaging and Communications in Medicine (DICOM) standard that has been adopted by the PACS community. Integrating the Healthcare Enterprise (IHE) protocols, which allow smooth work flow between PACS DICOM components, are presented with examples.

Chapter 8 presents the image acquisition gateway. It covers the systematic method of interfacing imaging acquisition devices with the HL7 and DICOM standards and performing automatic error recovery. The concept of the DICOM broker is introduced, which allows the direct transfer of patient information from the hospital information system (HIS) to the imaging device, eliminating potential typographical errors by the radiographer/technologist at the imaging device console.

Chapter 9 discusses image communications and networking. The latest technologies in digital communications, including asynchronous transfer mode (ATM), gigabit Ethernet, and Internet 2, are described.

Chapter 10 presents the DICOM PACS controller and image archive. The PACS image management design concept and software are first introduced, followed by a presentation of three storage technologies essential for PACS operation: redundant array of inexpensive disks (RAID), digital optical cartridge tape, and DVD-ROM (digital versatile disk). Four recently and successfully implemented archive concepts are presented: off-site backup, application service provider (ASP), data migration, and disaster recovery.

Chapter 11 discusses image display (both soft copy and hard copy output). A historical review of the development of image display is first given, followed by the types of workstation. The detailed design of the DICOM PC-based display
workstation is presented. Liquid crystal displays (LCDs) are gradually replacing cathode ray tubes (CRTs) for the display of diagnostic images; a review of this technology is given.

Chapter 12 describes the integration of PACS with the hospital information system (HIS), the radiology information system (RIS), and other medical databases. This chapter forms the cornerstone for the extension of PACS modules to hospital-integrated PACS and to enterprise-level PACS.

Part III: PACS Operation includes Chapters 13–18. Chapter 13 presents PACS data management, distribution, and retrieval. The concept of Web-based PACS and its data flow are introduced. Web-based PACS can be used to increase the number of image workstations throughout the whole hospital and the enterprise, and to deliver the image-based ePR cost-effectively.

Chapter 14 describes telemedicine and teleradiology. State-of-the-art technologies, including Internet 2 and teleradiology service models, are presented. Some important issues in teleradiology, including cost, quality, and medical-legal issues, are discussed. Current concepts in telemammography and telemicroscopy are also introduced.

Chapter 15 discusses the concept of PACS fault tolerance. Causes of PACS failure are first listed, followed by the meaning of no loss of image data and no interruption of PACS data flow. Current PACS technology for addressing fault tolerance is presented. The concept of continuously available (CA) PACS design is given, along with an example of a CA PACS archive server.

Chapter 16 presents the concept of image/data security. Data security becomes more and more important in telehealth and teleradiology, which use public high-speed wide area networks connecting examination sites with expert centers. This chapter reviews currently available data security technology and discusses one method using the concept of the image digital signature.

Chapter 17 describes PACS implementation and system evaluation. Both the institution's and the manufacturer's points of view on PACS implementation are discussed. Some standard methodologies for PACS system implementation, acceptance, and evaluation are given.

Chapter 18 describes some PACS clinical experience, pitfalls, and bottlenecks. For clinical experience, special attention is given to hospital-wide performance. For pitfalls and bottlenecks, some commonly encountered situations are illustrated and remedies are recommended.

Part IV: PACS-Based Imaging Informatics consists of Chapters 19–22. This part describes various applications that use PACS data. Chapter 19 describes the PACS-based medical imaging informatics infrastructure. Several examples are used to illustrate components and their connectivity in the infrastructure.

Chapter 20 presents the use of PACS as a decision support tool. Major topics are PACS-based computer-aided detection and diagnosis (CAD, CADx) and image matching against large image databases. Three examples are given: bone age assessment, a diagnostic support tool for brain diseases in children, and outcome analysis of lung nodules.

Chapter 21 presents the concept of the PACS application server (see Fig. 1.3, lower center). Its connection to the PACS and the components in the server are described. Two examples, a radiation therapy server and image-assisted surgery, are given.
Chapter 22 discusses new directions in PACS learning and PACS-based training. Three topics are discussed. The first describes the PACS simulator concept and how it can be used to understand PACS concepts, design, implementation, work flow, clinical use, and other PACS-related applications. The second topic is on changing PACS learning with new interactive and media-rich learning environments. The third is on using an interactive breast imaging teaching file as a learning tool.

Part V: Enterprise PACS consists of Chapter 23. This chapter first provides an overview of some major developments in enterprise-level PACS in the U.S., Europe, and Asia. The basic infrastructures of enterprise-level PACS and business models are given, followed by the design of two enterprise-level examples: a PACS with ePR image distribution and a chest TB screening system.
CHAPTER 2
Digital Medical Image Fundamentals
2.1 TERMINOLOGY
This chapter discusses some fundamental concepts and tools in digital medical imaging that are used throughout this text. Medical imaging here means both radiological and light images used for diagnostic purposes. These concepts are derived from conventional radiographic imaging and digital image processing. For an extensive review of these subjects, see Barrett and Swindell (1981), Benedetto et al. (1990), Bertram (1970), Beutel et al. (2000), Bracewell (1965), Brigham (1979), Cochran et al. (1967), Curry et al. (1987), Dainty and Shaw (1984), Gonzalez and Wintz (1982), Hendee and Wells (1997), Huang (1996), Robb (1995), Rosenfeld and Kak (1976), and Rossman (1969).

Digital Image. A digital image is a two-dimensional array of nonnegative integers f(x, y), where 1 ≤ x ≤ M and 1 ≤ y ≤ N, in which M and N are positive integers representing the number of columns and rows, respectively. For any given x and y, the small square in the image represented by the coordinates (x, y) is called a picture element, or pixel, and f(x, y) is its corresponding pixel value. When M = N, f becomes a square image; most sectional images used in medicine are square images. If the image f(x, y, z) is three-dimensional, then the picture element is called a voxel.

Digitization and digital capture. Digitization is a process that quantizes or samples analog signals into a range of digital values. Digitizing a picture means converting the continuous gray tones in the picture into a digital image. About 70% of radiological examinations, including skull, chest, breast, abdomen, bone, and mammogram, are captured on X-ray films or by the computed radiography procedure. The process of projecting a three-dimensional body into a two-dimensional image is called projection radiography. An X-ray film can be converted to digital numbers with a film digitizer. The laser scanning digitizer is the gold standard among digitizers because it best preserves the resolution of the original analog image. A laser film scanner can digitize a standard X-ray film (14 in. × 17 in.) to 2000 × 2500 pixels with 12 bits per pixel. Another method of acquiring digital projection radiographs is computed radiography (CR), a technology that uses a laser-stimulated luminescence phosphor imaging plate as the X-ray detector. The imaging plate is exposed, and a latent image is formed in it. A laser beam is used to scan the exposed imaging plate; the latent image is excited and emits light photons, which are detected and converted to electronic signals. The electronic signals are converted to digital signals to form a digital
X-ray image. Recently developed direct X-ray detectors can capture the X-ray image without going through an additional medium such as the imaging plate. This method of image capture is sometimes called direct digital radiography. Images obtained from the other 30% of radiology examinations, which include computed tomography (CT or XCT), nuclear medicine (NM), positron emission tomography (PET), single-photon emission computed tomography (SPECT), ultrasonography (US), magnetic resonance imaging (MRI), digital fluorography (DF), and digital subtraction angiography (DSA), are already in digital format when they are generated.

Digital radiological image. The aforementioned images are collectively called digitized or digital radiological images: digitized if obtained through a digitizer and digital if generated digitally. The pixel (voxel) value (or gray level value, or gray level) can range from 0 to 255 (8 bit), from 0 to 511 (9 bit), from 0 to 1023 (10 bit), from 0 to 2047 (11 bit), or from 0 to 4095 (12 bit), depending on the digitization procedure or the radiological procedure used. These gray levels represent physical or chemical properties of the anatomical structures in the object. For example, in an image obtained by digitizing an X-ray film, the gray level value of a pixel represents the optical density of the small square area of the film. In X-ray computed tomography (XCT), the pixel value represents the relative linear attenuation coefficient of the tissue; in MRI, it corresponds to the magnetic resonance signal response of the tissue; and in ultrasound imaging, it is the echo signal of the ultrasound beam as it penetrates the tissues.

Image size. The dimensions of an image are the ordered pair (M, N), and the size of the image is the product M × N × k bits, where 2^k equals the gray level range. In sectional images, most of the time M = N. The exact dimensions of a digital image are sometimes difficult to specify because of the design constraints imposed on the detector system for various examination procedures. Therefore, for convenience, we call a 512 × 512 image a 512 image, a 1024 × 1024 image a 1K image, and a 2048 × 2048 image a 2K image, even though the image itself may not be exactly 512, 1024, or 2048 square. Also, 12 bits is an awkward word length for computer memory and storage devices to handle; for this reason, 16 bits (2 bytes) are normally allocated to store a 12-bit pixel. Table 2.1 lists the sizes of some conventional medical images.

Histogram. The histogram of an image is a plot of the pixel value (abscissa) against the frequency of occurrence of that pixel value in the entire image (ordinate). For an image with 256 possible gray levels, the abscissa of the histogram ranges from 0 to 255. The total pixel count under the histogram is equal to M × N. The histogram represents the pixel value distribution, an important characteristic of the image (see Fig. 5.4B and D for examples).

Image Display. A digital image can either be printed on film or paper as a hard copy or be displayed on a cathode ray tube (CRT) video monitor or a liquid crystal display (LCD) as a soft copy. The latter is volatile, because the image disappears once the display device is turned off. To display a soft-copy digital radiological image, the pixel values are first converted to analog signals compatible with conventional video signals used in the television industry. This procedure is called digital-to-analog (D/A) conversion.
Soft-copy display can handle image sizes from 256, 512, 1024, to 2048. Figure 2.1 shows the relative comparison between these four image sizes.
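As a concrete illustration of the terminology above (pixel array, dimensions, image size in bits, and histogram), the following is a minimal sketch assuming NumPy; the array contents and variable names are illustrative, not taken from the text.

```python
# A minimal sketch, assuming NumPy, of the terminology above: a digital image as a
# 2-D array of nonnegative integers, its dimensions (M, N), its size in bits, and
# its histogram. All names and values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
k = 8                                   # bits per pixel -> gray levels 0 .. 2^k - 1
M, N = 512, 512                         # columns, rows
f = rng.integers(0, 2**k, size=(N, M))  # f(x, y): the pixel values

size_bits = M * N * k                   # image size = M x N x k bits
print(size_bits // 8, "bytes")          # 262144 bytes = 0.25 MB for 512 x 512 x 8

# Histogram: frequency of occurrence of each pixel value; the counts sum to M x N.
hist = np.bincount(f.ravel(), minlength=2**k)
assert hist.sum() == M * N
print(hist[:5])                         # counts for gray levels 0 through 4
```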
TABLE 2.1 Sizes of Some Common Medical Images

Modality                                   One Image (bits)       # of Images/Exam    One Examination
Nuclear medicine (NM)                      128 × 128 × 12         30–60               1–2 MB
Magnetic resonance imaging (MRI)           256 × 256 × 12         60–3000             8 MB up
Ultrasound (US)*                           512 × 512 × 8 (24)     20–240              5–60 MB
Digital subtraction angiography (DSA)      512 × 512 × 8          15–40               4–10 MB
Digital microscopy                         512 × 512 × 8          1                   0.25 MB
Digital color microscopy                   512 × 512 × 24         1                   0.75 MB
Color light images                         512 × 512 × 24         4–20                3–15 MB
Computed tomography (CT)                   512 × 512 × 12         40–3000             20 MB up
Computed/digital radiography (CR/DR)       2048 × 2048 × 12       2                   16 MB
Digitized X-rays                           2048 × 2048 × 12       2                   16 MB
Digital mammography                        4000 × 5000 × 12       4                   160 MB

* Doppler US with 24-bit color images.
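As a rough check on the entries in Table 2.1, the sketch below (plain Python; the function name is illustrative) computes uncompressed examination sizes from the image dimensions, the allocated bytes per pixel, and the number of images, using the convention stated above that pixels deeper than 8 bits are stored in 16 bits. The table rounds its figures and mixes MB conventions, so the agreement is only approximate.

```python
# A minimal sketch (plain Python; names are illustrative) of the arithmetic behind
# Table 2.1: uncompressed examination size = columns x rows x bytes/pixel x images.
# Per the text, pixels deeper than 8 bits are assumed to be stored in 16 bits.

def exam_size_mb(cols, rows, bits, n_images, mb=2**20):
    bytes_per_pixel = 1 if bits <= 8 else 2          # 8-bit vs 12-bit (stored as 16)
    return cols * rows * bytes_per_pixel * n_images / mb

print(exam_size_mb(512, 512, 12, 40))     # CT, 40 images  -> 20.0 MB
print(exam_size_mb(2048, 2048, 12, 2))    # CR/DR, 2 views -> 16.0 MB
print(exam_size_mb(4000, 5000, 12, 4))    # digital mammography, 4 views -> ~152.6 MB
                                          # (listed as 160 MB, using 10^6-byte MB)
```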
2.2 DENSITY RESOLUTION, SPATIAL RESOLUTION, AND SIGNAL-TO-NOISE RATIO

The quality of a digital image is characterized by three parameters: spatial resolution, density resolution, and signal-to-noise ratio. The spatial and density resolutions are related to the number of pixels and the range of pixel values used to represent the object. In a square image N × N × k, N is related to the spatial resolution and k to the density resolution. A high signal-to-noise ratio means that the image has a strong signal but little noise and is very pleasing to the eye, hence a better-quality image. Figure 2.2 demonstrates the concept of spatial and density resolutions of a digital image using a CT body image (512 × 512 × 12 bits) as an example. Figure 2.2A shows the original and three images with a fixed spatial resolution (512 × 512) but three different density resolutions (8, 6, and 4 bits/pixel). Figure 2.2B shows the original and three images with a fixed density resolution (12 bits/pixel) but three different spatial resolutions (256 × 256, 128 × 128, and 32 × 32 pixels). Clearly, the quality of the CT image decreases from the original in each case. For an example of an image with noise introduced, see Figure 2.9. Spatial resolution, density resolution, and signal-to-noise ratio should be adjusted properly when the image is acquired. A high-resolution image requires a larger memory capacity for storage and a longer time for image transmission and processing.
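The sketch below, assuming NumPy and a synthetic stand-in for the CT slice, mimics the two degradations illustrated in Figure 2.2: density resolution is reduced by requantizing to fewer bits per pixel, and spatial resolution is reduced by subsampling to a smaller matrix. Function names are illustrative.

```python
# A minimal sketch, assuming NumPy, of the two degradations shown in Fig. 2.2:
# (a) lower density resolution by requantizing a 12-bit image to fewer bits/pixel,
# (b) lower spatial resolution by subsampling to a smaller matrix.
import numpy as np

def reduce_density(img12, bits):
    """Keep only the top `bits` of a 12-bit image (e.g., 8, 6, or 4 bits/pixel)."""
    return img12 >> (12 - bits)

def reduce_spatial(img, factor):
    """Subsample by an integer factor (e.g., 512 -> 256, 128, or 32)."""
    return img[::factor, ::factor]

rng = np.random.default_rng(1)
ct = rng.integers(0, 4096, size=(512, 512))   # stand-in for a 512 x 512 x 12-bit slice

for b in (8, 6, 4):
    print(b, "bits/pixel, max gray level:", reduce_density(ct, b).max())
for n in (256, 128, 32):
    print("matrix:", reduce_spatial(ct, 512 // n).shape)
```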
2.3 RADIOLOGICAL TEST OBJECTS AND PATTERNS
Test objects or patterns (sometimes called phantoms) used to measure the density and spatial resolutions of radiological imaging equipment can be either physical phantoms or digitally generated patterns.
Figure 2.1 Terminology used in a radiological image: types of image, image size, and pixel.
A physical phantom is used to measure the performance of a digital radiological device. It is usually constructed with different materials shaped in various geometric configurations embedded in a uniform background material (e.g., water or plastic). The most commonly used geometric configurations are circular cylinder, sphere, line pairs (alternating pattern of narrow rectangular bars with background of the same width), step wedge, and star shape. The materials used to construct these configurations are lead, various plastics, water, air, and iodine solutions of various concentrations. If the radiodensity of the background material differs greatly from that of the test object, it is called a high-contrast phantom; otherwise, it is a low-contrast phantom.
Figure 2.2 Illustration of spatial and density resolutions using an abdominal CT image (512 × 512 × 12 bits) as an example. (A) The original and three images with a fixed spatial resolution (512 × 512) but three variable density resolutions (8, 6, and 4 bits/pixel). (B) The original and three images with a fixed density resolution (12 bits/pixel) but three variable spatial resolutions (256 × 256, 128 × 128, and 32 × 32 pixels). Clearly, the quality of the CT image is decreasing starting from the original.
The circular cylinder, sphere, and step-wedge configurations are commonly used to measure spatial and density resolutions. Thus the statement that an X-ray device can detect a 1-mm cylindrical object with a 0.5% density difference from the background means that this particular radiological imaging device can produce an image of a cylindrical object made from material whose X-ray attenuation differs from the background by 0.5%; that is, the difference between the average pixel value of the object and that of the background is measurable or detectable. A digitally generated pattern, on the other hand, is used to measure the performance of the display component of a digital radiological device. In this case, the various geometric configurations are generated digitally. The gray level values of
these configurations are input to the display component according to certain specifications. A digital phantom is an ideal digital image. Any distortion of these images observed on the display is a measure of the imperfections of the display component. The most commonly used digital phantom is the Society of Motion Picture and Television Engineers (SMPTE) phantom/pattern. Figure 2.3 shows some commonly used physical phantoms, their corresponding X-ray images, and the SMPTE digitally generated patterns.
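Following the description of digitally generated patterns, here is a minimal sketch, assuming NumPy, of how a high-contrast vertical line-pair pattern could be generated; the two gray levels (0 and 140) come from the Figure 2.3 caption, but the layout and function names are illustrative and do not reproduce the book's pattern exactly.

```python
# A minimal sketch, assuming NumPy, of a digitally generated high-contrast line-pair
# pattern: alternating vertical bars of two gray levels (0 and 140, as in Fig. 2.3 C-1),
# with the bar width in pixels setting the spatial frequency. Names are illustrative.
import numpy as np

def line_pair_pattern(size=512, bar_width=16, low=0, high=140):
    """One line pair = one `low` bar plus one `high` bar, each `bar_width` pixels wide."""
    x = np.arange(size)
    bars = (x // bar_width) % 2            # 0, 1, 0, 1, ... across the columns
    row = np.where(bars == 1, high, low)
    return np.tile(row, (size, 1)).astype(np.uint8)

pattern = line_pair_pattern(bar_width=16)
print(pattern.shape, pattern.min(), pattern.max())   # (512, 512) 0 140
# Spatial frequency of this pattern: one line pair per (2 x 16) pixels.
```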
2.4 IMAGE IN THE SPATIAL DOMAIN AND THE FREQUENCY DOMAIN

2.4.1 Frequency Components of an Image
If pixel values f(x, y) of a digital radiological image represent anatomical structures in space, one can say that the image is defined in the spatial domain. The image
Figure 2.3 Some commonly used physical test objects and digitally generated test patterns. (A) Physical: A-1, star-shaped line pair pattern embedded in water contained in a circular cylinder; A-2, high-contrast line pair; A-3, low-contrast line pair; A-4, step wedge. (B) Corresponding X-ray images. Moiré patterns can be seen in B-1. (C) Digitally generated 512 images: C-1, high-contrast line pair [gray level = 0, 140; width (in pixels) of each line pair = 2, 4, 6, 8, 10, 12, 14, 16, 32, 64, and 128]; C-2, low-contrast line pair [gray level = 0, 40; width (in pixels) of each line pair = 2, 4, 8, 16, 20, and 28]. The line pair (LP) indicated in the figure shows the width of 16 pixels. (D) Soft-copy display of the 1024 × 1024 Society of Motion Picture and Television Engineers (SMPTE) phantom using the JPEG format (see Chapter 5) depicts both contrast blocks [0 (black)–100 (white)%], high-contrast line pairs (four corners and midright), and low-contrast line pairs (midleft). D-1, display adjusted to show as many contrast blocks as possible, resulting in the low-contrast line pairs being barely visible; D-2, display adjusted to show the low-contrast line pairs, resulting in indistinguishable contrast blocks (0–40% and 60–100%). The adjustment of soft-copy display is discussed in Chapter 11.
f(x, y) can also be represented by its spatial frequency components (u, v) through a mathematical transform (see Section 2.4.2). In this case, we use the symbol F(u, v) to represent the transform of f(x, y) and say that F(u, v) is the frequency representation of f(x, y) and is defined in the frequency domain. F(u, v) is again a digital image, but it bears no visual resemblance to f(x, y) (see Fig. 2.4). With proper training, however, one can use information appearing in the frequency domain, and not easily visible in the spatial domain, to detect some inherent characteristics of each type of radiological image. If the image has many edges, it contains many high-frequency components. On the other hand, if the image contains only uniform materials, like water or plastic, it has mostly low-frequency components.
The concept of using frequency components to represent anatomical structures might seem strange at first, and one might wonder why we even have to bother with this representation. To understand this better, consider that a radiological image is composed of many two-dimensional sinusoidal waves, each with an individual amplitude and frequency. For example, a digitally generated "uniform image" has no frequency components, only a constant (dc) term. An X-ray image of the hand is composed of many high-frequency components (edges of bones) and few low-frequency components, whereas an abdominal X-ray image of the urinary bladder filled with contrast material is composed of many low-frequency components (the contrast medium inside the urinary bladder) but very few high-frequency components. Therefore, the frequency representation of a radiological image gives a different perspective on the characteristics of the image under consideration. On the basis of this frequency information in the image, we can selectively change the frequency components to enhance the image. To obtain a smoother-appearing image we can increase the amplitude of the low-frequency components, whereas to enhance the edges of bones in the hand X-ray image we can magnify the amplitude of the high-frequency components.
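As a sketch of the frequency-domain manipulation just described (assuming NumPy; the synthetic image, cutoff, and gain are arbitrary illustrative choices, not values from the text), an image can be transformed with the FFT, its high-frequency amplitudes magnified, and the result transformed back to sharpen edges.

```python
# A minimal sketch, assuming NumPy, of selectively amplifying high-frequency components
# to enhance edges: FFT the image, boost amplitudes beyond a cutoff radius, inverse FFT.
import numpy as np

def boost_high_frequencies(img, cutoff=0.1, gain=2.0):
    F = np.fft.fftshift(np.fft.fft2(img))                 # center the low frequencies
    n_y, n_x = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(n_y))             # normalized frequencies (rows)
    fx = np.fft.fftshift(np.fft.fftfreq(n_x))             # normalized frequencies (cols)
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    H = np.where(radius > cutoff, gain, 1.0)              # amplify beyond the cutoff
    return np.fft.ifft2(np.fft.ifftshift(F * H)).real

img = np.zeros((256, 256))
img[96:160, 96:160] = 1000.0                              # a square with sharp edges
enhanced = boost_high_frequencies(img)
print(enhanced.max() > img.max())    # typically True: overshoot at the amplified edges
```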
Manipulating an image in the frequency domain also yields many other advantages. For example, we can use the frequency representation of an image to measure its quality. This requires the concepts of point spread function (PSF), line spread function (LSF), and modulation transfer function (MTF), discussed in Section 2.5. In addition, radiological images obtained from image reconstruction principles are based on frequency component representation. Utilization of frequency representation also gives an easier explanation of how an MRI image is formed. (This is discussed in Chapter 4.)

2.4.2 The Fourier Transform Pair
As discussed above, a radiological image defined in the spatial domain (x, y) can be transformed to the frequency domain (u, v). The Fourier transform is one method for doing this. The Fourier transform of a two-dimensional image f(x, y), denoted by F{f(x, y)}, is given by:
\mathcal{F}\{f(x, y)\} = F(u, v) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, \exp[-i 2\pi(ux + vy)]\, dx\, dy = \mathrm{Re}(u, v) + i\, \mathrm{Im}(u, v)     (2.1)

where i = \sqrt{-1} and Re(u, v) and Im(u, v) are the real and imaginary components of F(u, v), respectively. The magnitude function

|F(u, v)| = [\mathrm{Re}^2(u, v) + \mathrm{Im}^2(u, v)]^{1/2}     (2.2)

is called the Fourier spectrum and |F(u, v)|^2 the energy spectrum of f(x, y). The function

\Phi(u, v) = \tan^{-1}\!\left[\frac{\mathrm{Im}(u, v)}{\mathrm{Re}(u, v)}\right]     (2.3)
Figure 2.4 (A) A digital chest X-ray image represented in the spatial domain (x, y). (B) The same digital chest X-ray image represented in the frequency domain (u, v). The low-frequency components are in the center, and the high-frequency components are on the periphery.
is called the phase angle. The Fourier spectrum, the energy spectrum, and the phase angle are three parameters derived from the Fourier transform that can be used to represent the properties of an image in the frequency domain. Figure 2.4B shows the Fourier spectrum of the chest image in Figure 2.4A. Given F(u, v), f(x, y) can be obtained by using the inverse Fourier transform
\mathcal{F}^{-1}[F(u, v)] = f(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(u, v)\, \exp[+i 2\pi(ux + vy)]\, du\, dv     (2.4)
The two functions f(x, y) and F(u, v) are called the Fourier transform pair. The Fourier and inverse Fourier transforms enable the transformation of a two-dimensional image from the spatial domain to the frequency domain, and vice versa. In digital imaging, we use the discrete Fourier transform for computation instead of Eq. (2.1), which is continuous. The Fourier transform can also be used in three-dimensional (3-D) imaging; by adding the z component to the equation, it can transform a 3-D image from the spatial to the frequency domain, and vice versa.

2.4.3 The Discrete Fourier Transform
The Fourier transform is a mathematical concept; to apply it to a digital image, the formulas must be converted to a discrete form. The discrete Fourier transform is an approximation of the Fourier transform. For a square digital radiological image, the integrals in the Fourier transform pair can be approximated by summations as follows:

F(u, v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x, y)\, \exp[-i 2\pi(ux + vy)/N]     (2.5)

for u, v = 0, 1, 2, . . . , N - 1, and

f(x, y) = \frac{1}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} F(u, v)\, \exp[+i 2\pi(ux + vy)/N]     (2.6)

for x, y = 0, 1, 2, . . . , N - 1. The functions f(x, y) and F(u, v) shown in Eqs. (2.5) and (2.6) are called the discrete Fourier transform pair. It is apparent from these two equations that once the digital radiological image f(x, y) is known, its discrete Fourier transform can be computed with simple multiplications and additions, and vice versa.
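A minimal sketch of the discrete Fourier transform pair in Eqs. (2.5) and (2.6), assuming NumPy: a direct summation for a small N × N array, checked against the fast Fourier transform under the same 1/N normalization. The array contents and function names are illustrative.

```python
# Direct implementation of Eqs. (2.5)-(2.6) for a tiny image, verified against np.fft.
import numpy as np

def dft2_direct(f):
    """F(u,v) = (1/N) sum_x sum_y f(x,y) exp[-i*2*pi*(ux+vy)/N], Eq. (2.5)."""
    N = f.shape[0]
    x = np.arange(N)
    F = np.zeros((N, N), dtype=complex)
    for u in range(N):
        for v in range(N):
            phase = np.exp(-1j * 2 * np.pi * (u * x[:, None] + v * x[None, :]) / N)
            F[u, v] = (f * phase).sum() / N
    return F

def idft2_direct(F):
    """f(x,y) = (1/N) sum_u sum_v F(u,v) exp[+i*2*pi*(ux+vy)/N], Eq. (2.6)."""
    N = F.shape[0]
    u = np.arange(N)
    f = np.zeros((N, N), dtype=complex)
    for x in range(N):
        for y in range(N):
            phase = np.exp(+1j * 2 * np.pi * (x * u[:, None] + y * u[None, :]) / N)
            f[x, y] = (F * phase).sum() / N
    return f

rng = np.random.default_rng(0)
f = rng.integers(0, 4096, size=(8, 8)).astype(float)   # a tiny 12-bit "image"

F = dft2_direct(f)
print(np.allclose(F, np.fft.fft2(f) / 8))          # matches the FFT with the 1/N convention
print(np.allclose(idft2_direct(F).real, f))        # the pair recovers the original image
```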
2.5 MEASUREMENT OF IMAGE QUALITY

Image quality is a measure of the performance of an imaging system used for a specific radiological examination. Although the process of making a diagnosis from a radiological image is often subjective, a higher-quality image does provide better diagnostic information. We will describe some physical parameters for measuring image quality based on the concepts of density and spatial resolutions and the signal-to-noise level introduced above. In general, the quality of an image can be measured by its sharpness, resolving power, and noise level. Image sharpness and resolving power are related and are inherent in the design of the instrumentation, whereas image noise arises from
photon fluctuations from the energy source and electronic noise accumulated through the imaging chain. Even if there were no noise in the imaging system (a hypothetical case), the inherent optical properties of the imaging system might well prevent the image of a high-contrast line pair phantom from giving sharp edges between black and white areas. By the same token, even if a perfect imaging system could be designed, the nature of random photon fluctuation would introduce noise into the image. Sections 2.5.1 and 2.5.2 discuss the measurement of sharpness and noise based on the established theory of measuring image quality in diagnostic radiological devices. Certain modifications are included to adjust for digital imaging terminology.

2.5.1 Measurement of Sharpness
2.5.1.1 Point Spread Function (PSF) Consider the following experiment. A small circular hole is drilled in the center of a lead plate (phantom), which is placed between an X-ray tube (energy source) and an image receptor. An image of this phantom is obtained, which can be recorded on a film or by digital means and be displayed on a TV monitor (see Fig. 2.5A). The gray level distribution of this image (corresponding to the optical density) is comparatively high in the center of the image, where the hole is located, and decreases radially outward, reaching zero at a certain distance away from the center. Ideally, if the circular hole is small enough and the imaging system is a perfect system, we would expect to see a perfectly circular hole in the center of the image with a uniform gray level within the hole and zero elsewhere. The size of the circle in the image would be equal to the size of the circular hole in the plate if no magnification were introduced during the experiment. However, in practice, such an ideal image never exists. Instead, a distribution of the gray level, as described above, will be observed. This experiment demonstrates that the image of a circular hole in the phantom never has a well-defined sharp edge but has, instead, a certain unsharpness. If the circular hole is small enough, the shape of this gray level distribution is called the point spread function (PSF) of the imaging system (consisting of the X-ray source, the image receptor, and the display). The point spread function of the imaging system can be used as a measure of the unsharpness of an image produced by this imaging system. In practice, however, the point spread function of an imaging system is very difficult to measure. For the experiment described in Figure 2.5A, the size of the circular hole must be chosen very carefully. If the circular hole is too large, the image formed in the detector and seen in the display would be dominated by the circular image and one would not be able to measure the gray level distribution any more. On the other hand, if the circular hole is too small, the image formed becomes the image of the X-ray focal spot, which does not represent the complete imaging system. In either case, the image cannot be used to measure the PSF of the imaging system. Theoretically, the point spread function is a useful concept in estimating the sharpness of an image. Experimentally, the point spread function is difficult to measure because of the constraints just noted. To circumvent this difficulty in determining the point spread function of an imaging system, the concept of the line spread function is introduced.
Figure 2.5 Experimental setup for defining the point spread function (PSF) (A), the line spread function (LSF) (B), and the edge spread function (ESF) (C).
2.5.1.2 Line Spread Function (LSF) Replace the circular hole with a long, narrow slit in the lead plate (slit phantom) and repeat the experiment. The image formed on the image receptor and seen on the display becomes a line of certain width with a nonuniform gray level distribution. The gray level value is high in the center of the line, decreasing toward the sides until it assumes the gray level of the background. The shape of this gray level distribution is called the line spread function (LSF) of the imaging system. Theoretically, a line spread function can be considered as a line of continuous holes placed very closely together. Experimentally, the line spread function is much easier to measure than the PSF. Figure 2.5B illustrates the concept of the line spread function of the system.

2.5.1.3 Edge Spread Function (ESF) If the slit phantom is replaced by a single step wedge (edge phantom) such that half of the imaging area is lead and the other is air, the gray level distribution of the image is the edge spread function (ESF) of the system. For an ideal imaging system, any trace perpendicular to the edge of this image would yield a step function
\mathrm{ESF}(x) = \begin{cases} 0, & -B \le x < x_0 \\ A, & x_0 \le x \le B \end{cases}     (2.7)
where x is the direction perpendicular to the edge, x_0 is the location of the edge, -B and B are the left and right boundaries of the image, and A is a constant. Mathematically, the line spread function is the first derivative of the edge spread function, given by the equation

\mathrm{LSF}(x) = \frac{d[\mathrm{ESF}(x)]}{dx}     (2.8)
It should be observed that the edge spread function is easy to obtain experimentally, because only an edge phantom is required to set up the experiment. Once the image has been obtained with the image receptor, a gray level trace perpendicular to the edge yields the edge spread function of the system. To compute the line spread function of the system, it is only necessary to take the first derivative of the edge spread function. Figure 2.5C depicts the experimental setup to obtain the edge spread function.

2.5.1.4 Modulation Transfer Function (MTF) Now substitute the edge phantom with a high-contrast line pair phantom with different spatial frequencies and repeat the preceding experiment. In the image receptor, an image of the line pair phantom will form. From this image, the output amplitude (or gray level) of each spatial frequency can be measured. The modulation transfer function (MTF) of the imaging system, along a line perpendicular to the line pairs, is defined as the ratio between the output amplitude and the input amplitude expressed as a function of spatial frequency:

\mathrm{MTF}(u) = \left(\frac{\text{output amplitude}}{\text{input amplitude}}\right)_{u}     (2.9)
where u is the spatial frequency measured in the direction perpendicular to the line pairs. Mathematically, the MTF is the magnitude [see Eq. (2.2)] of the Fourier transform of the line spread function of the system, given by the following equation:

\mathrm{MTF}(u) = \left|\mathcal{F}[\mathrm{LSF}(x)]\right| = \left|\int_{-\infty}^{\infty} \mathrm{LSF}(x)\, \exp(-i 2\pi x u)\, dx\right|     (2.10)
It is seen from Eq. (2.9) that the MTF measures the modulation of the amplitude (gray level) of the line pair pattern in the image. The amount of modulation determines the quality of the imaging system. The MTF of an imaging system, once known, can be used to predict the quality of the image produced by the imaging system. For a given frequency u, if MTF(v) = 0 for all v ≥ u, the imaging system under consideration cannot resolve spatial frequency equal to or higher than u. The MTF so defined is a one-dimensional function; it measures the spatial resolution of the imaging system only in a certain direction. Extreme care must be exercised to specify the direction of measurement when the MTF is used to describe the spatial resolution of the system.
Note that the MTF of a system is multiplicative; that is, if an image is obtained by an imaging system consisting of n components, each having its own MTF_i, then the total MTF of the imaging system is expressed by the following equation:

\mathrm{MTF}(u) = \prod_{i=1}^{n} \mathrm{MTF}_i(u)     (2.11)
where \prod is the multiplication (product) symbol. It is obvious that a low MTF_i(u) value in any given component i will yield an overall low MTF(u) for the complete system.

2.5.1.5 Relationship Between ESF, LSF, and MTF The MTF described in Section 2.5.1.4 is sometimes called the high-contrast response of the imaging system, because the line pair phantom used is a high-contrast phantom (see Fig. 2.3D at the four corners and the midright line pairs). By "high contrast," we mean that the object (lead) and the background (air) yield high radiographic contrast. On the other hand, the MTF obtained with a low-contrast phantom (Fig. 2.3D, midleft) constitutes the low-contrast response of the system. The MTF value obtained with a high-contrast phantom is always larger than that obtained with a lower-contrast phantom for a given spatial frequency.

With this background, we are ready to describe the relationship between the edge spread function, the line spread function, and the modulation transfer function. Let us set up an experiment to obtain the MTF of a digital imaging system composed of a light table, a video camera, a digital chain that converts the video signals into digital signals (A/D) and forms the digital image, and a D/A that converts the digital image to video so that it can be displayed on a TV monitor. The experimental steps are as follows (a code sketch of steps 2–4 follows this list):
1. Cover half the light table with a sharp-edged, black-painted metal sheet (edge phantom).
2. Obtain a digital image of this edge phantom with the imaging system, as shown in Figure 2.6A. The ESF(x) then has a gray level distribution (shown by the arrows in Fig. 2.6B), which is obtained by taking the average value of several lines (a–a) perpendicular to the edge. Observe the noise characteristic of the ESF(x) in the figure.
3. The LSF of the system can be obtained by taking the first derivative of the ESF numerically [Eq. (2.8)], as indicated by the arrows shown in Figure 2.6B. The resulting LSF is depicted in Figure 2.6C.
4. To obtain the MTF of the system in the direction perpendicular to the edge, a one-dimensional (1-D) Fourier transform [Eq. (2.10)] is applied to the LSF shown in Figure 2.6C. The magnitude of this 1-D Fourier transform is then the MTF of the imaging system in the direction perpendicular to the edge. The result is shown in Figure 2.6D.
This completes the experiment of obtaining the MTF from the ESF and the LSF. In practice, we can take the frequency at which the MTF falls to 10% as the minimum resolving power of the imaging system. In this case, the resolving power of this imaging system is about 1.0 cycle/mm; that is, it can resolve one line pair per millimeter.
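The sketch below, assuming NumPy, follows steps 2–4 above on a simulated noisy edge image; the edge blur, noise level, and array names are arbitrary stand-ins for real measured data. The ESF is the average of traces across the edge, the LSF is its numerical derivative [Eq. (2.8)], and the MTF is the normalized magnitude of the LSF's 1-D Fourier transform [Eq. (2.10)], from which a 10% limiting frequency can be read off.

```python
# A minimal sketch, assuming NumPy, of obtaining the MTF from a simulated edge image.
import numpy as np

rng = np.random.default_rng(3)
n, blur = 256, 3.0                                     # pixels per trace, edge blur (pixels)
x = np.arange(n)
ideal_edge = 0.5 * (1 + np.tanh((x - n / 2) / blur))   # a smoothly blurred edge profile
edge_image = ideal_edge + rng.normal(0, 0.02, size=(64, n))   # 64 noisy traces

esf = edge_image.mean(axis=0)            # step 2: average traces perpendicular to the edge
lsf = np.gradient(esf)                   # step 3: numerical first derivative, Eq. (2.8)
mtf = np.abs(np.fft.rfft(lsf))           # step 4: magnitude of the 1-D Fourier transform
mtf /= mtf[0]                            # normalize so that MTF(0) = 1
freqs = np.fft.rfftfreq(n, d=1.0)        # spatial frequency in cycles per pixel

limit = freqs[np.argmax(mtf < 0.1)]      # first frequency where the MTF drops below 10%
print(f"10% MTF at about {limit:.3f} cycles/pixel")
```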
Figure 2.6 Relationship between the ESF, the LSF, and the MTF. (A) Experimental setup. The imaging chain consists of a light table, a video camera, a digital chain that converts the video signals into digital signals (A/D) and forms the digital image, and a D/A that converts the digital image to video so that it can be displayed on a TV monitor. The object under consideration is an edge phantom. (B) The ESF (arrows) by averaging several lines parallel to a–a. (C) The LSF. (D) The MTF.
2.5.1.6 Relationship Between the Input Image, the MTF, and the Output Image Let A = 1, B = \pi, and x_0 = 0, and extend the ESF described in Eq. (2.7) to a periodic function with period 2\pi, shown in Figure 2.7A. This periodic function can be expressed as a Fourier series representation, or more explicitly, a sum of infinitely many sinusoidal functions, as follows:

\mathrm{ESF}(x) = \frac{1}{2} + \frac{2}{\pi}\left[\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \frac{\sin 7x}{7} + \cdots\right]     (2.12)
The first term, 1/2, in this Fourier series is the DC term. Subsequent terms are sinusoidal, and each is characterized by an amplitude and a frequency. If the partial sums of Eq. (2.12) are plotted, it is apparent that the partial sum approximates the periodic step function more closely as the number of terms used to form the partial sum increases (Fig. 2.7B). We can also plot the amplitude spectrum, or spatial frequency spectrum, shown in Figure 2.7C, which is a plot of the amplitude against the spatial frequency [Eq. (2.12)]. From this plot we can observe that the periodic step function ESF(x) can be decomposed into infinitely many components, each of which has an amplitude and a frequency. To reproduce this periodic function ESF(x) exactly, it is necessary to include all the components. If some of the components are missing or have diminished amplitude values, the result is a diffused or unsharp edge. A major concern in the design of an imaging system is how to avoid missing frequency components or diminished amplitudes in the frequency components of the output image.
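The sketch below, assuming NumPy, evaluates the partial sums of Eq. (2.12) numerically and shows that adding terms brings the sum closer to the periodic step edge, mirroring Figure 2.7B; the sample grid and names are illustrative.

```python
# A minimal sketch, assuming NumPy, of the partial sums of Eq. (2.12):
# ESF(x) = 1/2 + (2/pi) [sin x + sin 3x/3 + sin 5x/5 + ...].
# More terms give a closer approximation to the periodic step edge.
import numpy as np

x = np.linspace(-np.pi, np.pi, 2001)
step = (x >= 0).astype(float)                 # the ideal edge: 0 for x < 0, 1 for x >= 0

def partial_sum(x, n_terms):
    s = np.full_like(x, 0.5)
    for k in range(n_terms):
        m = 2 * k + 1                         # odd harmonics 1, 3, 5, ...
        s += (2 / np.pi) * np.sin(m * x) / m
    return s

for n_terms in (1, 3, 10, 100):
    err = np.mean(np.abs(partial_sum(x, n_terms) - step))
    print(n_terms, "terms: mean |error| =", round(err, 4))
```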
Figure 2.7 Sinusoidal function representation of an edge step function. (A) The edge step function. (B) Partial sums of sinusoidal functions; numerals correspond to the number of terms summed, described in Eq. (2.12). (C) Amplitude spectrum of the step function.
The MTF of a system can be used to predict the missing or modulated amplitudes. Consider the lateral view image of a plastic circular cylinder taken with a perfect X-ray imaging system. Figure 2.8A shows an optical density trace perpendicular to the axis of the circular cylinder in the perfect image. How would this trace
Figure 2.8 Relationship between the input, the MTF, and the output. (A) A line profile from a lateral view of a circular cylinder from a perfect imaging system. (B) The spatial frequency spectrum of A. (C) MTF of the imaging system described in Fig. 2.6D. (D) The output frequency response (B × C). (E) The predicted line trace from the imperfect imaging system obtained by an inverse Fourier transform of D. (F) Superposition of A and E showing the rounding of the edges (arrows) due to the imperfect imaging system.
look if the image were digitized by the video camera system described in Section 2.5.1.5? To answer this question, we first take the Fourier transform of the trace from the perfect image, which gives its spatial frequency spectrum (Fig. 2.8B). If this frequency spectrum is multiplied by the MTF of the imaging system shown in Figure 2.6D, frequency by frequency, the result is the output frequency response of the trace (Fig. 2.8D, after normalization) obtained with this digital imaging system. It is seen from Figure 2.8, B and D, that there is no phase shift; that is, all the zero crossings are identical between the input and the output spectra. The output frequency spectrum has been modulated because of the imperfection of the video camera digital imaging system. Figure 2.8E shows the expected trace. Figure 2.8F is the superposition of the perfect and the expected traces. It is seen that both corners of the expected trace (arrows) lose their sharpness. This completes the description of how the concepts of point spread function, line spread function, edge spread function, and modulation transfer function of an imaging system can be used to measure the unsharpness of an output image. The concept of using the MTF to predict the unsharpness due to an imperfect system has also been introduced.
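A sketch of the prediction just described, assuming NumPy: the spectrum of a perfect trace is multiplied by the MTF, frequency by frequency, and inverse transformed to obtain the expected trace with rounded corners. The rectangular trace and the Gaussian-shaped MTF below are arbitrary stand-ins for the measured curves of Figure 2.8, not data from the text.

```python
# A minimal sketch, assuming NumPy, of predicting the output trace of an imperfect
# system: output spectrum = input spectrum x MTF, then inverse transform.
import numpy as np

n = 512
x = np.arange(n)
perfect = ((x > 180) & (x < 330)).astype(float)   # perfect trace across a cylinder

freqs = np.fft.rfftfreq(n)                        # cycles per pixel
mtf = np.exp(-(freqs / 0.05) ** 2)                # an assumed, illustrative MTF curve

spectrum = np.fft.rfft(perfect)                   # input frequency spectrum
expected = np.fft.irfft(spectrum * mtf, n=n)      # predicted (unsharp) output trace

# The predicted trace rises over several pixels instead of one: rounded corners.
transition = int(np.sum((expected > 0.1) & (expected < 0.9)) / 2)
print("predicted edge transition is about", transition, "pixels wide")
```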
Figure 2.9 (A) The abdominal CT image (512 × 512 × 12) shown in Fig. 2.2. Random noise is added to (B) 1000 pixels, (C) 10,000 pixels, and (D) 100,000 pixels. The coordinates of each randomly selected pixel within the body region are obtained from a random generator. The new pixel value, selected from a range between 0.7 and 1.3 times the original value, is determined by a second random generator.
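The procedure described in the Figure 2.9 caption can be sketched as follows, assuming NumPy; the uniform synthetic image stands in for the real CT slice, and the restriction to the body region is omitted here for brevity.

```python
# A minimal sketch, assuming NumPy, of the noise addition described in the Fig. 2.9
# caption: one random generator picks pixel coordinates, a second scales each selected
# pixel by a factor drawn between 0.7 and 1.3.
import numpy as np

rng_coords = np.random.default_rng(10)    # selects which pixels to corrupt
rng_values = np.random.default_rng(20)    # selects the new pixel values

img = np.full((512, 512), 1000.0)         # stand-in for a 512 x 512 x 12-bit CT slice

def add_random_noise(image, n_pixels):
    noisy = image.copy()
    ys = rng_coords.integers(0, image.shape[0], size=n_pixels)
    xs = rng_coords.integers(0, image.shape[1], size=n_pixels)
    factors = rng_values.uniform(0.7, 1.3, size=n_pixels)
    noisy[ys, xs] = image[ys, xs] * factors
    return noisy

for n in (1_000, 10_000, 100_000):        # the three noise levels in Fig. 2.9B-D
    noisy = add_random_noise(img, n)
    print(n, "pixels altered, image std =", round(noisy.std(), 1))
```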
2.5.2 Measurement of Noise
The MTF is often used as a measure of the quality of an imaging system. By definition, it is a measure of certain optical characteristics of the imaging system, namely, its ability to resolve fine details. It provides no information regarding the effect of noise on the radiological contrast of the image. Because both unsharpness and noise can affect image quality, an imaging system with large MTF values at high frequencies does not necessarily produce a high-quality image if the noise level is high. Figure 2.9A shows an abdominal CT image, and Figure 2.9B, C, and D show the same CT image with various degrees of random noise added. On comparing these figures, it is clearly seen that noise degrades the quality of the image. The study of the noise that arises from quantum statistics, electronic noise, optical diffusion, and film grain provides another measure of image quality. To study the noise, we need the concept of the power spectrum, or Wiener spectrum, which describes the noise produced by an imaging system. Let us make the assumption that all the noise N is random in nature and does not correlate with the signal S that forms the image; then the signal-to-noise power ratio P(x, y) of each pixel is defined by

P(x, y) = \frac{S^2(x, y)}{N^2(x, y)}     (2.13)
Figure 2.10 illustrates the signal and the associated random noise in a line trace on a uniform background image. A high signal-to-noise ratio (SNR) means that the image is less noisy. A common method to increase the SNR (i.e., reduce the noise in the image) is to obtain many images of the same object under the same conditions and average them. This, in a
Figure 2.10 Example demonstrating the signal and the noise in a line trace on a uniform background image (white). The small variation along the profile is the noise. If there were no noise, the line trace would have been a straight line.
Figure 2.11 The difference between a digitized X-ray chest image before and after it has been averaged. (A) Chest X-ray film digitized with a video camera. (B) Digital image of the same chest X-ray film digitized 16 times and then averaged. (C) Image B subtracted from image A.
sense, minimizes the contribution of the random noise to the image. If M images are averaged, the signal-to-noise power ratio of the averaged image, P̄(x, y), becomes

P̄(x, y) = M²S²(x, y) / [M N²(x, y)] = M P(x, y)                    (2.14)

The SNR is the square root of the power ratio:

SNR(x, y) = √P̄(x, y) = √M √P(x, y)                    (2.15)
Therefore, the SNR increases by the square root of the number of images averaged. Equation (2.15) indicates that it is possible to increase the signal-to-noise ratio of the image by this averaging technique. The averaged image will have less random noise, which gives a smoother visual appearance. For each pixel, the noise N(x, y) defined in Eq. (2.13) can be approximated by the standard deviation between the image under consideration and the computed average image. Figure 2.11 illustrates how the signal-to-noise ratio of the imaging system is computed. Take a chest X-ray image, digitize it with an imaging system M times, and average the results pixel by pixel. If we assume that the average image f̄(x, y) is the signal (Fig. 2.11B), and that the noise at each pixel (x, y) can be approximated by the standard deviation between f̄ and fi, where fi is a digitized image and 1 ≤ i ≤ M, then the signal-to-noise ratio for each pixel can be computed by using Eqs. (2.13) and (2.15). Figure 2.11, A, B, and C, shows a digitized image fi(x, y), the average image f̄(x, y) with M = 16, and the difference image between fi(x, y) and f̄(x, y). The difference image shows a faint image of the chest, demonstrating that the noise in the imaging system is not purely random but partly systematic, a consequence of the design of the digital chain; otherwise, the difference would show only random noise.
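As a simple numerical illustration of Eqs. (2.14) and (2.15) (this sketch is not part of the original text), the following Python/NumPy fragment averages M = 16 simulated noisy frames of a constant object and verifies that the residual noise drops by roughly √M. The array size, the simulated noise level, and the availability of a noise-free "truth" image are assumptions made only for the simulation.

```python
import numpy as np

# Simulated check of Eqs. (2.14)-(2.15): averaging M registered frames of the
# same object reduces the random noise by roughly sqrt(M).
rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)          # ideal, noise-free "object" (assumed)
M = 16
frames = [truth + rng.normal(0.0, 10.0, truth.shape) for _ in range(M)]

f_bar = np.mean(frames, axis=0)           # averaged image, taken as the signal

noise_single = np.std(frames[0] - truth)  # noise of one acquisition
noise_average = np.std(f_bar - truth)     # noise remaining after averaging M frames
print(noise_single / noise_average)       # approximately sqrt(M) = 4

# When the true image is unknown, the per-pixel noise can be approximated, as in
# the text, by the standard deviation between the individual frames and f_bar:
approx_noise = np.std(np.stack(frames) - f_bar, axis=0)
print(approx_noise.mean())                # close to the simulated sigma of 10
```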
CHAPTER 3
Digital Radiography
3.1 PRINCIPLES OF CONVENTIONAL PROJECTION RADIOGRAPHY
Conventional projection radiography accounts for 70% of diagnostic imaging procedures. Therefore, to transform radiology from a film-based to a digital-based operation, we must understand radiology work flow, conventional projection radiographic procedures, and digital radiography. This chapter discusses these three topics. There are two approaches to converting an analog-based image to digital form. The first is to utilize existing equipment in the radiographic procedure room and change only the image receptor component. Two technologies, computed radiography (CR) using the photostimulable phosphor imaging plate technology and digital fluorography, are in this category. This approach does not require any modification in the procedure room and is therefore more easily adopted for daily clinical practice. The second approach is to redesign the conventional radiographic procedure equipment, including the geometry of the X-ray beams and the image receptor. This method is more expensive to adopt, but the advantage is that it offers special features, such as low X-ray scatter, that would not otherwise be achievable in the conventional procedure.
3.1.1 Radiology Work Flow
PACS is a system integration of both patient work flow and diagnostic components. A thorough comprehension of radiology work flow allows efficient system integration and hence a better PACS design for the radiology department and the hospital. Radiology work flow can vary from department to department; for this reason, work flow analysis is the first step in PACS design and implementation. Figure 3.1 shows a generic radiology work flow before PACS, explained as follows.

1. Patient arrives in hospital for Radiology examination (exam).
2. Patient registers in Radiology area. If patient is new, patient is registered in Hospital Information System (HIS).
3. Radiology exam ordered in Radiology Information System (RIS) by Radiology registration clerk. Exam accession number automatically assigned. Requisition printed.
Figure 3.1 A generic radiology work flow.
4. Technologist receives requisition from registration clerk. Technologist calls the patient in waiting area for exam.
5. Patient arrives at modality for radiological exam. Technologist enters patient information in RIS.
6. Technologist performs exam and obtains hard copy films of the radiological exam. Technologist enters exam completed in RIS.
7. Hard copies and paperwork sent to film clerk in clerical area.
8. Film clerk pulls patient film jacket for historical films and reports.
9. Film clerk assembles all paperwork and films ready for radiologist to review.
10. Film clerk hangs relevant films in a viewing box for radiologist to review, with unhanged films in film jacket.
11. Radiologist previews paperwork and reports, reads exam, and dictates exam report.
12. Transcriptionist fetches the dictation and types draft report that corresponds to exam accession number within RIS.
13. Radiologist reviews draft report, makes any necessary corrections, and signs off on report.
14. Final results report available on RIS for clinician viewing.

In designing the PACS, this work flow must be integrated into the design so that the PACS operation does not depart too far from the current work flow.
3.1.2 Standard Procedures Used in Conventional Projection Radiography
Conventional X-ray imaging procedures are used in all subspecialties of a radiology department, including outpatient, neuroimaging, emergency, pediatric, breast imaging, chest, genitourinary, gastrointestinal, cardiovascular, and musculoskeletal. In each subspecialty, there are two major work areas, the diagnostic area and the X-ray procedure room. These areas may be shared among subspecialties. Although the detailed exam procedures may differ among subspecialties, the basic steps can be summarized as follows:

1. Transfer patient-related information from HIS and RIS to the X-ray procedure room before the examination.
2. Check patient X-ray requisition for anatomical area of interest for imaging.
3. Set patient in position, standing or on the tabletop, for X-ray examination and adjust the X-ray collimator for the size of the exposed area or the field size.
4. Select a proper film-screen cassette.
5. Place cassette in the holder located behind or on the table under the patient.
6. Determine X-ray exposure factors for obtaining the optimal quality image with minimum exposure.
7. Turn on the X-rays to obtain a latent image of the patient on the film-screen cassette.
8. Process the exposed film through a film processor.
9. Retrieve the developed film from the film processor.
10. Inspect the radiograph through a light box for proper exposure or other errors (e.g., patient positioning or movement).
11. Repeat steps 3–10 if the image quality on the film is unacceptable for diagnosis, always keeping in mind that the patient should not be subjected to unnecessary additional X-ray exposure.
12. Submit the film to a radiologist for approval.
13. Remove the patient from the table after the radiologist has determined that the quality of the radiograph is acceptable for diagnosis.
14. Release the patient.

Figure 3.2 shows a standard setup of a conventional radiographic procedure room and a diagnostic area (for diagnosis and reporting). The numbers in the figure correspond to the above-listed steps for a tabletop examination. Observe that if digital radiography with PACS is used (see Section 3.5), the three components in the diagnostic area, film processing, storage, and film viewing (replaced by workstations), can be eliminated, resulting in a saving of space as well as a more effective and efficient operation.
3.1.3 Analog Image Receptor
Radiology work flow gives a general idea of how the radiology department handles the patient coming in for examination and how the image examination results are
[Figure 3.2 diagram: the diagnostic area contains the X-ray requisition, dispatcher, and patient escort area; the film processing area (darkroom); unexposed film-screen cassette storage; and the film viewing and diagnosis area (light box). The X-ray procedure room contains the X-ray tube, patient table with Bucky tray, X-ray generator, and X-ray control console.]
Figure 3.2 Standard setup of a radiographic procedure and diagnostic area.
reviewed and archived. This section discusses equipment used for radiographic examination, and some physics principles are needed. After the patient is exposed to X-rays during an examination, the attenuated X-ray photons exiting the patient carry the information of an image; the problem is that they cannot be visualized by the human eye. This information must be converted into a latent image, which in turn can be transformed into a visible image. Two commonly used analog image receptors, the image intensifier tube and the screen-film combination, are discussed below in this section.

3.1.3.1 Image Intensifier Tube
The image intensifier tube is often used as the image receptor in projection radiography. The image intensifier tube is particularly useful for fluorographic and digital subtraction angiography procedures, which
allow imaging of moving structures in the body and dynamic processes in real time. If X-ray films were used for these types of examinations, the dose exposure to the patient would be very high and would not be acceptable. The use of an image intensifier can maximize the dynamic information available from the study and minimize the X-ray exposure to the patient, but gives a lower image quality than that of the film. An image intensifier tube is shown schematically in Figure 3.3. The formation of the image is as follows. X-rays penetrate the patient, exit, enter the image intensifier tube through the glass envelope, and are absorbed in the input phosphor intensifying screen. The input screen converts X-ray photons to light photons. The light photons emitted from the screen next strike the light-sensitive photocathode, causing the emission of photoelectrons. These electrons are then accelerated across the tube (by approximately 25,000 V) and strike the output screen. In this way, the attenuated X-ray photons are converted to proportional electrons. This stream of electrons is focused and converges on an output phosphor screen. At the time the electrons are absorbed by the output phosphor, the image information carried by the electron stream is once again converted to light photons, but in a much larger quantity (brightness gain) than the light output from the input phosphor, hence the term “image intensifier.” Image intensifiers are generally listed by the diameter of the input phosphor, ranging from 4.5 to 14 in. The light from the output phosphor is then coupled to an optical system for recording with a movie camera (angiography), a TV camera (fluorography), or a spot film camera.
Figure 3.3 Schematic of the image intensifier tube and the formation of an image on the output screen.
3.1.3.2 Screen-Film Combination
The screen-film combination consists of a double-emulsion radiation-sensitive film sandwiched between two intensifying screens housed inside a lighttight cassette (Fig. 3.4). The X-ray film consists of an emulsion of silver halide crystals suspended in a gelatin medium, coated on both sides of a transparent (polyethylene terephthalate) plastic substrate
Figure 3.4 Schematic of a screen-film cassette and the formation of a latent image on the film, not to scale. (The distances between screen, film, and base shown in the drawing are for illustration only; these layers are actually in contact with each other.)
called the base. A slight blue tint is commonly incorporated in the base to give the radiograph a pleasing appearance. A photographic emulsion can be exposed to X-rays directly, but it is more sensitive to light photons of much lower energy (approximately 2.5–5 eV). For this reason, an intensifying screen is used to absorb the attenuated X-rays first. The screen, which is made of a thin phosphor layer (e.g., crystalline calcium tungstate), is more sensitive to the diagnostic X-ray energy range (20–90 keV). The X-ray photons exiting the patient impinge on an intensifying screen, causing it to emit visible light photons that are collected by the film to form a latent image. X-ray photons that are not absorbed by the front screen in the cassette can be absorbed by the back screen. The light emitted from this second screen then exposes the emulsion on the back side of the film. The double-emulsion film thus can effectively reduce the patient exposure by half. With the screen-film combination as the image detector, the patient receives a much lower exposure than when film alone is used. Image blur due to patient motion can also be minimized with a shorter exposure time if the screen-film combination is used. The film is then developed, rendering the latent image visible. Figure 3.4 presents a sectional view of the screen-film detector as well as its interaction with X-ray photons.

Film Optical Density
The number of developed silver halide crystals per unit volume in the developed film determines the amount of light from the viewing box that can be transmitted through that unit volume. This transmitted light is referred to as the optical density (OD) of the film in that unit volume. Technically, the OD is defined as the logarithm base 10 of the reciprocal of the transmittance of a unit intensity of light:

OD = log10 (1 / transmittance) = log10 (Io / It)                    (3.1)

where Io is the light intensity at the viewing box before transmission through the film and It is the light intensity after transmission through the film. The film optical density is used to represent the degree of film darkening due to X-ray exposure.

Characteristic Curve of the X-ray Film
The relationship between the amount of X-ray exposure received and the film optical density is called the characteristic curve or the H and D curve (after F. Hurter and V. C. Driffield, who first published such a curve in England in 1890). The logarithm of relative exposure is plotted instead of the exposure itself, partly because it compresses a large linear scale to a manageable logarithmic scale, which makes analysis of the curve easier. Figure 3.5 shows an idealized curve with three segments: the toe (A), the linear segment (B), and the shoulder (C). The toe is the base density or the base-plus-fog level (usually OD = 0.12–0.20). For very low exposures, the film optical density remains at the fog level and is independent of exposure level. Next is a linear segment over which the optical density and the logarithm of relative exposure are linearly related (usually between OD = 0.3 and 2.2). The shoulder corresponds to high exposures or overexposures where most of the silver halides are converted to metallic silver (usually OD = 3.2). The
Figure 3.5 The relationship between logarithm of relative X-ray exposure and the film optical density plotted as a curve, the characteristic curve or the H and D curve; see text for discussion of points A, B, C, D1, D2, E1, and E2.
film becomes saturated, and the optical density is no longer a function of exposure level. The characteristic curve is usually described by one of the following terms: the film gamma, the average gradient, the film latitude, or the film speed. The film gamma (γ) is the maximum slope of the characteristic curve and is described by the formula

gamma (γ) = (D2 − D1) / (log10 E2 − log10 E1)                    (3.2)
where D2 is the highest OD value within the steepest portion of the curve (see Fig. 3.5), D1 is the lowest OD value within the steepest portion of the curve, E2 is the exposure corresponding to D2, and E1 is the exposure corresponding to D1. The average gradient of the characteristic curve is the slope of the characteristic curve calculated between optical density 0.25 and 2.00 above the base-plus-fog level for the radiographic film under consideration. The optical density range between 0.25 and 2.00 is considered acceptable for diagnostic radiology applications. For example, assuming a base and fog level of 0.15, the range of acceptable optical density is 0.40–2.15. The average gradient can be represented by the following formula:

average gradient = (D2 − D1) / (log10 E2 − log10 E1)                    (3.3)
where D2 is 2.00 + the base and fog level, D1 is 0.25 + the base and fog level, E2 is the exposure corresponding to D2, and E1 is the exposure corresponding to D1.
The film latitude describes the range of exposures used in the average gradient calculation. Thus, as described in Eq. (3.3), the film latitude is equal to log10 E2 − log10 E1. The film speed (unit: 1/roentgen) can be defined as follows:

speed = 1 / E                    (3.4)
where E is the exposure (in roentgens) required to produce a film optical density of 1.0 above base and fog. Generally speaking:

1. The latitude of a film varies inversely with film contrast, film speed, film gamma, and average gradient.
2. Film gamma and average gradient of a film vary directly with film contrast.
3. Film fog level varies inversely with film contrast.
4. Faster films require less exposure to achieve a specific density than slower films.
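The film metrics defined in Eqs. (3.1)–(3.4) can be computed directly from sensitometric data. The Python sketch below (not from the book) interpolates an H and D curve to obtain the average gradient, latitude, and speed; the sample log-exposure/OD values and the 0.15 base-plus-fog level are hypothetical, and the speed is computed here from relative exposure rather than roentgens.

```python
import numpy as np

def optical_density(i_incident, i_transmitted):
    """Eq. (3.1): OD = log10(Io / It)."""
    return np.log10(i_incident / i_transmitted)

def hd_metrics(log_exposure, od, base_plus_fog):
    """Average gradient, latitude, and speed from sampled H and D curve data
    (Eqs. 3.3 and 3.4).  `od` must increase with `log_exposure`."""
    d1, d2 = base_plus_fog + 0.25, base_plus_fog + 2.00
    log_e1 = np.interp(d1, od, log_exposure)   # log exposure giving OD = D1
    log_e2 = np.interp(d2, od, log_exposure)   # log exposure giving OD = D2
    avg_gradient = (d2 - d1) / (log_e2 - log_e1)
    latitude = log_e2 - log_e1
    # Eq. (3.4): speed = 1/E, with E the (relative) exposure giving OD 1.0
    # above base plus fog.
    e_for_speed = 10.0 ** np.interp(base_plus_fog + 1.0, od, log_exposure)
    return avg_gradient, latitude, 1.0 / e_for_speed

# Hypothetical sensitometric data: log relative exposure vs. measured OD.
log_e = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4])
od    = np.array([0.16, 0.20, 0.55, 1.40, 2.30, 2.90, 3.20])
print(hd_metrics(log_e, od, base_plus_fog=0.15))
print(optical_density(1000.0, 10.0))           # OD = 2.0
```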
3.2 DIGITAL FLUOROGRAPHY AND LASER FILM SCANNER

3.2.1 Basic Principles
Because 70% of radiographic procedures still use film as an output medium, it is necessary to develop methods to convert images on films to digital format. This section discusses two methods: a video camera plus an A/D converter, and the laser film scanner. As described in Chapter 2, when a film is digitized, the shades of gray in the film are quantized into a two-dimensional array of nonnegative integers called pixels. Two factors dictate whether the digitized image truly represents the original film image: the quality of the scanner and the aliasing artifact. A low-quality scanner with a large pixel size and insufficient bits/pixel will yield a poor digitized image (see the example in Fig. 2.2). On the other hand, a good-quality scanner may sometimes produce an aliasing artifact in the digitized image because of certain inherent patterns, such as grid lines and edges, in the original film. The aliasing artifact can best be explained with the concept of data sampling. The well-known sampling theorem states that:

If the Fourier transform of the image f(x, y) vanishes for all u, v where |u| ≥ 2fN, |v| ≥ 2fN, then f(x, y) can be exactly reconstructed from samples of its nonzero values taken 1/(2fN) apart or closer. The frequency 2fN is called the Nyquist frequency.

The theorem implies that if the pixel samples are taken more than 1/(2fN) apart, it will not be possible to reconstruct the image completely from these samples. The difference between the original image and the image reconstructed from these samples is caused by aliasing error. The aliasing artifact creates new frequency components in the reconstructed image called moiré patterns (see, for example, Fig. 2.3B-1).
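A one-dimensional numerical experiment makes the aliasing effect concrete. In the hedged sketch below (illustrative only, not a model of any particular scanner), a 5 cycles/mm "grid line" pattern is sampled at two pixel spacings; when the spacing exceeds the limit implied by the sampling theorem, the FFT peak appears at a false, lower frequency, the 1-D analogue of a moiré pattern.

```python
import numpy as np

# A 1-D "grid line" pattern of frequency f0 cycles/mm, sampled at two spacings.
f0 = 5.0                                   # assumed pattern frequency, cycles/mm

def sample(spacing_mm, length_mm=10.0):
    x = np.arange(0.0, length_mm, spacing_mm)
    return x, np.sin(2 * np.pi * f0 * x)

# A spacing finer than 1/(2*f0) = 0.1 mm preserves the pattern; a coarser one
# folds it down to a lower, spurious (moire-like) frequency.
for spacing in (0.08, 0.15):
    x, s = sample(spacing)
    freqs = np.fft.rfftfreq(len(s), d=spacing)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(s - s.mean())))]
    print(f"spacing {spacing:.2f} mm -> apparent frequency {peak:.2f} cycles/mm")
```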
3.2.2 Video Scanner System and Digital Fluorography
The video scanning system is a low-cost X-ray digitizer that produces either a 512 or a 1K digitized image with 8 bits/pixel. The system consists of three major components: a scanning device with a video or a charge-coupled device (CCD) camera that scans the X-ray film, an A/D converter that converts the video signals from the camera to gray level values, and an image memory to store the digital signals from the A/D converter. The image stored in the image memory is the digital representation of the X-ray film or of the image in the image intensifier tube obtained by using the video scanning system. If the image memory is connected to digital-to-analog (D/A) conversion circuitry and to a TV monitor, this image (which is a video image) can be displayed back on the monitor. The memory can be connected to a peripheral storage device for long-term image archiving. Figure 3.6 shows a block diagram of a video scanning system. The digital chain shown is a standard component in all types of scanners. A video scanning system can be connected to an image intensifier tube to form a digital fluoroscopic system. Digital fluorography is a method that can produce dynamic digital X-ray images without changing the radiographic procedure room drastically. This technique requires an add-on unit in the conventional fluorographic system. Figure 3.7 shows a schematic of the digital fluorographic system with the following major components:

1. X-ray source: the X-ray tube and a grid to minimize X-ray scatter.
2. Image receptor: an image intensifier tube.
3. Video camera plus optical system: the output light from the image intensifier goes through an optical system, which allows the video camera to be adjusted
Figure 3.6 Block diagram of a video scanning system. The digital chain is a standard component in all types of scanners.
Figure 3.7 Schematic of a digital fluorographic system coupling the image intensifier and the digital chain. See text for key to numbers.
for focusing. The amount of light going into the camera is controlled by means of a light diaphragm. The camera used is usually a plumbicon or a CCD with 512 or 1024 scan lines.
4. Digital chain: the digital chain consists of an A/D converter, image memories, an image processor, digital storage, and a video display. The A/D converter, the image memory, and the digital storage can handle a 512 × 512 × 8 bit image at 30 frames/s or a 1024 × 1024 × 8 bit image at 7.5 frames/s. Sometimes a RAID (redundant array of inexpensive disks) is used to handle the high-speed data transfer.

Fluorography is used to visualize the motion of body compartments (e.g., blood flow, heartbeat) or the movement of a catheter, as well as to pinpoint an organ in a body region for subsequent detailed diagnosis. The exposure required for each image in a fluorographic procedure is very small compared with a conventional X-ray procedure. Digital fluorography is considered an add-on system because a digital chain is added to an existing fluorographic unit. This method utilizes the established X-ray tube assembly, image intensifier, video scanning, and digital technologies. The output from a digital fluorographic system is a sequence of digital images displayed on a video monitor. Digital fluorography has the advantage over conventional fluorography that it gives a larger dynamic range image and can remove uninteresting structures in the images by performing digital subtraction. When image processing is introduced into the digital fluorographic system, other names are used depending on the application, for example, digital subtraction angiography (DSA), digital subtraction arteriography (DSA), digital video angiography (DVA), intravenous video arteriography (IVA), computerized fluoroscopy (CF), and digital video subtraction angiography (DVSA).
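The digital subtraction mentioned above can be sketched in a few lines. The following fragment (an illustration, not the algorithm of any specific DSA product) subtracts a pre-contrast mask frame from a contrast frame after a logarithmic transform, so the remaining signal is approximately proportional to the contrast agent's attenuation; the frame values and the simulated "vessel" are invented for the demonstration.

```python
import numpy as np

def dsa_frame(mask, contrast, eps=1.0):
    """Subtract a pre-contrast mask frame from a contrast frame after a log
    transform, so the result is roughly proportional to the contrast agent's
    attenuation and the fixed anatomy cancels out."""
    mask = mask.astype(np.float64) + eps        # eps avoids log(0)
    contrast = contrast.astype(np.float64) + eps
    return np.log(mask) - np.log(contrast)      # vessels appear as positive values

# Hypothetical 8-bit frames from the digital chain (512 x 512):
rng = np.random.default_rng(1)
mask = rng.integers(80, 200, (512, 512)).astype(np.float64)
contrast = mask.copy()
contrast[200:260, 100:400] *= 0.8               # simulated vessel: less transmitted light
diff = dsa_frame(mask, contrast)
print(round(diff[230, 250], 3), round(diff[0, 0], 3))   # vessel ~0.22, background 0.0
```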
3.2.3 Laser Film Scanner
The laser scanner is the gold standard in film digitization. It normally converts a 14 in. × 17 in. X-ray film to a 2K × 2.5K × 12 bit image. The principle of laser scanning
is shown in Figure 3.8. A rotating polygon mirror system is used to guide a collimated low-power (5 mW) laser beam (usually helium-neon) to scan across a line of the radiograph in a lighttight environment. The radiograph is advanced, and the scan is repeated for the second line, and so forth. The optical density of the film is measured from the transmission of the laser beam through each small area (e.g., 175 × 175 µm) of the radiograph with a photomultiplier tube and a logarithmic amplifier. This electronic signal is sent to a digital chain, where it is digitized to 12 bits by the A/D converter. The data are then sent to a computer, where a storage device is provided for the image. Figure 3.9 shows the schematic block diagram of a laser scanner system. Table 3.1 gives the specifications of a generic scanner. Before a scanner is ready for clinical use, it is important to evaluate its specifications and to verify the quality of the digitized image. Several parameters are of importance:

• The relationship between the pixel value and the optical density of the film
• Contrast frequency response
• Linearity
• Flat-field response
Standard tests can be set up for such parameter measurement (Huang, 1999).
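Such acceptance tests can be partly automated. The sketch below is a simplification under stated assumptions, not the cited test protocol: it fits pixel value against densitometer-measured optical density for a hypothetical step wedge to check linearity, and reports a flat-field non-uniformity figure for a scan of a uniform film; the numeric data are invented for illustration.

```python
import numpy as np

def linearity_check(measured_od, mean_pixel_values):
    """Fit pixel value vs. optical density for a scanned step wedge and report
    the slope, intercept, and correlation coefficient as a linearity figure."""
    slope, intercept = np.polyfit(measured_od, mean_pixel_values, 1)
    r = np.corrcoef(measured_od, mean_pixel_values)[0, 1]
    return slope, intercept, r

def flat_field_nonuniformity(blank_scan):
    """Percent non-uniformity of a scan of a uniform-density film."""
    blank_scan = blank_scan.astype(np.float64)
    return 100.0 * (blank_scan.max() - blank_scan.min()) / blank_scan.mean()

# Hypothetical data: densitometer ODs of a 6-step wedge and the scanner's mean
# 12-bit pixel values measured in matching regions of interest.
od = np.array([0.2, 0.8, 1.4, 2.0, 2.6, 3.2])
pv = np.array([410.0, 1010.0, 1630.0, 2240.0, 2850.0, 3480.0])
print(linearity_check(od, pv))

uniform = 2000.0 + np.random.default_rng(2).normal(0.0, 8.0, (256, 256))
print(flat_field_nonuniformity(uniform))
```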
Figure 3.8 Scanning principle of a laser film scanner.
Figure 3.9 Block diagram of a laser film scanner interfacing to a host computer.
TABLE 3.1 Specifications of a Laser Film Scanner

Film size supported, in. × in.:  14 × 17, 14 × 14, 12 × 14, 10 × 12, 8 × 10
Pixel size, µm:                  50–200
Sampling distance, µm:           50, 75, 100, 125, 150, 175, 200
Optical density range:           0–2, 0–4
Bits/pixel:                      12 bits
Hardware interface:              SCSI II*
Laser power:                     5 mW
Scanning speed:                  200 lines/s
Data format:                     DICOM**

* SCSI II, small computer systems interface.
** See Chapter 7.
3.3 IMAGING PLATE TECHNOLOGY
The imaging plate system, commonly called computed radiography (CR), consists of two components, the imaging plate and the scanning mechanism. The imaging plate (laser-stimulated luminescence phosphor plate) used for X-ray detection is similar in principle to the phosphor intensifier screen used in the screen-film receptor described in Section 3.1.3.2. The scanning of a laser-stimulated luminescence phosphor imaging plate also uses a scanning mechanism (reader) similar to that of a laser film scanner. The only difference is that instead of scanning an X-ray film, the laser scans the imaging plate. This section describes the principle of the imaging plate, the specifications of the system, and system operation.
3.3.1 Principle of the Laser-Stimulated Luminescence Phosphor Plate
The physical size of the imaging plate is similar to that of a conventional radiographic screen; it consists of a support coated with a photostimulable phosphor layer made of BaFX:Eu²⁺ (X = Cl, Br, I), europium-activated barium fluorohalide compounds. After the X-ray exposure, the photostimulable phosphor crystal is able to store a part of the absorbed X-ray energy in a quasi-stable state. Stimulation of the plate by a 633-nm-wavelength helium-neon (red) laser beam leads to the emission of luminescence radiation of a different wavelength (400 nm), the amount of which is a function of the absorbed X-ray energy (Fig. 3.10A, B).
Figure 3.10 Physical principle of the laser-stimulated luminescence phosphor imaging plate. (A) From the X-ray photons exposing the imaging plate to the formation of the light image.
Figure 3.10 (B) The wavelength of the scanning laser beam (b) is different from that of the emitted light (a) from the imaging plate after stimulation. (Courtesy of J. Miyahara, Fuji Photo Film Co., Ltd.)
The luminescence radiation stimulated by the laser scanning is collected through a focusing lens and a light guide into a photomultiplier tube, which converts it into electronic signals. Figure 3.10 shows the physical principle of the laser-stimulated luminescence phosphor imaging plate. The size of the imaging plate can be 8 × 10, 10 × 12, 14 × 14, or 14 × 17 in.² The image produced is 2000 × 2500 × 10 bits.

3.3.2 Computed Radiography System Block Diagram and Its Principle of Operation

The imaging plate is housed inside a cassette just like a screen-film receptor. Exposure of the imaging plate (IP) to X-ray radiation results in the formation of a latent image on the plate (similar to the latent image formed in a screen-film receptor). The exposed plate is processed through a CR reader to extract the latent image, analogous to the exposed film being developed by a film developer. The processed imaging plate can be erased by bright light and used again. The imaging plate can be either removable or nonremovable. An image processor is used to optimize the display (lookup tables, see Chapter 11) based on the type of exam and body region. The output of this system can be in one of two forms: a printed film or a digital image; the latter can be stored in a digital storage device and displayed on a video monitor. Figure 3.11 illustrates the data flow of an upright CR system with three nonremovable imaging plates. Figure 3.12 shows the FCR-9000 system with a removable imaging plate and its components.
3.3.3 Operating Characteristics of the CR System
A major advantage of the CR system compared with the conventional screen-film system is that the imaging plate is linear and has a large dynamic range between
Figure 3.11 Dataflow of an upright CR system with nonremovable imaging plates (IP). (1) Formation of the latent image on the IP. (2) The IP is scanned by the laser beam. (3) Light photons are converted to electronic signals. (4) Electronic signals are converted to digital signals that form a CR image. (Courtesy of Konica Corporation, Japan.)
Figure 3.12 An FCR 9000 CR system. (A) Imaging plate reader. (B) Patient ID card reader. (C) ID terminal. (D) Image processing workstation. (E) QA monitor.
the X-ray exposure and the relative intensity of the stimulated phosphors. Hence, under similar X-ray exposure conditions, the image reader is capable of producing images with density resolution comparable or superior to that of the conventional screen-film system. Because the image reader automatically adjusts the amount of exposure received by the plate, over- or underexposure within a certain limit will not affect the appearance of the image. This useful feature can best be explained by the two examples given in Figure 3.13. In quadrant A of Figure 3.13, example I represents a plate exposed to a higher relative exposure level but with a narrower exposure range (10³–10⁴). The linear response of the plate after laser scanning yields a high-level but narrow light intensity (photostimulable luminescence, PSL) range from 10³ to 10⁴. These light photons are converted into electronic output signals representing the latent image stored on
Figure 3.13 Two examples, I and II, illustrate the operating characteristics of the CR system and explain how it compensates for over- and underexposures.
the imaging plate. The image processor senses this narrow range of electronic signals and selects a special lookup table (the linear line in Fig. 3.13B), which converts the narrow dynamic range of 10³–10⁴ to a large relative light exposure range of 1–50 (Fig. 3.13B). If hard copy is needed, a large-latitude film can be used that covers the dynamic range of the light exposure from 1 to 50, as shown in quadrant C; these output signals will register the entire optical density range from OD 0.2 to OD 2.8 on the film. The total system response, including the imaging plate, the lookup table, and the film subject to this exposure range, is depicted as curve I in quadrant D of Figure 3.13. The system-response curve, relating the relative exposure on the plate and the OD of the output film, shows a high gamma value and is quite linear. This example demonstrates how the system accommodates a high exposure level with a narrow exposure range. Consider example II of Figure 3.13, in which the plate receives a lower exposure level but a wider exposure range. The CR system automatically selects a different lookup table in the image processor to accommodate this range of exposure so that the output signals again span the entire light exposure range from 1 to 50. The system-response curve is shown as curve II in quadrant D. The key in selecting the
correct lookup table is that the range of the exposure must span the total light exposure of the film, namely, from 1 to 50. Note that, in both examples, the entire useful optical density range for diagnostic radiology is utilized. If a conventional screen-film combination system were used, the exposure in example I of Figure 3.13 would utilize only the higher optical density region of the film, whereas that in example II would utilize only the lower region. Neither case would utilize the full dynamic range of the optical density in the film. From these two examples it is seen that the CR system allows the utilization of the full optical density dynamic range, regardless of whether the plate is overexposed or underexposed. Figure 3.14 shows an example comparing the results of using screen-film versus CR under identical X-ray exposures. The same effect is achieved if the image signals are used for digital output and not for hard copy film. That is, the digital image produced from the image reader and the image processor will also utilize the full dynamic range from quadrant D to produce 10-bit digital numbers.
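The lookup-table behavior of Figure 3.13 can be imitated with a simple auto-ranging mapping. The sketch below is a linear simplification (real CR readers work on a calibrated, typically logarithmic exposure scale and use exam-specific tables): it detects the exposure range actually received from the histogram and maps it onto the full 1–50 relative output scale, so both a narrow high-exposure plate and a wide low-exposure plate fill the output range.

```python
import numpy as np

def auto_ranging_lut(raw, out_min=1.0, out_max=50.0):
    """Map the exposure range the plate actually received onto the full output
    range (the 1-50 relative scale of Fig. 3.13).  The detected range is taken
    from histogram percentiles; a linear map is used here for simplicity."""
    raw = raw.astype(np.float64)
    lo, hi = np.percentile(raw, [1.0, 99.0])   # detected exposure range
    scaled = np.clip((raw - lo) / (hi - lo), 0.0, 1.0)
    return out_min + scaled * (out_max - out_min)

# Example I: high exposure level, narrow range; example II: low level, wide range.
rng = np.random.default_rng(3)
example_1 = rng.uniform(1.0e3, 1.0e4, (512, 512))
example_2 = rng.uniform(1.0e1, 1.0e3, (512, 512))
for plate in (example_1, example_2):
    out = auto_ranging_lut(plate)
    print(round(out.min(), 1), round(out.max(), 1))   # both span roughly 1 to 50
```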
3.3.4 Background Removal
3.3.4.1 What Is Background Removal?
Under normal operating conditions, images obtained by projection radiography contain unexposed areas caused by X-ray collimation, for example, areas outside the circle of the imaging field in digital fluorography (DF) and areas outside the collimator in CR for skeletal and pediatric radiology. In digital images, such unexposed areas, which appear white on a display monitor, are called the “background” in this context. Figure 3.15A is a pediatric CR image with a white background as seen on a monitor. Background removal in this context means that the brightness of the background is converted from white to black. Figure 3.15B shows that the white background in Figure 3.15A has been removed automatically.

3.3.4.2 Advantages of Background Removal in Digital Radiography
There are four major advantages to using background removal in digital projection radiography. First, background removal immediately provides lossless data compression because the background is no longer in the image, an important cost-effective parameter in digital radiography when dealing with large-size images. Second, a background-removed image has better visual quality, for the following reasons. Diagnosis from radiography is the result of information processing based on visualization by the eyes. Because the contrast sensitivity of the eyes is proportional to the Weber ratio ΔB/B, where B is the brightness of the background and ΔB is the brightness difference between the region of interest in the image and the background, removing or decreasing the unwanted background in a projection radiographic image makes the image more easily readable and greatly improves its diagnostic effect. Once the background is removed, a more representative lookup table (see Chapter 11), pertinent only to the range of gray scales in the image and not to the background, can be assigned to the image. This further improves the visual quality of the image. Third, often in portable CR it is difficult to examine the patient in an anatomical position aligned with the standard image orientation for reading because of the patient’s condition. As a result, the orientation of the image may need to be adjusted during reading. In film interpretation, it is easy to rotate or flip the film. However, in soft copy display, automatically recognizing and making orientation
Figure 3.14 Comparison of the quality of images obtained by using (A) the conventional screen-film method and (B) the CR technique. Exposures were 70 kVp and 10, 40, 160, and 320 mAs on a skull phantom. It is seen in this example that the CR technique is almost dose independent. (Courtesy of Dr. S. Balter.)
Figure 3.15 (A) A pediatric CR image; the background is seen as white (arrows) on a video monitor (black outside of the white background). (B) The background-removed image.
correction of the digital image is not a simple task; sophisticated software programs are needed. These software algorithms often fail if the background of the image is not removed. A background-removed image therefore improves the success rate of automatic image orientation. Fourth, because background removal is a crucial preprocessing step in computer-aided detection and diagnosis (CAD, CADx), a background-removed image can improve the diagnostic accuracy of CAD algorithms, as the cost functions in the algorithms can be assigned to the image only, rather than to the image and its background combined. For digital fluorography and the film digitizer, the background removal procedure is straightforward. In the former, because the size of the image field is a predetermined parameter, the background can be removed by converting every pixel outside the diameter of the image field to black. In the latter, because the digital image is obtained in a two-step procedure (first a film is obtained, and then the film is digitized), the boundaries between the background and the exposed area can be determined interactively by the user, and the corner points can be input during the digitizing step. For CR, background removal is a more complex procedure because it has to be done automatically during image acquisition or preprocessing. Automatic removal of the CR background is difficult because the algorithm has to recognize different body part contours as well as various collimator sizes and shapes. Because the background distribution in CR images is complex and the removal is an irreversible procedure, it is difficult to achieve a high success rate of full background removal and yet ensure that no valid information in the image is removed. Background removal is sometimes called a “shutter” in the commercial arena. Current methods based on a statistical description of the intensity distribution of the CR background can achieve up to a 90% success rate (Zhang, 1997). Background removal is also used in digital radiography as well as in image display.
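A crude version of background removal can be written as a single global threshold, which is useful mainly to make the idea concrete; it falls far short of the statistical, collimator-aware methods cited above. In the sketch below, the raw values, the 90th-percentile threshold, and the assumption that the unexposed background occupies the bright end of the stored-value histogram are all illustrative choices.

```python
import numpy as np

def remove_background(image, percentile=90.0):
    """Set the collimated (unexposed) background to black using one global
    threshold taken from the histogram.  Assumes the stored pixel value rises
    with displayed brightness, so the white background sits at the top end."""
    img = image.astype(np.float64)
    threshold = np.percentile(img, percentile)
    out = img.copy()
    out[img >= threshold] = 0.0                 # background -> black
    return out

# Hypothetical CR image: anatomy in the center, unexposed border around it.
cr = np.full((512, 512), 4000.0)                # bright, unexposed background
cr[100:400, 150:350] = np.random.default_rng(4).uniform(500.0, 3000.0, (300, 200))
cleaned = remove_background(cr)
print(cleaned[0, 0], cleaned[250, 250] > 0)     # border 0.0, anatomy preserved
```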
3.4 FULL-FIELD DIRECT DIGITAL MAMMOGRAPHY

3.4.1 Screen-Film and Digital Mammography
Conventional screen-film mammography produces a very high quality mammogram on an 8 × 10 in. film. Some abnormalities in the mammogram require 50-µm spatial resolution to be recognized. For this reason, it is difficult to use CR or a laser film scanner to convert a mammogram to a digital image, hindering the integration of this modality's images into PACS. However, mammography examinations account for about 8% of all diagnostic procedures in a typical radiology department. During the past several years, because of support from the U.S. National Cancer Institute and the U.S. Army Medical Research and Development Command, some direct digital mammography systems have been developed through joint efforts between academic institutions and private industry. Some of these systems are in clinical use. In Section 3.4.2 we describe the principle of digital mammography, a very critical component in a totally digital imaging system in a hospital.
3.4.2 Full-Field Direct Digital Mammography: Slot-Scanning Method
There are two methods of obtaining a full-field direct digital mammogram: one is the imaging plate technology described in Section 3.3, but with higher-resolution imaging plates of different materials and higher-quantum-efficiency detector systems; the other is the slot-scanning method, which this section summarizes. The slot-scanning technology modifies the image receptor of a conventional mammography system by using a slot-scanning mechanism and detector system. The slot-scanning mechanism scans the breast with an X-ray fan beam, and the image is recorded by a CCD camera encompassed in the Bucky antiscatter grid of the mammography unit. Figure 3.16 shows a picture of a full-field direct digital mammography (FFDDM) system. The X-ray photons emitted from the X-ray tube are shaped by a collimator to become a fan beam. The width of the fan beam covers one dimension of the image area (e.g., the x-axis), and the fan beam sweeps in the other direction (y-axis). The movement of the detector system is synchronous with the scan of the
Figure 3.16 A slot-scanning digital mammography system. The slot, 300 pixels wide, covers the x-axis (4400 pixels). The X-ray beam sweeps (arrow) in the y-direction, producing over 5500 pixels. X, X-ray and collimator housing; C, breast compressor.
fan beam. The detector system of the FFDDM shown is composed of a thin phosphor screen coupled with four CCD detector arrays via a tapered fiber-optic bundle. Each CCD array is composed of 1100 × 300 CCD cells. The gap between any two adjacent CCD arrays requires a procedure called “butting” to minimize the loss of pixels. The phosphor screen converts the penetrating X-ray photons (i.e., the latent image) to light photons. The light photons pass through the fiber-optic bundle, reach the CCD cells, and then are transformed into electronic signals. The more light photons received by each CCD cell, the larger the signal that is produced. The electronic signals are quantized by an A/D converter to create a digital image. Finally, the image pixels travel through a data channel to the system memory of the FFDDM acquisition computer. Figure 3.17 shows a 4K × 5K × 12 bit digital mammogram obtained with the system shown in Figure 3.16. A screening mammography examination requires four images, two of each breast, producing a total of 160 Mbytes of image data.
Figure 3.17 A 4K × 5K × 12 bit digital mammogram obtained with the slot-scanning FFDDM, shown on a 2K × 2.5K monitor. The window at the upper part of the image is a magnifying glass showing a true 4K × 5K region. (Courtesy of Drs. E. Sickles and S. L. Lou.)
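The 160-Mbyte figure quoted above follows from simple arithmetic, as the fragment below shows; the only assumption added here is that 12-bit pixels are stored in 16-bit words, a common but not universal convention.

```python
# Back-of-envelope storage arithmetic for the slot-scanning FFDDM images above.
width, height = 4096, 5120            # "4K x 5K" matrix
bytes_per_pixel = 2                   # 12 bits kept in 16-bit words (assumed)
images_per_exam = 4                   # two views of each breast

per_image_mb = width * height * bytes_per_pixel / 2**20
per_exam_mb = per_image_mb * images_per_exam
print(f"{per_image_mb:.0f} MB per image, {per_exam_mb:.0f} MB per screening exam")
# -> 40 MB per image, 160 MB per exam, matching the figure quoted in the text
```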
3.5 DIGITAL RADIOGRAPHY

3.5.1 Some Disadvantages of the Computed Radiography System
CR has gradually replaced many conventional screen-film projection radiography procedures over the past years and has been successfully integrated into PACS operation. Its advantages are that it produces a digital image, eliminates the use of films, requires minimal change in the radiographic procedure room, and produces image quality acceptable for most examinations. However, the technology itself has certain inherent limitations. First, it requires two separate steps to form a digital image: a laser to release the light energy of the latent image from the IP and a photomultiplier to convert the light to electronic signals. Second, although the IP is a good image detector, its signal-to-noise ratio and spatial resolution are still not ideal for some specialized radiographic procedures. Third, the IP requires a high-intensity light to erase the residue of the latent image before it can be reused. This procedure adds an extra step to the image acquisition operation. Also, many IPs are needed in a CR installation, just as many screen cassettes are required in the screen-film detector system. The IP has a limited exposure life expectancy, is breakable (especially in the portable unit), and is expensive to replace. Manufacturers are working diligently on its improvement.
3.5.2 Digital Radiography
During the past five years, research laboratories and manufacturers have devoted tremendous energy and resources to investigating new digital radiography systems other than CR. The main emphases are to improve image quality and operational efficiency and to reduce the cost of projection radiography examinations. Digital radiography (DR) is an ideal candidate. To compete with conventional screen-film and CR, a good DR system should:

• Have a detector with high detector quantum efficiency (DQE), a spatial resolution of 2–3 line pairs/mm or higher, and a high signal-to-noise ratio
• Produce digital images of high quality
• Deliver a low dosage to patients
• Produce the digital image within seconds after X-ray exposure
• Comply with industrial standards
• Have an open architecture for connectivity
• Be easy to operate
• Be compact in size
• Offer competitive cost savings
Depending on the method used for the X-ray photon conversion, DR can be categorized into direct and indirect image capture methods. In indirect image capture, attenuated X-ray photons are first converted to light photons by the phosphor or the scintillator, from which the light photons are converted to electronic signals to form the DR image. The direct image capture method generates a digital image without going through the light photon conversion process. Figure 3.18 shows the
Figure 3.18 Direct and indirect image capture methods in digital radiography.
difference between the direct and the indirect digital capture methods. The advantage of the direct image capture method is that it eliminates the intermediate step of light photon conversion. The disadvantages are that the engineering involved in direct digital capture is more elaborate and that it is inherently difficult to use the detector for dynamic image acquisition because of the necessity of recharging the detector after each readout. The indirect capture method uses amorphous silicon panels coupled with a phosphor or scintillator layer, whereas the direct capture method uses an amorphous selenium panel. Two prevailing scanning modes in digital radiography are slot scanning and areal scanning. The digital mammography system discussed in Section 3.4.2 uses the slot-scanning method. Current technology for the areal detection mode uses flat-panel sensors. The flat panel can be one large panel or several smaller panels put together. The areal scan method has the advantage of fast image capture, but it also has two disadvantages: the first is high X-ray scattering; the second is that the manufacturing of large flat panels is technically difficult. The DR design is flexible: it can be used as an add-on unit in a typical radiography room or as a dedicated system. In the dedicated category, some designs can be used both as a tabletop unit attached to a C-arm radiographic device and as an upright unit, as shown in Figure 3.19. Figure 3.20 illustrates the formation of a DR image, comparing it with the formation of a CR image shown in Figure 3.11. A typical DR unit produces a 2000 × 2500 × 12 bit image instantaneously after the exposure. Figure 3.21 shows the system performance in terms of the edge spread function (ESF), line spread function (LSF), and modulation transfer function (MTF) of a DR unit (see Section 2.5.1) (Cao and Huang, 2000). In this system, the 10% MTF at the center is about 2 line pairs/mm.
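The ESF-to-LSF-to-MTF chain referred to above (and in Section 2.5.1) can be reproduced numerically. The sketch below is a generic estimate, not the method of Cao and Huang (2000): it differentiates a measured edge profile to obtain the LSF, Fourier transforms it to obtain the MTF, and reports the frequency at which the MTF falls to 10%; the sample edge profile and the 0.1-mm pixel pitch are hypothetical.

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    """Differentiate the edge spread function to get the line spread function,
    then take the magnitude of its Fourier transform, normalized to 1 at zero
    frequency, as the MTF along that direction."""
    lsf = np.gradient(esf)                                  # ESF -> LSF
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)     # cycles/mm (lp/mm)
    return freqs, mtf / mtf[0]

def freq_at_mtf(freqs, mtf, level=0.10):
    """First spatial frequency at which the MTF falls to the given level."""
    below = np.where(mtf <= level)[0]
    return freqs[below[0]] if below.size else None

# Hypothetical measured ESF sampled at a 0.1-mm pitch (a smooth synthetic edge):
x = np.arange(-5.0, 5.0, 0.1)
esf = 0.5 * (1.0 + np.tanh(x / 0.3))
freqs, mtf = mtf_from_esf(esf, pixel_pitch_mm=0.1)
print(freq_at_mtf(freqs, mtf, 0.10), "lp/mm at 10% MTF")
```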
3.5.3 Integration of Digital Radiography with PACS
One major advantage of DR is that it can minimize the number of steps in patient work flow, which translates to a better health care delivery system. To fully utilize
Figure 3.19 Three configurations of digital radiography design: (A) dedicated C-arm system; (B) dedicated chest; (C) add-on.
Figure 3.20 Steps in the formation of a DR image, comparing it with that of a CR image shown in Fig. 3.11.
this capability of DR, it should be integrated with PACS or teleradiology operation. The main criterion of an effective integration is to have the DR images available for display as soon as they are captured. Figure 3.22 shows a method of integration. Following Fig. 3.22, while the DR image is being generated, the hospital information system (HIS) transmits admission, discharge and transfer (ADT) information in HL-7 standard (see Chapter 7) to the PACS archive. From there, it triggers
the prefetch function to retrieve relevant images/data from the patient’s historical examinations and appends them to the patient folder in the archive. The folder is forwarded to the workstations after the examination. The network used for PACS is a local area network (LAN). For teleradiology, a wide area network (WAN) is used.
Figure 3.21 The ESF, LSF, and MTF of a digital radiography unit.
Figure 3.22 Integration of digital radiography with PACS and teleradiology.
After the DR image is available from the imaging system, the DICOM (digital imaging and communication in medicine, see Chapter 7) standard should be used for system integration. Certain image preprocessing is performed to enhance its visual quality. For example, in digital mammography, preprocessing functions
include the segmentation of the breast from the background and the determination of the ranges of pixel values of various breast tissues for automatic window and level adjustment. In digital radiography, removal of the background in the image due to X-ray collimation (see Section 3.3.4) and automatic lookup table generation for various parts of the anatomical structure are crucial. After preprocessing, the image is routed immediately to the proper workstations pertinent to the clinical applications; there, the image is appended to the patient folder, which has already been forwarded by the archive. The current DR image and historical images can be displayed simultaneously at the workstations for comparison. The current image with the patient folder is also sent back to PACS for long-term archiving. Another critical component in DR system integration is the display workstation. The workstation should be able to display DR images with the highest quality possible. The image display time should be within several seconds, with preadjustment and instantaneous window and level adjustments. A flat panel liquid crystal display (LCD) should be used because of its excellent display quality, high brightness, light weight and small size, and easily tilted angle to accommodate the viewing environment (see Chapter 11).
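One of the preprocessing steps mentioned above, automatic window and level selection, can be approximated from the image histogram. The sketch below is illustrative only: the percentile cut-offs and image values are assumptions, and clinical systems use anatomy- and exam-specific lookup tables rather than a simple percentile rule.

```python
import numpy as np

def auto_window_level(image, low_pct=5.0, high_pct=95.0, exclude_zero=True):
    """Derive a display window and level from the pixel-value histogram.
    Percentile cut-offs are illustrative; production systems use anatomy- and
    exam-specific lookup tables (see Chapter 11)."""
    values = image.astype(np.float64).ravel()
    if exclude_zero:                       # ignore background already set to black
        values = values[values > 0]
    lo, hi = np.percentile(values, [low_pct, high_pct])
    window = hi - lo
    level = (hi + lo) / 2.0
    return window, level

# Hypothetical background-removed DR image with 12-bit pixel values:
rng = np.random.default_rng(5)
dr = np.zeros((2000, 2500))
dr[200:1800, 300:2200] = rng.uniform(300.0, 3500.0, (1600, 1900))
print(auto_window_level(dr))
```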
3.5.4 Applications of DR in the Clinical Environment
The outpatient clinic, emergency room, and ambulatory care clinical environments are perfect for DR applications. Figure 3.23 shows a scenario of patient work flow in a filmless outpatient clinic.
Figure 3.23 Work flow in an integrated digital radiography with PACS outpatient operation environment.
1. The patient first registers, changes garments, queues up for the examination, walks to the DR unit, is exposed, changes back into street clothes, and walks to the assigned physician room, where the DR images are already available for viewing.
2. In the background, while the patient registers, the HIS sends the patient information to the PACS outpatient server.
3. The server retrieves relevant historical images, waits until the DR images are ready, and appends the new images to the patient image folder.
4. The server forwards the patient image folder to the assigned physician room (Rm 1, Rm 2), where the images are displayed automatically when the patient arrives.
5. Images can also be sent to the off-site expert center through teleradiology for diagnosis.

This patient work flow is most efficient and cost-effective. The operation is totally automatic, filmless, and paperless. It eliminates all human intervention except during the X-ray procedure. We will revisit this scenario in later chapters.
CHAPTER 4
Computed Tomography, Magnetic Resonance, Ultrasound, Nuclear Medicine, and Light Imaging
This chapter discusses two categories of medical images: sectional images and light imaging. In sectional imaging, we consider X-ray computed tomography (XCT), magnetic resonance imaging (MRI), ultrasound (US) imaging, and single-photon and positron emission computed tomography (SPECT and PET). Sectional images are obtained based on image reconstruction theory. For this reason, we first discuss image reconstruction from projections in Section 4.1. In light imaging, we consider microscopic and endoscopic imaging; both methods use the image chain we discussed in Chapter 3. CT, MRI, and US are standard sectional imaging techniques producing sectional three-dimensional (3-D) data volumes. Recent advances in these techniques produce very large 3-D data sets; it is not unusual to have hundreds or even thousands of CT or MR images in one examination. Archiving, transmission, and display of these large data volumes have become a technical challenge. SPECT and PET use tomographic techniques similar to those of XCT except that the energy sources used are different. The reader is referred to Chapter 2, Table 2.1, for the image and examination sizes of these sectional imaging modalities.
4.1 IMAGE RECONSTRUCTION FROM PROJECTIONS
Because most sectional images, like MRI and CT, are generated based on image reconstruction from projections, we first summarize the Fourier projection theorem, the algebraic reconstruction method, and the filtered back-projection method before discussing the imaging modalities.
4.1.1 The Fourier Projection Theorem
Let f(x, y) be a two-dimensional (2-D) cross-sectional image of a 3-D object. The image reconstruction theorem states that f(x, y) can be reconstructed from its cross-sectional one-dimensional (1-D) projections. In general, 180 consecutive projections in 1-degree increments are necessary to produce a satisfactory-quality image, and using more projections always results in a better reconstructed image.
Mathematically, the image reconstruction theorem can be described with the help of the Fourier transform (FT) discussed in Section 2.4. Let f(x, y) represent the 2-D image to be reconstructed, and let p(x, θ) be the 1-D projection of f(x, y) onto an axis at angle θ, which can be measured experimentally (see Fig. 4.1, the zero-degree and θ projections). In the case of X-ray CT, we can consider p(x, θ) as the total linear attenuation of tissues traversed by a collimated X-ray beam at location x. Then, when θ = 0,

p(x, 0) = ∫_{−∞}^{+∞} f(x, y) dy                    (4.1)
The 1-D Fourier transform of p(x, 0) has the form

P(u) = ∫_{−∞}^{+∞} [ ∫_{−∞}^{+∞} f(x, y) dy ] exp(−i2πux) dx                    (4.2)
Equations (4.1) and (4.2) imply that the 1-D Fourier transform of a 1-D projection of a 2-D image is identical to the corresponding central section of the 2-D Fourier transform of the object. For example, the 2-D image can be a transverse (cross) sectional X-ray image of the body, and the 1-D projections can be the X-ray attenuation profiles (projection) of the same section obtained from a linear X-ray scan at
Figure 4.1 Principle of the Fourier projection theorem for image reconstruction from projections. F(0, 0) is at the center of the 2-D FT; low-frequency components are represented at the center region. The numerals represent the steps described in the text. P(x, θ): X-ray projection at angle θ; F(u, θ): 1-D Fourier transform of P(x, θ); IFT: inverse Fourier transform.
If 180 projections at 1-degree increments are accumulated and their 1-D Fourier transforms are performed, each of these 180 1-D Fourier transforms represents a corresponding central line of the 2-D Fourier transform of the X-ray cross-sectional image. The collection of all these 180 1-D Fourier transforms is the 2-D Fourier transform of f(x, y). The steps of a 2-D image reconstruction from its 1-D projections shown in Figure 4.1 are as follows:
(1) Obtain 180 1-D projections of f(x, y), p(x, θ), where θ = 1, . . . , 180.
(2) Perform the 1-D FT on each projection.
(3) Arrange all these 1-D FTs according to their corresponding angles in the frequency domain. The result is the 2-D FT of f(x, y).
(4) Perform the inverse 2-D FT of (3), which gives f(x, y).
The Fourier projection theorem forms the basis of tomographic image reconstruction. Other methods that can also be used to reconstruct a 2-D image from its projections are discussed later in this chapter. We emphasize that the reconstructed image from projections is not always exact; it is only an approximation of the original image. A different reconstruction method will give a slightly different version of the original image. Because all of these methods require extensive computation, specially designed image reconstruction hardware is normally used to implement the algorithm. The term "computerized (computed) tomography" (CT) indicates that the image is obtained from its projections with a reconstruction method. If the 1-D projections are obtained from X-ray transmission (attenuation) profiles, the procedure is called XCT; if they are obtained from single-photon γ-ray emission, positron emission, or ultrasound signals, the procedures are called SPECT, PET, and 3-D US, respectively. In the following sections, we summarize the algebraic and filtered back-projection methods with simple numerical examples.
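The projection theorem can be verified numerically. The following Python sketch is an illustrative addition (the synthetic test image, the array size, and the use of NumPy are assumptions, not part of the text); it checks that the 1-D FT of the 0-degree projection of a small image equals the corresponding central section of the image's 2-D FT.

import numpy as np

# Build a small synthetic 2-D image f(x, y); the phantom itself is arbitrary.
N = 64
y, x = np.mgrid[0:N, 0:N]
f = ((x - N / 2) ** 2 + (y - N / 2) ** 2 < (N / 4) ** 2).astype(float)

# Step 1: the 0-degree projection p(x, 0) is the sum of f over y (Eq. 4.1).
p0 = f.sum(axis=0)

# Step 2: 1-D Fourier transform of the projection (Eq. 4.2).
P0 = np.fft.fft(p0)

# The corresponding central section of the 2-D Fourier transform of f:
F = np.fft.fft2(f)
central_section = F[0, :]          # the row through zero frequency

# The two agree to numerical precision, as the projection theorem states.
print(np.allclose(P0, central_section))   # expected: True

The same check can be repeated for other angles by rotating the test image before projecting it.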
4.1.2 The Algebraic Reconstruction Method
The algebraic reconstruction method is often used for the reconstruction of images from an incomplete number of projections (i.e., <180°). We use a numerical example to illustrate the method. Let f(x, y) be a 2 × 2 image with the following pixel values:
f(x, y) =
  1  2
  3  4

The four projections of this image are as follows:
0° projection: 4, 6
45° projection: 5 (1 and 4 are ignored for simplicity)
90° projection: 3, 7
135° projection: 5 (3 and 2 are ignored for simplicity)
Combining this information, one obtains the 2 × 2 image surrounded by its four projections: the 0° projections 4 and 6 (column sums), the 45° projection 5 (anti-diagonal), the 90° projections 3 and 7 (row sums), and the 135° projection 5 (main diagonal).
The problem definition is to reconstruct the 2 × 2 image f(x, y), which is unknown, from these four given projections obtained from direct measurements. The algebraic reconstruction of the 2 × 2 image from these four known projections proceeds stepwise as follows:
1. Use 0 as the arbitrary starting value of every pixel:
  0  0
  0  0
2. 0° projections: compute (4 - 0 - 0)/2 = 2 and (6 - 0 - 0)/2 = 3, and add the results to the corresponding pixels (the two columns):
  2  3
  2  3
3. 45° projection: compute (5 - 2 - 3)/2 = 0; the 45° projection does not change the pixel values.
4. 90° projections: compute (3 - 2 - 3)/2 = -1 and (7 - 2 - 3)/2 = 1, and add the results to the corresponding pixels (the two rows):
  1  2
  3  4
5. 135° projection: compute (5 - 1 - 4)/2 = 0; no change. FINAL FORM:
  1  2
  3  4
From the last step, it is seen that the result is an exact reconstruction (by pure chance) of the original 2 × 2 image f(x, y). It requires only four projections because f(x, y) is a 2 × 2 image. For a 512 × 512 image, over 180 projections would be required, each with sufficient data points in the projection to render a good-quality image.
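The correction-and-update cycle illustrated above can be written compactly in code. The following Python sketch is an illustrative addition (the list-of-rays representation, variable names, and use of NumPy are assumptions); it reproduces the 2 × 2 example, ignoring the corner rays of the diagonal projections as the text does.

import numpy as np

# Each projection is a list of rays; each ray is (measured value, pixel indices it crosses).
# Pixels are numbered 0 1 / 2 3 for the 2 x 2 image.
projections = [
    [(4, [0, 2]), (6, [1, 3])],   # 0-degree  (column sums)
    [(5, [1, 2])],                # 45-degree (anti-diagonal)
    [(3, [0, 1]), (7, [2, 3])],   # 90-degree (row sums)
    [(5, [0, 3])],                # 135-degree (main diagonal)
]

img = np.zeros(4)                 # arbitrary starting value of 0 for every pixel
for rays in projections:
    for measured, pixels in rays:
        correction = (measured - img[pixels].sum()) / len(pixels)
        img[pixels] += correction # distribute the difference equally along the ray

print(img.reshape(2, 2))          # [[1. 2.] [3. 4.]], the original image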
4.1.3 The Filtered (Convolution) Back-Projection Method
The filtered back-projection method requires two components: the back-projection algorithm and the selection of a filter to modify the projection data. The selection of a proper filter for a given anatomical region is the key to obtaining a good reconstruction from the filtered (convolution) back-projection method. This is the method of choice for almost all XCT scanners.
4.1.3.1 A Numerical Example Consider the example introduced in Section 4.1.2. We now wish to reconstruct the 2 × 2 matrix f(x, y) from its four known projections with the filtered back-projection method. The procedure is to first select a filter function, convolve it with each projection, and then back-project the convolved data to form an image.
For this example, the filter function (-1/2, 1, -1/2) is used. This means that when each projection is convolved with this filter function, the data point of the projection under consideration is multiplied by "1," and both points one pixel away from the data point under consideration are multiplied by "-1/2." Thus, when the projection [4, 6] is convolved with (-1/2, 1, -1/2), the result is (-2, 1, 4, -3), because

  -2   4  -2
      -3   6  -3
  ----------------
  -2   1   4  -3

Back-projecting this result onto the picture, we have

  -2   1   4  -3
  -2   1   4  -3
The data points -2 and -3 outside the domain of the 2 × 2 reconstructed image are called reconstruction noise and are truncated. The following step-by-step illustration of this method again yields an exact reconstruction (again, by pure chance) of the original f(x, y):
1. Use 0 as the arbitrary starting value of every pixel:
  0  0
  0  0
2. 0° projection (4, 6): convolve with the filter to obtain (-2, 1, 4, -3) and back-project; truncating the noise values, this is equivalent to adding 1 to the first column and 4 to the second column:
  1  4
  1  4
3. 45° projection (5): convolve to obtain (-5/2, 5, -5/2) and back-project; 5 is added along the anti-diagonal and -5/2 along the two corner rays:
  -3/2    9
   6     3/2
4. 90° projection (3, 7): convolve to obtain (-3/2, -1/2, 11/2, -7/2) and back-project; -1/2 is added along the first row and 11/2 along the second row:
  -2     17/2
  23/2    7
5. 135° projection (5): convolve to obtain (-5/2, 5, -5/2) and back-project; 5 is added along the main diagonal and -5/2 along the two corner rays:
  3   6
  9  12
The accumulated back-projection is three times the original f(x, y); rescaling by this constant factor recovers the exact original:
  1  2
  3  4
4.1.3.2 Mathematical Formulation The mathematical formulation of the filtered back-projection method is given in Eq. (4.3):

f(x, y) = \int_{0}^{\pi} h(t) * m(t, \theta)\, d\theta     (4.3)

where m(t, θ) is the projection sample at position t of the projection at angle θ, h(t) is the filter function, and * is the convolution operator.
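A minimal Python sketch of the discrete filtered back-projection used in the numerical example of Section 4.1.3.1 follows. It is an illustrative addition; the ray bookkeeping and the final rescaling step are assumptions made to keep the example self-contained, not part of the text.

import numpy as np

kernel = np.array([-0.5, 1.0, -0.5])   # the filter (-1/2, 1, -1/2)

# Pixels are numbered 0 1 / 2 3.  For each view we give the measured projection
# and, aligned with the FULL convolution output, the set of pixels each
# back-projection ray crosses; [] means the ray misses the image entirely.
views = [
    (np.array([4.0, 6.0]), [[], [0, 2], [1, 3], []]),   # 0 degrees (columns)
    (np.array([5.0]),      [[0], [1, 2], [3]]),         # 45 degrees
    (np.array([3.0, 7.0]), [[], [0, 1], [2, 3], []]),   # 90 degrees (rows)
    (np.array([5.0]),      [[1], [0, 3], [2]]),         # 135 degrees
]

img = np.zeros(4)
for data, rays in views:
    filtered = np.convolve(data, kernel)    # e.g. [4, 6] -> [-2, 1, 4, -3]
    for value, pixels in zip(filtered, rays):
        if pixels:                          # rays outside the image are truncated
            img[pixels] += value            # back-project along the ray

print(img.reshape(2, 2))        # [[ 3.  6.] [ 9. 12.]] = 3 x the original image
print(img.reshape(2, 2) / 3)    # rescaled: [[1. 2.] [3. 4.]]

In practical scanners, h(t) is a finely sampled ramp-type filter and the back-projection is accumulated over a few hundred angles rather than four.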
4.2 TRANSMISSION X-RAY COMPUTED TOMOGRAPHY (XCT)
4.2.1 Conventional XCT
A CT scanner consists of a scanning gantry housing an X-ray tube and a detector unit, and a movable bed that can align a specific cross section of the patient with the gantry. The gantry provides a fixed relative position between the X-ray tube and the detector unit. A scanning mode is the procedure of collecting X-ray attenuation profiles (projections) from a transverse (cross) section of the body. From these projections, the CT scanner's computer program or back-projector hardware reconstructs the corresponding cross-sectional image of the body. Figures 4.2 and 4.3 show the schematics of the two most popular XCT scanners (third and fourth generations), both using an X-ray fan beam. These types of XCT take about 5 s for one sectional scan and more time for image reconstruction.
4.2.2 Spiral (Helical) XCT
Three other configurations can improve the scanning speed: the helical (spiral) CT, the cine CT (Section 4.2.3), and the multislice CT (Section 4.2.4).
Figure 4.2 Schematic of the rotation scanning mode using a fan-beam X-ray. The detector array rotates with the X-ray tube as a unit.
Figure 4.3 Schematic of the rotation scanning mode with a stationary scintillation detector array. Only the X-ray source rotates.
The helical (spiral) CT is based on the design of the third- or fourth-generation scanner, the cine CT uses a scanning electron beam X-ray tube, and the multislice CT uses a cone beam instead of a fan beam. The CT configurations shown in Figures 4.2 and 4.3 have one common characteristic: the patient's bed remains stationary during the scanning; after a complete scan, the patient's bed advances a certain distance and the second scan resumes. The start-and-stop motions of the bed slow down the scanning operation. If the patient's bed could assume a forward motion at a constant speed while the scanning gantry rotated continuously, the total scanning time of a multiple-section examination could be reduced. Such a configuration is not possible with the conventional designs, however, because the scanning gantry is connected to the external high-energy transformer and power supply by electrical cables. The spiral or helical CT design does not involve cables. Figure 4.4 illustrates the principle of spiral CT. There are two possible scanning modes, single helical and cluster helical. In the single helical mode, the bed advances linearly while the gantry rotates in sync for a period of time, say 30 s. In the cluster helical mode, the simultaneous rotation and translation lasts only 15 s, whereupon both motions stop for 7 s before resuming. The single helical mode is used for patients who can hold their breath for a longer period of time, whereas the cluster helical mode is for patients who need to take a breath after 15 s. The design of the helical XCT, introduced in the late 1980s, is based on three technological advances: the slip-ring gantry, improved detector efficiency, and greater X-ray tube cooling capability. The slip-ring gantry contains a set of rings and electrical components that rotate, slide, and make contact to generate both high energy (to supply the X-ray tube and generator) and standard energy (to supply power to other electrical and computer components).
Figure 4.4 Helical (spiral) CT scanning modes.
For this reason, no electrical cables are necessary to connect the gantry and external components. During helical scanning, the term "pitch" is used to define the relationship between the X-ray beam collimation and the velocity of the bed movement:

pitch = (table movement in mm per gantry rotation) / (slice thickness)

Thus, a pitch equal to 1 means that the gantry rotates a complete 360° as the bed advances 1.5 mm in 1 s, which gives a slice thickness of 1.5 mm. During this time, raw data are collected covering 360° and 1.5 mm. Assuming that one rotation takes 1 s, for the single helical scan mode, 30 s of raw data are continuously collected while the bed moves 45 mm. After the data collection phase, the raw data are interpolated and/or extrapolated to sectional projections. These organized projections are used to reconstruct individual sectional images; in this case, they are 1.5-mm contiguous slices. Reconstruction slice thickness can be from 1.5 mm to 1 cm, depending on the interpolation and extrapolation methods used. The advantages of spiral CT scans are the speed of scanning, the ability to select slices from the continuous data to reconstruct slices at peak contrast medium, the retrospective creation of overlapping or thin slices, and volumetric data collection. The disadvantages are helical reconstruction artifacts and potential unsharpness of object boundaries.
4.2.3 Cine XCT
Cine XCT, introduced in the early 1980s, uses a completely different X-ray technology, namely, an electron beam X-ray tube. This scanner is fast enough to capture the motion of the heart.
Figure 4.5 Schematic of the cine XCT. Source: Diagram adapted from a technical brochure of Imatron, Inc.
The detector array of the system is based on the fourth-generation stationary detector array (scintillator and photodiode). As shown schematically in Figure 4.5, an electron beam (1) is accelerated through the X-ray tube and bent by the deflection coil (2) toward one of the four target rings (3). Collimators at the exit of the tube restrict the X-ray beam to a 30° fan beam, which forms the energy source for scanning. Because there are four tungsten target rings, each of which has a fairly large area (210° tungsten, 90-cm radius) for heat dissipation, the X-ray fan beam can sustain the energy level required for continuous scanning in the various scanning modes. In addition, the detector and data collection technologies used in this system allow very rapid data acquisition. Two detector rings (indicated by 4 in Fig. 4.5) allow data acquisition for two consecutive sections simultaneously. For example, in the slow acquisition mode with a 100-ms scanning time and an 8-ms interscan delay, cine XCT can provide 9 scans/s; in the fast acquisition mode with a 20-ms scanning time, it can provide 34 scans/s. The scanning can be done continuously on the same body section (to collect dynamic motion data of the section) or along the axis of the patient (to observe vascular motion). Because of its fast scanning speed, cine XCT is used for cardiac motion and vascular studies and for emergency room scans. Until multislice XCT became available, cine XCT was the fastest scanner for dynamic studies.
4.2.4 Multislice XCT
4.2.4.1 Principles In spiral XCT, the patient's bed moves during the scan, but the X-ray beam is a fan beam perpendicular to the patient axis and the detector system is built to collect data for the reconstruction of one slice. If the X-ray beam is shaped into a three-dimensional cone beam with the z-axis parallel to the patient's axis, and if a multiple-detector array (in the z-direction) system is used to collect the data, then we have a multislice XCT scanner (see Fig. 4.6). Multislice XCT, in essence, is also a spiral scan except that the X-ray beam is shaped to a cone beam geometry. Multislice XCT can obtain many images in one examination with a very rapid acquisition time, for example, 160 images in 20 s, or 8 images/s, or 4 MB/s of raw data. Figure 4.6 shows the schematic. It is seen from this figure that a full rotation of the cone beam is necessary to collect sufficient projection data to reconstruct a number of slices equal to the z-axis collimation of the detector system (see below for the definition). Multislice XCT uses several new technologies:
Figure 4.6 Geometry of the multislice XCT. The patient axis is in the z-direction. The X-ray (X), shaped as a collimated cone beam, rotates around the z-axis 360° continuously, in sync with the patient's bed moving linearly in the z-direction. The detector system (D) is a combination of detector arrays shaped in a concave surface facing the X-ray beam. The number of slices per 360° rotation is determined by two factors: the number of detector arrays in the z-direction and the method used to recombine the cone beam projection data into transverse sectional projections (see Fig. 4.1). The reconstructed images are transverse views perpendicular to the z-axis. If the cone beam does not rotate while the patient's bed is moving, the reconstructed image is equivalent to a digital fluorographic image.
(1) New detector: A ceramic-type detector is used to replace traditional crystal technology. Compared with crystal scintillators, the ceramic detector has the advantages of more light photons in the output, less afterglow time, higher resistance to radiation and mechanical damage, and the ability to be shaped much thinner (about half the thickness) for an equivalent amount of X-ray absorption.
(2) Real-time dose modulation: This is a method to minimize the dose delivered to the patient using the cone beam geometry by modulating the milliampere-seconds (mAs) of the X-ray beam during the scan.
(3) Cone beam geometry image reconstruction algorithm: This algorithm provides efficient collection and recombination of cone beam X-ray projections (raw data) for sectional reconstruction.
(4) High-speed data output channel: During one examination, say, for 160 images, much more data must be collected during the scanning. Fast I/O data channels from the detector system to image reconstruction are necessary.
If the patient bed is moving linearly but the gantry does not rotate, the result is a digital fluorographic image with better image quality than that discussed in Section 3.2.2.
4.2.4.2 Some Standard Terminology Used in Multislice XCT Recall the term "pitch" defined for spiral XCT; with cone beam-multidetector scanning, because of the multidetector arrays in the z-direction (see Fig. 4.6), the table movement can be many times the thickness of an individual slice. For example, take a 16 × 1.5 mm detector system (16 arrays with 1.5-mm thickness per array), with the slice thickness of an individual image being 1.5 mm, and use the spiral-scan definition of pitch:

pitch = (table movement in mm per gantry rotation) / (slice thickness) = (16 × 1.5 mm per rotation) / (1.5 mm) = (24 mm per rotation) / (1.5 mm) = 16

This means the table moving 24 mm/rotation with a reconstructed slice thickness of 1.5 mm would have a pitch of 16 (see Section 4.2.2). This case also represents contiguous scans. Comparing this example with that shown in Section 4.2.2 for a single-slice scan, the definition of "pitch" shows some discrepancy. This discrepancy is due to the size of the multidetector arrays. Because different manufacturers produce different sizes of multidetector arrays, the word "pitch" becomes confusing. For this reason, the International Electrotechnical Commission (IEC) accepts the following definition of pitch (now often referred to as the IEC pitch):
z-Axis collimation (T) = the width of the tomographic section along the z-axis imaged by one data channel (array). In multidetector row (multislice) CT scanners, several detector elements may be grouped together to form one data channel (array).
Number of data channels (N) = the number of tomographic sections imaged in a single axial scan.
Table speed or increment (I) = the table increment per axial scan or the table increment per rotation of the X-ray tube in a helical (spiral) scan.
Pitch (P) = table speed I (mm/rotation) / (N · T)
Thus, for a 16-detector scanner in a 16 × 1.5 mm scan mode, N = 16 and T = 1.5 mm, and if the table speed = 24 mm/rotation, then P = 1, a contiguous scan. If the table speed is 36 mm/rotation, then the pitch is 36/(16 × 1.5) = 1.5.
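The IEC definition can be captured in a few lines of code. The sketch below is an illustrative addition (the function name and argument names are assumptions); it reproduces the two numerical cases just described.

def iec_pitch(table_speed_mm_per_rotation, n_channels, collimation_mm):
    """IEC pitch P = I / (N * T), as defined in Section 4.2.4.2."""
    return table_speed_mm_per_rotation / (n_channels * collimation_mm)

# The two numerical cases from the text (16 x 1.5 mm scan mode):
print(iec_pitch(24.0, 16, 1.5))   # 1.0 -> contiguous scan
print(iec_pitch(36.0, 16, 1.5))   # 1.5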
4.2.5 Four-Dimensional (4-D) XCT
Referring to Figure 4.6, with the bed stationary but the gantry continuously rotating, we would have a 4-D XCT, with the fourth dimension being time. In this scanning mode, the physiological dynamics of the body can be visualized in 3-D over time. Current multislice XCT, with a limited size of detector arrays in the z-direction and a data collection system of 100 MB/s, can only visualize a limited segment of the body. To realize the potential clinical applications of 4-D XCT, several challenges must be addressed:
1. Extend the cone beam X-ray and the length of the detector array in the z-direction. Currently, a detector system with 256 arrays and 912 detectors per array is available in some prototype 4-D XCT systems.
2. Improve the efficiency and performance of the A/D conversion at the detector.
3. Increase the data transfer rate between the data acquisition system and the display system from 100 MB/s to 1 GB/s.
4. Revolutionize the display method for 4-D images.
4-D XCT can produce images in the gigabyte-per-examination range. Methods of archiving, communication, and display are challenging topics in PACS design and implementation.
4.2.6 Components and Data Flow of an XCT Scanner
Figure 4.7 shows the major components and data flow in an XCT. Included are a gantry housing the X-ray tube, the detector system, and signal processing/conditioning circuits; a front-end preprocessor unit for cone/fan beam projection data corrections and recombination into transverse sectional projection data; a high-speed computational processor; a hardware back-projector unit; and a video controller for displaying images. In XCT, the CT number, or pixel/voxel value, or Hounsfield number, represents the relative X-ray attenuation coefficient of the tissue in the pixel/voxel, defined as follows:

CT number = K(μ - μW)/μW

where μ is the attenuation coefficient of the material under consideration, μW is the attenuation coefficient of water, and K is a constant set by the manufacturer.
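As a worked illustration of the CT number definition (an addition for clarity, not from the text): K is manufacturer dependent, and the choice K = 1000, which yields the conventional Hounsfield scale, together with the sample attenuation coefficient below, are assumptions made only for the example.

def ct_number(mu, mu_water, k=1000.0):
    """CT number of a voxel with linear attenuation coefficient mu.

    k is the manufacturer-set scaling constant; k = 1000 gives the conventional
    Hounsfield scale, on which water is 0 and air is approximately -1000.
    """
    return k * (mu - mu_water) / mu_water

mu_water = 0.19   # cm^-1, an illustrative value at typical CT energies
print(ct_number(mu_water, mu_water))   # 0.0 (water)
print(ct_number(0.0, mu_water))        # -1000.0 (air, mu ~ 0)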
4.2.7 XCT Image Data
4.2.7.1 Slice Thickness Current multislice CT scanners can feature up to 32 detectors in an array. In a spiral scan, multiple slices of data can be acquired simultaneously for different detector sizes, and 0.75-, 1-, 2-, 3-, 4-, 5-, 6-, 7-, 8-, and 10-mm slice thicknesses can be reconstructed.
4.2.7.2 Image Data Size A standard chest CT exam covering between 300 and 400 mm can yield image sets from 150–200 images all the way up to 600–800 images, depending on the slice thickness, or data sizes from 75 MB up to 400 MB.
Figure 4.7 Components and data flow of an XCT scanner. The scanning and data collection times, in general, are shorter than the image reconstruction time.
Performance-wise, that same standard chest CT exam can be acquired in 0.75-mm slices in 10 s. A whole-body CT can produce up to 2500 images, or 1250 MB (1.25 GB) of data. Each image is 512 × 512 × 2 bytes (0.5 MB) in size.
4.2.7.3 Data Flow/Postprocessing The fan/cone beam raw data are obtained by the acquisition host computer. Slice thickness reconstructions are performed on the raw data. Once the set of images is acquired in DICOM format (see Chapter 7), any postprocessing is performed on the DICOM data. This includes sagittal, coronal, and off-axis slice reformats as well as 3-D postprocessing. Sometimes the cone beam raw data are saved for future reconstruction at different slice thicknesses. Some newer scanners feature a secondary computer, which shares the same database as the acquisition host computer. This secondary computer can perform the same postprocessing functions while the scanner is acquiring new patient data. It can also perform network send jobs to PACS or another DICOM destination (e.g., a highly specialized 3-D processing workstation) and maintains a send queue, thus relieving the acquisition host computer of these functions and improving system throughput.
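The examination sizes quoted above follow directly from the per-image size. The following sketch is an illustrative addition (the helper name and the use of 2**20-byte megabytes are assumptions chosen to match the figures in the text).

def exam_size_mb(n_images, rows=512, cols=512, bytes_per_pixel=2):
    """Uncompressed size of a CT examination in megabytes (2**20 bytes)."""
    return n_images * rows * cols * bytes_per_pixel / 2**20

print(exam_size_mb(150))    # 75.0 MB    (low end of a standard chest exam)
print(exam_size_mb(800))    # 400.0 MB   (high end of a standard chest exam)
print(exam_size_mb(2500))   # 1250.0 MB  (whole-body CT, about 1.25 GB)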
4.3 EMISSION COMPUTED TOMOGRAPHY
Emission computed tomography (ECT) has many characteristics in common with transmission X-ray CT. The main difference between the two techniques is the source of radiation used. In ECT, the radionuclide administered to the patient in the form of a radiopharmaceutical, either by injection or by inhalation, is used as the energy source instead of an external X-ray beam. The principle of ECT is based on nuclear medicine scanning, which is discussed in Section 4.5. It is important to select a dose-efficient detector system for an ECT system for two reasons. First, the quantity to be measured in ECT is the distribution of the radionuclide in the body, which changes with time as a result of flow and biochemical kinetics in the body; thus all the necessary measurements must be made in a short period of time. Second, the amount of isotope administered must be minimal to limit the dose delivered to the patient. Therefore, detector efficiency plays a crucial role in selecting a scintillator for ECT systems. The basic principle of image reconstruction is the same in ECT as in transmission CT, except that in ECT the γ-rays are attenuated during their flight from the emitting nuclei to the detectors. To minimize the contribution from scattered radiation, ECT uses the monoenergetic character of the emission to set up a counting window that discriminates the lower-energy scattered radiation from the higher-energy primary radiation. There are two major categories of ECT: single-photon emission CT (SPECT) and positron emission CT (PET).
4.3.1 Single-Photon Emission CT (SPECT)
There are many different designs for SPECT, but only rotating gamma camera systems (see Figure 4.8) are commercially available.
Figure 4.8 Schematic of a single-photon emission CT (SPECT).
In a rotating camera system, the gamma camera is rotated around the object, and a series of two-dimensional images is reconstructed and stored for processing. The camera is composed of a large scintillation crystal with a diameter of 30–50 cm and a number of photomultiplier tubes (PMTs) attached to the opposite surface of the crystal. When a γ-ray photon interacts with the crystal, the light generated from the photoelectric effect is distributed among the neighboring PMTs. By measuring the relative signal of each PMT, the camera can locate the interaction position of each event. The drawback of this system is the difficulty of maintaining a uniform speed of rotation of a rather heavy camera. Figure 4.8 shows the schematic of a SPECT. Because a typical tomographic study takes 15–20 min to complete, it is important to maintain patient immobilization. To provide the best sensitivity and resolution, it is desirable to have the camera as close to the patient as possible. Because the width of the body is greater than its thickness, an elliptical orbit of rotation of the camera tends to produce a higher-resolution image. Different collimators are used for different applications. In general, the reconstruction algorithm must be modified and the attenuation values corrected for each type of collimator. For example, a single-plane converging collimator will need a fan beam reconstruction algorithm, and a parallel collimator will need a parallel beam algorithm. The three methods of correcting attenuation values based on the assumption of a constant attenuation value are summarized as follows.
1. Geometric mean modification. Each data point in a projection is corrected by the geometric mean of the projection data, which is obtained by taking the square root of the product of two opposite projection data points.
2. Iterative modification. This method is similar to the iterative reconstruction method for XCT described above. A reconstruction without corrections is first performed, and each pixel in the reconstructed image is compensated by a correction factor that is the inverse of the average measured attenuation from that point to the boundary pixels. The projections of this modified image are obtained, and the differences between each of the corrected projections and the original measured projections are computed. These difference projections are reconstructed to obtain an error image. The error image is then added back to the modified image to form the corrected image.
3. Convolution method. Each data point in the projection is modified by a factor that depends on the distance from a centerline to the edge of the object. The modified projection data points are filtered with a proper filter function and then back-projected with an exponential weighting factor to obtain the image (see Section 4.1.3).
Currently, SPECT is mostly used for studies of the brain, including brain blood volume (99mTc-labeled blood cells), regional cerebral blood flow (123I-labeled iodoantipyrine or inhaled 133Xe), and physiological condition measurements.
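A minimal sketch of the geometric mean modification (method 1 above) is given below. It is an illustrative addition; the sample count values, the function name, and the assumption that the opposite view is index-reversed so that conjugate rays line up (which depends on the camera geometry and coordinate convention) are not from the text.

import numpy as np

def geometric_mean_correction(projection, opposite_projection):
    """Replace each data point by the square root of the product of the two
    opposite (conjugate) projection samples."""
    p = np.asarray(projection, dtype=float)
    q = np.asarray(opposite_projection, dtype=float)[::-1]  # align conjugate rays
    return np.sqrt(p * q)

# Illustrative counts only: an anterior view and its opposing posterior view.
anterior = np.array([120.0, 340.0, 560.0, 300.0])
posterior = np.array([280.0, 600.0, 310.0, 100.0])
print(geometric_mean_correction(anterior, posterior))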
4.3.2 Positron Emission CT (PET)
In PET, a positron rather than a single photon is used as the radionuclide source. The positron emitted from a radionuclide is rapidly slowed down and annihilated by combination with an electron, yielding two 511-keV γ-ray photons oriented 180° to each other. The PET system utilizes this unique property of positrons by employing a detector system that requires simultaneous detection of both annihilation photons and thus avoids the need for collimators. A pair of detectors is placed on the two opposite sides of the patient, and only events that are detected in coincidence are recorded. Simultaneous detection of two annihilation photons by the detector system thus signals the decay of a positron anywhere along the line connecting the two points of detection (Fig. 4.9). Because of this coincidence logic, PET systems have higher sensitivity than SPECT. The correction of attenuation is easier in PET than in SPECT because the probability that the annihilation photons will reach both detectors simultaneously is a function of the thickness of the body between the two opposite detectors. The correction factor can be obtained by means of a preliminary scan of the body with an external γ-ray source, or from a correction table based on a simple geometric shape resembling the attenuating medium. Patient movements, an oversimplified geometric shape, and a nonuniform medium will cause errors in the attenuation correction. Thallium-activated sodium iodide, NaI(Tl), bismuth germanate (BGO), and cesium fluoride (CsF) are some of the detector materials being used. Because of the high energy of the annihilation photons, detector efficiency plays a crucial role in selecting a scintillator for a PET system. Bismuth germanate is considered the most prominent candidate for PET detector material because of its high detection efficiency, which is due to its high physical density (7.13 g/cm3) and large atomic number (83), as well as its nonhygroscopicity (which makes for easy packaging) and its lack of afterglow.
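The coincidence requirement can be illustrated by pairing single-detector events by time stamp. The sketch below is an illustrative addition (the event format, the 10-ns window, and the detector numbering are assumptions); each accepted pair defines a line of response between two detectors.

def find_coincidences(events, window_ns=10.0):
    """Pair single events whose time stamps fall within a coincidence window.

    `events` is a list of (time_ns, detector_id) tuples, assumed sorted by time.
    Each accepted pair defines a line of response between the two detectors.
    """
    lines_of_response = []
    i = 0
    while i < len(events) - 1:
        t1, d1 = events[i]
        t2, d2 = events[i + 1]
        if d1 != d2 and (t2 - t1) <= window_ns:
            lines_of_response.append((d1, d2))
            i += 2          # both singles are consumed by this coincidence
        else:
            i += 1          # unpaired single, discard
    return lines_of_response

singles = [(0.0, 17), (4.0, 305), (250.0, 12), (900.0, 88), (905.0, 290)]
print(find_coincidences(singles))   # [(17, 305), (88, 290)]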
Figure 4.9 Block diagram of a PET system; only two array banks are shown.
A typical whole-body PET scanner consists of 512 BGO detectors placed in 16 circular array banks with 32 detectors in each bank. During scanning, the system is capable of wobbling to achieve higher resolution via finer sampling. The image spatial resolution is 5–6 mm in the stationary mode and 4.5–5 mm in the wobbled mode. With the whole-body imaging technique, PET can produce tomographic images of the entire body with equal spatial resolution in the three orthogonal image planes. Because the longitudinal axis of the body is, in general, longer than the other two axes, the patient bed is required to advance during the scanning process to permit the entire body length to be scanned. A complicated data acquisition system in synchrony with the bed motion is necessary to monitor the data collection process. This data collection scheme is very similar to that of the multislice XCT. Figure 4.10A shows a PET scan of the chest revealing a hot spot in a lung (arrow). Figure 4.10B illustrates images of the transaxial, coronal, and sagittal orthogonal planes, as well as the anterior-posterior projection image, from a whole-body PET study with a fluoride ion isotope (18F-).
4.4 ADVANCES IN XCT AND PET
We discuss two advances in XCT and PET imaging. The first is the PET/XCT fusion scanner, and the second is microimaging.
4.4.1 PET/XCT Fusion Scanner
XCT is excellent for anatomical delineation with a fast scanning time, whereas PET is slower and produces physiological images of poorer resolution but is good for differentiating between benign and malignant tumors. PET requires attenuation correction in image reconstruction, and the fast CT scan can provide the anatomical tissue attenuation in seconds, which can be used as a basis for the PET data correction. Thus the combination of a CT and a PET scanner during a scan gives a very powerful tool for improving clinical diagnostic accuracy when neither alone would be able to provide such a result.
Figure 4.10A PET image of the lungs, showing the hot spot (black). (Courtesy of GE Medical Systems http://gemedicalsystems.com/rad/nm_pet/clinical_img/ctpetimagegallery.html.)
Figure 4.10B Images of transaxial, coronal, and sagittal orthogonal planes, as well as the anterior-posterior projection image of the whole-body PET image with fluoride ion (18F-). (Courtesy of Dr. R.A. Hawkins.)
Yet the two scanners must be combined as one system; otherwise, misregistration between the CT and PET images would sometimes give misinformation. The CT/PET fusion scanner is such a hybrid scanner, able to obtain both CT images and PET images during one examination. The PET images so obtained actually have better resolution than those obtained without the CT attenuation correction. The output of a PET/CT fusion scanner is two sets of images, CT and PET, in the same coordinate system for easy fusing of the images together (see Figure 19.6).
4.4.2 Micro Sectional Images
XCT and PET are used mainly for examination of humans; their design is not suitable for small-animal studies.
Figure 4.11 3-D rendering of a complete set of 1000 slices of a rat scanned by a micro XCT scanner with 50-μm pixels; 500 MB of image data is produced. The skeletal structure is emphasized in the 3-D display. (Courtesy of ORNL.)
One recent advance in both PET and XCT is the development of microimaging scanners specially designed for small animals, like rats and mice. Microscanners can have a tremendous impact on the design of animal models for evaluating the effectiveness of a drug treatment for a certain tumor. With a microscanner, the animal does not have to be killed for validation after each treatment, as in the traditional method; it can be kept alive under observation for the complete treatment cycle. The major design differences of such microscanners from conventional sectional imaging systems are their small bore to house the small animal (20–50 g), lower radiation energy input, and smaller but more sensitive detector system. For one single complete animal experiment, the micro XCT scanner can produce 1000 images at 50-μm spatial resolution, or 500 MB of image data. Figure 4.11 shows a 3-D display of a rat scanned by a micro XCT scanner with 50-μm resolution; the skeletal structure is emphasized in the display.
4.5 NUCLEAR MEDICINE
4.5.1 Principles of Nuclear Medicine Scanning
Although ECT is sectional imaging, a nuclear medicine scan produces a projection image. Nuclear medicine should actually be categorized under projection images, but because its energy source is not X-rays, we group this imaging modality with sectional imaging instead of including it in Chapter 3. The principle of nuclear medicine is needed to explain the concept of ECT discussed in Section 4.3. The formation of an image in nuclear medicine is done by administering a radiopharmaceutical agent that can be used to differentiate between a normal and an abnormal physiological process. A radiopharmaceutical agent consists of a tracer substance and a radionuclide to highlight the tracer's position. The tracer typically consists of a molecule that resembles a constituent of the tissue of interest, a colloidal substance that is taken up by reticuloendothelial cells, for example, or a capillary blocking agent. A gamma camera (see Section 4.5.2) is then used to obtain an image of the distribution of the radioactivity in an organ. The gamma emitter is chosen on the basis of its specific activity, half-life, energy spectrum, and ability to bond with the desired tracer molecule. Its activity is important because, in general, one would like to perform scans in the shortest possible time while nevertheless accumulating sufficient statistically meaningful nuclear decay counts. As always, the radionuclide half-life must be reasonably short to minimize the radiation dose to the patient.
The energy spectrum of the isotope is important because if the energy emitted is too low, the radiation will be severely attenuated when passing through the body; hence, the photon count statistics will be poor or the scan times unacceptably long. If the energy is too high, there may not be enough photoelectric interaction, and absorption in the detector crystal will be low. Typical isotopes used in nuclear medicine have γ-ray emission energies of 100–400 keV.
4.5.2 The Gamma Camera and Associated Imaging System
As with most imaging systems, a nuclear medicine imager (e.g., a gamma camera) contains subsystems for data acquisition, data processing, data display, and data archiving. A computer is used to control the flow of data and to coordinate these subsystems into a functional unit. The operator interactively communicates with the computer via commands from a computer terminal or predefined push buttons on the system's control terminal. Figure 4.12 shows a schematic of a typical nuclear medicine gamma camera. Typical matrix sizes of nuclear medicine images are 64 × 64 × 8 bits or 128 × 128 × 8 bits, with a maximum of 30 frames in cardiac imaging. In gated cardiac mode, useful parameter values such as the ejection fraction and stroke volume may be calculated. In addition, the frames of a cardiac cycle may be displayed consecutively and rapidly in cine fashion to evaluate heart wall motion. Some nuclear imagers may not be provided with the DICOM standard (see Chapter 7).
4.6 ULTRASOUND IMAGING
Ultrasound imaging is used in many areas of medicine including obstetrics, gynecology, pediatrics, ophthalmology, mammography, abdominal imaging, and cardiology, as well as in the imaging of small organs such as the thyroid, prostate, and testicles and, recently, in intravascular ultrasound endoscopy (see Section 4.8.2).
Figure 4.12 Schematic of a general gamma camera used in nuclear medicine.
Its wide acceptance is partially due to its noninvasiveness, its use of nonionizing radiation, and its lower equipment and procedural costs compared with XCT and MRI. An ultrasound examination is a widely used first step in attempting to diagnose a presented ailment because of its noninvasive nature.
4.6.1 Principles of B-Mode Ultrasound Scanning
The purpose of B-mode ultrasound imaging is to reconstruct a sectional view of the patient by detecting the amplitudes of acoustical reflections (echoes) occurring at the interfaces of tissues with different acoustical properties. Pulses of high-frequency ultrasonic waves from a transducer are introduced into the structures of interest in the body by pressing the transducer against the skin. A coupling gel is used to provide efficient transfer of acoustical energy into the body. The acoustical wave propagates through the body tissues; its radiation pattern demonstrates high directivity in the near field, or Fresnel zone, close to the body surface (see Fig. 4.13) and begins to diverge in the far field, or Fraunhofer zone. The extent of the near and far fields is determined mainly by the wavelength λ of the sonic waves used and the diameter of the transducer. In general, it is preferable to image objects that are within the Fresnel zone, where the lateral resolving power is better. The fate of the acoustical wave is highly dependent on the acoustical properties of the medium in which the wave is propagating. The speed of the wave in a medium depends on the elasticity and density of the material and affects the degree of refraction (deviation from a straight path) that occurs at a boundary between tissues.
Figure 4.13 Principle of the ultrasound wave produced by a transducer made of piezoelectric material. The near field (Fresnel zone) extends a distance r²/λ from a transducer of radius r; beyond it, in the far field (Fraunhofer zone), the beam diverges at an angle θ ≈ sin⁻¹(0.612λ/r). λ: wavelength of the sound wave used.
The characteristic impedance of the material, which determines the degree of reflection that occurs when a wave is incident at a boundary, depends on the material's density and the speed of sound in the material. The larger the difference between the acoustic impedances of the two materials forming a boundary, the greater the strength of the reflected wave.
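Using the relations noted in Figure 4.13, the near-field length r²/λ and the far-field divergence angle can be estimated for a given transducer. The following sketch is an illustrative addition; the 3.5-MHz frequency, 10-mm transducer diameter, and 1540 m/s soft-tissue sound speed are values assumed only for illustration.

import math

c = 1540.0   # speed of sound in soft tissue, m/s (typical assumed value)
f = 3.5e6    # transducer frequency, Hz (illustrative choice)
r = 0.005    # transducer radius, m (10-mm diameter, illustrative)

wavelength = c / f                                            # lambda = c / f
near_field = r ** 2 / wavelength                              # Fresnel zone length
divergence = math.degrees(math.asin(0.612 * wavelength / r))  # Fraunhofer angle

print(f"wavelength = {wavelength * 1000:.2f} mm")   # ~0.44 mm
print(f"near field = {near_field * 100:.1f} cm")    # ~5.7 cm
print(f"divergence = {divergence:.1f} degrees")     # ~3.1 degrees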
4.6.2 System Block Diagram and Operational Procedure
Figure 4.14 shows a general block diagram of a typical B-mode ultrasound scanner. It is composed of a transducer, a high-voltage pulse generator, a transmitter circuit, a receiver circuit with time gain compensation (TGC), a mechanical scanning arm with position encoders, a digital scan converter (DSC), and a video display monitor. The acoustical waves are generated by applying a high-voltage pulse to a piezoelectric crystal, resulting in the creation of a longitudinal pressure (sonic) wave. The rate at which pulses are supplied by the transmitter circuit to the transducer, as determined by a transmission clock, is called the pulse repetition frequency (PRF). Typical PRF values range from 0.5 to 2.5 kHz. The frequency of the acoustic wave, which is determined by the thickness of the piezoelectric crystal, may range from 1 to 15 MHz. The transducer can serve as an acoustic transmitter as well as a receiver, because mechanical pressure waves interacting with the crystal result in the creation of an electrical signal. Received echo amplitude pulses, which eventually form the ultrasound image, are converted into electrical signals by the transducer. A radio frequency receiver circuit then amplifies and demodulates the signal.
Figure 4.14 Block diagram of a B-mode ultrasound scanner system.
The receiver circuit, a crucial element in an ultrasound scanner, must have a large dynamic range (30–40 dB) to be able to detect the wide range of reflected signals, which are typically 1–2 volts at interfaces near the surface and microvolts at deeper structures. In addition, the receiver must introduce little noise and have a wide amplification bandwidth. The time gain compensation circuit allows the operator to amplify the echoed signal according to its depth of origin. This feature helps compensate for the higher attenuation of signals from echoes originating at deeper interfaces and results in a more uniform image (i.e., interfaces are not rendered darker or brighter solely on the basis of being closer to the transducer). The operator is able to obtain the best possible image by controlling the amount of gain at a particular depth. The output of the receiver is fed into the digital scan converter (DSC) and used to determine the depth (Z-direction) at which the echo occurred. The depth at which the echo originated is calculated from the time the echo takes to return to the transducer: the depth of the reflector is half the time interval from the transmission of the signal pulse to the signal return, multiplied by the velocity of sound in the traversed medium. The encoding of the x and y positions of the face of the transducer and the angular orientation of the transducer with respect to the normal of the scanning surface is determined by the scanning arm position encoder circuit. The scanning arm is restricted to movement in one linear direction at a time. The arm contains four potentiometers whose resistances correspond to the x and y positions and the cosine and sine directions (the angle with respect to the normal of the body surface) of the transducer. For example, if the transducer is moved in the y direction while keeping x and the angle of rotation fixed, then only the Y potentiometer will change its resistance. Position encoders on the arm generate signals proportional to the position of the transducer and the direction of the ultrasound beam. The x, y, and z data are fed into the digital scan converter to generate addresses that permit the echo strength signals to be stored in the appropriate memory locations. The digital scan converter performs A/D conversion of the data, data preprocessing, pixel generation, image storage, data postprocessing, and image display. The analog echo signals from the receiver circuit are digitized by an A/D converter in the DSC, typically to 8 bits (256 gray levels). Fast A/D converters are normally used because most ultrasound echo signals have a wide bandwidth, and the sampling frequency should be at least twice the highest frequency of interest in the image. Typical A/D sampling rates range from 10 to 20 MHz. The DSC image memory is normally 512 × 512 × 8 bits; for a color Doppler US image, it can be 512 × 512 × 24 bits. The data may be preprocessed to enhance the visual display and to match the dynamic range of the subsequent hardware components. Echo signals are typically rescaled, and nonlinear (e.g., logarithmic) circuits are often used to emphasize or deemphasize certain echo amplitudes.
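The depth calculation performed by the DSC amounts to depth = (speed of sound × round-trip time)/2. A minimal sketch follows (an illustrative addition; the 1540 m/s soft-tissue value and the function name are assumptions).

def echo_depth_cm(round_trip_time_us, c_m_per_s=1540.0):
    """Reflector depth from the pulse-echo round-trip time (depth = c * t / 2)."""
    t = round_trip_time_us * 1e-6           # microseconds -> seconds
    return 100.0 * c_m_per_s * t / 2.0      # metres -> centimetres

print(echo_depth_cm(13.0))    # ~1.0 cm
print(echo_depth_cm(130.0))   # ~10.0 cm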
4.6.3 Sampling Modes and Image Display
Three different sampling modes are available on most ultrasound units: the survey mode, in which the data stored in memory are continually updated and displayed; the static mode, in which only the maximum values during a scanning session are stored and displayed; and an averaging mode, in which the average of all scans for a particular scan location is stored and displayed. Once stored in memory, the digital data are subjected to postprocessing operations of several types. These can be categorized as changes in the gray level display of the stored image, temporal smoothing of the data, or spatial operations. Gray scale mean and windowing and nonlinear gray scale transformations are common. Image display is performed by a video processor and controller unit that can quickly access the image memory and modulate an electron beam to show the image on a video monitor. The digital scan converter allows echo data to be read continuously from the fast-access image memory.
4.6.4 Color Doppler Ultrasound Imaging
Ultrasound scanning using the Doppler principle can detect the movement of blood inside vessels. In particular, it can detect whether the blood is moving away from or toward the scanning plane. When several blood vessels are in the scanning plane, it is advantageous to use different colors to represent the blood flow direction and speed with respect to the stationary anatomical structures. Coupling colors with the gray scale ultrasound image in this way results in a duplex Doppler ultrasound image. This coupling permits simultaneous imaging of anatomical structures as well as characterization of circulatory physiology from known reference planes within the body. The resulting image is called color Doppler or color-flow imaging. A color Doppler image requires 512 × 512 × 24 bits. Figure 4.15A shows a color Doppler US blood flow image showing pulmonary vein inflow convergent.
4.6.5 Cine Loop Ultrasound Imaging
One advantage of ultrasound imaging over other imaging modalities is its noninvasive nature, which permits the accumulation of ultrasound images continuously through time without adverse effects on the patient. Such images can be played back in a cine loop, which can reveal the dynamic motion of a body organ, for example, the heartbeat (see also Section 4.2.3, Cine XCT). Several seconds of cine loop ultrasound images can produce a very large image file. For example, a 10-s series of color Doppler cine loop ultrasound images acquired at 30 frames/s will yield (10 × 30) × 0.75 × 10^6 bytes = 225 MB of image information. In general, unless the study is related to dynamic movement like cardiac motion, the complete cine loop is very seldom archived. The radiologist or clinician in charge previews all images and discards all but the most relevant few for the patient record.
4.6.6 Three-Dimensional US
If an array of ultrasound transducers with a fixed center of rotation is used to scan the object of interest, the results are echo signals similar to the X-ray projections. These signals can be recombined to form linear US profiles for sectional image reconstruction; the results are 3-D US images. 3-D ultrasound images can be used for breast imaging, obstetric imaging of fetuses, and other procedures.
Figure 4.15A Color Doppler US blood flow showing pulmonary vein inflow convergent. (Courtesy of Siemens Medical Imaging Systems. http://www.siemensmedical.com/webapp/ wcs/stores/servlet/PSProductImageDisplay?productId=17966&storeId=10001&langId=1&catalogId=-1&catTree=100001,12805,12761*559299136.) (See color insert.)
Figure 4.15B 3-D ultrasound of a 25-week-old fetal face. (Courtesy of Philips Medical Systems. http://www.medical.philips.com/main/products/ultrasound/assets/images/ image_library/3d_2295_H5_C5-2_OB_3D.jpg.) (See color insert.)
Figure 4.15B depicts an image of a 25-week-old fetal face reconstructed from 3-D US images.
4.7 MAGNETIC RESONANCE IMAGING
4.7.1 MR Imaging Basics
The magnetic resonance imaging (MRI) modality forms images of objects by measuring the magnetic moments of protons using radio frequency (RF) energy and a strong magnetic field. Information concerning the spatial distribution of nuclear magnetization in the object is determined from the RF signals emitted by the stimulated nuclei. The received signal intensity depends on five parameters: hydrogen density, spin-lattice relaxation time (T1), spin-spin relaxation time (T2), flow velocity (e.g., of arterial blood), and chemical shift. The purpose of MR imaging is to determine spatial (anatomical) information from the returned RF signals, through filtered back-projection reconstruction or Fourier analysis, and to display it as a two-dimensional (2-D) section or a three-dimensional (3-D) volume of the object. There are some distinct advantages to using MRI over other modalities (e.g., XCT) in certain types of examination:
1. The interaction between the static magnetic field, the RF radiation, and the atomic nuclei is free of ionizing radiation; therefore, the imaging procedure is relatively noninvasive compared with modalities that use X-ray sources.
2. The scanning mechanism is completely electronic, requiring no moving parts to perform a scan.
3. It is possible to obtain 2-D slices of the coronal, sagittal, and transaxial planes and any oblique section, as well as a 3-D volume.
However, two major disadvantages at present compared with XCT are the lower spatial resolution in general and the inferior image quality in some organs.
4.7.2 Magnetic Resonance Image Production
Figure 4.16 shows a simplified block diagram of a generic MR imaging system illustrating the components necessary for the production and detection of MR signals and the display of MR images. They are: (1) A magnet to produce the static magnetic B0 field (2) RF equipment to (a) produce the magnitude of the RF magnetic field (transmitter, amplifier, and coil for transmitting mode) and (b) detect the free induction decay (FID), which is the response of the net magnetization to an RF pulse (coil for receiving mode, preamplifier, receiver, and signal demodulator) (3) x, y and z gradient power supplies and coils providing the magnetic field gradients needed for encoding spatial position
Figure 4.16 Block diagram of a generic MRI system. Dotted line separates the digital domain from the MR signal generation.
(4) An electronics and computer facility to orchestrate the whole imaging process (control interface with computer), convert the MR signals to digital data (A/D converter), reconstruct the image (computer algorithms), and display it (computer, disk storage, image processor, and display system).
4.7.3 Steps in Producing an MR Image
An MR image is obtained by using a selected pulse sequence that perturbs the external magnetic field B0 (for example, 1.5 T). For this reason, a set of MR images is named after the selected pulse sequence. Some useful pulse sequences in radiology applications are spin echo, inversion recovery, gradient echo, and echo planar; each of these pulse sequences highlights certain chemical compositions in the tissues under consideration.
We use a spin-echo pulsing sequence as an example to illustrate how the MR image is produced. First, the object is placed inside an RF coil situated in the homogeneous portion of the main magnetic field, B0. Next, a pulsing sequence with two RF pulses is applied to the imaging volume (hence spin echo). At the same time a magnetic gradient is applied to the field B0 to identify the relative position of the spin-echo (FID) signals. Note that the FID is composed of frequency components. The FID signal is demodulated from the RF signal, sampled with an A/D converter, and stored in a digital data array for processing. This set of data is analogous to one set of projection data in XCT. After the repetition time has elapsed, the pulsing sequence is applied again and a new FID is obtained and sampled, repeatedly with alternate gradient magnitudes, until the desired number of projections has been acquired. During and after data collection, a selected tomographic reconstruction algorithm as described in Section 4.1, either the filtered back-projection or the inverse 2-D fast Fourier transform (inverse FT), is performed on the acquired projections (digital FID data). The result is a spin-echo image of the localized magnetization in the spatial domain. The pixel values are related to the hydrogen density, relaxation times, flow, and chemical shift. This procedure can be summarized as follows: the sampled FID data constitute the frequency spectrum of the object, and applying the inverse FT to this spectrum yields the spatial distribution, that is, the MR image. This digital image can then be archived and displayed. Figure 4.17 demonstrates the data flow in forming an MR image. Figure 4.18 shows examples of MR head images in sagittal, transaxial, and coronal planes.
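The final reconstruction step can be sketched in a few lines: the sampled, demodulated FID data fill a frequency-domain (k-space) array, and an inverse 2-D FT returns the spatial image. The code below is an illustrative addition, with a simulated object standing in for real k-space data.

import numpy as np

# Simulate a 2-D object and its fully sampled frequency-domain (k-space) data.
# In a real scanner each phase-encoded FID readout fills one line of this array.
N = 128
y, x = np.mgrid[0:N, 0:N]
obj = ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2).astype(float)
k_space = np.fft.fft2(obj)              # stands in for the demodulated, sampled FIDs

# Reconstruction: inverse 2-D Fourier transform of the k-space data.
image = np.abs(np.fft.ifft2(k_space))   # the magnitude image is what is displayed

print(np.allclose(image, obj))          # True for this noiseless simulation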
4.7.4 MR Images (MRI)
The MR scanner can reconstruct cross-sectional (like XCT), sagittal, coronal, or any oblique plane images (Fig. 4.18) as well as 3-D images. In addition to reconstructing various plane images, the MR scanner can also selectively emphasize different MR properties of the objects under consideration from the FID. Thus we can reconstruct T1, T2, and proton density MR images, or other combinations. The challenge of MRI is to decipher the wealth of information in the FID (proton density ρH, T1, T2, flow velocity, and chemical shift) and to display all this information adequately as images. Usually, only the magnetization magnitude is displayed, which is a function of all the aforementioned parameters.
4.7.5 Other Types of Images from MR Signals
This section describes some other types of work-in-progress images that can be generated from MR signals. Some of these images are accompanied by signals that deviate from the existing DICOM image standard (see Chapter 7).
4.7.5.1 MR Angiography (MRA) Current MRI scanners have the capability to acquire 3-D volumetric MRA data. The Fourier nature of 3-D MR data acquisition involves collecting the entire 3-D data set before reconstruction of any of the individual sections. Phase encoding is used to spatially encode the y-axis as well as the z-axis position information.
Figure 4.17 Data flow in forming an MR image: patient equilibrium magnetization; MR excitation via the pulsing sequence (plane selected through the patient); application of gradient fields and FID production; demodulation of the FID signal; sampling of the FID signal with an A/D converter; reconstruction of the image from the FID using filtered back-projection or the inverse 2DFT to produce an MR image in the spatial domain; and storage of the image in computer memory for display and archiving.
providing sufficient signal-to-noise ratio. Repetition and echo times are shortened, making it possible to collect large 3-D volumes of high-resolution data or multiple successive lower-resolution volumes in 20–40 s. A standard MRA study can vary from a typical head/neck study of 100 images to a lower extremity runoff study of 2000 images, producing a data size of between 25 and 500 MB. Acquisition performance depends on the scanner hardware; study acquisition times vary from 15 to 40 s. Figure 4.19 shows an MRA study containing abdominal slices; the maximum projection display method is used to highlight the contrast-enhanced blood vessels in the entire study.
Figure 4.18 Examples of MR head images of a 1-year-old child (POST-Cor T1 FSE): (A) sagittal, (B) transaxial, and (C) coronal.
Figure 4.19 A 3-D gadolinium MR angiography abdomen study of a 57-year-old woman. The maximum projection method is used for display, highlighting the contrast-enhanced blood vessels.
4.7.5.2 Other Pulse Sequences In the past decade, some 30 pulse sequences have been developed to enhance specific tissue or image contrast. The major focus of pulse sequence development is faster imaging. One problem is to read the signal after RF excitation fast enough, before it decays. The latest developments in this direction are echo-planar imaging (EPI) and spiral imaging readout techniques, which are used for very fast imaging of the heart in motion (cardiac MRI), imaging of multivoxel chemical profiles with magnetic resonance spectroscopic imaging (MRS, MRSI; Fig. 4.20), and imaging the physiological response to neural activation in the brain with functional MRI (fMRI). Fast data acquisition and image reconstruction allow 30–60 slices to be obtained in 2–4 s. The increase in field strength from today's 1.5 T toward 3.0 T will increase the signal sensitivity (signal-to-noise ratio, SNR), which permits smaller voxels and thus higher resolution. On the basis of these pulse sequences, certain imaging protocols and image data characteristics have been established for specific applications.
Figure 4.20 Representative 2-D MRSI data from a child 6 months after surgical removal of a tumor. Data were acquired with PRESS localization, echo time TE = 144 ms, and 16 × 16 phase-encoding steps over a field of view of 20 mm, with two averages. Nominal voxel resolution was 1.6 cm³. Total acquisition time was 8 min 30 s. Elevated choline and reduced NAA indicate residual tumor. (Courtesy of Dr. S. Bluml.)
Functional MRI (fMRI) Among these new types of images derived from MR, fMRI is the closest to daily clinical application, and we discuss its image formation here. 3-D MRI volume acquisition, plus selective activation of different functions (motor, sensory, visual, or auditory) and their localization inside the brain with the fMRI technique, produces a four-dimensional (4-D) fMRI data set. Typically, an fMRI experiment consists of 4 series, each about 5 min long with 100 acquisitions. A whole adult brain is covered by 40 slices with a slice thickness of 3 mm, with each slice having 64 × 64 or 128 × 128 voxels. Both 2-D and 3-D acquisition techniques exist, but in both cases the resulting raw data are a stack of 2-D slices. The average fMRI experiment thus adds up to 400 volumes, each volume 64 × 64 × 40 voxels × 2 bytes, or ~330 KB, for a total of ~130 MB. When 128 × 128 matrices are used, the total data volume increases to ~520 MB per subject. Because the DICOM standard (Chapter 7) is very inefficient for many small images such as those of fMRI, the data are mainly reconstructed as a raw data stack and processed off-line on a workstation. Therefore, no standard data communication or processing scheme exists so far. A DICOM standard extension, Supplement 49, was introduced in mid-2002, but no vendor has implemented it yet.
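A quick arithmetic check of the data volumes just quoted (a sketch only; it assumes 2-byte voxels and ignores any header overhead):

```python
# Rough data-volume arithmetic for the fMRI example above (numbers taken from
# the text; this is not a general rule for all protocols).
matrix, slices, bytes_per_voxel = 64, 40, 2
volume_bytes = matrix * matrix * slices * bytes_per_voxel     # one 3-D volume
volumes = 4 * 100                                             # 4 series x 100 acquisitions

print(volume_bytes / 1000)                    # ~330 KB per volume
print(volumes * volume_bytes / 1e6)           # ~130 MB per experiment (64 x 64 matrix)
print(volumes * 128 * 128 * slices * bytes_per_voxel / 1e6)   # ~520 MB at 128 x 128
```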
Figure 4.21A Brain activation map overlaid onto the structural T2 weighted image. Color codes represent the degree of activation. (Courtesy of Dr. S. Erberich.) (See color insert.)
The fMRI data can be most efficiently displayed as a color-coded map on the corresponding anatomical image (Fig. 4.21A). Because this is not possible with the existing PACS and display workstation environment, new ways of displaying fMRI must be developed; until then, fMRI display is restricted to specialized 3-D workstations. Clinicians prefer a 2-D mosaic plot (Fig. 4.21A) of structural MRI overlaid with the functional map. Neuroscientists prefer an atlas-based 3-D display in a standard coordinate system or a rendered 3-D projection onto the brain surface (Fig. 4.21B).
Figure 4.21B Brain activation map warped onto 3-D atlas brain. 3-D reconstruction was based on a series of fMRI images. Color codes represent the degree of activation. (Courtesy of Dr. S. Erberich.) (See color insert.)
4.8 LIGHT IMAGING
The two major sources of light imaging for medical diagnosis are microscopy and endoscopy. Although these modalities are quite different in the nature of their image generation (the former at microscopic scale, the latter of life-size anatomy), they use a similar digital chain after the image is captured by the light camera or video scanning. Microscopic image capture involves more instrumentation because it includes a microscope, whereas endoscopy sometimes includes treatment, that is, image-guided treatment.
4.8.1 Microscopic Image
4.8.1.1 Instrumentation Digital microscopy is used to extract sectional quantitative information from biomedical microscopic slides. A digital microscopic imaging system consists of six components:
• A compound microscope with proper illumination for specimen input
• A CCD camera for scanning the microscopic slide
• An LCD display of the image
• An A/D converter
• An image memory
• A computer (or image processor) to process the digital image
Figure 4.22 shows the block diagram and the physical setup of the instrumentation. Compare the digital chain with that shown in Figure 3.6. Figure 4.23 depicts a digital microscopic system that can be used for telemicroscopy applications (Chapter 14). To perform effective quantitative analysis, two additional attachments to the microscope are necessary: a motorized stage assembly and an automatic focusing device. 4.8.1.2 Motorized Stage Assembly A motorized stage assembly allows rapid screening and location of the exact position of objects of interest for subsequent detailed analysis. The motorized stage assembly consists of a high-precision x-y stage with a specially designed holder for the slide to minimize vibration due to the movement of the stage. Two stepping motors are used for driving the stage in the x and y directions. A typical motor step is about 2.5 µm, with an accuracy and repeatability to within ±1.25 µm. The motors can move the stage in either direction,
Figure 4.22 Block diagram showing the instrumentation of a digital microscopy system: a microscopic specimen on a glass slide, a microscope with a motorized stage and an automatic focusing device, a vidicon camera, and a digital chain (A/D converter, image memory, computer, D/A converter, and TV monitor). The digital chain is enclosed by the dotted rectangle.
Figure 4.23 A digital telemicroscopic system. (A) Left: image acquisition workstation, automatic microscope (1), CCD camera (2), video monitor (3), computer with an A/D converter attached to the CCD, an image memory, and a database to manage the patient image file (4); the video monitor (3) shows a captured image from the microscope that is being digitized and shown on the workstation monitor (5). Right: remote diagnostic workstation (6). Thumbnail images at the bottom of both workstations are images that have been captured and sent to the diagnostic workstation from the acquisition workstation (7). (B) Close-up of the acquisition workstation. Pertinent data related to the exam are shown in various windows. Icons on the bottom right (8) are six simple click-and-play functions: transmit, display, exit, patient information, video capture, digitize, and store. The last captured image (9) is shown on the workstation monitor. (See color insert.) [Prototype telemicroscopic imaging system at the Laboratory for Radiological Informatics, UCSF. Courtesy of Drs. S. Atwater, T. Hamill, and H. Sanchez (images); and Nikon Res. Corp. and Mitra Imag. Inc. (equipment).]
with a maximum speed of 650 steps per second (0.165 cm/s) or better. The two stepping motors can be controlled either manually or automatically by the computer. 4.8.1.3 Automatic Focusing Device The automatic focusing device ensures that the microscope is in focus after the stepping motors move the stage from one field to another. It is essential to have the microscope in focus before the CCD camera starts to scan. Two common methods for automatic focusing are the use of a third stepping motor in the z direction and the use of an air pump. With the third-motor method, the motor moves the stage up and down with respect to the objective lens. The z movements are nested, with larger +z and -z excursions initially and gradually smaller +z and -z excursions. After each movement, a scan of the specimen is made through the microscope and some optical parameters are derived from the scan. The image is considered focused when these optical parameters fall within certain threshold values. The disadvantage of this method is that the focusing process is slow, because the nested movements require computer processing time. The use of an air pump for automatic focusing is based on the requirement that the specimen, lying on the upper surface of the glass slide, must always lie on a perfect horizontal plane with respect to the objective lens. The glass slide is not of uniform thickness, however, and when it rests on the horizontal stage, the lower surface of the slide forms a horizontal plane with respect to the objective but the upper surface does not, contributing to imperfect focus across the slide. If an air pump is used to create a vacuum from above, such that the upper surface of the slide is suctioned to form a perfect horizontal plane with respect to the objective, then the slide remains in focus all the time. Using an air pump for automatic focusing does not require additional time during operation, but it does require precision machinery. 4.8.1.4 Resolution The resolution of a microscope is defined as the minimum distance between two objects in the specimen that can be resolved by the microscope. Three factors control the resolution of a microscope:
1. The angle subtended by the object of interest in the specimen and the objective lens: the larger the angle, the higher the resolution
2. The medium between the objective lens and the coverslip of the glass slide: the higher the refractive index of the medium, the higher the resolution
3. The wavelength of light employed: the shorter the wavelength, the higher the resolution
These three factors can be combined into a single equation (Ernst Abbe, 1840–1905):

s = \frac{\lambda}{2\,\mathrm{NA}} = \frac{\lambda}{2 n \sin\theta}    (4.4)
where s is the distance between two objects in the specimen that can be resolved (the smaller the s, the greater the resolution), λ is the wavelength of the light employed, n is the refractive index of the medium, θ is the half-angle subtended by
the object at the objective lens, and NA is the numerical aperture commonly used for defining the resolution (the larger the NA, the higher the resolution). Therefore, to obtain higher resolution in a microscopic image, use an oil-immersion lens (large n) with a large angular aperture and select a shorter wavelength of light for illumination. 4.8.1.5 Contrast Contrast is the ability to differentiate various components in the specimen with different intensity levels. Black-and-white contrast is equivalent to the range of the gray scale (the larger the range, the better the contrast). Color contrast is an important parameter in microscopic image processing; to bring out the color contrast of the image, various color filters must be used with adjusted illumination. It is clear that the spatial and density resolutions of a digital image are limited by the resolution and contrast of the microscope, respectively. 4.8.1.6 The Digital Chain The digital chain consists of a CCD camera, an A/D converter, image memory, a computer, a display, and microscopic analysis software. The hardware components are described in Section 3.2.2. Their specifications may differ because of the different nature of the image capture. As an example, the method of capturing a color image is quite different from that for a gray scale image. Section 4.8.1.7 describes the principle of color image memories in both microscopy and endoscopy. 4.8.1.7 Color Image and Color Memory Microscopic images are mostly colored. In the digitization process, the image is digitized three times, with red, blue, and green filters, in three separate but consecutive steps. The three color-filtered images are then stored in three corresponding image memories, the red, blue, and green, each with 8 bits. Thus a true color image has 24 bits/pixel. The computer treats the contents of the three image memories as individual microscopic images and processes them separately.
Figure 4.24 Color image processing block diagram. Red, blue, and green filters are used to filter the image before digitization. The three digitized, filtered images are stored in the red, blue, and green memories, respectively. The real color image can be displayed back on the color monitor from these three memories through the composite video control.
Figure 4.25 Schematics of a generic digital endoscope: eyepiece, light source, light guide assembly, and a digital chain (A/D converter, image memory, image processor/computer, digital storage device, D/A converter, and video monitor); the digital chain is standard for all CCD image capture assemblies. The color digitization process is the same as in Fig. 4.24.
The real color image can be displayed back on a color monitor from these three memories with a color composite video control. Figure 4.24 shows the block diagram of a true color microscopic imaging system.
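As a rough sketch of the three-memory arrangement (array sizes and random pixel values are placeholders, not a vendor design), stacking the three 8-bit memories yields the 24-bit/pixel true color image:

```python
import numpy as np

# Sketch: three 8-bit filtered images held in separate red, green, and blue
# memories; stacking them gives the 24-bit/pixel true-color image that the
# composite video control displays.
h, w = 480, 640
red_mem   = np.random.randint(0, 256, (h, w), dtype=np.uint8)   # red-filtered scan
green_mem = np.random.randint(0, 256, (h, w), dtype=np.uint8)   # green-filtered scan
blue_mem  = np.random.randint(0, 256, (h, w), dtype=np.uint8)   # blue-filtered scan

true_color = np.dstack([red_mem, green_mem, blue_mem])          # 24 bits/pixel
print(true_color.shape, true_color.dtype, true_color.nbytes)    # (480, 640, 3) uint8
```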
4.8.2 Endoscopy
An endoscopic examination is a visual inspection of the inside of the body performed by inserting an endoscopic tube with a light source and light guides into the lumen. The examiner looks into the light guides through an eyepiece from outside the apparatus to make the proper diagnosis. If the insertion is, for example, in the throat, tracheobronchial tract, upper gastrointestinal tract, colon, or rectum, the procedure is called laryngoscopy, bronchoscopy, gastroscopy, colonoscopy, or sigmoidoscopy, respectively. If a digital chain is attached to the light guide so that the image can be seen on a display system and archived in a storage device, the system is called a digital endoscope. The digital chain consists of a CCD camera, an A/D converter, image memory, a computer, a display, and endoscopic analysis software. The hardware
Figure 4.26 An endoscopic image of the colon. (See color insert.)
components are described in Section 3.2.2. Their special specifications may not be the same because of the different nature of the image capture. Figure 4.25 illustrates the schematic of a generic digital endoscopic system. Figure 4.26 shows a color endoscopic image of the colon.
CHAPTER 5
Image Compression
Compressing a medical image reduces its file size (see Chapter 2, Table 2.1) and hence shortens its transmission time and decreases its storage requirement, but compression may compromise the original image quality and hence affect its diagnostic value. This chapter describes some compression techniques that are applicable to medical images and their advantages and disadvantages.
5.1 TERMINOLOGY
The half-dozen definitions that follow are essential to an understanding of image compression/reconstruction.

Original image. The original image is a medical image f(x, y) in digital form, where f is a nonnegative integer function, and x and y can range from 0 to 255, 0 to 511, 0 to 1023, or 0 to 2047 (see Section 2.1). In the case of three-dimensional (3-D) imaging, f(x, y, z) is a 3-D data block. The original image is a two-dimensional (2-D) rectangular array or a 3-D data block to be compressed into a one-dimensional (1-D) data file.

Transformed image. The transformed image F(u, v) of the original image f(x, y) after a mathematical transformation is another 2-D array. If the transformation is the forward discrete cosine transform, then u, v are nonnegative integers representing the frequencies (see Sections 2.4 and 5.4.2.1). In the case of 3-D imaging, the transformed data block is also a 3-D data block.

Compressed image file. The compressed image file is a 1-D array of encoded information derived from the original or the transformed image by an image compression technique.

Reconstructed image from a compressed image file. The reconstructed image from a compressed image file is a 2-D rectangular array fc(x, y) or a 3-D data block fc(x, y, z). The technique used for the reconstruction (or decoding) depends on the method of compression (encoding). In the case of error-free (or reversible, or lossless) compression the reconstructed image is identical to the original image,
whereas in irreversible or lossy image compression the reconstructed image loses some information of the original image. The term "reconstructed image from a compressed image file" should not be confused with image reconstruction from projections used in computed tomography, described in Chapter 4. Thus "reconstructed image" in this chapter means an image reconstructed from a compressed file.

Difference image. The difference image is defined as the subtracted image or the subtracted 3-D data block between the original and the reconstructed image, f(x, y) - fc(x, y) or f(x, y, z) - fc(x, y, z). In the case of error-free compression, the difference image is the zero image. In the case of irreversible compression, the difference image is the difference between the original image and the reconstructed image. The amount of the difference depends on the compression technique used as well as the compression ratio: the smaller the difference, the closer the reconstructed image is to the original.

Compression ratio. The compression ratio between the original image and the compressed image file is the ratio between the computer storage required to save the original image and that required for the compressed data. Thus a 4 : 1 compression on a 512 × 512 × 8 = 2,097,152-bit image requires only 524,288 bits of storage, 25% of the original image storage. Another way to describe the degree of compression is the term bits per pixel (bpp). Thus, if the original image is 8 bpp, a 4 : 1 compression means the compressed image becomes 2 bpp.
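A worked example of these last two definitions:

```python
# Worked example of the compression-ratio and bits-per-pixel definitions above.
rows, cols, bits_per_pixel = 512, 512, 8
original_bits = rows * cols * bits_per_pixel        # 2,097,152 bits

ratio = 4                                           # a 4:1 compression
compressed_bits = original_bits // ratio            # 524,288 bits (25% of original)
compressed_bpp = bits_per_pixel / ratio             # 2 bits per pixel

print(original_bits, compressed_bits, compressed_bpp)
```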
5.2 BACKGROUND
PACS would benefit from image compression for obvious reasons: to speed up the image transmission rate and to save storage. The number of digital medical images captured per year in the United States alone is in the petabyte range (10^15 bytes) and is increasing rapidly every year. Image compression is a key component in PACS that alleviates the storage and communication speed requirements for managing this voluminous digital image data. First, it reduces the bit size required to store and represent the images while maintaining relevant diagnostic information. Second, it enables fast transmission of large medical image files to image display workstations over networks for review. Technically, most image data compression methods can be broadly categorized into two types. One is reversible, or "lossless," or "error-free," compression; the general framework is shown in Figure 5.1. A reversible scheme achieves modest compression ratios of 2 : 1 to 3 : 1 but allows exact recovery of the original image from the compressed file. An irreversible, or "lossy," type does not allow exact recovery after compression but can achieve much higher compression ratios (e.g., ranging from 10 : 1 to 50 : 1 or more). Generally speaking, higher compression is obtained at the expense of image quality degradation; that is, image quality declines as the compression ratio increases. Another type of compression used in medical imaging is clinical image compression, which stores only a few medically relevant images, as determined by the physicians, out of a series or multiple series of many real-time acquired images, thus reducing the total number of images in an examination file. The stored images may or may not be further compressed by a reversible scheme.
Figure 5.1 The general framework for image data compression: the encoder consists of an image transformation, an optional quantization step (bypassed for lossless, applied for lossy compression), and entropy encoding, each guided by a table specification, producing the compressed image data. Image transformation can be as simple as a shift of a row and a subtraction with the original image or a more complex mathematical transformation. The decoder is the reverse of the encoder. The quantization determines whether the compression is lossless or lossy.
In an ultrasound examination, for example, the radiologist may collect data for several seconds, at 30 images per second, but keep only 4 to 8 frames for recording and discard the rest. In an MR head study, a multiple-sequence examination can accumulate up to 200–1000 images, of which only several may be of importance for the diagnosis. Image degradation from irreversible compression may or may not be visually apparent. The term "visually lossless" has been used to characterize lossy schemes that result in no visible loss under normal radiological viewing conditions. An image reconstructed from a compression algorithm that is visually lossless under certain viewing conditions (e.g., a 19-in. video monitor with 1024 × 1024 pixels at a viewing distance of 4 ft) could show visible degradations under more stringent conditions, for example, when viewed on a 2000-line monitor or printed on a 14 × 17 in. film. A related term used by the American College of Radiology and the National Electrical Manufacturers Association (ACR-NEMA), or Digital Imaging and Communications in Medicine (DICOM), is information preserving. The ACR-NEMA standard report defines a compression scheme as "information preserving" if the resulting image retains all of the significant information of the original image. Both "visually lossless" and "information preserving" are subjective terms, and extreme caution must be taken in their interpretation. Currently, lossy algorithms are not used by radiologists in primary diagnosis because physicians and radiologists are concerned with the legal consequences of an incorrect diagnosis based on a lossy compressed image. Indeed, lossy compression has raised legal questions for manufacturers and users alike; the U.S. Food and Drug Administration (FDA) has instituted certain regulatory guidelines, discussed in Section 5.8.
5.3 ERROR-FREE COMPRESSION
This section presents three error-free image compression techniques. The first technique is based on some inherent properties of the image under consideration; the second and third are standard data compression methods.
5.3.1 Background Removal
This technique can be used on sectional and projection images. The idea is to reduce the size of the image file by discarding the background of the image. Figure 5.2A shows that some background can be removed by using a minimum rectangle covering the image. Only the information within the rectangle, including the outer boundary of the cross-sectional body CT, is retained. The size and relative location of the rectangle with respect to the original image are saved in the image header for later image reconstruction. In Figure 5.2B, the cross-sectional image is segmented; only information inside the segmented image is retained. In the case of computed and digital radiography (CR and DR), the background outside the anatomical boundary can be discarded through a background removal technique described in Section 3.3.4. Figure 3.15 shows a background-removed pediatric CR image. Figure 5.3 shows compression of digital mammograms with background removal. Figure 5.3D achieves a higher compression ratio than Figure 5.3B because the former has a larger background area. In these cases, background-removed images were compressed by discarding background information that had no diagnostic value.
Figure 5.2A A simple boundary search algorithm determines n1, n2, n3, and n4, the four coordinates required to circumvent the 512 × 512 CT image by the rectangle with dimensions of (n2 - n1) × (n4 - n3). These coordinates also give the relative location of the rectangle with respect to the original image. Each pixel inside this rectangular area can be compressed further by a lossless procedure.
Figure 5.2B A segmentation algorithm can be used to outline the boundary of the abdominal CT image. Pixels outside the boundary can be discarded, and hence the segmentation process reduces the size of the image file.
The background-removed image can be further compressed through the lossless compression methods described in Sections 5.3.2 and 5.3.3.
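A minimal sketch of the rectangle idea of Figure 5.2A, assuming a simple threshold separates background from anatomy (the book's boundary-search algorithm itself is not reproduced here):

```python
import numpy as np

# Sketch of bounding-rectangle background removal (Fig. 5.2A): find n1..n4
# enclosing the foreground and keep only the pixels inside the rectangle.
def remove_background(image, threshold=0):
    foreground = image > threshold                  # background assumed <= threshold
    rows = np.where(foreground.any(axis=1))[0]
    cols = np.where(foreground.any(axis=0))[0]
    n1, n2 = cols.min(), cols.max()                 # left/right edges of the rectangle
    n3, n4 = rows.min(), rows.max()                 # top/bottom edges
    cropped = image[n3:n4 + 1, n1:n2 + 1]
    return cropped, (n1, n2, n3, n4)                # coordinates go in the image header

ct = np.zeros((512, 512), dtype=np.int16)
ct[100:400, 150:380] = 1000                         # stand-in for the body cross section
cropped, coords = remove_background(ct)
print(cropped.shape, coords)                        # (300, 230) (150, 379, 100, 399)
```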
5.3.2 Run-Length Coding
Run-length coding, based on eliminating repeated adjacent pixels, can be used to compress an image rowwise or columnwise. A run-length code consists of three sequential numbers: the mark, the length, and the pixel value. The compression procedure starts with obtaining a histogram of the image. The histogram of an image is a plot of the frequency of occurrence versus the pixel value over the entire image. The mark is chosen as the pixel value in the image that has the lowest frequency of occurrence. If more than one pixel value has the same lowest frequency of occurrence, the higher pixel value is chosen as the mark. The image is then scanned line by line, and sets of three sequential numbers are encoded. For example, assume that the lowest-frequency-of-occurrence pixel value in a 512 × 512 × 8 image is 128. During the scan, suppose that the scan program encounters 25 pixels, all of which have a value of 10; the run-length code for these numbers would then be:
Figure 5.3 Two digital mammograms before and after background removal, providing immediate image compression. (A) A large breast occupies most of the image. (B) Background-removed image, compression ratio 3.1 : 1. (C) A small breast occupies a small portion of the image. (D) Background-removed image, compression ratio 6.3 : 1. (Courtesy of Jun Wang, 1996.)
128 (mark)   25 (length)   10 (pixel value)
If the length is the same as the mark, the three-number set should be split into two sets; for example, the set 128 128 34 should be split into two sets, 128 4 34 and 128 124 34. The lengths 4 and 124 are arbitrary but should be predetermined before the encoding. There are two special cases in the run-length code: 1. Because each run-length code set requires three numbers, there is no advantage in compressing adjacent pixels with values repeating fewer than four times. In this case each of these pixel values is used as the code. 2. The code can consist of two sequential numbers only:
128 128: the next pixel value is 128
128 0: end of the coding
To decode the run-length coding, the procedure checks the coded data sequentially. If a mark is found, the following two codes must be the length and the gray level, except for the two special cases. In the first case, if a mark is not found, the code itself is the gray level. In the second case, a 128 following a 128 means the next pixel value is 128 and a 0 following a 128 means the end of the coding. A modified run-length coding called run-zero coding is sometimes more practical to use. In this case, the original image is first shifted one pixel to the right and a shifted image is formed. A subtracted image between the original and the shifted image is obtained, which is to be coded. A run-length code on the subtracted image requires only the mark and the length because the third code is not necessary: It is either zero or not required because of the two special cases described above. The run-zero coding requires that the pixel values of the leftmost column of the original image be saved for the decoding procedure.
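The scheme can be sketched as follows; this is a simplified illustration of the three-number code and the two special cases, not production code, and the packing of the codes into bytes is omitted:

```python
import numpy as np

# Simplified sketch of the run-length scheme described above (mark = least
# frequent pixel value; runs shorter than 4 are stored as raw values;
# "mark mark" encodes one pixel equal to the mark; "mark 0" ends the code).
def rle_encode(row, mark):
    codes, i, n = [], 0, len(row)
    while i < n:
        run = 1
        while i + run < n and row[i + run] == row[i]:
            run += 1
        value = int(row[i])
        if value == mark:                       # special two-number code, one per pixel
            codes += [mark, mark] * run
        elif run < 4:                           # no gain: store the raw values
            codes += [value] * run
        else:
            length = run
            while length == mark:               # split rule when length equals the mark
                codes += [mark, 4, value]
                length -= 4
            codes += [mark, length, value]
        i += run
    codes += [mark, 0]                          # end of coding
    return codes

def rle_decode(codes, mark):
    out, i = [], 0
    while True:
        c = codes[i]
        if c != mark:
            out.append(c); i += 1               # raw pixel value
        else:
            nxt = codes[i + 1]
            if nxt == mark:
                out.append(mark); i += 2        # next pixel value is the mark
            elif nxt == 0:
                return np.array(out)            # end of coding
            else:
                out += [codes[i + 2]] * nxt; i += 3

row = np.array([10] * 25 + [7, 7, 7] + [128] + [42] * 5)
codes = rle_encode(row, mark=128)
assert np.array_equal(rle_decode(codes, mark=128), row)
print(codes)
```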
5.3.3 Huffman Coding
Huffman coding, based on the probability (or frequency) of occurrence of pixel values in the shifted-then-subtracted image described in Section 5.3.2, can be used to compress the image. The encoding procedure requires six steps:
1. From the original image (Fig. 5.4A), form the shifted-then-subtracted image (Fig. 5.4C).
2. Obtain the histograms of the original image (Fig. 5.4B) and the shifted-then-subtracted image (Fig. 5.4D).
3. Rearrange the histogram in Figure 5.4D according to the probability (or frequency) of occurrence of the pixel values and form a new histogram (Fig. 5.4E).
4. Form the Huffman tree of Figure 5.4E. A Huffman tree with two nodes at each branch is built continuously, following certain rules, until all possible pixel values have been exhausted.
Figure 5.4 An example of the Huffman coding of a digitized chest X-ray image (512 × 512 × 8). Shifting the image one pixel down and one pixel to the right produces a subtracted image between the original and the shifted image. Huffman coding of the subtracted image yields a higher compression ratio than that of the original image. The first row and the leftmost column of the original image are needed during the decoding process. (A) The original digitized chest image. (B) Histogram of the original image. (C) The shifted-subtracted image. (D) Histogram of the subtracted image. (E) The rearranged histogram. The compression ratio of the shifted-subtracted image is about 2.1 : 1.
5. Assign a "1" to the left and a "0" to the right node throughout all branches of the tree, starting from the highest-probability branches (see Table 5.1).
6. Assign contiguous bits to each pixel value according to its location in the tree.
Thus each pixel value has a new assigned code, which is the Huffman code of the image. Table 5.1 shows the Huffman code of the shifted-then-subtracted image (Fig. 5.4C). Because of the characteristic of the rearranged histogram, the Huffman-coded image always achieves some compression. The compression ratio of Figure 5.4C is 2.1 : 1. To reconstruct the image, the compressed image file is scanned sequentially, bit by bit, to match the Huffman code, and then decoded accordingly. Figure 5.4 presents an example of error-free image compression using Huffman coding on a shifted-then-subtracted digitized chest X-ray image (512 × 512 × 8). To obtain higher error-free compression ratios, the run-length method can be used first, followed by Huffman coding.
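A sketch using the standard heap-based Huffman construction, which is equivalent in spirit (though not in bookkeeping) to the tree-and-table procedure above; the synthetic test image and the row-shift direction are arbitrary choices, and only the code lengths are used to estimate the compression ratio:

```python
import heapq
from collections import Counter
import numpy as np

# Build a Huffman code from the histogram of the shifted-then-subtracted image
# and estimate the compression ratio against 8 bits/pixel.
def huffman_code(histogram):
    heap = [[count, [symbol, ""]] for symbol, count in histogram.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "1" + pair[1]              # one branch gets "1" ...
        for pair in hi[1:]:
            pair[1] = "0" + pair[1]              # ... the other gets "0"
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])                     # symbol -> bit string

# Smooth synthetic 8-bit image (a real chest image gives about 2.1:1, Fig. 5.4).
image = (np.add.outer(np.arange(512), np.arange(512)) // 8 % 256).astype(np.int16)
image += np.random.randint(0, 3, image.shape).astype(np.int16)

shifted = np.roll(image, 1, axis=1)
shifted[:, 0] = image[:, 0]                      # keep the leftmost column for decoding
diff = (image - shifted).ravel().tolist()

codes = huffman_code(Counter(diff))
coded_bits = sum(len(codes[v]) for v in diff)
print(image.size * 8 / coded_bits)               # estimated compression ratio
```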
5.4 TWO-DIMENSIONAL IRREVERSIBLE IMAGE COMPRESSION
5.4.1 Background
Irreversible compression is done in the transform domain and is called transform coding. The procedure of transform coding is to first transform the original image into the transform domain with a 2-D transformation—for example, the Fourier, Hadamard, cosine, Karhunen–Loeve, or wavelet transform. The transform coefficients are then quantized and encoded (see Fig. 5.1). The result is a highly compressed data file. The image can be compressed in blocks or in its entirety. In block compression, before the image transformation the entire image is subdivided into equal-size blocks (e.g., 8 × 8), and the transformation is applied to each block. A statistical quantization method is then used to encode the 8 × 8 transform coefficients of each block. The advantages of the block compression technique are that all blocks can be compressed in parallel and that the computation is faster for a small block than for an entire image. However, blocky artifacts are not desirable in a radiological image because the artifacts may compromise the diagnostic value of the image when compression ratios are high. Further image processing on the reconstructed image is sometimes necessary to smooth out such artifacts. The full-frame compression technique, on the other hand, transforms the entire image into the transform domain. Quantization is applied to all transform coefficients of the entire transformed image. The full-frame technique is computationally tedious, expensive, and time-consuming. However, it does not produce blocky artifacts.
5.4.2 Block Compression Technique
The most popular block compression technique using the forward 2-D discrete cosine transform is the JPEG (Joint Photographic Experts Group) standard.
TABLE 5.1
The Huffman Tree of the Chest Image: Compression Ratio 2.1 : 1
Node
111 110 109 108 107 106 105 104 103 102 101 100 99 98 97 96 95 94 93 92 91 90 89 88 87 86 85 84 83 82 81 80 79 78 77 76 75 74 73 72 71 70 69 68 67 66 65
Branch
Gray Level
Left (1)
Right (0)
110 108 106 105 54 103 52 101 50 99 49 97 47 95 45 44 91 42 89 40 87 39 85 37 83 35 81 33 79 78 77 76 75 74 25 73 72 22 70 69 18 67 17 15 14 64 12
109 107 56† 55 104 53 102 51 100 98 48 96 46 94 93 92 43 90 41 88 86 38 84 36 82 34 80 32 31 30 29 28 27 26 24 23 71 21 20 19 68 66 16 65 13 63 11
Left
Right
0† 1 -1
-2
2 3 -3 -4
4
-5
5
-6 6 7 -7 8 -8 -9
9
-10
10
11
-11
12
-12 -13 13 14 -14 -15 15 -16 -17
16
17
18 -19 -18
19 -20 21 -22
-21
22
24
20
Histogram* Left
Right
Sum
157,947 85,399 55,417 46,639 38,546 29,787 25,493 18,400 15,473 11,176 9,244 7,334 5,618 5,173 3,692 3,607 2,645 2,416 1,937 1,699 1,434 1,198 1,031 847 769 613 558 473 404 337 297 249 214 175 150 131 110 98 74 56 53 42 29 26 22 19 13
104,197 72,548 48,780 38,760 34,002 25,630 21,146 15,602 14,314 9,970 9,156 6,980 5,558 4,797 3,642 3,373 2,528 2,381 1,705 1,674 1,211 1,183 906 827 665 598 473 433 365 328 261 224 190 162 147 118 104 77 57 54 51 32 27 25 20 13 12
262,144 157,947 104,197 85,399 72,548 55,417 46,639 34,002 29,787 21,146 18,400 14,314 11,176 9,970 7,334 6,980 5,173 4,797 3,642 3,373 2,645 2,381 1,937 1,674 1,434 1,211 1,031 906 769 665 558 473 404 337 297 249 214 175 131 110 104 74 56 51 42 32 25
TABLE 5.1 Continued Node
64 63 62 61 60 59 58 57
Branch
Gray Level
Left (1)
Right (0)
Left
10 61 8 7 58 6 4 2
62 9 60 59 57 5 3 1
-24
Histogram*
Right
Left
23 25 -23 -25 -28 26
-36 -26 66
Right
10 7 5 4 2 2 1 1
9 6 4 3 2 1 1 1
Sum 19 13 9 7 4 3 2 2
* The count of each terminal node is the frequency of occurrence of the corresponding gray level in the subtracted image. The count of each branch node is the total count of all the nodes initiated from this node. Thus the count in the left column is always greater than or equal to that in the right column by convention, and the count in each row is always greater than or equal to that of the row below. The total count for the last node "111" is 262,144 = 512 × 512, which is the size of the image. † Each terminal node (1–56) corresponds to a gray level in the subtracted image; minus signs are possible because of the subtraction.
Sections 5.4.2.1–5.4.2.4 summarize this method, which consists of four steps: 2-D forward discrete cosine transform (DCT), bit allocation table and quantization, DCT coding, and entropy coding.

5.4.2.1 Two-Dimensional Forward Discrete Cosine Transform The forward discrete cosine transform, a special case of the discrete Fourier transform discussed in Section 2.4.3, has been proven to be an effective method for image compression because the energy in the transform domain is concentrated in a small region. As a result, the forward DCT method can yield larger compression ratios while maintaining image quality compared with other methods. The forward DCT of the original image f(j, k) is given by

F(u, v) = \frac{2}{N} C(u) C(v) \sum_{k=0}^{N-1} \sum_{j=0}^{N-1} f(j, k) \cos\frac{\pi u (j + 0.5)}{N} \cos\frac{\pi v (k + 0.5)}{N}    (5.1)

The inverse discrete cosine transform of F(u, v) recovers the original image f(j, k):

f(j, k) = \frac{2}{N} \sum_{v=0}^{N-1} \sum_{u=0}^{N-1} C(u) C(v) F(u, v) \cos\frac{\pi u (j + 0.5)}{N} \cos\frac{\pi v (k + 0.5)}{N}    (5.2)

where C(u), C(v) = (1/2)^{1/2} for u, v = 0; C(u), C(v) = 1 otherwise; and N × N is the size of the image.
Thus, for the block transform, N × N = 8 × 8, whereas for full-frame compression of a 2048 × 2048 image, N = 2048.

5.4.2.2 Bit Allocation Table and Quantization The 2-D forward DCT of an 8 × 8 block yields 64 DCT coefficients. The energy of these coefficients is concentrated among the lower-frequency components. To achieve a higher compression ratio, these coefficients are quantized to obtain the desired image quality. In doing so, the original values of the coefficients are compromised, hence some information is lost. Quantization of the DCT coefficient F(u, v) can be obtained by

F_q(u, v) = \mathrm{NINT}\bigl[ F(u, v) / Q(u, v) \bigr]    (5.3)
where Q(u, v) is the quantizer step size and NINT is the nearest-integer function. One method of determining the quantizer step size is by manipulating a bit allocation table B(u, v), which is defined by

B(u, v) = \log_2 \lvert F(u, v)\rvert + K \quad \text{if } \lvert F(u, v)\rvert \ge 1; \qquad B(u, v) = K \quad \text{otherwise}    (5.4)
where |F(u, v)| is the absolute value of the cosine transform coefficient and K is a real number that determines the compression ratio. Note that each pixel in the transformed image F(u, v) corresponds to one value of the table B(u, v). Each value in this table represents the number of computer memory bits used to save the corresponding pixel value in the transformed image. The values in the bit allocation table can be increased or decreased to adjust the amount of compression by assigning a certain value to K. Thus, for example, if a pixel is located at (p, q) and F(p, q) = 3822, then B(p, q) = 11.905 + K. If one selects K = +0.095, then B(p, q) = 12 (i.e., 12 bits are allocated to save the value 3822). On the other hand, if one selects K = -0.905, then B(p, q) = 11 [i.e., F(p, q) is compressed to 11 bits]. Following Eq. (5.4), Eq. (5.3) can be rewritten as

F_q(u, v) = \mathrm{NINT}\!\left[ \bigl( 2^{B(m,n)-1} - 1 \bigr) \frac{F(u, v)}{\lvert F(m, n)\rvert} \right]    (5.5)
where F(u, v) is the coefficient of the transformed image, Fq(u, v) is the corresponding quantized value, (m, n) is the location of the maximum value of |F(u, v)| for 0 ≤ u, v ≤ N - 1, and B(m, n) is the corresponding number of bits in the bit allocation table assigned to save |F(m, n)|. With this formula, F(u, v) has been normalized with respect to (2^{B(m,n)-1} - 1)/|F(m, n)|. The quantized value Fq(u, v) is an approximation of F(u, v) because of the value K described in Eq. (5.4). This quantization procedure introduces an approximation into the compressed image file. 5.4.2.3 DCT Coding and Entropy Coding For block quantization with an 8 × 8 matrix, F(0, 0) is the DC coefficient and is normally the maximum value of |F(u, v)|. Starting from F(0, 0), the remaining 63 coefficients can be coded in the zigzag sequence shown in Figure 5.5. This zigzag sequence facilitates entropy coding by
Figure 5.5 The zigzag sequence of an 8 × 8 matrix used in the cosine transform block quantization, running from F(0, 0) to F(7, 7).
placing the low-frequency components, which normally have larger coefficients, before the high-frequency components. The last step in block compression is the entropy coding, which provides additional lossless compression by using a reversible technique—for example, run-length coding or Huffman coding, as described in Sections 5.3.2 and 5.3.3. 5.4.2.4 Decoding and Inverse Transform The block-compressed image file is a sequential file containing the following information: the entropy coding, the zigzag sequence, the bit allocation table, and the quantization. This information can be used to reconstruct the compressed image. The compressed image file is decoded by using the bit allocation table as a guide to form a 2-D array FA(u, v), which is the approximate transformed image. The value of FA(u, v) is computed by

F_A(u, v) = \lvert F(m, n)\rvert \cdot \frac{F_q(u, v)}{2^{B(m,n)-1} - 1}    (5.6)
Equation (5.6) is almost the inverse of Eq. (5.5), and FA(u, v) is the approximation of F(u, v). Inverse cosine transform Eq. (5.2) is then applied on FA(u, v), which gives fA(x, y), the reconstructed image. Because FA(u, v) is an approximation of F(u, v), some differences exist between the original image f(x, y) and the reconstructed image fA(x, y). The compression ratio is dependent on the amount of quantization and the efficiency of the entropy coding. Figure 5.6 shows the block
Figure 5.6 An MR image compressed with the JPEG cosine transform with a compression ratio of 20 : 1. (a) The original image. (b) The reconstructed image. (c) The difference image. Compare this result with that of the 3-D wavelet transform method discussed in Section 5.6.4 (Fig. 5.19).
compression result of a head MRI with a compression ratio of 20 : 1 using the JPEG standard.
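The block pipeline of Section 5.4.2 can be sketched on a single 8 × 8 block as follows; the orthonormal DCT matrix is an equivalent formulation of Eqs. (5.1) and (5.2), the value of K and the test block are arbitrary, and the zigzag scan, per-coefficient bit packing, and entropy coding are omitted:

```python
import numpy as np

# Sketch of forward DCT, bit-allocation quantization, dequantization, and
# inverse DCT on one 8 x 8 block (Eqs. 5.1-5.6), illustration only.
N = 8
u = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(u, u + 0.5) / N)   # DCT-II basis
C[0, :] *= 1.0 / np.sqrt(2.0)                                     # C(0) = (1/2)^(1/2)

def forward_dct(block):  return C @ block @ C.T                   # Eq. (5.1), orthonormal form
def inverse_dct(coeffs): return C.T @ coeffs @ C                  # Eq. (5.2)

block = np.add.outer(np.arange(N), np.arange(N)) * 16.0 + 100.0   # synthetic 8 x 8 block
F = forward_dct(block)

K = -2.0                                                          # controls the compression
B = np.where(np.abs(F) >= 1,
             np.log2(np.maximum(np.abs(F), 1)) + K, K)            # Eq. (5.4)
B = np.clip(np.round(B), 1, 16)       # B(u, v) would cap the bits stored per Fq(u, v)

m, n = np.unravel_index(np.argmax(np.abs(F)), F.shape)            # location of max |F|
scale = (2.0 ** (B[m, n] - 1) - 1) / np.abs(F[m, n])
Fq = np.rint(scale * F)                                           # Eq. (5.5)
FA = Fq / scale                                                   # Eq. (5.6)

reconstructed = inverse_dct(FA)
print(np.abs(block - reconstructed).max())                        # small quantization error
```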
5.4.3 Full-Frame Compression
The full-frame bit allocation (FFBA) compression technique in the cosine transform domain was developed primarily for radiological images. It is different from the JPEG block method in that the transform is done on the entire image. Image compression using blocks of the image gives blocky artifacts that might affect diagnostic accuracy. Basically, the FFBA is similar to the block compression technique, as indicated by the following steps. The transformed image of the entire image is first obtained by using the forward DCT [Eq. (5.1)]. A bit allocation table [Eq. (5.4)] designating the number of bits for each pixel in this transformed image is then generated, the
value of each pixel is quantized based on a predetermined rule [Eq. (5.5)], and the bit allocation table is used to encode the quantized image, forming a 1-D sequentially compressed image file. The 1-D image file is further compressed by means of lossless entropy coding. The compression ratio between the original image and the compressed image file depends on the information in the bit allocation table, the amount of quantization of the transformed image, and the entropy coding. The compressed image file and the bit allocation table are saved and used to reconstruct the image. During image reconstruction, the bit allocation table is used to decode the 1-D compressed image file back to a 2-D array. An inverse cosine transform is performed on the 2-D array to form the reconstructed image. The reconstructed image does not exactly equal the original image because approximation is introduced in the bit allocation table and in the quantization procedure. Despite this similarity, however, the implementation of the FFBA is quite different from the block compression method for several reasons. First, it is computationally tedious and time-consuming to carry out the 2-D DCT when the image size is large. Second, the bit allocation table given in Eq. (5.4) is large when the image size is large, and therefore it becomes an overhead in the compressed file. In the case of block compression, one 8 × 8 bit allocation table is sufficient for all blocks. Third, zigzag sequencing provides an efficient arrangement for the entropy coding in a small block of data; in the FFBA, zigzag sequencing is not a good method to rearrange the DCT coefficients because of the large matrix size. The implementation of the FFBA is best accomplished by using a fast DCT method and a compact full-frame bit allocation table, with attention to computational precision, implemented in a specially designed hardware module. Figure 5.7A shows a chest (CH), a renoarteriogram (RE), the SMPTE phantom (PH), and a body CT (CT) image; Figure 5.7B shows the corresponding cosine transforms of these images. Although the full-frame compression method does not produce blocky artifacts, it is not convenient to use for medical images, especially in the DICOM environment (Chapter 7).
5.5 MEASUREMENT OF THE DIFFERENCE BETWEEN THE ORIGINAL AND THE RECONSTRUCTED IMAGE
It is natural to raise the question of how much an image can be compressed and still preserve sufficient information for a given clinical application. This section discusses some parameters and methods used to measure the trade-offs between image quality and compression ratio.
5.5.1 Quantitative Parameters
5.5.1.1 Normalized Mean-Square Error The normalized mean-square error (NMSE) between the original f(x, y) and the reconstructed fA(x, y) image can be used as a quantitative measure of the closeness between the reconstructed image and the original image. The formula for the normalized mean-square error is given by
Figure 5.7A A chest image, a renoarteriogram, an SMPTE phantom, and a body CT image.
\mathrm{NMSE} = \frac{\sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \bigl[ f(x, y) - f_A(x, y) \bigr]^2}{\sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x, y)^2}    (5.7)

or

\mathrm{NMSE} = \frac{\sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \bigl[ F(u, v) - F_A(u, v) \bigr]^2}{\sum_{u=0}^{N-1} \sum_{v=0}^{N-1} F(u, v)^2}    (5.8)
because the cosine transform is a unitary transformation. The NMSE is a global measurement of the quality of the reconstructed image; it does not provide information on the local measurement. It is obvious that the NMSE is a function of the compression ratio. A high compression ratio will yield a high NMSE value. 5.5.1.2 Peak Signal-to-Noise Ratio Another quantitative measure is the peak signal-to-noise ratio (PSNR) based on the root mean-square error of the reconstructed image, which is very similar to the NMSE:
Figure 5.7B The cosine transforms of the entire chest radiograph (CH), the renoarteriogram (RE), the SMPTE phantom (PH), and the CT scan (CT) shown in Fig. 5.7A. The origin is located at the upper left-hand corner of each image. It is seen that the frequency distributions of the cosine transforms of the chest radiograph and the renoarteriogram are quite similar; they concentrate in the upper left corner, representing the lower-frequency components. The frequency distribution of the cosine transform of the CT is spread more toward the higher-frequency region, whereas in the case of the SMPTE phantom the frequency distribution of the cosine transform is all over the transform domain; hence the cosine transform technique is not a good method to compress the phantom.
\mathrm{PSNR} = 20 \log_{10} \frac{f(x, y)_{\max}}{\sqrt{\sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \bigl( f(x, y) - f_A(x, y) \bigr)^2 / (N \times N)}}    (5.9)
where f(x, y)max is the maximum pixel value of the entire image, N × N is the total number of pixels in the image, and the denominator is the root-mean-square error of the reconstructed image.
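Both measures are direct to compute; the sketch below transcribes Eqs. (5.7) and (5.9), assuming base-10 logarithms for the PSNR (the base is not stated above) and using synthetic stand-in data:

```python
import numpy as np

# NMSE and PSNR between an original image f and a reconstruction fa.
def nmse(original, reconstructed):                       # Eq. (5.7)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)

def psnr(original, reconstructed):                       # Eq. (5.9), log base 10 assumed
    rmse = np.sqrt(np.mean((original - reconstructed) ** 2))
    return 20 * np.log10(original.max() / rmse)

f  = np.random.randint(0, 4096, (512, 512)).astype(np.float64)   # 12-bit test image
fa = f + np.random.normal(0, 5.0, f.shape)                        # stand-in reconstruction
print(nmse(f, fa), psnr(f, fa))
```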
5.5.2 Qualitative Measurement: Difference Image and Its Histogram
The difference image between the original and the reconstructed image gives a qualitative measurement that compares the quality of the reconstructed image with that
of the original image. The corresponding histogram of the difference image provides a global qualitative measurement of the difference between the original and reconstructed images. A very narrow histogram means a small difference, whereas a broad histogram means a very large difference.
5.5.3 Acceptable Compression Ratio
Consider the following experiment. Compress a 512 × 512 × 12 bit body CT image using the same compression method with compression ratios of 4 : 1, 8 : 1, 17 : 1, 26 : 1, and 37 : 1. The original image and the five reconstructed images are shown in Figure 5.8. It is not difficult to arrange these images in order of quality, but it is more difficult to answer the question of which compression ratio is acceptable for diagnosis. Reconstructed images with compression ratios less than or equal to 8 : 1 do not exhibit visible deterioration in image quality; in other words, a compression ratio of 8 : 1 or less is visually acceptable. But a visually unacceptable ratio does not necessarily mean that the ratio is not suitable for diagnosis, because this depends on what diseases are under consideration. The receiver operating characteristic (ROC) method described in Section 5.5.4 is a more objective way to address this question.
5.5.4 Receiver Operating Characteristic Analysis
Receiver operating characteristic (ROC) analysis based on the work of Swets and Pickett and Metz can be used to measure the difference between the quality of the original and the reconstructed image. This method was developed for comparing
Figure 5.8 Body CT scan (upper left), followed, clockwise, by reconstructed images with compression ratios of 4 : 1, 8 : 1, 17 : 1, 26 : 1, and 37 : 1 (the full-frame method was used).
the image quality between two modalities. To begin, a set of good-quality images of a certain category (e.g., A-P chest radiographs) is selected by a panel of experts. The selection process includes the types of diseases, the method of determining the "truth" of the disease, the number of images, the distribution between normal and abnormal images, and the subtlety of the disease appearing in the images. The images in the set are then compressed to a predetermined compression ratio and reconstructed; the result is two sets of images: the original and the reconstructed. Readers with expertise in diagnosing the subject diseases participate as observers and review all the images. For each image, an individual observer is asked to give an ROC confidence rating on a scale of 1–5 representing his/her impression of the likelihood of the presence of the disease. A confidence value of 1 indicates that the disease is definitely not present, and a confidence value of 5 indicates that the disease is definitely present. Confidence values 2 and 4 indicate that the disease process is probably not present or probably present, respectively. A confidence value of 3 indicates that the presence of the disease process is equivocal or indeterminate. Every image is read by every observer. The ratings of all images by a single observer are graded against the "truth." Two plots of true positive (TP) versus false positive (FP) are generated. The first plot is an ROC curve representing the observer's performance in diagnosing the selected disease from the original images; the second plot indicates performance on the reconstructed images. The area Az under the ROC curve is a quantitative index of the observer's performance on this image set. Thus, if the Az (original) and the Az (reconstructed) of the two ROC curves are very close to each other, we can say that the diagnosis of this disease based on the reconstructed image with the predetermined compression ratio would be as good as that made (by this observer) from the original image. In other words, this compression ratio is acceptable for this image type for the given disease. In doing the ROC analysis, the statistical "power" of the study is important: the higher the power, the more confidence can be placed in the result. The statistical power is determined by the number of images and the number of observers used in the study. A meaningful ROC analysis often requires many images (100 or more) and five to six observers to evaluate one type of image with several diseases. Although performing an ROC analysis is tedious, time-consuming, and expensive, this method is accepted by the radiology community for determining the quality of the reconstructed image. As an example, we describe a study based on work by Sayre et al. (1992) entailing the analysis of 71 hand radiographs, of which 45 were normal and 26 had subperiosteal resorption. The images were digitized to 2K × 2K × 12 bit resolution and printed on film (14 in. × 17 in.). The digitized images were compressed to 20 : 1 with the full-frame method and printed on film of the same size. Figures 5.9 and 5.10 show the results from the ROC analysis with five observers. Figure 5.9A, B, and C show an original hand radiograph with evidence of subperiosteal resorption (arrow), a reconstructed image with a 20 : 1 compression ratio, and the difference image, respectively.
Figure 5.10 shows the ROC curves of the original and reconstructed images from the five observers; the statistics demonstrate that there is no significant difference between using the original images and the reconstructed images with a 20 : 1 compression ratio for the diagnosis of subperiosteal resorption from hand radiographs.
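For illustration only, an empirical (trapezoidal) ROC curve and its area Az can be computed from 5-point confidence ratings as sketched below; published ROC studies such as the one above fit binormal models rather than using this simple estimate, and the ratings here are randomly generated stand-ins:

```python
import numpy as np

# Empirical ROC points and trapezoidal area (Az) from 1-5 confidence ratings.
def roc_points(ratings, truth):
    ratings, truth = np.asarray(ratings), np.asarray(truth)
    tpf, fpf = [0.0], [0.0]
    for threshold in (5, 4, 3, 2, 1):            # call "disease" at rating >= threshold
        called = ratings >= threshold
        tpf.append(np.mean(called[truth == 1]))  # true-positive fraction
        fpf.append(np.mean(called[truth == 0]))  # false-positive fraction
    return np.array(fpf), np.array(tpf)

def az(fpf, tpf):
    return float(np.sum(np.diff(fpf) * (tpf[1:] + tpf[:-1]) / 2.0))   # trapezoid rule

truth   = np.array([1] * 26 + [0] * 45)          # e.g., 26 abnormal, 45 normal cases
ratings = np.concatenate([np.random.randint(3, 6, 26),   # hypothetical reader scores
                          np.random.randint(1, 4, 45)])
fpf, tpf = roc_points(ratings, truth)
print(az(fpf, tpf))
```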
Figure 5.9 Example of using the full-frame image compression hardware on hand radiographs with evidence of subperiosteal resorption (arrow). (A) A digitized 2048 × 2048 × 12 bit hand image printed on film. (B) Reconstructed image with a compression ratio of 20 : 1. (C) The difference image (Sayre et al., 1992).
Figure 5.9 Continued
Figure 5.10 Comparison of five observer ROC curves obtained from a hand image compression study. TP, true positive; FP, false positive; O, original; C, compressed image. Five readers (R1, . . . , R5) were used in the study.
5.6 THREE-DIMENSIONAL IMAGE COMPRESSION
5.6.1 Background
So far, we have discussed 2-D image compression; however, acquisition of 3-D and 4-D medical images is becoming more common in CT, MR, and DSA (Chapter 3). The third dimension can be in the spatial domain (e.g., sectional images) or in the time domain (e.g., in an angiographic study). Such processes significantly increase the volume of data gathered per study. To compress 3-D data efficiently, one must consider the decorrelation between adjacent images. Some earlier work on 3-D compression, reported by Sun and Goldberg (1988), Lee (1993), and Koo (1992), considered the correlation between adjacent sections. Chan et al. (1989) reported a full-frame DCT method for DSA, CT, and MR. They found that by grouping four to eight slices as a 3-D volume, compression was twice as efficient as it was with 2-D full-frame DCT for DSA. The 3-D method of compressing CT images was also more efficient than the 2-D method. However, 3-D compression did not achieve very high efficiency in the case of MR images. Recently, the use of the wavelet transform for image compression has attracted significant attention since the publication of the works by Daubechies (1988) and Mallat (1989). The primary advantage of the wavelet transform compared with the cosine transform is that the wavelet transform is localized in both the spatial and frequency domains; therefore, the transformation of a given signal contains both the spatial and frequency information of that signal. On the other hand, the cosine transform basis extends infinitely, with the result that the spatial information is spread out over the whole frequency domain. Because of this property, image compression using the wavelet transform can retain certain local properties of the image, which is important in medical imaging applications. Recent results have demonstrated that wavelet transform compression outperforms cosine transform compression (Cohen et al., 1992; Antonini et al., 1992; Villasenor et al., 1995; Lightstone and Majani, 1994; Albanesi and Lotto, 1992). The DICOM standard actually has a supplement that includes the wavelet transform as a method for image compression (see Section 5.8 and Chapter 7). In this section, we present compression with the wavelet transform and discuss the selection of wavelet filters. The compression results of 3-D data sets with JPEG and the 2-D wavelet transform are also compared with the compression obtained with the 3-D wavelet transform approach (Wang, 1995, 1996, 2000).
Basic Wavelet Theory and Multiresolution Analysis
Image transformation usually relies on a set of basis functions into which the image is decomposed as a combination of these functions. In the cosine transform, the basis functions are a series of cosine functions, and the resulting domain is the frequency domain. In the case of the wavelet transform, the basis functions are derived from a mother wavelet function by dilation and translation. In the 1-D wavelet transform, the basis functions \psi_{a,b}(x) are formed by dilation and translation of the mother wavelet \psi(x) such that

\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{x-b}{a}\right)    (5.10)
where a and b are the dilation and translation factors, respectively. The continuous wavelet transform of a function f(x) can be expressed as

F_w(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(x)\,\psi^{*}\!\left(\frac{x-b}{a}\right) dx    (5.11)

where * denotes the complex conjugate. The basis functions given in Eq. (5.10) are redundant when a and b are continuous. It is possible, however, to discretize a and b so as to form an orthonormal basis. One way of discretizing a and b is to let a = 2^p and b = 2^p q, so that Eq. (5.10) becomes

\psi_{p,q}(x) = 2^{-p/2}\,\psi(2^{-p}x - q)    (5.12)

where p and q are integers. The wavelet transform in Eq. (5.11) then becomes

F_w(p,q) = 2^{-p/2} \int_{-\infty}^{\infty} f(x)\,\psi(2^{-p}x - q)\,dx    (5.13)

Because p and q are integers, Eq. (5.13) is called a wavelet series. It is seen from this representation that the transform contains both the spatial and the frequency information. The wavelet transform also relies on the concept of multiresolution analysis, which decomposes a signal into a series of smooth signals and their associated detailed signals at different resolution levels. The smooth signal at level m can be reconstructed from the level m+1 smooth signal and the associated level m+1 detailed signals.

5.6.3 One-, Two-, and Three-Dimensional Wavelet Transform
5.6.3.1 One-Dimensional. We use the 1-D case to explain the concept of multiresolution analysis. Consider the discrete signal f_m at level m, which can be decomposed into the m+1 level by convolving it with the h (low-pass) filter to form a smooth signal f_{m+1} and with the g (high-pass) filter to form a detailed signal f'_{m+1}, respectively, as shown in Figure 5.11. When m = 0, the signal is the original data (or, in the 2-D case, the original image to be compressed). This decomposition can be implemented with the following equations, using the pyramidal algorithm suggested by Mallat.
Figure 5.11 Decomposition of a 1-D signal f_m into a smooth signal f_{m+1} (low-pass filter h) and a detailed signal f'_{m+1} (high-pass filter g); each output is sampled at every other data point.
f_{m+1}(n) = \sum_{k} h(2n - k)\,f_m(k), \qquad f'_{m+1}(n) = \sum_{k} g(2n - k)\,f_m(k)    (5.14)

where f_{m+1} is the smooth signal and f'_{m+1} is the detailed signal at resolution level m+1. The total number of discrete points in f_m is equal to that of the sum of f_{m+1} and f'_{m+1}. For this reason, both f_{m+1} and f'_{m+1} must be sampled at every other data point after the operation described in Eq. (5.14). The same process can be applied again to f_{m+1}, creating the detailed and smooth signals at the next resolution level, until the desired level is reached; f'_{m+1} does not need to be processed any further. Figure 5.12 depicts the components resulting from three levels of decomposition of the signal f_0. The horizontal axis indicates the total number of discrete points of the original signal, and the vertical axis is the level m of the decomposition. At resolution level m = 3, the signal is composed of the detailed signals of resolution levels f'_1, f'_2, and f'_3 plus one smooth signal f_3. Thus the original signal f_0 can be reconstructed by

f_0 = f'_1 + f'_2 + f'_3 + f_3    (5.15)

Equation (5.15) is a lossless reconstruction of the original signal from the wavelet transform. Each of f'_1, . . . , f'_3 can be compressed by different quantization and encoding methods to achieve the required compression ratio. Accumulation of these compressed signals at all levels can then be used to reconstruct the original signal f_0 with Eq. (5.15). In this case the compression is lossy, because each f'_i, i = 1, 2, 3, has been compromised during quantization.
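The following minimal Python/NumPy sketch implements one level of the pyramidal decomposition of Eq. (5.14). The Haar filter pair and the sample signal are illustrative assumptions only, not the filters used in the studies cited in this chapter.

```python
import numpy as np

def decompose_1d(f_m, h, g):
    """One level of the pyramidal decomposition of Eq. (5.14):
    convolve with the low-pass (h) and high-pass (g) filters,
    then keep every other output sample."""
    smooth = np.convolve(f_m, h)[1::2]   # f_{m+1}
    detail = np.convolve(f_m, g)[1::2]   # f'_{m+1}
    return smooth, detail

# Illustrative Haar filter pair (an assumption, chosen only for simplicity).
h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low pass
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high pass

f0 = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])
f1, f1_detail = decompose_1d(f0, h, g)    # 4 smooth + 4 detailed samples
# f1 can be decomposed again to obtain f2 and f'_2, and so on (Fig. 5.12).
```

Repeating the call on the smooth output produces the multilevel structure of Figure 5.12.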
5.6.3.2 Two-Dimensional. In the 2-D wavelet transform, the first level of decomposition results in four components from filtering in the x- and y-directions [see Fig. 5.15, left and middle (x- and y-directions only)]. Figures 5.13 and 5.14 show a two-level wavelet decomposition of a digital mammogram and of a head MR image, respectively.
Figure 5.12 Three-level (1, 2, and 3) wavelet decomposition of a signal. Note that the sum of all pixels n in each level is the same.
Figure 5.13 A digital mammogram with two-level 2-D wavelet transformation. (A) Original image. (B) One-level decomposition. (C) Two-level decomposition. The total number of pixels after two levels of decomposition is the same as in the original image. The smooth image is at the upper left corner of each level. There is not much visible information in the detailed images at each level (compare with Fig. 5.14).
Figure 5.14 Two-level 2-D wavelet decomposition of an MR head sagittal image. (A) Original. (B) One-level decomposition. (C) Two-level decomposition. In each level, the upper left corner shows the smooth image and the other three quadrants are the detailed images. Observe the differences in the characteristics of the detailed images between the mammogram and the head MRI. In the latter, all detailed images in each level contain visible anatomical information.
Figure 5.15 One-level three-dimensional wavelet decomposition in the x-, y-, and z-directions. The resulting signal has eight components: f_{m+1} is the smooth image, and the other seven components f'_{m+1} are detailed images. If only the x- and y-directions are decomposed, it is a 2-D wavelet decomposition.
Note the different characteristics of the mammogram and the MRI at each level. In the former, not much information is visible in the detailed images at each intermediate level, whereas in the MRI the detailed images at each level contain various anatomical properties of the original image.
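A two-level 2-D decomposition like the ones shown in Figures 5.13 and 5.14 can be sketched with the PyWavelets package (an assumption; any wavelet library with a 2-D multilevel transform would serve, and the "db4" wavelet and random test image are placeholders).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Illustrative 2-D "image"; in practice this would be the mammogram
# or MR slice loaded as a 2-D array.
image = np.random.rand(256, 256)

# Two-level 2-D decomposition (compare Figs. 5.13 and 5.14).
coeffs = pywt.wavedec2(image, wavelet="db4", level=2)
smooth2, (h2, v2, d2), (h1, v1, d1) = coeffs
# smooth2 is the level-2 smooth image (upper left quadrant in the figures);
# (h, v, d) are the horizontal, vertical, and diagonal detailed images.

# Lossless reconstruction, analogous to Eq. (5.15):
reconstructed = pywt.waverec2(coeffs, wavelet="db4")
```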
5.6.3.3 Three-Dimensional. The 3-D wavelet transform is a very effective method for compressing 3-D medical image data; it can yield a good-quality image even at very high compression ratios. The 3-D method extends the 1-D and 2-D pyramidal algorithm. Figure 5.15 shows one level of the decomposition process from f_m to f_{m+1}. First, each line in the x-direction of the 3-D image data set is convolved with filters h and g, followed by subsampling of every other voxel in the x-direction to form the smooth and detailed data lines. The resulting voxels are convolved with h and g in the y-direction, followed by subsampling in the y-direction. Finally, the same procedure is applied in the z-direction. The resulting signal has eight components. Because h is a low-pass filter, only one component, f_{m+1}, contains all the low-frequency information; the remaining seven components have been convolved at least once with the high-pass filter g and therefore contain the detailed signals f'_{m+1} in different directions. The same process can be repeated on the low-frequency signal f_{m+1} to form the next level of the wavelet transform, and so forth, until the desired level is reached.
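A minimal sketch of this one-level, eight-component 3-D decomposition, again using the PyWavelets package (an assumption; the wavelet choice and the random volume are placeholders):

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Illustrative 3-D volume (e.g., a stack of CT or MR sections).
volume = np.random.rand(64, 128, 128)

# One level of the 3-D decomposition shown in Fig. 5.15.
coeffs = pywt.dwtn(volume, wavelet="db4")
# coeffs is a dict of eight coefficient sub-volumes keyed by the filter
# applied along each axis (a = low pass h, d = high pass g):
# 'aaa' is the smooth component f_{m+1}; the other seven keys
# ('aad', 'ada', ..., 'ddd') are the detailed components f'_{m+1}.
f_m1 = coeffs["aaa"]

# The smooth component can be decomposed again for the next level, or
# pywt.wavedecn(volume, "db4", level=2) can be used for a multilevel transform.
```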
5.6.4 Three-Dimensional Image Compression with Wavelet Transform
5.6.4.1 The Block Diagram. The wavelet transform is a very effective method for compressing a 3-D medical image data set, yielding a high compression ratio with good image quality. Figure 5.16 shows the block diagrams of the 3-D wavelet transform compression and decompression procedures. In the compression process, a 3-D wavelet transform is first applied to the 3-D image data set with the scheme shown in Figure 5.15, resulting in a 3-D multiresolution representation of the image. Then the wavelet coefficients are quantized with scalar quantization. Finally, run-length and then Huffman coding are used to impose entropy coding on the quantized data. These steps are described in Sections 5.6.4.4 and 5.6.4.5. The decompression process is the inverse of the compression process: the compressed data are first entropy decoded, a dequantization procedure is applied to the decoded data, and the inverse 3-D wavelet transform is used, resulting in the reconstructed 3-D image data.

5.6.4.2 Mathematical Formulation of the Three-Dimensional Wavelet Transform. For the 3-D case, a scaling function \Phi and seven wavelet functions \Psi^s (s = 1, . . . , 7) are chosen such that the 3-D scaling and wavelet functions are mathematically separable. The scaling function has the form
Figure 5.16 Block diagrams of 3-D image data compression and decompression using the 3-D wavelet transform. (a) Compression: forward 3-D wavelet transform, scalar quantization, entropy coding. (b) Decompression: entropy decoding, scalar dequantization, inverse 3-D wavelet transform.
\Phi(x, y, z) = \phi(x)\,\phi(y)\,\phi(z)    (5.16)

where \phi(x), \phi(y), and \phi(z) contain the low-pass filter h in the x-, y-, and z-directions, respectively [see Eq. (5.14)]. The seven wavelet functions have the form

\Psi^1(x,y,z) = \phi(x)\,\phi(y)\,\psi(z), \quad \Psi^2(x,y,z) = \phi(x)\,\psi(y)\,\phi(z), \quad \Psi^3(x,y,z) = \psi(x)\,\phi(y)\,\phi(z),
\Psi^4(x,y,z) = \phi(x)\,\psi(y)\,\psi(z), \quad \Psi^5(x,y,z) = \psi(x)\,\phi(y)\,\psi(z), \quad \Psi^6(x,y,z) = \psi(x)\,\psi(y)\,\phi(z),
\Psi^7(x,y,z) = \psi(x)\,\psi(y)\,\psi(z)    (5.17)

where \psi(x), \psi(y), and \psi(z) contain the high-pass filter g in the x-, y-, and z-directions, respectively [see Eq. (5.14)]. During each level of the transform, the scaling function and the seven wavelet functions are applied, respectively, to the smooth (or the original) image at that level, forming a total of eight images: one smooth and seven detailed. The wavelet coefficients are the voxel values of the eight images after the transform. Figure 5.17 shows two levels of the 3-D wavelet transform applied to an image volume data set. The first level decomposes the data into eight components: f_1 is the low-resolution (smooth) portion of the image data, and the remaining blocks are high-resolution (detailed) components. As Figure 5.17 indicates, f_1 can be further decomposed into eight smaller volumes labeled f_2 (smooth) and f'_2 (detailed). The detailed images f'_1 on level 1 contain higher-frequency components than those f'_2 of level 2. With properly chosen wavelet functions, the low-resolution component at level m is 1/(2^3)^m of the original image size after the transformation but contains about 90% of the total energy at that level, where m is the level of the decomposition. It is clear that the high-resolution components are spread over the different decomposition levels. For these reasons, the wavelet transform components provide a better representation of the original image for compression purposes, and different levels of the representation can be encoded differently to achieve a desired compression ratio.
Figure 5.17 Representations of a volume data set after two-level decomposition using the 3-D wavelet transform. f1 and f2 are the smooth images at each level, respectively. Others are all detailed images.
5.6.4.3 Wavelet Filter Selection. Wavelet filter selection is a very important step for high-performance image compression. A good filter bank should have finite length, so that the implementation is reasonably fast, and should provide a transform with most of the energy packed into the fewest coefficients. Today, many filter functions have been accumulated in public filter banks and tested to yield good-quality compressed images for different categories of images; the choice among them is a matter for the user and the application (Strang and Nguyen, 1995).

5.6.4.4 Quantization. After the wavelet transformation, the next step is quantization of the wavelet transform coefficients. The purpose of quantization is to map a large number of input values into a smaller set of output values by reducing the precision of the data. This is the step in which information may be lost. Wavelet-transformed data are floating point values and consist of two types: low-resolution image components (the smooth image), which contain most of the energy, and high-resolution image components (the detailed images), which contain the information on sharp edges. Because the low-resolution components contain most of the energy, it is better to maintain the integrity of these data; to minimize data loss in this portion, each floating point value can be mapped to its nearest integer neighbor (NINT). In the high-resolution components of the wavelet coefficients, there are many coefficients of small magnitude that correspond to flat areas in the original image. These coefficients contain very little energy, and they can be eliminated without creating significant distortion in the reconstructed image. A threshold T_m is chosen such that any coefficient with magnitude less than T_m is set to zero; above T_m, a range of floating point values is mapped to a single integer. If the quantization number is Q_m, the high-frequency coefficients can be quantized as follows:
a_q(i,j,k) = \mathrm{NINT}\!\left[\frac{a(i,j,k) - T_m}{Q_m}\right],  \quad a(i,j,k) > T_m
a_q(i,j,k) = 0,  \quad -T_m \le a(i,j,k) \le T_m        (5.18)
a_q(i,j,k) = \mathrm{NINT}\!\left[\frac{a(i,j,k) + T_m}{Q_m}\right],  \quad a(i,j,k) < -T_m
where a(i, j, k) is a wavelet coefficient, a_q(i, j, k) is the quantized wavelet coefficient, m is the level of the wavelet transform, and T_m and Q_m are functions of the wavelet transform level. T_m can be set to a constant, and Q_m = Q \cdot 2^{m-1}, where Q is a constant.
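A minimal NumPy sketch of the quantizer in Eq. (5.18) and its approximate inverse; the threshold T_m, step Q_m, and sample coefficients are illustrative assumptions.

```python
import numpy as np

def quantize(a, T, Q):
    """Scalar quantization of high-frequency wavelet coefficients, Eq. (5.18).
    Coefficients within [-T, T] are set to zero; the rest are shifted by the
    threshold, divided by Q, and mapped to the nearest integer (NINT)."""
    aq = np.zeros_like(a, dtype=np.int32)
    above = a > T
    below = a < -T
    aq[above] = np.rint((a[above] - T) / Q).astype(np.int32)
    aq[below] = np.rint((a[below] + T) / Q).astype(np.int32)
    return aq

def dequantize(aq, T, Q):
    """Approximate inverse mapping used during decompression."""
    a = aq.astype(np.float64) * Q
    a[aq > 0] += T
    a[aq < 0] -= T
    return a

# Illustrative values for level m (assumptions, not the book's settings).
T_m, Q_m = 2.0, 4.0
coeffs = np.array([-9.3, -1.2, 0.4, 2.0, 7.8, 15.1])
print(quantize(coeffs, T_m, Q_m))   # -> [-2  0  0  0  1  3]
```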
5.6.4.5 Entropy Coding. The quantized data are subjected to run-length coding followed by Huffman coding. Run-length coding is effective when pixels with the same value occur in sequence. Because thresholding of the high-resolution components results in a large number of zeros, run-length coding can be expected to reduce the size of the data significantly. Applying Huffman coding after run-length coding further improves the compression ratio.
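A minimal sketch of run-length coding of the quantized coefficients (illustrative only; the exact run-length and Huffman schemes used in the cited work are not specified here, and a Huffman coder would follow this step).

```python
def run_length_encode(values):
    """Encode a sequence as (value, run_length) pairs; the long runs of
    zeros produced by thresholding collapse into single pairs."""
    encoded = []
    prev, run = values[0], 1
    for v in values[1:]:
        if v == prev:
            run += 1
        else:
            encoded.append((prev, run))
            prev, run = v, 1
    encoded.append((prev, run))
    return encoded

# Example: quantized coefficients dominated by zeros.
print(run_length_encode([3, 0, 0, 0, 0, -1, 0, 0, 2]))
# -> [(3, 1), (0, 4), (-1, 1), (0, 2), (2, 1)]
```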
5.6.4.6 Some Results. This section presents some compression results for a 3-D MR data set of 124 images, each 256 × 256 × 16 bits. A 2-D wavelet compression was also applied to the same data set, and the results are compared with the 3-D compression results.
Figure 5.18 Performance comparison between 3-D versus 2-D wavelet compression of a 3-D MR head image set. 3-D wavelet transform is clearly superior to 2-D for the same peak signal-to-noise ratio [PSNR, Eq. (5.9)].
The 2-D compression algorithm is similar to the 3-D algorithm except that a 2-D wavelet transform is applied to each slice. Figure 5.18 compares the compression ratios for the 3-D and 2-D algorithms; the horizontal axis is the peak signal-to-noise ratio (PSNR) defined in Eq. (5.9), and the vertical axis represents the compression ratio. At the same PSNR, the compression ratios of the 3-D method are about 40–90% higher than those of the 2-D method. Figure 5.19 depicts one slice of the MR volume compressed with the 3-D wavelet method. For a compression ratio of 20:1 the decompressed image quality is nearly the same as that of the original image, because the difference image contains no anatomical residue of the original image. The 3-D wavelet compression is also superior to the JPEG cosine transform result shown in Fig. 5.6.

5.6.4.7 Wavelet Compression in Teleradiology. The reconstructed image from wavelet transform compression has four major properties. It
1. Retains both spatial and frequency information of the original image
2. Preserves certain local properties
3. Results in a good-quality image even at high compression ratios
4. Can be implemented in software using the multiresolution scheme
For these reasons, wavelet transform compression is used extensively for viewing images at the workstation (Chapter 11) and in teleradiology (Chapter 14). The detailed implementation of the method is discussed in Chapter 11.

5.7 COLOR IMAGE COMPRESSION

5.7.1 Examples of Color Images Used in Radiology
Color images are seldom used in radiology because radiological images do not traditionally use a light source to generate diagnostic images.
Figure 5.19 One slice of 3-D MR volume data compressed at a compression ratio of 20 : 1 with the 3-D wavelet compression method: (A) original image. (B) Reconstructed image. (C) Difference image. No residual anatomical features are visible in the difference image. (Courtesy of Jun Wang, 1995.) Compare this result with that of Fig. 5.6, which is compressed by JPEG using the same 20 : 1 compression ratio.
Color, when used, is mostly for enhancement purposes: a range of gray levels is converted to colors to enhance the visual appearance of features within that range. Examples are found in nuclear medicine, SPECT, and PET. In these cases compression is rarely used, because the image files are relatively small. However, other color images used in medical diagnosis can produce very large image files per examination. An example is Doppler ultrasound (US) (see Fig. 4.15A), which can produce image files as large as 225 Mbytes in 10 s. Other light imaging, such as microscopy (Fig. 4.23) and endoscopy (Fig. 4.26), can also yield large color image files. For these reasons, a means of compressing color images is needed. Color image compression requires a different approach because a color image is composed of three image planes, red, green, and blue, yielding 24 bits per pixel. These three planes have a certain correlation, which allows a special compression technique to be used that is beneficial for color images.
5.7.2 The Color Space
A color image with 512 × 512 × 24 bits is decomposed into a red, a green, and a blue image in the RGB color space, each with 512 × 512 × 8 bits (see Fig. 4.24, Section 4.8.1.7). Each image is treated independently as an individual image. For display, the display system combines the three images through a color composite video control and displays them as a color image on the monitor. This scheme is referred to as the color space, and the three-color decomposition is determined by drawing a triangle on a special color chart developed by the Commission Internationale de l'Éclairage (CIE), with each of the base colors as an end point. The CIE color chart is characterized by isolating the luminance (brightness) from the chrominance (hue). With this characteristic as a guideline, the National Television System Committee (NTSC) defined a new color space, YIQ, representing the luminance, in-phase chrominance, and quadrature chrominance coordinates, respectively. In digital imaging, a color space called YCbCr is used, where Cb and Cr represent two chrominance components. The conversion from the standard RGB space to YCbCr is given by

\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} =
\begin{bmatrix} 0.2990 & 0.587 & 0.114 \\ -0.1687 & -0.3313 & 0.5 \\ 0.5 & -0.4187 & -0.0813 \end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}    (5.19)
where the R, G, and B pixel values are between 0 and 255. There are two advantages to using the YCbCr system. First, it concentrates most of the image's information in the luminance component (Y), with less going to the chrominance components (Cb and Cr). As a result, the Y element and the Cb, Cr elements are less correlated and can therefore be compressed separately without loss of efficiency. Second, field experience shows that the variations in the Cb and Cr planes are smaller than those in the Y plane, so Cb and Cr can be subsampled in both the horizontal and the vertical directions without losing much chrominance information. The immediate compression from converting RGB to YCbCr with this subsampling is 2:1, computed as follows:

Original color image size: 512 × 512 × 24 bits
YCbCr image size: 512 × 512 × 8 bits (Y) + 2 × (0.25 × 512 × 512 × 8) bits (subsampled Cb and Cr) = 512 × 512 × 12 bits

That is, after the conversion each YCbCr pixel is represented by 12 bits: 8 bits for the luminance (Y), plus 8 bits for each of the chrominance components (Cb and Cr) retained for every other pixel on every other line. The Y, Cb, and Cr images can then be compressed further as three individual images by using error-free compression. JPEG uses this technique for color image compression.
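A minimal NumPy sketch of Eq. (5.19) followed by 2:1 chrominance subsampling; the random test image is an illustrative assumption, and the arithmetic confirms the 2:1 size reduction described above.

```python
import numpy as np

# Conversion matrix from Eq. (5.19).
RGB_TO_YCBCR = np.array([
    [ 0.2990,  0.5870,  0.1140],
    [-0.1687, -0.3313,  0.5000],
    [ 0.5000, -0.4187, -0.0813],
])

def rgb_to_ycbcr(rgb):
    """Apply Eq. (5.19) to an H x W x 3 array of 8-bit RGB values."""
    return np.tensordot(rgb.astype(np.float64), RGB_TO_YCBCR.T, axes=1)

# Illustrative 512 x 512 color image (values 0-255).
rgb = np.random.randint(0, 256, size=(512, 512, 3))
ycbcr = rgb_to_ycbcr(rgb)
y  = ycbcr[:, :, 0]
cb = ycbcr[::2, ::2, 1]   # keep every other pixel on every other line
cr = ycbcr[::2, ::2, 2]

bits_before = 512 * 512 * 24
bits_after = (y.size + cb.size + cr.size) * 8
print(bits_before / bits_after)   # -> 2.0
```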
5.7.3 Compression of Color Ultrasound Images
Normal US Doppler studies generate an average of 20 Mbytes per image file. There are cases that can go up to 80–100 Mbytes. To compress a color Doppler image (see
Fig. 4.15A), the color RGB image is first transformed to the YCbCr space with Eq. (5.19). But instead of subsampling the Cb and Cr images as described above, all three images are subjected to run-length coding independently. Two factors favor this approach. First, a US image contains information only within a sector; outside the sector there is only background, and discarding the background information can yield a very high compression ratio (Section 5.3.1). Second, the Cb and Cr images contain little information except at the blood flow regions under consideration, which are very small compared with the entire anatomical structure in the image. Thus run-length coding of Cb and Cr can give a very high compression ratio, eliminating the need for subsampling. On average, 2-D error-free run-length coding gives a 3.5:1 compression ratio, and it can be as high as 6:1. Even higher compression ratios can result if the third dimension (time) of a temporal US study is considered.
5.8 DICOM STANDARD AND FOOD AND DRUG ADMINISTRATION (FDA) GUIDELINES

Lossless compression provides a modest reduction in image size, with a compression ratio of about 2:1. On the other hand, lossy compression can give a very high compression ratio and still retain good image quality, especially with the recently developed wavelet transform method. However, lossy compression may face legal issues because some information in the original image has been discarded. The use of image compression in clinical practice is influenced by two major organizations: the ACR/NEMA (American College of Radiology/National Electrical Manufacturers Association), which issues the DICOM 3.0 (Digital Imaging and Communications in Medicine) standard (see Chapter 7), and the Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration (FDA).
5.8.1 FDA
The FDA, as the regulator, has chosen to place the use of compression in the hands of the user. The agency has, however, taken steps to ensure that the user has the information needed to make that decision by requiring that a lossy compression statement, as well as the approximate compression ratio, be attached to lossy images. Manufacturers are required to provide in their operator's manuals a discussion of the effects of lossy compression on image quality. However, data from laboratory tests are required as part of a premarket notification only when the medical device uses new technologies and asserts new claims. The PACS guidance document from the FDA (1993) allows manufacturers to report the normalized mean-square error (NMSE) of their communication and storage devices that use lossy coding techniques. This measure was chosen because it had often been used by the manufacturers themselves and there is some objective basis for comparison. However, as discussed in Section 5.5.1.1, NMSE does not provide any local information about the type of loss (e.g., its spatial location or spatial frequency).
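One common way to compute NMSE is sketched below in NumPy; this particular formula (summed squared error normalized by summed squared original intensity) is an assumption, since the guidance document does not reproduce a single formula here, and the synthetic image is illustrative. Note that, as stated above, the result is a single global number with no local information.

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error between an original image and its
    lossy-compressed reconstruction (one common definition)."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)

# Illustrative use with a synthetic 12-bit image and a perturbed copy.
img = np.random.randint(0, 4096, size=(256, 256))
recon = img + np.random.normal(0, 2, size=img.shape)   # simulated loss
print(nmse(img, recon))
```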
5.8.2 DICOM
When DICOM first introduced compression, it used JPEG based on sequential block DCT followed by Huffman coding for both lossless and lossy (with quantization) compression. More recently, DICOM added support for JPEG 2000 (ISO/IS 15444, International Standards Organization), which is based on the wavelet transform and supports both lossless and lossy compression. Currently, the features defined in JPEG 2000 Part 1 are supported, which specify the coding representation of a compressed image; the representation supports both color and gray scale still images. Inclusion of a JPEG 2000 image in a DICOM image file is indicated by the transfer syntax in the header of the DICOM file. Two new DICOM transfer syntaxes are specified for JPEG 2000: one for lossless only, and the other for both lossless and lossy compression. In DICOM 3.5-2003 Part 5 (DICOM 3.5-2003), Annex A4.1 is dedicated to JPEG image compression; Annex A4.2 to JPEG-LS compression, an ISO standard for digital compression and coding of continuous-tone still images; Annex A4.3 to run-length encoding (RLE) compression, which is used for black-and-white images or graphics; and Annex A4.4 to JPEG 2000 compression. Annex F gives a description of all the JPEG compression schemes discussed above, and Annex G gives a description of RLE compression.
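A minimal sketch of how the transfer syntax in a DICOM file header can be inspected, using the pydicom package (an assumption); the file name is hypothetical, and the UID strings listed are the commonly published DICOM transfer syntax UIDs for these compression schemes.

```python
import pydicom  # assumed available

# Commonly published DICOM transfer syntax UIDs for compressed pixel data.
COMPRESSION_UIDS = {
    "1.2.840.10008.1.2.4.50": "JPEG Baseline (lossy, 8-bit)",
    "1.2.840.10008.1.2.4.70": "JPEG Lossless",
    "1.2.840.10008.1.2.4.80": "JPEG-LS Lossless",
    "1.2.840.10008.1.2.4.90": "JPEG 2000 Lossless Only",
    "1.2.840.10008.1.2.4.91": "JPEG 2000 (lossless or lossy)",
    "1.2.840.10008.1.2.5":    "RLE Lossless",
}

ds = pydicom.dcmread("example.dcm")   # hypothetical file
uid = str(ds.file_meta.TransferSyntaxUID)
print(uid, "->", COMPRESSION_UIDS.get(uid, "uncompressed or other"))
```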
PART II
PACS FUNDAMENTALS
CHAPTER 6
Picture Archiving and Communication System Components and Work Flow
This chapter discusses five topics that provide an overview of the picture archiving and communication system (PACS). The first topic is the basic concept of PACS and its components, which gives the general architecture and requirements of the system. An example of PACS work flow in radiography highlights the functionalities of these components. Three current clinical PACS architectures, stand-alone, client/server, and Web-based, illustrate the three prevailing PACS operation concepts. The last two topics are teleradiology and PACS, and enterprise PACS.
6.1 PACS COMPONENTS
A PACS should be DICOM compliant (Chapter 7). It consists of an image and data acquisition gateway, a PACS controller and archive, and display workstations, integrated together by digital networks as shown in Figure 1.3. This section introduces these components, which will be discussed in more detail in subsequent chapters.

6.1.1 Data and Image Acquisition Gateway
PACS requires that images from imaging modalities (devices) and related patient data from the hospital information system (HIS) and the radiology information system (RIS) be sent to the PACS controller and archive server. A major task in PACS is to acquire images reliably and in a timely manner from each radiological imaging modality, along with relevant patient data, including supporting text information about the patient, a description of the study, and parameters pertinent to image acquisition and processing. Image acquisition is a major task for three reasons. First, the imaging modality is not under the auspices of the PACS. Many manufacturers supply various imaging modalities, each of which has its own DICOM conformance statement (see Chapter 7); worse, some older imaging modalities may not be DICOM compliant at all. To connect many imaging modalities to the PACS requires tedious labor and the cooperation of the modality manufacturers. Second, image acquisition is a slower
operation than other PACS functions because patients are involved and because it takes the imaging modality some time to acquire the data necessary for image reconstruction (Section 4.1). Third, images and patient data generated by the modality may sometimes contain format information unacceptable to the PACS operation. To circumvent these difficulties, an acquisition gateway computer is usually placed between the imaging modality(s) and the rest of the PACS network to isolate the host computer in the radiological imaging modality from the PACS. Isolation is necessary because traditional imaging device computers lack the communication and coordination software that is standardized within the PACS infrastructure; furthermore, these host computers do not contain enough intelligence to work with the PACS controller to recover from various errors. The acquisition gateway computer has three primary tasks: it acquires image data from the radiological imaging device, converts the data from manufacturer specifications to a PACS standard format (header format, byte ordering, matrix sizes) that is compliant with the DICOM data formats, and forwards the image study to the PACS controller or display workstations. Two types of interface connect a general-purpose PACS acquisition gateway computer with a radiological imaging modality. The first is a peer-to-peer network interface, which uses the TCP/IP (Transmission Control Protocol/Internet Protocol) Ethernet protocol (Chapter 9); image transfers can be initiated either by the radiological imaging modality (a "push" operation) or by the destination PACS acquisition gateway computer (a "pull" operation). The pull mode is advantageous because, if an acquisition gateway computer goes down, images can be queued in the radiological imaging modality computer until the gateway computer becomes operational again, at which time the queued images can be pulled and normal image flow resumed. Assuming that sufficient data buffering is available in the imaging modality computer, the pull mode is the preferred mode of operation because an acquisition computer can be programmed to reschedule study transfers if a failure occurs (in either the gateway itself or the radiological imaging modality). If the designated acquisition gateway computer is down and a delay in acquisition is not acceptable, images from the examination can be rerouted to another networked, designated backup acquisition gateway computer or workstation. The second interface type is a master-slave device-level connection such as the de facto old industry standard, DR-11W. This parallel-transfer, direct-memory-access connection is a point-to-point, board-level interface. Recovery mechanisms again depend on which machine (acquisition gateway computer or imaging modality) can initiate a study transfer. If the gateway computer is down, data may be lost, and an alternative image acquisition method must be used to acquire these images (e.g., the technologist manually resends the individual images stored in the imaging modality computer after the gateway computer is up again, or the technologist digitizes the hard copy film image). These interface concepts are described in more detail in Chapter 8.
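A minimal sketch of the pull-mode recovery idea described above: studies buffered at the modality are queued and retried by the gateway until they succeed or are flagged for manual acquisition. The transfer function, study identifiers, retry count, and delay are illustrative assumptions, not part of any particular PACS product.

```python
import time
from collections import deque

def pull_study(study_id):
    """Placeholder for the actual transfer from the modality to the
    acquisition gateway (hypothetical; e.g., a DICOM C-MOVE/C-GET request)."""

def drain_queue(pending, max_retries=3, delay_s=60):
    """Pull queued studies; on failure, reschedule the study so that no
    examination is lost while the gateway recovers."""
    retries = {}
    while pending:
        study_id = pending.popleft()
        try:
            pull_study(study_id)
        except Exception:
            n = retries.get(study_id, 0) + 1
            if n <= max_retries:
                retries[study_id] = n
                pending.append(study_id)   # try again later
                time.sleep(delay_s)
            else:
                print("study", study_id, "requires manual acquisition")

pending = deque(["CT-001", "MR-014"])   # studies buffered on the modality
drain_queue(pending)
```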
6.1.2 PACS Controller and Archive Server
Imaging examinations along with pertinent patient information from the acquisition gateway computer, the HIS, and the RIS are sent to the PACS controller. The PACS controller is the engine of the PACS consisting of high-end computers or
TABLE 6.1 Major Functions of a PACS Controller and Archive Server
• Receives images from examinations (exams) via acquisition gateway computers
• Extracts text information describing the received exam
• Updates a network-accessible database management system
• Determines the destination workstations to which newly generated exams are to be forwarded
• Automatically retrieves necessary comparison images from a distributed cache storage or long-term library archive system
• Automatically corrects the orientation of computed radiography images
• Determines optimal contrast and brightness parameters for image display
• Performs image data compression if necessary
• Performs data integrity check if necessary
• Archives new exams onto long-term archive library
• Deletes images that have been archived from acquisition gateway computers
• Services query/retrieve requests from workstations and other PACS controllers in the enterprise PACS
• Interfaces with PACS application servers
TABLE 6.2 Major Functions of a PACS Workstation
Case preparation: Accumulation of all relevant images and information belonging to a patient examination
Case selection: Selection of cases for a given subpopulation
Image arrangement: Tools for arranging and grouping images for easy review
Interpretation: Measurement tools for facilitating the diagnosis
Documentation: Tools for image annotation, text, and voice reports
Case presentation: Tools for a comprehensive case presentation
Image reconstruction: Tools for various types of image reconstruction for proper display
servers; its two major components are a database server and an archive system. Table 6.1 lists some major functions of a PACS controller. The archive system consists of short-term, long-term, and permanent storage. These components are explained in detail in Chapter 10.

6.1.3 Display Workstations
A workstation includes communication network connection, local database, display, resource management, and processing software. The fundamental workstation operations are listed in Table 6.2. There are four types of display workstations, categorized by their resolution: (1) high-resolution (2.5K × 2K) liquid crystal display (LCD) workstations for primary diagnosis in the radiology department; (2) medium-resolution (2000 × 1600 or 1600 × 1K) LCD workstations for primary diagnosis of sectional images and for the hospital wards; (3) physician desktop workstations (1K to 512); and (4) hard copy workstations for printing images on film or paper. In a stand-alone primary diagnostic workstation (Section 6.4.1), current and historical images are stored on local high-speed magnetic disks
for fast retrieval. It also has access to the PACS controller database for retrieving longer-term historical images if needed. Chapter 11 elaborates on the concept and applications of workstations.

6.1.4 Application Servers
Application servers are connected to the PACS controller and archive server. Through these application servers, PACS data can be filtered to servers tailored for different applications, for example, Web-based image viewing (Chapter 13), radiation therapy (Chapter 20), and education (Chapter 21).

6.1.5 System Networks
A basic function of any computer network is to provide an access path by which end users (e.g., radiologists and clinicians) at one geographic location can access information (e.g., images and reports) at another location. The important networking data needed for system design include the location and function of each network node, the frequency of information passed between any two nodes, the cost of transmission between nodes with various-speed lines, the desired reliability of the communication, and the required throughput. The variables in the design include the network topology, communication line capacities, and flow assignments. At the local area network level, digital communication in the PACS infrastructure design can consist of low-speed Ethernet (10 megabits/s signaling rate), medium-speed (100 megabits/s) or fast (1 gigabit/s) Ethernet, and high-speed asynchronous transfer mode technology (ATM, 155–622 megabits/s and up). In a wide area network, various digital service (DS) speeds can be used, ranging from DS-0 (56 kilobits/s) and DS-1 (T1, 1.544 megabits/s) to DS-3 (45 megabits/s) and ATM (155–622 megabits/s). There is a trade-off between transmission speed and cost. The network protocols used should be standard, for example, TCP/IP (Transmission Control Protocol/Internet Protocol; Chapter 9) and the DICOM communication protocol (a higher-level protocol that runs on top of TCP/IP). A low-speed network is used to connect the imaging modalities (devices) to the acquisition gateway computers, because the time-consuming processes during image acquisition do not require a high-speed connection. Sometimes several segmented local area Ethernet branches may be used to transfer data from the imaging devices to the acquisition gateway computers. Medium- and high-speed networks are used on the basis of a balance between data throughput requirements and costs. A faster image network is used between the acquisition gateway computers and the PACS controller, because several acquisition computers may send large image files to the controller at the same time. High-speed networks are always used between the PACS controller and the workstations.
of messages. Various PACS-related job requests are lined up into disk resident priority queues, which are serviced by various computer system DAEMON (agent) processes. The queue software can have a built-in job scheduler that is programmed to retry a job several times by using either a default set of resources or alternative resources if a hardware error is detected. This mechanism ensures that no jobs will be lost during the complex negotiation for job priority among processes. Communications and networking are presented in more detail in Chapter 9.
6.2 PACS INFRASTRUCTURE DESIGN CONCEPT
The four major ingredients in the PACS infrastructure design concept are system standardization, open architecture and connectivity, reliability, and security.

6.2.1 Industry Standards
The first important rule in building a PACS infrastructure is to incorporate as many industry de facto standards as possible that are consistent with the overall PACS design scheme. The philosophy is to minimize the development of customized software. Furthermore, using industry standard hardware and software increases the portability of the system to other computer platforms. For example, the following industry standards should be used in the PACS infrastructure design: (1) UNIX operating system, (2) WINDOWS NT/XP operating system, (3) TCP/IP and DICOM communication protocols, (4) SQL (Structured Query Language) as the database query language, (5) DICOM standard for image data format and communication, (6) C and C++ programming languages, (7) X WINDOWS user interface, (8) ASCII text representation for message passing, (9) HL7 for health care database information exchange, and (10) XML (Extensible Markup Language) for data representation and exchange on the World Wide Web. The implications of using standards in PACS implementation are several. First, implementation and integration of all future PACS components and modules becomes standardized. Second, system maintenance is easier because the concept of operation of each module looks logically similar to the others. Moreover, defining the PACS primitive operations serves to minimize the amount of redundant computer code within the PACS system, which in turn makes the code easier to debug, understand, and search. It is self-evident that using industrial standard terminology, data format, and communication protocols in PACS design facilitates system understanding and documentation among all levels of PACS developers. Among all standards, HL7 and DICOM are the most important; the former allows interfaces between PACS and HIS/RIS, and the latter interfaces images among various manufacturers. These are discussed in more detail in Chapter 7.

6.2.2 Connectivity and Open Architecture
If PACS modules in the same hospital cannot communicate with each other, they become isolated systems, each with its own images and patient information, and it would be difficult to combine these modules to form a total hospital-integrated PACS.
Open network design is essential, allowing a standardized method for data and message exchange between heterogeneous systems. Because computer and communications technology changes rapidly, a closed architecture would hinder system upgradability. For example, an independent imaging workstation from a given manufacturer might, at first glance, seem a good addition to an MRI scanner for viewing images. If the workstation has a closed, proprietary architecture, however, no components except those specified by the same manufacturer can be added to the system, and potential overall system upgrading and improvement would be limited. Considerations of connectivity are important even when a small-scale PACS is planned. To be sure that a contemplated PACS is well designed and allows for future connectivity, the following questions should be kept in mind at all times: Can we transmit images from this PACS module to other modules and vice versa? Does this module use a standard data and image format? Does the computer in the module use a standard communication protocol?

6.2.3 Reliability
Reliability is a major concern in a PACS for two reasons. First, a PACS has many components, so the probability of one component failing is high. Second, because the PACS manages and displays critical patient information, extended periods of downtime cannot be tolerated. In designing a PACS, it is therefore important to use fault-tolerant measures, including error detection and logging software, external auditing programs (i.e., network management processes that check network circuits, magnetic disk space, database status, processor status, and queue status), hardware redundancy, and intelligent software recovery blocks. Some failure recovery mechanisms that can be used include automatic retry of failed jobs with alternative resources and algorithms, and intelligent bootstrap routines (a software block executed by a computer when it is restarted) that allow a PACS computer to resume operations automatically after a power outage or system failure. Improving reliability is costly; however, it is essential for maintaining the high reliability of a complex system. This topic is considered in depth in Chapter 15.

6.2.4 Security
Security, particularly the need for patient confidentiality, is an important consideration because of medicolegal issues and the HIPAA (Health Insurance Portability and Accountability Act) mandate. Violations of data security are mainly of three kinds: physical intrusion, misuse, and behavioral violations. Physical intrusion relates to facility security, which can be handled by building management. Misuse and behavioral violations can be minimized by account control and privilege control. Most sophisticated database management systems have identification and authorization mechanisms that use accounts and passwords, and application programs may supply additional layers of protection. Privilege control refers to granting and revoking a user's access to specific tables, columns, or views of the database. These security measures provide the PACS infrastructure with a mechanism for controlling access to clinical and research data. With these mechanisms, the system
designer can enforce policy as to which persons have access to which clinical studies. In some hospitals, for example, referring clinicians are granted access to an image study only after a preliminary radiology reading has been performed and attached to the image data. An additional security measure is the use of an image digital signature at archiving and during data communication. If implemented, this feature increases the system software overhead, but it makes data transmission through open communication channels more secure. Image security is discussed in Chapter 16.
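One simple way to illustrate the idea of protecting image integrity is to compute a cryptographic digest of the pixel data at archiving and re-check it after transmission. This is only a sketch of the fingerprinting step; a full digital-signature scheme would additionally encrypt the digest with a private key, and the image used here is synthetic.

```python
import hashlib
import numpy as np

def pixel_digest(pixel_array):
    """SHA-256 digest of the raw pixel data: store it when the image is
    archived and verify it after any transfer to detect alteration."""
    return hashlib.sha256(pixel_array.tobytes()).hexdigest()

image = np.random.randint(0, 4096, size=(2048, 2048), dtype=np.uint16)
digest_at_archive = pixel_digest(image)

# Later, after transmission through an open communication channel:
assert pixel_digest(image) == digest_at_archive   # integrity check passes
```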
6.3 A GENERIC PACS WORK FLOW
This chapter emphasizes PACS work flow. For this reason, whenever appropriate, a data work flow scenario is presented when a PACS component is introduced. This section discusses a generic PACS work flow, starting from patient registration in the HIS and exam ordering in the RIS, through performance of the exam by the technologist, image viewing, reporting, and archiving. Comparing this PACS work flow with the PACS components and work flow in Figure 1.3 and the radiology work flow in Figure 3.1, it should be clear that PACS has replaced many manual steps of the film-based work flow. Figure 6.1 shows the PACS work flow. Following the numerals in Figure 6.1:
Figure 6.1 A generic PACS work flow, connecting the imaging modality, PACS QC workstation, PACS reading and review workstations, RIS, PACS broker/interface engine, PACS archive, and dictation system. Compare this PACS work flow with the PACS components and work flow shown in Fig. 1.3 and the radiology work flow depicted in Fig. 3.1. See text for explanation of the work flow steps in numerals.
PACS/HIS/RIS Work Flow
1. Patient registers in HIS. Radiology exam is ordered in RIS. An exam accession number is automatically assigned.
2. RIS outputs HL7 messages of HIS and RIS demographic data to the PACS broker/interface engine.
3. PACS broker notifies the archive server of the scheduled exam for the patient.
4. Following prefetching rules, historical PACS exams of the scheduled patient are prefetched from the archive server and sent to the radiologist reading workstation.
5. Patient arrives at the modality. Modality queries the PACS broker/interface engine for the DICOM worklist.
6. Technologist acquires images and sends the PACS exam of images acquired by the modality and patient demographic data to the QC (quality control) workstation in DICOM format.
7. Technologist prepares the PACS exam and sends it to the radiologist reading workstation with prepared status.
8. On arrival of the PACS exam at the radiologist reading workstation, it is immediately and automatically sent to the archive server. The archive server database is updated with the PACS exam as prepared status.
9. Archive server automatically distributes the PACS exam to the review workstations in the wards based on the patient location received from the HIS/RIS HL7 message.
10. Reading radiologist dictates the report with the exam accession number on the dictation system. Radiologist signs off on the PACS exam with any changes. The archive database is updated with the changes and marks the PACS exam as signed-off status.
11. Transcriptionist fetches the dictation and types the report that corresponds to the exam accession number within RIS.
12. RIS outputs an HL7 message of the results report data along with any previously updated RIS data (a sample HL7 message fragment is sketched after this list).
13. Radiologist queries the PACS broker/IE (interface engine) for previous reports of PACS exams on the reading workstations.
14. Referring physicians query the broker/IE for reports of PACS exams on the review workstations.
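To make steps 2 and 12 concrete, the sketch below shows the general shape of an HL7 v2.x message (pipe-delimited text segments) and how a broker/interface engine might pick out a few fields. The segment contents, patient identifiers, and field positions are illustrative assumptions, not a complete or validated HL7 implementation.

```python
# Illustrative HL7 v2.x result (ORU) message; all values are made up.
raw = "\r".join([
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|200401121530||ORU^R01|12345|P|2.3",
    "PID|1||MRN00042||DOE^JOHN||19391127|M",
    "OBR|1|ACC98765||XR CHEST 2 VIEWS|||200401121500",
    "OBX|1|TX|IMPRESSION||No acute cardiopulmonary disease.",
])

def parse_hl7(message):
    """Split a message into segments and fields, keyed by segment name."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = parse_hl7(raw)
accession_number = msg["OBR"][0][2]   # e.g., ACC98765 (assumed position)
patient_id = msg["PID"][0][3]         # e.g., MRN00042
```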
6.4 CURRENT PACS ARCHITECTURES
There are three basic PACS architectures: (1) stand-alone, (2) client/server, and (3) Web-based. From these three basic architectures, variations and hybrid designs are derived.

6.4.1 Stand-Alone PACS Model
The three major features of the stand-alone model are:
(1) Images are automatically sent to designated reading and review workstations from the archive server.
(2) Workstations can also query/retrieve images from the archive server.
(3) Workstations have short-term cache storage.

Data work flow of the stand-alone PACS model is shown in Figure 6.2. Following the numerals:
1. Images from an examination (exam) acquired by the imaging modality are sent to the PACS archive server.
2. The PACS archive server stores the exam.
3. A copy of the images is distributed to selected end-user workstations for diagnostic reading and review. The server performs this step automatically.
4. Historical exams are prefetched from the server, and a copy of the images is sent to selected end-user workstations.
5. Ad hoc requests to review PACS exams are made via query/retrieve from the end-user workstations. In addition, if automatic prefetching fails, end-user workstations can query and retrieve the exam from the archive server.
6. End-user workstations contain a local storage cache holding a finite number of PACS exams.
Figure 6.2 Stand-alone architecture general data flow. Major features: images are sent automatically from the server to the reading workstations, along with prefetched images (single-direction arrows, 3, 4); images can also be obtained by query/retrieve (double-direction arrows, 5, 6); and the workstations have cache storage. See text for explanation of the work flow steps in numerals.
Advantages:
(1) If the PACS server goes down, imaging modalities or acquisition gateways have the flexibility to send images directly to the end-user workstations so that the radiologists can continue reading new cases.
(2) Because multiple copies of each PACS exam are distributed throughout the system, there is less risk of losing PACS data.
(3) Some historical PACS exams will be available at the workstations, because they have a local storage cache.
(4) The system is less susceptible to daily changes in network performance, because PACS exams are preloaded onto the local storage cache of the end-user workstations and are available for viewing immediately.
(5) Exam modifications to the DICOM header for quality control can be made before archiving.

Disadvantages:
(1) End-users must rely on correct distribution and prefetching of PACS exams, which is not possible all the time.
(2) Because images are sent to designated workstations, each workstation may have a different worklist, which makes it inconvenient to read or review all examinations at any one workstation in a single sitting.
(3) End-users depend on the query/retrieve function to retrieve ad hoc PACS exams from the archive, which is a more complex function compared with the client/server model.
(4) Radiologists can be reading the same PACS exam at the same time from different workstations, because the exam may be sent to several workstations.

6.4.2 Client/Server Model
The three major features of the client/server model are:
(1) Images are centrally archived at the PACS server.
(2) From a single worklist at the client workstation, an end-user selects images via the archive server.
(3) Because workstations have no cache storage, images are flushed after reading.

Data work flow of the client/server PACS model is shown in Figure 6.3. Following the numerals:
1. Images from an exam acquired by the imaging modality are sent to the PACS archive server.
2. The PACS archive server stores the exam.
3. End-user workstations, or client workstations, have access to the entire patient/study database of the archive server. The end-user may select preset filters on the main worklist to shorten the number of worklist entries for easier navigation.
Figure 6.3 Client/server architecture general data flow. Major features: images are centrally archived in the server; images are requested from the server through a worklist at the workstation; workstations have no cache storage; and images are flushed after reading. See text for explanation of the work flow steps in numerals.
4. Once the exam is located on the worklist and selected, images from the PACS exam are loaded from the server directly into the memory of the client workstation for viewing. Historical PACS exams are loaded in the same manner.
5. Once the end-user has completed reading or reviewing the exam, the image data are flushed from memory, leaving no image data in local storage on the client workstation.

Advantages:
(1) Any PACS exam is available on any end-user workstation at any time, making it convenient to read or review.
(2) No prefetching or study distribution is needed.
(3) No query/retrieve function is needed. The end-user simply selects the exam from the worklist on the client workstation, and the images are loaded automatically.
(4) Because the main copy of a PACS exam is located on the PACS server and is shared by the client workstations, radiologists will be aware when they are reading the same exam at the same time and can thus avoid duplicate readings.

Disadvantages:
(1) The PACS server is a single point of failure; if it goes down, the entire PACS is down. In this case, end-users will not be able to view any exams on the client workstations, and newly acquired exams must be held at the modalities until the server is back up.
(2) Because there are more database transactions in the client/server architecture, the system is exposed to more transaction errors, making it less robust than the stand-alone architecture.
(3) The architecture is very dependent on network performance.
(4) Exam modification to the DICOM header for quality control is not available before archiving.

6.4.3 Web-Based Model
The Web-based model is similar to the client/server architecture with regard to data flow; the main difference is that the client software is a Web-based application.

Additional advantages compared with the client/server model:
(1) The client workstation hardware can be platform independent, as long as the Web browser is supported.
(2) The system is a completely portable application that can be used both on site and at home with an Internet connection.

Additional disadvantages compared with the client/server model:
(1) The functionality and performance of the system may be limited by the Web browser.
6.5 PACS AND TELERADIOLOGY
This section discusses the relationship between teleradiology and PACS. Two topics are presented: the pure teleradiology model and the combined PACS and teleradiology model. A more detailed treatise on the various models and functionalities is found in Chapter 14.

6.5.1 Pure Teleradiology Model
Teleradiology can be an independent system operated by itself in a pure teleradiology model, as shown in Figure 6.4. This model serves several imaging centers and smaller hospitals that have radiological examination facilities but no, or not enough, in-house radiologists to cover the reading. In this model the teleradiology management center serves as the monitor: it receives images from the different imaging centers 1, . . . , N, keeps a record (but not the images), and routes the images to the different expert centers 1, . . . , M for reading. Reports come back to the management center, which records the readings and forwards the reports to the appropriate imaging centers. The management center is also responsible for billing and for other administrative functions such as image distribution and workload balancing. The networks used to connect the imaging centers, the management center, and the expert centers can be mixed, with various performance levels depending on requirements and costs. This model is used mostly for night and weekend coverage.
Figure 6.4 Pure teleradiology model. The management center monitors the operation and directs work flow between the imaging centers (sites 1, . . . , N) and the expert centers (sites 1, . . . , M). See text for explanation of the work flow.
6.5.2 PACS and Teleradiology Combined Model
PACS and teleradiology can be combined as shown in Figure 6.5. The two major components are the PACS, shown inside the upper dotted rectangle, and the pure teleradiology model, shown in the lower dotted rectangle. The work flow of this model is as follows:

A. The PACS can read exams from outside imaging centers (1).
B. In-house radiologists read outside images at in-house workstations (2); reports are sent to the database gateway for the in-house record (3) and to the expert center, from which the report is sent to the imaging center (4).
C. The PACS can also send exams directly to the outside expert center for reading (5). The expert center returns the report to the PACS database gateway (6).
D. The imaging center can send images to the expert center for reading as in the pure teleradiology model (7).

The combined teleradiology and PACS model is mostly used in a health care center with satellite imaging centers, or for backup radiology coverage between the hospital and imaging centers.
Figure 6.5 PACS and teleradiology combined model. The PACS supports imaging centers, or PACS and teleradiology support each other. See text for explanation of work flow steps in numerals.
6.6
ENTERPRISE PACS AND ePR WITH IMAGES
Enterprise PACS is for very large-scale PAC systems integration. It is becoming more and more popular in today's enterprise health care delivery system. Figure 6.6 shows the generic architecture, in which the three major components are the PACS at each hospital in the enterprise, the enterprise data center, and the enterprise ePR. The general work flow is as follows:
A. The enterprise data center supports all PAC systems in the enterprise.
B. Patient images and data from the PAC systems are sent to the enterprise data center for long-term archive (1).
C. Filtered patient images and data from the Web server at each site are sent to the electronic patient record (ePR) system in the primary data center (2). The ePR system is Web based with filtered images.
Figure 6.6 Enterprise PACS and ePR with images. The enterprise data center supports all sites in the enterprise. The primary data center has a secondary data center for backup. The enterprise ePR system allows patient electronic records with images in the enterprise to be accessible from any ePR Web client. See text for explanation of work flow steps in numerals. (CA: continuous availability; SPOF: single point of failure.)
D. The data center has a primary data center that is the single point of failure (SPOF); it is backed up by the secondary data center (3).
E. In the primary data center, the ePR (4) is responsible for combining patient electronic records with images from all sites of the enterprise. The ePR has a backup at the secondary data center (5). Details of the ePR are presented in Chapter 13.
F. ePR Web clients throughout the enterprise can access patient electronic records with images from any site in the enterprise through the data center ePR system (6), or their own site's patients through their own Web server (7).
Details and examples of enterprise PACS and ePR with images are found in Chapter 23.
CHAPTER 7
Industrial Standards (HL7 and DICOM) and Work Flow Protocols (IHE)
7.1
INDUSTRIAL STANDARDS AND WORK FLOW PROTOCOL
Transmission of images and textual information between health care information systems has always been difficult for two reasons. First, information systems use different computer platforms; second, images and data are generated by various imaging modalities from different manufacturers. With the emergence of the health care industry standards Health Level 7 (HL7) and Digital Imaging and Communications in Medicine (DICOM), it has become feasible to integrate these heterogeneous, disparate medical images and textual data into an organized system. Interfacing two health care components requires two ingredients: a common data format and a communication protocol. HL7 is a standard textual data format, whereas DICOM includes both data formats and communication protocols. By conforming to the HL7 standard, it is possible to share health care information between the hospital information system (HIS), the radiology information system (RIS), and PACS. By adopting the DICOM standard, medical images generated by a variety of modalities and manufacturers can be interfaced as an integrated health care system. These two standards are the first topics to be discussed. The third topic is Integrating the Healthcare Enterprise (IHE), a model for driving the adoption of these standards. With all the good standards available, it takes a champion, IHE, to persuade users to adopt and use them. The last topic covers some computer operating systems and programming languages commonly used in medical imaging and health care information technology.
7.2 THE HEALTH LEVEL 7 STANDARD
7.2.1 Health Level 7
Health Level 7 (HL7), established in March 1987, was organized by a user-vendor committee to develop a standard for electronic data exchange in health care environments, particularly for hospital applications. The HL7 standard, the "Level Seven," refers to the highest level, the application level, in the Open Systems Interconnection (OSI) seven communication levels model (see Chapter 9). The common
goal is to simplify the interface implementation between computer applications from multiple vendors. This standard emphasizes data format and protocol for exchanging certain key textual data among health care information systems, such as HIS, RIS, and PACS. HL7 addresses the highest level (level 7) of the OSI model of the International Standards Organization (ISO), but it does not conform specifically to the defined elements of the OSI’s seventh level (see Section 9.1). It conforms to the conceptual definitions of an application-to-application interface placed in the seventh layer of the OSI model. These definitions were developed to facilitate data communication in a health care setting by providing rules to convert abstract messages associated with real-world events into strings of characters comprising an actual message. 7.2.2
An Example
Consider the three popular computer platforms used in HIS, RIS, and PACS, namely, the IBM mainframe computer running the VM operating system, the PC Windows operating system, and the Sun workstation running UNIX, respectively. Interfacing involves the establishment of data links between these three operating systems via TCP/IP communication protocol (Section 9.1) with HL7 data format at the application layer. When an event occurs, such as patient admission, discharge, or transfer (ADT), the IBM computer in the HIS responsible for tracking this event would initiate an unsolicited message to the remote UNIX or Windows server in the RIS that takes charge of the next event. If the message is in HL7 format, UNIX or Windows parses the message, updates its local database automatically, and sends a confirmation to the IBM. Otherwise, a “rejected” message would be sent instead. In the HL7 standard, the basic data unit is a message. Each message is composed of multiple segments in a defined sequence. Each segment contains multiple data fields and is identified by a unique, predefined three-character code. The first segment is the message header segment with the three-letter code MSH, which defines the intent, source, destination, and some other relevant information, such as message control identification and time stamp. The other segments are event dependent. Within each segment, related information is bundled together based on the HL7 protocol. A typical message, such as patient admission, may contain the following segments: MSH—Message header segment EVN—Event type segment PID—Patient identification segment NK1—Next of kin segment PV1—Patient visit segment In this patient admission message, the patient identification segment may contain the segment header and other demographic information, such as patient identification, name, birth date, and gender. The separators between fields and within a field are defined in the message header segment. Here is an example of transactions of admitting a patient for surgery in HL7:
(1) Message header segment
MSH||STORE|HOLLYWOOD|MIME|VERMONT|200305181007|security|ADT|MSG00201|||
(2) Event type segment
EVN|01|200305181005||
(3) Patient identification segment
PID|||PATID1234567||Doe^John^B^II||19470701|M||C|3976 Sunset Blvd^Los Angeles^CA^90027||323-681-2888||||||||
(4) Next of kin segment
NK1|Doe^Linda^E||wife|
(5) Patient visit segment
PV1|1|I|100^345^01||||00135^SMITH^WILLIAM^K|||SUR|ADM|
Combining these five segments, the message translates to: "Patient John B. Doe, II, male, Caucasian, born on July 1, 1947, lives in Los Angeles, was admitted on May 18, 2003 at 10:05 a.m. by Doctor William K. Smith (#00135) for surgery. The patient has been assigned to Room 345, bed 01 on nursing unit 100. The next of kin is Linda E. Doe, wife. The ADT (admission, discharge, and transfer) message 201 was sent from system STORE at the Hollywood site to system MIME at the Vermont site on the same date two minutes after the admission." The "|" is the data field separator, and the "^" separates components within a field. If no data are entered in a field, a blank is used, followed by another "|."
The data communication between a HIS and a RIS is event driven. When an ADT event occurs, the HIS would automatically send a broadcast message, conformed to HL7 format, to the RIS. The RIS would then parse this message and insert, update, and organize patient demographic data in its database according to the event. Similarly, the RIS would send an HL7-formatted ADT message, the examination reports, and the procedural descriptions to the PACS. When the PACS had acknowledged and verified the data, it would update the appropriate databases and initiate any required follow-up actions. As an example, the HIS at the UCLA Healthcare Center, consisting of an IBM mainframe computer and a Web-based system for user access to clinical data (e.g., EMR, electronic medical record), uses an interface engine such as DataGate (now called "SeeBeyond e*Gate") running on the UNIX Solaris OS to distribute ADT data using the TCP/IP protocol. It generates HL7 bundled messages and transfers them over the local area network to a RIS running on a Windows 2000-based server cluster. A RIS can use the programming language environment SQL 2000 (Microsoft) for the interface. This interface works for point-to-point communication. On receiving HL7 messages from the HIS, the RIS triggers appropriate events and transfers patient data over the network to the PACS UNIX-based controller, using SQL 2000 messages to update the PACS database directly. The message can be transmitted between HIS, RIS, and PACS with a communication protocol, most commonly TCP/IP, through a network (see Section 9.1). In Chapter 12 we present the mechanism of interfacing PACS with HIS and RIS using the HL7 standard.
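To make the segment and field structure concrete, the following minimal sketch parses the sample ADT message above. It is for illustration only; a production interface would use a dedicated HL7 toolkit and handle encoding characters, repetitions, escape sequences, and acknowledgment messages.

    # Minimal HL7 v2.x parsing sketch (illustrative only, not a full parser).
    msg = (
        "MSH||STORE|HOLLYWOOD|MIME|VERMONT|200305181007|security|ADT|MSG00201|||\r"
        "EVN|01|200305181005||\r"
        "PID|||PATID1234567||Doe^John^B^II||19470701|M||C|"
        "3976 Sunset Blvd^Los Angeles^CA^90027||323-681-2888||||||||\r"
        "NK1|Doe^Linda^E||wife|\r"
        "PV1|1|I|100^345^01||||00135^SMITH^WILLIAM^K|||SUR|ADM|"
    )

    segments = {}
    for seg in msg.split("\r"):                 # segments are separated by carriage returns
        fields = seg.split("|")                 # "|" is the field separator
        segments.setdefault(fields[0], fields)  # index each segment by its three-letter code

    pid = segments["PID"]
    family, given = pid[5].split("^")[:2]       # "^" separates name components
    print(pid[3], given, family, pid[7])        # PATID1234567 John Doe 19470701

A receiving RIS or PACS would perform essentially this step (parsing the broadcast ADT message) before updating its own database tables.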
7.2.3
New Trend in HL7
The most commonly used HL7 today is Version 2.X, which has many options and is thus flexible. During the past years Version 2.X has been developed continuously, and it is widely and successfully implemented in the health care environment. Version 2.X and other older versions use a "bottom-up" approach, beginning with very general concepts and adding new features as needed. These new features become options to the implementers, so the standard is very flexible and easy to adapt to different sites. However, these options and this flexibility also make it impossible to have reliable conformance tests of any vendor's implementation. This forces vendors to spend more time analyzing and planning their interfaces to ensure that the same optional features are used by both interfacing parties. There is also no consistent view of the data, or of that data's relationship to other data, when HL7 moves to a new version. Therefore, a consistently defined, object-oriented version of HL7 is needed, which is Version 3. The initial release of HL7 Version 3 was in December 2001. The primary goal of HL7 Version 3 is to offer a standard that is definite and testable. Version 3 uses an object-oriented methodology and a reference information model (RIM) to create HL7 messages. The object-oriented method is a "top-down" method. The RIM is an all-encompassing, open-architecture design covering the entire scope of health care IT, containing more than 100 classes and more than 800 attributes. The RIM defines the relationships of each class. The RIM is the backbone of HL7 Version 3, as it provides an explicit representation of the semantic and lexical connections between the information in the fields of HL7 messages. Because each aspect of the RIM is well defined, very few options exist in Version 3. Through the object-oriented method and the RIM, HL7 Version 3 will improve many of the shortcomings of the previous 2.X versions. Version 3 uses XML (extensible markup language; Section 7.6.5) for message encoding to increase interoperability between systems. This version has developed the Patient Record Architecture (PRA), an XML-based clinical document architecture. It can also certify vendor systems through the HL7 Message Development Framework (MDF). This testable criterion will verify vendors' conformance to Version 3. In addition, Version 3 will include new data interchange formats beyond ASCII and support of component-based technology, such as ActiveX and CORBA. As the industry moves to Version 3, providers and vendors will face some impact now or in the future, such as:
Benefits:
1. It will be less complicated and less expensive to build and maintain HL7 interfaces.
2. HL7 messages will be less complex, and therefore analysts and programmers will require less training.
3. HL7 compliance testing will become enabled.
4. It will be easier to integrate HL7 software interfaces from different vendors.
Challenges:
1. Adoption of Version 3 will be more expensive than the previous version.
2. Adoption of Version 3 will take time to replace the existing version.
3. Retraining and retooling will be necessary.
4. Vendors will eventually be forced to adopt Version 3.
5. Vendors will have to support both Versions 2.X and 3 for some time.
HL7 Version 3 will offer tremendous benefits to providers and vendors as well as to analysts and programmers, but complete adoption of the new standard will take time and effort.
7.3 FROM ACR-NEMA TO DICOM AND DICOM DOCUMENT
7.3.1 ACR-NEMA and DICOM
ACR-NEMA, formally known as the American College of Radiology and the National Electrical Manufacturers Association, created a committee to develop a set of standards to serve as the common ground for various medical imaging equipment vendors. The goal was that newly developed instruments be able to communicate and participate in sharing medical image information, in particular within the PACS environment. The committee, which focused chiefly on issues concerning information exchange, interconnectivity, and communications between medical systems, began work in 1982. The first version, which emerged in 1985, specified standards in point-to-point message transmission, data formatting, and presentation and included a preliminary set of communication commands and a data format dictionary. The second version, ACR-NEMA 2.0, published in 1988, was an enhancement to the first release. It included both hardware definitions and software protocols, as well as a standard data dictionary. Networking issues were not addressed adequately in either version. For this reason, a new version aiming to include network protocols was released in 1992. Because of the magnitude of changes and additions, it was given a new name: Digital Imaging and Communications in Medicine (DICOM 3.0). In 1996, a new version was released consisting of 13 published parts that form the basis of future DICOM new versions and parts. Manufacturers readily adapted this version to their imaging products. Each DICOM document is identified by title and standard number in the form: PS 3.X-YYYY where “X” is the part number and “YYYY” is the year of publication. Thus PS 3.1-1996 means DICOM 3.0 document part 1 (Introduction and Overview) released in 1996. Although the complexity and involvement of the standards were increased by many fold, DICOM remains compatible with the previous ACR-NEMA versions. The two most distinguished new features in DICOM are adaptation of the object-oriented data model for message exchange and utilization of existing standard network communication protocols. For a brief summary of the ACR-NEMA 2.0, refer to the first edition of this book. This chapter only discusses DICOM 3.0. 7.3.2
DICOM Document
The current DICOM Standard (2003) includes 16 parts following the ISO (International Standardization Organization) directives:
Part 1: Introduction and Overview
Part 2: Conformance
Part 3: Information Object Definitions
Part 4: Service Class Specifications
Part 5: Data Structures and Encoding
Part 6: Data Dictionary
Part 7: Message Exchange
Part 8: Network Communication Support for Message Exchange
Part 9: Point-to-Point Communication Support for Message Exchange (Retired)
Part 10: Media Storage and File Format for Media Interchange
Part 11: Media Storage Application Profiles
Part 12: Media Formats and Physical Media for Media Interchange
Part 13: Print Management Point-to-Point Communication Support (Retired)
Part 14: Gray Scale Standard Display Function
Part 15: Security Profiles
Part 16: Content Mapping Resource
Figure 7.1 summarizes the various parts of the DICOM document. There are two routes of communications between parts: network exchange on-line communication (left) and media storage interchange off-line communication (right).
7.4
THE DICOM 3.0 STANDARD
Two fundamental components of DICOM are the information object class and the service class. Information objects define the contents of a set of images and their relationship, and the service classes describe what to do with these objects. Tables 7.1 and 7.2 list some service classes and object classes. The service classes and information object classes are combined to form the fundamental units of DICOM, called service-object pairs (SOPs). This section describes these fundamental concepts and provides some examples. 7.4.1
DICOM Data Format
In this section, we discuss two topics in DICOM data format: the DICOM Model of the Real World and the DICOM file format. The former is used to define the hierarchical data structure from patient, to studies, series, and images and waveforms. The latter describes how to encapsulate a DICOM file ready for a DICOM SOP service. 7.4.1.1 DICOM Model of the Real World The DICOM Model of the Real World defines several real-world objects in the clinical image arena (e.g., Patient, Study, Series, Image, etc.) and their interrelationships within the scope of the DICOM standard. It provides a framework for various DICOM Information Object
Figure 7.1 Architecture of DICOM Data and Communication Model and DICOM Parts (Section 7.3.2). There are two Communication models, the network layers model (left) and the media storage interchange model (right). Both models share the upper-level data structure described in DICOM Parts 3, 4, 5, 6. Part 7: Message Exchange is used for communication only, whereas Part 10: File format is used for media exchange. Below the upper levels, the two models are completely different.
TABLE 7.1 DICOM Service Classes

Service Class             Description
Image storage             Provides storage service for data sets
Image query               Supports queries about data sets
Image retrieval           Supports retrieval of images from storage
Image print               Provides hard copy generation support
Examination management    Supports management of examinations (which may consist of several series of images)
Storage resource          Supports management of the network data storage resource(s)
TABLE 7.2 DICOM Information Object Classes

Normalized            Composite
Patient               Computed radiograph
Study                 Computed tomogram
Results               Digitized film image
Storage resource      Digital subtraction image
Image annotation      MR image
                      Nuclear medicine image
                      Ultrasound image
                      Displayable image
                      Graphics
                      Curve
Definitions (IOD). The DICOM Model defines four level objects: Patient; Study; Series and Equipment; Image, Waveform, and SR (Structured Report) Document. Each of the above levels can contain several (1–n or 0–n) sublevels. A Patient is a person receiving, or registering to receive, health care services. He could have several previous (1–n) Studies already, and he may visit the health care facility and register to have more (1–n) Studies. For example, a Patient has one historical CT chest Study and two historical MR brain Studies. He is also visiting the hospital to have a new CR chest study. A Study can be a historical study, a currently performed study, or a study to be performed in the future. A Study can contain a few (1–n) Series or several study components (1–n), each of which can include a few (1–n) Series. For example, an MR brain study may include three series: transaxial, sagittal, and coronal. A Study or several (1–n) Studies can also have several scheduled Procedure Steps to be performed in different Modalities. For example, an MRI brain Study is scheduled in the MRI machine, or one MRI study and one CT study are scheduled in an MRI and a CT scanner, respectively. A Study can also include some Results. The Results can be a Report or an Amendment. A Series or several Series can be created by Equipment. Equipment is a modality (e.g., MRI scanner) used in the health care environment. A Series can include several (0–n) Images or Waveforms, SR (Structured Report), Documents, or Radiotherapy Objects (see Section 21.3), etc. For example, an MR transaxial brain Series includes 40 MR brain Images.
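The 1-to-n relationships just described can be summarized with a few simple data structures. The sketch below is illustrative only; the class and attribute names are not DICOM-defined keywords.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Image:
        sop_instance_uid: str        # one image (or waveform/SR document) instance

    @dataclass
    class Series:
        modality: str                # e.g., "MR"
        images: List[Image] = field(default_factory=list)

    @dataclass
    class Study:
        description: str             # e.g., "MR brain"
        series: List[Series] = field(default_factory=list)

    @dataclass
    class Patient:
        patient_id: str
        studies: List[Study] = field(default_factory=list)

    # One MR brain Study with three Series (transaxial, sagittal, coronal), as in the text.
    brain = Study("MR brain", [Series("MR"), Series("MR"), Series("MR")])
    patient = Patient("PATID1234567", [brain])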
An Image can be any image from all kinds of modalities, for example, a CT image, MRI image, CR image, DR image, US image, NM image, or light image. A Waveform is from a modality generating waveform output, for example, an ECG waveform from an ECG device. A SR Document is a new type of document for Structured Reporting. DICOM defines many SR templates to be used in the health care environment, for example, a Mammography CAD SR template. These topics are discussed in more detail in Section 7.4.6. Contents of this data model are encoded with the necessary header information and tags in a specified format to form a DICOM file.
7.4.1.2 DICOM File Format The DICOM File Format defines how to encapsulate the DICOM data set of a SOP instance in a DICOM file. Each file usually contains one SOP instance. The DICOM file starts with the DICOM File Meta information (optional), followed by the bit stream of the Data Set, and ends with the image pixel data if it is a DICOM image file. The DICOM File Meta information includes file identification information. The Meta information uses the Explicit VR (Value Representation) Transfer Syntax for encoding. Therefore, the Meta information does not exist in an Implicit VR-encoded DICOM file. Explicit VR and Implicit VR are two coding methods in DICOM. Vendors or implementers have the option of choosing either one for encoding. DICOM files encoded by both coding methods can be processed by most DICOM-compliant software. The difference between Explicit VR and Implicit VR is that the former has VR encoding whereas the latter has no VR encoding. For example, the encodings for the element "Modality" with value "CT" in Implicit VR and Explicit VR would be:
Implicit VR: 08 00 60 00 02 00 00 00 43 54
Explicit VR: 08 00 60 00 43 53 02 00 43 54
In the above encodings, the first 4 bytes (08 00 60 00) are the tag. In Implicit VR, the next 4 bytes (02 00 00 00) give the length of the value field of the data element, and the last 2 bytes (43 54) are the element value (CT). In Explicit VR, the first 4 bytes are also the tag; the next 2 bytes (43 53) are the VR, representing CS (Code String), one type of VR in DICOM; the next 2 bytes (02 00) give the length of the element value; and the last 2 bytes (43 54) are the element value. One Data Set represents a single SOP Instance. A Data Set is constructed of Data Elements. Data Elements contain the encoded values of the attributes of the DICOM object. (See DICOM Part 3 and Part 5 for construction and encoding of a Data Element and a Data Set.) If the SOP instance is an image, the last part of the DICOM file is the image pixel data. The tag for Image Pixel Data is 7FE0 0010. Figure 7.2 shows an example of an Implicit VR little-endian (byte swapping) encoded CT DICOM file.
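The two encodings above can be reproduced with a few lines of code. The following is a minimal sketch; it is not DICOM-toolkit code, the helper names are illustrative only, and it ignores details such as even-length padding of values.

    import struct

    def encode_implicit_le(group, element, value):
        # Implicit VR little endian: tag (4 bytes) + 4-byte value length + value
        data = value.encode("ascii")
        return struct.pack("<HHI", group, element, len(data)) + data

    def encode_explicit_le(group, element, vr, value):
        # Explicit VR little endian (short form): tag + 2-byte VR + 2-byte length + value
        data = value.encode("ascii")
        return (struct.pack("<HH", group, element) + vr.encode("ascii")
                + struct.pack("<H", len(data)) + data)

    print(encode_implicit_le(0x0008, 0x0060, "CT").hex(" ").upper())
    # 08 00 60 00 02 00 00 00 43 54
    print(encode_explicit_le(0x0008, 0x0060, "CS", "CT").hex(" ").upper())
    # 08 00 60 00 43 53 02 00 43 54

Note how the little-endian byte swapping turns the tag (0008,0060) into the byte sequence 08 00 60 00, as described in the text.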
7.4.2 Object Class and Service Class
7.4.2.1 Object Class The DICOM object class consists of normalized objects and composite objects. Normalized information object classes include those attributes inherent in the real-world entity represented. The left hand column of Table
Element Tag and Value                   Binary Coding
0008,0000, 726                          08 00 00 00 04 00 00 00 D6 02 00 00
0008,0005, ISO_IR 100                   08 00 05 00 0A 00 00 00 49 53 4F 5F 49 52 20 31 30 30
0008,0016, 1.2.840.10008.5.1.4.1.1.2    08 00 16 00 1A 00 00 00 31 2E 32 2E 38 34 30 2E 31 30 30 30 38 2E 35 2E 31 2E 34 2E 31 2E 31 2E 32 00
0008,0060, CT                           08 00 60 00 02 00 00 00 43 54
0008,1030, Abdomen^1abdpelvis           08 00 30 10 12 00 00 00 41 62 64 6F 6D 65 6E 5E 31 61 62 64 70 65 6C 76 69 73
. . .
7FE0,0010 (Image Pixel Data)            E0 7F 10 00 . . . (followed by the pixel data)
Figure 7.2 “0008,0000” in the “Element Tag and Value” column is the tag for 0008 Group. “726” is the value for the Group length and means there are 726 bytes in this Group. The corresponding binary coding of this tag and value are in the same line in “Binary Coding” column. The next few lines are the tags and values as well as the corresponding coding for “Specific Character Set,” “SOP Class UID,” “Modality,” and “Study Description.” The image pixel data is not in 0008 Group. Its tag is “7FE0 0010,” and following the tag are the coding for pixel data. The Element Tag and Value “0008 . . . ” becomes “08 00 . . . ” in binary coding because of the little-endian “byte swapping.”
7.2 shows some normalized object classes. Let us consider two normalized object classes: study information and patient information. In the study information object class, the study date and image time are attributes in this object, because these attributes are inherent whenever a study is performed. On the other hand, patient name is not an attribute in the study information object class but an attribute in the patient object class. This is because the patient’s name is inherent in the patient information object class on which the study was performed and not the study itself. The use of information object classes can identify objects encountered in medical imaging applications more precisely and without ambiguity. For this reason, the objects defined in DICOM 3.0 are very precise. However, sometimes it is advantageous to combine normalized object classes together to form composite information object classes for facilitating operations. As an example, the computed radiography image information object class is a composite object because it contains attributes from the study information object class (image date, time, etc.) and patient information object class (patient’s name, etc.). The right hand column of Table 7.2 shows some composite information object classes.
DICOM uses a unique identifier (UID), 1.2.840.10008.X.Y.Z, to identify a specific part of an object, where the numerals are called the organizational root and X, Y, Z are additional fields to identify the parts. Thus, for example, the UID for the DICOM Explicit VR Little Endian transfer syntax is 1.2.840.10008.1.2.1. Note that the UID is used to identify a part of an object; it does not carry information.
7.4.2.2 DICOM Services DICOM services are used for communication of imaging information objects within a device and for the device to perform a service for the object, for example, to store the object, to display the object, etc. A service is built on top of a set of "DICOM message service elements" (DIMSEs). These DIMSEs are computer software programs written to perform specific functions. There are two types of DIMSEs, one for the normalized objects and the other for the composite objects, given in Tables 7.3 and 7.4, respectively. DIMSEs are paired in the sense that a device issues a command request and the receiver responds to the command accordingly. The composite commands are generalized, whereas the normalized commands are more specific. DICOM services are referred to as "service classes" because of the object-oriented nature of its information structure model. If a device provides a service, it is called a service class provider; if it uses a service, it is a service class user. Thus, for example, a magnetic disk in the PACS controller server is a service class provider for the server to store images. On the other hand, a CT scanner is the service class user of the magnetic disk in the PACS server to store images. Note that a device can be either a service class provider or a service class user or both, depending on how it is used. For example, in its routing process that receives images from the scanners and distributes these images to the workstations, the PACS controller server takes on the roles of both a storage service class provider and a storage service class user.

TABLE 7.3 Normalized DICOM Message Service Element (DIMSE)

Command           Function
N-EVENT-REPORT    Notification of information object-related event
N-GET             Retrieval of information object attribute value
N-SET             Specification of information object attribute value
N-ACTION          Specification of information object-related action
N-CREATE          Creation of an information object
N-DELETE          Deletion of an information object

TABLE 7.4 Composite DICOM Message Service Element (DIMSE)

Command    Function
C-ECHO     Verification of connection
C-STORE    Transmission of an information object instance
C-FIND     Inquiries about information object instances
C-GET      Transmission of an information object instance via third-party application processes
C-MOVE     Similar to GET, but end receiver is usually not the command initiator
As a service class provider, it accepts images from the scanners by providing a storage service for these images. On the other hand, the PACS server is a service class user when it sends images to the workstation by issuing service requests to the workstation for storing the images.
7.4.3 DICOM Communication
DICOM uses existing network communication standards based on the International Standards Organization Open Systems Interconnection (ISO-OSI; see Section 9.1 for details) for imaging information transmission. The ISO-OSI consists of seven layers from the lowest physical (cables) layer to the highest application layer. When imaging information objects are sent between layers in the same device, the process is called a service. When objects are sent between two devices, it is called a protocol. When a protocol is involved, several steps are invoked in two devices; We say that two devices are in “association” using DICOM. Figure 7.3 illustrates the movement of the CT images from the scanner to the workstation with DICOM. The numerals are the steps as follows. (1) The CT scanner encodes all images into a DICOM object. (2) The scanner invokes a set of DIMSEs to move the image object from a certain level down to the physical layer in the ISO-OSI model. (3) The workstation uses a counterset of DIMSEs to receive the image object through the physical layer and move it up to a certain level. (4) The workstation decodes the DICOM image object.
Figure 7.3 Movement of a set of CT images from the scanner to the workstation. Within a device the movement is called a service; between devices it is called a protocol.
This movement of the image object from the CT scanner to the workstation uses communication protocols; the most commonly used is TCP/IP. If an imaging device transmits an image object with a DICOM command, the receiver must use a DICOM command to receive the information. On the other hand, if a device transmits a DICOM object with the TCP/IP communication protocol through a network without invoking the DICOM communication, any device connected to the network can receive the data with the TCP/IP protocol. However, a decoder is still needed to convert the DICOM object for proper use. This method is used to send a full-resolution image from the PACS server to the Web server, discussed in Section 13.5.
7.4.4 DICOM Conformance
DICOM conformance PS 3.2-1996 is Part 2 of the DICOM document instructing manufacturers how to conform their devices to the DICOM standard. In a conformance statement the manufacturer describes exactly how the device or its associate software conforms to the standard. A conformance statement does not mean that this device follows every detail required by DICOM; it only means that this device follows a certain subset of DICOM. The extent of the subset is described in the conformance statement. For example, a laser film digitizer needs only to conform to the minimum requirements for the digitized images to be in DICOM format, and the digitizer should be a service class user to send the formatted images to a second device like a magnetic disk, which is a DICOM service class provider. Thus, if a manufacturer claims that its imaging device is DICOM conformant, it means that any system integrator who follows the manufacturer’s conformance document will be able to interface this device with other DICOM-compliant components from other manufacturers. In general, the contents of the conformance statement include (cited from DICOM 2003): “(1) The implementation model of the application entities (AEs) in the implementation and how these AEs relate to both local and remote real-world activities. (2) The proposed (for association initiation) and acceptable (for association acceptance) presentation contexts used by each AE. (3) The SOP classes and their options supported by each AE, and the policies with which an AE initiates or accepts associations. (4) The communication protocols to be used in the implementation and (5) A description of any extensions, specializations, and publicly disclosed privatizations to be used in the implementation. (6) A description of any implementation details which may be related to DICOM conformance or interoperability (DICOM PS3.2 1996).” 7.4.5
Examples of Using DICOM
To an end user, the two most important DICOM services are send and receive images and query and retrieve images. In this section, we use two examples to explain how DICOM accomplishes these services. Note that the query and retrieve services are built on top of the send and receive services.
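As a concrete illustration of the send service walked through below, the following sketch sends one CT image to an archive acting as the C-STORE service class provider. It assumes the open-source pydicom and pynetdicom toolkits, which are not part of the DICOM standard or of this book; the host name, port, AE titles, and file name are placeholders.

    from pydicom import dcmread
    from pynetdicom import AE
    from pynetdicom.sop_class import CTImageStorage

    # The scanner side acts as the C-STORE service class user (SCU).
    ae = AE(ae_title="CT_SCANNER")
    ae.add_requested_context(CTImageStorage)

    # Step (0): request an association with the archive (the C-STORE SCP).
    assoc = ae.associate("pacs.example.org", 104, ae_title="PACS_CTRL")
    if assoc.is_established:
        ds = dcmread("ct_slice_001.dcm")        # one image of the examination
        status = assoc.send_c_store(ds)         # steps (1)-(7): C-STORE request/response
        if status:
            print("C-STORE status: 0x{0:04X}".format(status.Status))
        assoc.release()                         # step (9): drop the association

The library hides the per-packet transfer loop, but the sequence of association, C-STORE request/response per image, and association release mirrors the numbered steps described in Section 7.4.5.1.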
7.4.5.1 Send and Receive Let us consider the steps involved in sending a CT examination with multiple images from the scanner to the PACS controller server using DICOM. Each individual image is transmitted from the CT scanner to the server by utilizing DICOM’s C-STORE service. In this transmission procedure, the scanner takes on the role of a client as the C-STORE service class user (SCU) and the server assumes the role of the C-STORE service class provider (SCP). The following steps illustrate the transmission of a CT examination with multiple images from the scanner to the PACS controller server (Fig. 7.4). (0) The CT scanner and the PACS controller first establish the connection through DICOM communication “association request and response” commands. (1) The invoking scanner (SCU) issues a C-STORE service request to the PACS controller (SCP). (2) The PACS controller receives the C-STORE request and issues a C-STORE response to the invoking scanner.
Figure 7.4 DICOM send and receive operations. Numerals are steps described in text.
(3) The CT scanner sends the first data packet of the first image to the PACS controller. (4) The PACS controller performs the requested C-STORE service to store the packet. (5) On completion of the service, the PACS controller issues a confirmation to the scanner. (6) After receiving the confirmation from the PACS controller on the completion of storing the packet, the scanner sends the next packet to the PACS controller. (7) Processes (4) to (6) repeat until all packets of the first image have been transmitted. (8) The scanner issues a second C-STORE service request to the PACS controller for transmission of the second image. Steps (1) to (7) repeat until all images from the study have been transmitted. (9) The scanner and the PACS controller issue DICOM communication command “dropping association request and response” to disconnect. 7.4.5.2 Query and Retrieve The send and receive service class using the CSTORE is relatively simple compared with the query and retrieve service class. Let us consider a more complicated example in which the workstation queries the PACS controller to retrieve a historical CT examination to compare with a current study already available at the workstation. Note that this composite service class involves three DIMSEs, C-FIND, C-MOVE (Table 7.4), and C-STORE, described in Section 7.4.5.1. In performing the Query/Retrieve SOP service, there is one user and one provider in the workstation and also in the PACS controller server:
                  Workstation    PACS Controller Server
Query/Retrieve    User           Provider
C-STORE           Provider       User
Thus the workstation takes on the roles of Query/Retrieve (Q/R) SCU and CSTORE SCP, whereas the PACS controller has the roles of Q/R SCP and C-STORE SCU. Referring to Figure 7.5, after the association between the workstation and the PACS controller has been established, then (1) The workstation’s Q/R application entity (AE) issues a C-FIND service request to the PACS controller server. (2) The PACS controller’s Q/R AE receives the C-FIND request from the querying workstation (2a); performs the C-FIND service (2b) to look for studies, series, and images from the PACS database; and issues a C-FIND response to the workstation (2c). (3) The workstation’s Q/R AE receives the C-FIND response from the PACS controller. The response is a table with all the requests. (4) The user at the workstation selects interesting images from the table (4a) and issues a C-MOVE service request for each individual selected image to the PACS controller (4b).
Figure 7.5 DICOM query-retrieve operation. Numerals are steps described in text.
(5) The PACS controller’s Q/R AE receives the C-MOVE request from the workstation (5a) and issues an indication to the PACS controller’s C-STORE SCU (5b). (6) The PACS controller’s C-STORE SCU retrieves the requested images from the archive device and (7) Issues a C-STORE service request to the workstation’s C-STORE SCP.
(8) The workstation receives the C-STORE request and issues a C-STORE response to the PACS controller. From this point on, the C-STORE SOP service is identical to the example given in Figure 7.4.
(9) After the workstation retrieves the last image, it issues a "dropping association request" and terminates the association.
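The query half of this service can be exercised with a few lines of code. The sketch below issues a C-FIND at the Study level against the PACS controller. It assumes the open-source pydicom/pynetdicom toolkits (the API shown is that of recent pynetdicom releases, not part of the standard), and the host, port, AE titles, and patient ID are placeholders. Retrieval would additionally require a C-MOVE request plus a local C-STORE SCP, as in the steps above.

    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

    ae = AE(ae_title="WORKSTATION")            # acts as the Q/R SCU
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

    query = Dataset()                          # the C-FIND identifier
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = "PATID1234567"
    query.StudyInstanceUID = ""                # empty value = return this attribute

    assoc = ae.associate("pacs.example.org", 104, ae_title="PACS_CTRL")
    if assoc.is_established:
        responses = assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind)
        for status, identifier in responses:
            if status and status.Status in (0xFF00, 0xFF01):   # pending = one matching study
                print(identifier.StudyInstanceUID)
        assoc.release()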
7.4.6 New Features in DICOM
Over the last three years, several new features have been added to the DICOM that are important for system integration with other inputs not in the realm of conventional radiological images. These are visible light Image (Section 4.8), structured reporting object, content mapping resource, mammography CAD, JPEG 2000 Compression (Section 5.6), waveform Information object definition (IOD) [e.g., ECG IOD and Cardiac Electrophysiology IOD], and security profiles (Chapter 16). 7.4.6.1 Visible Light (VL) Image The Visible Light (VL) Image Information Object Definition (IOD) for Endoscopy, Microscopy, and Photography has become available. It includes definitions of VL Endoscopic Image IOD, VL Microscopic Image IOD, VL Slide-Coordinates Microscopic Image IOD, VL Photographic Image IOD, and VL image module. 7.4.6.2 Structured Reporting (SR) Object Structured Reporting is for radiologists to shorten their reporting time. SOP Classes are defined for transmission and storage of documents that describe or refer to the images or waveforms or the features they contain. SR SOP Classes provide the capability to record the structured information to enhance the precision and value of clinical documents and enable users to link the text data to particular images or waveforms. 7.4.6.3 Content Mapping Resource This defines the templates and context groups used in other DICOM parts. The templates are used to define or constrain the content of structured reporting documents or the acquisition context. Context groups specify value set restrictions for given functional or operational contexts. For example, Context Group 82 is defined to include all units of measurement used in DICOM IODs. 7.4.6.4 Mammography CAD (Computer-Aided Detection) One application of the DICOM structured report is Mammography computer-aided detection. It uses the mammography CAD output for analysis of mammographic findings. The output is in DICOM structured report format. 7.4.6.5 Waveform IOD DICOM Waveform IOD is mainly developed for cardiology waveform, for example, ECG and Cardiac Electrophysiology (EP). The ECG IOD defines the digitized electrical signals acquired by an ECG modality or an ECG acquisition function within an imaging modality. Cardiac EP IOD defines the digitized electrical signals acquired by an EP modality.
7.5
IHE (INTEGRATING THE HEALTHCARE ENTERPRISE)
This section is excerpted from IHE: A Primer, Radiographics 2001 (Siegel and Channin, 2001; Channin, 2001; Channin et al., 2001a; Henderson et al., 2001; and Channin, 2001b) and a recent paper by Carr and Moore (2003). Further information can be obtained from: [email protected], www.rsna.org/IHE, and www.himss.org.
7.5.1 What is IHE?
Even with the DICOM and HL7 standards available, there is still a need for common consensus on how to use these standards to integrate heterogeneous health care information systems smoothly. IHE is neither a standard nor a certifying authority; instead, it is a high-level information model for driving the adoption of the HL7 and DICOM standards. IHE is a joint initiative of the RSNA (Radiological Society of North America) and HIMSS (Healthcare Information and Management Systems Society) started in 1998. The mission was to define and stimulate manufacturers to use DICOM- and HL7-compliant equipment and information systems to facilitate daily clinical operation. The IHE technical framework defines a common information model and vocabulary for using DICOM and HL7 to complete a set of well-defined radiological and clinical transactions for a certain task. This common vocabulary and model would then help health care providers and technical personnel understand each other better, which in turn would lead to smoother system integration. The first large-scale demonstration was held at the RSNA annual meeting in 1999, with further demonstrations at RSNA in 2000 and 2001 and at HIMSS in 2001 and 2002. In these demonstrations manufacturers came together to show how actual products could be integrated based on certain IHE protocols. It is the belief of RSNA and HIMSS that, with successful adoption of IHE, life would become more pleasant in health care systems integration for both the users and the providers.
7.5.2 The IHE Technical Framework and Integration Profiles
There are three key concepts in the IHE technical framework: data model, actors, and integration profiles. Data Model: The data model is adapted from HL7 and DICOM and shows the relationships between the key frames of reference, for example, Patient, Visit, Order, and Study defined in the framework. IHE Actor: An actor is one that exchanges messages with other actors to achieve specific tasks or transactions. An actor, not necessarily a person, is defined at the enterprise level in generic, product-neutral terms. Integration Profile: An Integration Profile is the organization of functions segmented into discrete units. It includes actors and transactions required to address a particular clinical task or need. An example is the Scheduled Work-
flow Profiles, which incorporate all the process steps in a typical scheduled patient encounter, from registration, ordering, image acquisition, and examination to viewing. IHE Integration Profiles provide a common language, vocabulary, and platform for health care providers and manufacturers to discuss integration needs and the integration capabilities of products. As of early 2004 there are 12 implemented Integration Profiles; this number will grow over time. Figure 7.6 shows a sample IHE use case in which the actors and their roles are given.
7.5.3 IHE Profiles
The 12 implemented IHE profiles are:
1. Scheduled work flow
2. Patient information reconciliation
3. Consistent presentation of images
4. Presentation of grouped procedures
5. Access to radiology information
6. Key image note
7. Simple image and numeric report
8. Basic security
9. Charge posting
10. Postprocessing work flow
11. Reporting work flow
12. Evidence documents
Figure 7.6 A sample IHE use case. The four actors and their respective roles are:
Actor: ADT Patient Registration. Role: Adds and modifies patient demographic and encounter information.
Actor: Order Placer. Role: Receives patient and encounter information for use in order entry.
Actor: Department System. Role: Receives and stores patient and encounter information for use in fulfilling orders by the Department System Scheduler.
Actor: MPI (Master Person Index). Role: Receives patient and encounter information from multiple ADT systems; maintains a unique enterprise-wide identifier for each patient.
Figure 7.7 IHE Scheduled Workflow Profile including HIS, RIS, PACS, and conventional films. The goal is an accurate, complete report that is accessible when needed by the referring physician.
As an example, the Scheduled Workflow Profile provides a flow of health care information that supports efficient patient care work flow in a typical imaging examination, as shown in Figure 7.7 (Carr and Moore, 2003).
7.5.4 The Future of IHE
7.5.4.1 Multidisciplinary Effort So far the concentration of the IHE initiative has been mainly in radiology. The IHE Strategic Development Committee was formed in September 2001; its members include representatives from multiple clinical and operational areas, such as cardiology, laboratory, pharmacy, medication administration, and interdepartmental information sharing. Work to identify key problems and expertise in these fields has progressed well.
7.5.4.2 International Expansion IHE has expanded internationally. Demonstrations have been held in Europe and Japan with enthusiastic support from health care providers and vendors alike. Three additional goals have emerged: 1) develop a process to enable US-based IHE initiative technology to be distributed globally; 2) document nationally based differences in health care policies and practices; and 3) seek the highest possible level of uniformity in medical information exchange.
7.6
OTHER STANDARDS
The following five industrial software standards are used commonly in PACS operation. 7.6.1
UNIX Operating System
The first UNIX operating system (System V) was developed by AT&T and released in 1983. Other versions of UNIX from different computer vendors include BSD (Berkeley Software Distribution by the University of California at Berkeley), Solaris (Sun Microsystems), HP-UX (Hewlett-Packard), Xenix (Microsoft), Ultrix (Digital Equipment Corporation), AIX (International Business Machines), and A/UX (Apple Computers). Despite its many varieties, the UNIX operating system provides an opensystem architecture for computer systems to facilitate the integration of complex software systems within individual systems and between different systems. UNIX offers great capability and high flexibility in networking, interprocess communication, multitasking, and security, which are essential to medical imaging applications. UNIX is mostly used in the server, PACS controller, gateway, and high-end workstations. 7.6.2
Windows NT/XP Operating Systems
Microsoft Windows NT and XP operating systems run on desktop personal computers (PCs) and are a derivative of the University of California at Berkeley’s BSD UNIX. Windows NT and XP, like UNIX, support TCP/IP communications and multitasking, and therefore provide a low-cost software development platform for medical imaging applications in the PC environment. Windows NT is mostly used in workstations and low-end servers. 7.6.3
C and C++ Programming Languages
The C programming language was first introduced by Brian Kernighan and Dennis Ritchie in 1970. The language was simple and flexible, and it became one of the most popular programming languages in computing. C++, created on top of the C programming language, was first created by Bjarne Stroustrup in 1980. C++ is an objectoriented language that allows programmers to organize their software and process the information more effectively than most other programming languages. These two programming languages are used extensively in PACS application software packages. 7.6.4
Structured Query Language
SQL (Structured Query Language) is a standard interactive and programming language for querying, updating, and managing relational databases. SQL is developed from SEQUEL (Structured English Query Language), developed by IBM. The first commercially available implementation of SQL was from Relational Software, Inc. (now Oracle Corporation).
SQL is an interface to a relational database such as Oracle and Sybase. All SQL statements are instructions to operate on the database. Hence, SQL is different from general programming languages such as C and C++. SQL provides a logical way to work with data for all types of users, including programmers, database administrators, and end users. For example, to query a set of rows from a table, the user defines a condition used to search the rows. All rows matching the condition are retrieved in a single step and can be passed as a unit to the user, to another SQL statement, or to a database application. The user does not need to deal with the rows one by one, or to worry about how the data are physically stored or retrieved. The following are the commands SQL provides for a wide variety of database tasks: • • • • •
Querying and retrieving data Inserting, updating, and deleting rows in a table Creating, replacing, altering, and dropping tables or other objects Controlling access to the database and its objects Guaranteeing database consistency and integrity
SQL is adopted by both ANSI (American National Standards Institute, 1986) and ISO (International Standards Organization, 1987), and many relational database management systems, such as IBM DB2 and ORACLE, support SQL. Therefore, users can transfer all skills they have gained with SQL from one database to another. In addition, all programs written in SQL are portable among many database systems. Moreover, these database products also have their proprietary extensions to the standard SQL, so very little modification usually is needed for the SQL program to be moved from one database to another. SQL is used most often in the DICOM query/retrieve operation. 7.6.5
7.6.5 XML (Extensible Markup Language)
XML, a system-independent markup language, is becoming the industry standard for data representation and exchange on the World Wide Web, on intranets, and elsewhere. As a simple, flexible, and extensible text format, XML can describe information in a standard or common format and thereby makes data portable. Although XML, like HTML (Hypertext Markup Language, which has been used extensively over the past 10 years for easy data representation and display), uses tags to describe the data, it is significantly different from HTML. First, HTML mainly specifies how to display the data, whereas XML describes both the structure and the content of the data. This means that XML can be processed as data by programs, exchanged among computers as a data file, or displayed as web pages, as HTML is. Second, HTML is limited to a set of predefined tags, whereas XML is extensible, as mentioned above. The following are some advantages of XML:
1. Plain Text Format: Because XML is plain text, both programs and users can read and edit it.
2. Data Identification: XML describes the content of data, but not how to display it. It can be used in different ways by different applications.
3. Reusability: XML entities can be included in an XML document, and an XML document can link to other documents.
4. Easily Processed: Like HTML, XML identifies data with tags (identifiers enclosed in <. . .>), which are known as "markup." Because of these tags, it is easy to build programs to parse and process XML files.
5. Extensibility: XML uses the concept of the DTD (Document Type Definition) to describe the structure of data and thus has the ability to define an entire database schema. It can be used to translate between different database schemas, such as from an Oracle schema to a Sybase schema. Users can define their own tags to describe a particular type of document and can even define a schema, a file that defines the structure of the XML document. The schema specifies what kinds of tags are used and how they are used in the XML document. The best-known schema language now is the DTD, which is already integrated into XML 1.0.
Because of these advantages, XML is increasingly popular among enterprises for the integration of data to be shared among departments within the enterprise and with those outside the enterprise. As an example, PACS includes different kinds of data, such as images, waveforms, and reports. A recently approved supplement, the "Structured Reporting Object" (Section 7.4.6), is used for transmission and storage of documents that describe or refer to images or waveforms or the features they contain. XML is a natural fit for building this Structured Report because of its structured text format, portability, and data identification. With a traditional file format, it is almost impossible to include an image and a waveform in one file, because such a file is very hard for applications to parse and process. The XML-built Structured Report, however, can easily link images, waveforms, and other types of reports together simply by including their link addresses. Therefore, most application programs can parse and process it, making the XML-built Structured Report portable. XML is also well suited for Electronic Patient Record (ePR) (Section 6.6) and Content Mapping Resource (Section 7.4.6) applications, which are similar to the Structured Report in that they require multiple forms of data and structured organization of data.
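To make the linking idea concrete, the following sketch (illustrative only; it is not DICOM Structured Reporting syntax) builds a small XML report that refers to an image and a waveform by their file addresses, using Python's standard xml.etree library. The element and attribute names are assumptions.

import xml.etree.ElementTree as ET

# build a tiny "report" that links external objects rather than embedding them
report = ET.Element("report", {"patient_id": "P001", "study_uid": "1.2.840.1"})
ET.SubElement(report, "finding").text = "No acute abnormality."
ET.SubElement(report, "image",    {"href": "images/chest_pa.dcm"})
ET.SubElement(report, "waveform", {"href": "waveforms/ecg_lead2.dat"})

xml_text = ET.tostring(report, encoding="unicode")
print(xml_text)   # plain text: readable both by programs and by people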
CHAPTER 8
Image Acquisition Gateway
Figure 8.0 Acquisition gateway (shown within the generic PACS components and data flow: HIS database, database gateway, imaging modalities, acquisition gateway, PACS controller and archive server, application servers, workstations, and Web server).
8.1 BACKGROUND
The image acquisition gateway computer (gateway) with a set of software programs is used as a buffer between image acquisition and the PACS controller server. Figure 8.0 shows the Database Gateway and the Acquisition Gateway (unshaded in the figure). Several acquisition devices can share one gateway computer. The gateway has three primary tasks: It acquires image data from the radiological imaging device, converts the data from manufacturer specifications to the PACS standard format (header format, byte ordering, matrix sizes) that is compliant with the DICOM data formats, and forwards the image study to the PACS controller or display workstations. Additional tasks in the gateway are some image preprocessing, compression, and data security. In this chapter, the terms acquisition gateway computer, gateway computer, acquisition gateway, and gateway have the same meaning. An acquisition gateway has the following characteristics:
(1) It preserves the integrity of image data transmitted from the imaging device.
(2) Its operation is transparent to the users and totally or highly automatic.
(3) It delivers images in a timely manner to the archive and workstations.
(4) It performs some image preprocessing functions.
Among all the PACS components, establishing a reliable acquisition gateway in the PACS is the most difficult task for the following reasons. First, an acquisition gateway must interface with many imaging modalities and PACS modules made by different imaging manufacturers. These modalities and modules have their own image formats and communication protocols, which make the interface task difficult. Even though much imaging equipment now follows the DICOM standard, the PACS integrator must negotiate several DICOM conformance statements (see Section 7.4.4) for a successful interface. Second, performing radiological examinations with an imaging device requires the operator's input, such as entering the patient's name, identification, and accession number and transmitting the images. During this process, the potential for human error is unavoidable. A very minor input error may have a severe impact on the integrity of the PACS data. We discuss this issue in later sections and in the chapter on "PACS Pitfalls and Bottlenecks." Third, ideally the acquisition gateway should be 100% automatic for efficiency and for minimizing system errors. To achieve a totally automatic component without much human interaction and with equipment from varied manufacturers is very challenging. The degree of difficulty and the cost necessary to achieve this have been focal issues in PACS design.
Automated image acquisition from imaging devices to the PACS controller plays an important role in a PACS infrastructure. The word "automatic" is important here, because relying on labor-intensive manual acquisition methods would defeat the purpose of the PACS. An important measure of the success of an automatic acquisition is its effectiveness in ensuring the integrity and availability of patient images in a PACS system.
Because most imaging devices are now DICOM compliant, this chapter only discusses methods related to the DICOM gateway. For imaging devices that still use an older interface method, refer to the first edition of this book. This chapter first discusses a group of topics related to the DICOM interface: the DICOM-compliant gateway, the automatic image recovery scheme for DICOM conformance imaging devices, the interface with other existing PACS modules, and the DICOM broker. Because an image acquisition gateway also performs certain image preprocessing functions, the second group of topics considers some image preprocessing functions commonly used in PACS. The third topic is the concept of multilevel adaptive processing control in the gateway, which ensures the reliability of the acquisition gateway as well as image integrity. This is important, because when the acquisition gateway has to deal with DICOM formatting, communication, and many image preprocessing functions, multiple-level processing with a queuing mechanism is necessary. The last topic is clinical experience with the acquisition gateway.
8.2 DICOM-COMPLIANT IMAGE ACQUISITION GATEWAY

8.2.1 Background
DICOM conformance (compliance) by manufacturers is a major factor contributing to the interoperability of different medical imaging systems in clinical environment. We describe the DICOM standard in Chapter 7, in which it defines the basis of the interface mechanism allowing image communication between different manufacturers’ systems. With the standardized interface mechanism, the task of acquiring images from the DICOM-compliant imaging systems becomes simpler. Figure 8.1 shows the connection between the imaging device and the acquisition gateway computer. The DICOM-compliant imaging device on the left is the C-Store Client, and the image acquisition gateway on the right is the C-Store Server (see Fig. 7.4). They are connected by a network running the DICOM TCP/IP protocols (see Section 9.1.2, Fig. 9.2). Regardless of whether a “push” from the scanner or a “pull” operation from the gateway is used (see Section 6.1.1), one image is transmitted at one time, and the order of transmission depends on the database architecture of the scanner. After these images have been received, the gateway computer must know how to accumulate them accordingly, to form series and studies so that image data integrity is not compromised. In the gateway computer, a database management system serves three functions. First, it supports the transaction of each individual image received by the acquisition computer. Second, it monitors the status of the patient studies and their associated series during the transmission. Third, it provides the basis for the automatic image recovery scheme to detect unsuccessfully transferred images. Three database tables are used: study, series, and image. Each table contains a group of records, and
Figure 8.1 Schematic showing the DICOM-compliant PACS image acquisition gateway using the DICOM C-STORE server and client connecting the imaging device with the acquisition gateway computer.
each record contains a set of useful data elements. For the study and series database tables, the study name and the series name are the primary keys for searching. In addition, the following major data elements are also recorded:
(1) Patient name and hospital identification number
(2) Dates and times when the study and series were created in the imaging device and acquired in the gateway
(3) The number of acquired images per series
(4) The time stamp of each image when it is acquired by the gateway
(5) The acquisition status
(6) The DICOM unique identification (UID) value for the study and series
When the DICOM standard is used to transmit images, one image is transmitted at a time; the order of transmission does not necessarily follow the order of scan, series, or study. The imaging device's job queue priority dictates which job is processed next in the scanner's computer and always favors scanning and image reconstruction over communication. For this reason, an image waiting in the queue to be transmitted next can be bumped, lose its priority, and be placed in the lower-priority waiting queue for a long period of time without being discovered. The result is a temporary loss of images in a series and in a study. If this is not discovered early enough, the images may be lost permanently because of various system conditions, for example, a system reboot, disk maintenance, or premature closing of the patient image file at the acquisition gateway. Neither a temporary nor a permanent loss of images is acceptable in PACS operation. We must consider an error recovery mechanism to recover images that are temporarily lost.

8.2.2 DICOM-Based PACS Image Acquisition Gateway
8.2.2.1 Gateway Computer Components and Database Management
The acquisition gateway shown in Figure 8.2 consists of four elementary software components: a database management system (DBMS), local image storage, a storage service class provider (SCP), and a storage service class user (SCU); and three error handling and image recovery software components: Q/R SCU, Integrity Check, and Acquisition Delete. The elementary components are discussed in this section; the error handling and image recovery components are discussed in Section 8.3.
Database Management System (DBMS) The local database of the acquisition gateway records, in a structured way, the textual information about images, problematic images, queues, and imaging devices. It is controlled and managed by the DBMS. Because the textual information will be deleted after archiving is completed, a small-scale DBMS, such as the commercial Access or MySQL products, is adequate to support normal operation of image acquisition to the gateway. The DBMS mounts extendable database file(s) in which the textual data are actually stored. The information about image data is basically stored in four tables: Patient, Study, Series, and Image. As shown in Figure 8.3, these tables are linked
Figure 8.2 Acquisition gateway components and their work flow. The four elementary components are Storage SCP, Storage SCU, Local Image Storage, and DBMS. The three error handling and image recovery software components are Q/R SCU, Integrity Check, and Acquisition Delete.
Figure 8.3 Gateway computer database management hierarchies for Patient, Study, Series, and Image tables. *, Primary key; #, foreign key.
hierarchically with primary and foreign key pairs. The primary keys uniquely identify each record in these tables. The primary and foreign keys are denoted by (*) and (#), respectively, in Figure 8.3. The Patient table records some demographic data such as patient ID, name, sex, and birth date. The Study and Series tables record infor-
mation such as the date and time when each study and series are created in the imaging device and acquired in the acquisition gateway and the instance unique identification (UID) of each study and series. The study and series instance UIDs are important for image recovery as discussed in Section 8.3. The Image table records generic image information such as orientation, offset, window, and level values of the images stored before and after image preprocessing. The file name and path are defined for each record in image table, providing a link to the local image storage, which can be used to check the integrity of image stored. The records of problematic images are stored in a set of wrong-image tables, which can be displayed to alert the system administrator through a graphical interface. A set of queue tables are used to record the details of each transaction, such as status (executing, pending, fail, or succeed), date in, time in, date out, and time out. Transactions including storage/routing and query/retrieve can be traced with these tables. The records of the modalities are defined in a set of imaging device tables, which provide the generic information of the modalities including AE title, host name, IP address, and physical location.
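The hierarchy of Figure 8.3 can be sketched, for illustration, as a set of SQLite tables tied together by primary and foreign key pairs; the column names below are abbreviated assumptions rather than any vendor's actual schema.

import sqlite3

ddl = """
CREATE TABLE patient (patient_no INTEGER PRIMARY KEY, patient_id TEXT, name TEXT);
CREATE TABLE study   (study_no   INTEGER PRIMARY KEY, patient_no INTEGER REFERENCES patient,
                      study_uid  TEXT, no_of_series INTEGER);
CREATE TABLE series  (series_no  INTEGER PRIMARY KEY, study_no INTEGER REFERENCES study,
                      series_uid TEXT, no_of_images INTEGER);
CREATE TABLE image   (image_no   INTEGER PRIMARY KEY, series_no INTEGER REFERENCES series,
                      image_uid  TEXT, file_path TEXT, file_name TEXT,
                      window INTEGER, level INTEGER);
"""
conn = sqlite3.connect("gateway.db")
conn.executescript(ddl)   # each 1-to-n link is a foreign key pointing to the level above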
Local Image Storage Local image storage is storage space in the hard disk of the acquisition computer. It is a folder that is supported by the operating system and allows full access from any background services so that the local storage SCP and SCU (see Section 7.4.5) acting as automatic background services can deposit and fetch images from this folder. Images from imaging devices are accepted and stored one by one into this storage space. However, the images are not necessarily received in the order in the series. Because of this reason, there exists the possibility of image loss during the transmission from imaging devices to the acquisition gateway (discussed in Section 8.3). The location of the folder is prespecified during the installation of the acquisition gateway and can be changed during the operation. The change of storage space location will not affect the integrity of the image storage because the file path and name of every image is individually defined at each record in the Image table of DBMS. Configuring the storage SCP to automatically create subfolders under the storage folder is optional. The subfolders can make a clear classification of the images transparent to the file system. Also, the records in the Patient, Study, Series, and Image tables of the local database can be easily recovered from DBMS failure without any database backup or decoding of the image files. It is also advantageous for the system administrator to trace the images in the file system during troubleshooting. Figures 8.4 and 8.5 show two examples of folder structures used for image storage in PCs. The folder structure of the first example uses one layer of subfolders named by autonumbers. Image files in the same series accepted within the same time interval will be grouped into the same subfolder. Each subfolder can be considered as a transaction, and thus the system administrator can easily trace the problem images based on the creation date and time of the subfolders. In the second example, a complete folder structure is used, which specifies patient ID, modality, accession number, study date, and study time as the names of subfolders. This structure is very useful for database recovery and storage integrity check.
Figure 8.4 The "Images" folder is the storage space; subfolders named by auto-numbers group the images of the same series received within the same time interval.
Figure 8.5 The "DicomImages" folder is the storage space; the subfolder hierarchy follows the order "Patient ID," "Modality," "Accession number_study date_study time," and an auto-number. The file names of the images are their image instance UIDs.
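A sketch of how such a folder hierarchy might be built (illustrative only; the paths and header values are placeholders, not taken from a real system) follows, in the style of the Figure 8.5 structure:

from pathlib import Path

root = Path("DicomImages")
patient_id, modality = "P001", "CT"
accession, study_date, study_time = "A123", "20031001", "0930"

series_dir = root / patient_id / modality / f"{accession}_{study_date}_{study_time}"
series_dir.mkdir(parents=True, exist_ok=True)

image_uid = "1.2.840.10008.9999.1"            # file name = image instance UID
image_path = series_dir / image_uid
# storing this full path in the Image table keeps the database and the
# file system recoverable from each other
print(image_path)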
The disk storage capacity of the acquisition gateway is expected to be capable of storing images from an imaging device temporarily until these images are archived in the PACS Controller and Server. Because of the limited capacity, those temporary images in the hard disk must be deleted so as to free up space for new incoming images. Storage SCP The purpose of the C-Store server class of the Storage SCP is to receive the C-Store request from the imaging device or the PACS module. The image data will be accepted by the acquisition gateway and then temporarily stored in the local image storage. The server class also inserts the corresponding records into the “storage_in” queue table of the local database of the acquisition gateway. The completion of the storing action will update the status of these records to “completed.” New records will also be inserted into the “storage_out” queue table to
prepare for routing images to the PACS Server. The storage SCP is implemented in the system background.
Storage SCU The purpose of the C-Store client class of the Storage SCU is to send the C-Store request to the PACS Server when new records are found in the "storage_out" queue table of the local database. The images stored in the local image storage will be routed to the PACS Server by this client class. After the routing is completed, the status of the corresponding records in the "storage_out" queue table will be changed to "completed."
8.2.2.2 Determination of the End of Image Series
If DICOM transmits images one by one from the scanner to the gateway, but not necessarily in order, how does the gateway computer know when a series or a study is completed and when it should close the study file for archive or display? The images of a series and/or a study can only be grouped together by a formatting process when the end of series and/or the end of study is determined, respectively, by the image receiving process. To determine the end of a series algorithmically in a manner that is both accurate and efficient is not trivial. We present three algorithms for determining the end of a series and discuss the advantages and drawbacks of each.
Algorithm 1: Presence of the Next Series The first algorithm for detecting the end of a series is based on the presence of the next series. This algorithm assumes that the total number of images for the current series would have been transferred to the gateway computer before the next series began. In this case, the presence of the first image slice of the next series indicates the termination of the previous series. The success of the method depends on the following two premises: (1) the imaging device transfers the images to the gateway computer in the order of the scan, and (2) no images are lost during the transfer. Note that the first assumption depends on the specific order of image transmission by the imaging system. If the imaging system transmits the image slices in an ordered sequence (for example, GE Medical Systems MR Signa 5.4 or up), this method can faithfully group the images of a series without errors. On the other hand, if the imaging system transfers the image slices at random (e.g., Siemens MR Vision), this method may conclude the end of a series incorrectly. Even though one could verify the second assumption by checking the order of the images, whether or not the last image has been transferred remains unknown to the gateway computer. Another drawback of this method relates to the determination of the last series of a particular study, which is based on the presence of the first image of the next study. The time delay for this determination could be lengthy because the next study may not begin immediately after the last series.
Algorithm 2: Constant Number of Images in a Given Time Interval The second method for determining the end of a series is based on a time interval criterion. The hypothesis of this method assumes that an image series should be completed within a certain period of time. With this method, the end of a series is determined when the acquisition time of the first image plus a designated time interval has elapsed. This method is obviously straightforward and simple, but a static time interval criterion is not practical in a clinical environment. Thus an alternative recourse uses the concept of a constant number of images in a time interval.
This method requires recording the number of acquired images for a given series at two different times, time t1 and time t2 = t1 + Δt, for some predetermined constant Δt. By comparing the number of images acquired at time t1 versus the number of images acquired at time t2, a premise is constructed for determining whether or not the series is completed. If, for example, the number of images is constant, we conclude that the series is completed; otherwise, the series is not yet completed. This process (Δt with verification of the number of images) is iterated until a constant number of images has been reached.
Next, let us consider how to select Δt: Should it be static or dynamic? A short Δt may result in missing images, whereas a long Δt may result in lengthy and inefficient image acquisition. Usually, the first Δt chosen for this method is empirical, depending on the imaging protocols used by the imaging system. For example, if the imaging protocol frequently used in a scanner generates many images per series, then Δt should be long; otherwise, a shorter Δt is preferred. However, it is possible that for a shorter Δt this method may conclude a series prematurely. This is because in some rare cases the technologist or clinician may interrupt the scanning process in the middle of a series to conduct a patient position alignment or to inject a contrast agent. If the time it takes to conduct such procedures is longer than Δt, then the images scanned after the procedure will not be grouped into the current series. In a poorly designed PACS, this could result in a severe problem: missing images in a series.
Should Δt be dynamic during the iteration? One thought is that the number of images transferred from the imaging device to the gateway computer decreases while the iteration cycle increases. Therefore, it seems reasonable that Δt may be reduced in proportion to the number of iterations. On the other hand, the number of images transferred to the acquisition gateway computer may vary with time depending on the design of the imaging device. For example, an imaging device may be designed to transfer images according to the imaging system workload and priority schedule. If the image transfer process has a low priority, then the number of images transferred during a period when the system workload is heavy will be lower compared with when the workload is light. In this case, Δt is a variable.
Algorithm 3: Combination of Algorithm 1 and Algorithm 2 Given the previous discussions, a combination of both methods seems to be preferable. Algorithm 3 can be implemented in three steps:
(1) Identify and count the acquired images for a particular series.
(2) Record the time stamp whenever the number of acquired images has changed.
(3) Update the acquisition status of the series.
The acquired images can be tracked by using a transaction table designed in association with the series database table discussed in Section 8.2.2.1. We start with the time interval method. Whenever an image is acquired from the imaging device, a record of the image is created in the transaction table identified by the modality type, the imaging system identification, the study number, the series number, the image number, and the acquisition time stamp. A tracking system can be developed based on these records.
Three major events during the current iteration are recorded in the transaction table:
(1) The number of acquired images
(2) The t2 value
(3) The acquisition status declaring the series as standby, ongoing, completed, or image missing
Here, the information regarding the number of acquired images and acquisition status is useful for the maintenance of the image receiving process. If the series is still ongoing, the comparison time is updated for the verification of the number of images during the next iteration. After the interval method detects the end of a series, it can be further verified by the "presence of the next series" method. If the next series exists in the series database table or the next study exists in the study table, the image series is determined as complete. Otherwise, one more iteration is executed, and the series remains in standby status. In this case, the end of the series will be concluded in the next iteration regardless of the existence of the next series or study. In general, Δt can be set at 10 minutes. In this way, the completeness of an image series is verified by both methods and the potential lengthy time delay problem of the first method is minimized.
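The logic of Algorithm 3 can be sketched as follows (a simplified illustration, not a production implementation); the count_images and next_series_exists callables stand in for queries against the gateway database tables described above.

from datetime import datetime, timedelta

DELTA_T = timedelta(minutes=10)          # Δt, empirically about 10 minutes

def end_of_series(series_key, count_images, next_series_exists, state):
    """Return True when the series can be declared complete."""
    now, n = datetime.now(), count_images(series_key)
    last_n, last_t, standby = state.get(series_key, (None, None, False))
    if last_n is None or n != last_n:            # count changed: restart the clock
        state[series_key] = (n, now, False)
        return False
    if now - last_t < DELTA_T:                   # wait a full Δt with a constant count
        return False
    if next_series_exists(series_key) or standby:
        return True                              # confirmed, or second standby pass
    state[series_key] = (n, now, True)           # standby: conclude on the next iteration
    return False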
8.3 AUTOMATIC IMAGE RECOVERY SCHEME FOR DICOM CONFORMANCE DEVICE

8.3.1 Missing Images
Images can be missing at the gateway computer when they are transmitted from the imaging device. As an example, consider an MRI scanner using the DICOM C-Store client to transfer images with one of the following three modes: auto-transfer, auto-queue, or manual-transfer. Only one transfer mode can be in operation at a time. The auto-transfer mode transmits an image whenever it is available, the auto-queue mode transfers images only when the entire study has been completed, and the manual-transfer mode allows the transfer of multiple images, series, or studies. Under normal operation, the auto-transfer mode is routinely used and the manual-transfer mode is used only when a retransmission is required. Once the DICOM communication between the scanner and the gateway computer has been established, if the technologist changes the transfer mode for certain clinical reasons in the middle of the transmission, some images could be temporarily lost. In Section 8.3.2, we discuss an automatic recovery method for these images.

8.3.2 Automatic Image Recovery Scheme
8.3.2.1 Basis for the Image Recovery Scheme
The automatic image recovery scheme includes two tasks: identifying the missing studies, series, or images and recovering them accordingly. This scheme requires the accessibility of the scanner image database from the gateway computer. These two tasks can be accomplished
by using the DICOM Query-Retrieve service class (Section 7.4.5.2) in conjunction with the acquisition gateway database described in Section 8.2.2.1. The operation mechanism of the Query-Retrieve service involves three other DICOM service classes: C-Find, C-Move, and C-Store. The task of identifying missing studies is accomplished through the C-Find by matching one of the image grouping hierarchies, such as the study level or series level, between the gateway database and the scanner database. The missing image(s) can be recovered by the C-Move and C-Store service classes. The Query-Retrieve operation is executed via a client process at the gateway computer, with the MRI scanner computer as the server.
8.3.2.2 The Image Recovery Algorithm
Figure 8.6 shows the steps of the automatic image recovery algorithm. There are two steps: recover studies and recover series or images.
Recovery of Missing Study In this step, the Query-Retrieve client encodes a C-Find query object containing the major information elements such as the image study level and a zero-length UID value (unique identification value) defined by the DICOM standard. The zero-length UID prompts the Query-Retrieve server to return every single UID for the queried level. Then the Query-Retrieve server responds with all the matching objects in the scanner database according to the requested image study level. The content of each responded object mainly includes information such as a study number and the corresponding study UID. The information of each responded object is then compared with the records in the gateway database. Those study numbers that are in the responded objects but not recorded in the PACS acquisition gateway database are considered missing studies. Each of the missing studies can be retrieved by issuing a C-Move object from the gateway, which is equivalent to an image retrieval request. Because the retrieval is study specific, the study UID must be included in the C-Move object explicitly. In the scanner, after the C-Move object is received by the Query-Retrieve server, it relays the retrieval request to the C-Store client. As a result, the study transfer is actually conducted by the C-Store client and server processes.
Recovery of Missing Series/Image Recovering missing series or images can be performed with the same recovery mechanism as for the study level. The difference between the study level and the series level in the recovery scheme is that a specific study UID must be encoded in the C-Find query object for the series level or the image level, as opposed to a zero-length UID for the study level. The records of available images in the scanner database and the gateway database may not be synchronized. This situation can happen during a current study, when its associated series and images are being generated in the scanner but not yet transferred to the gateway computer. This lack of synchronization can result in incorrect identification, because the image recovery process may mistake the current study listed in the scanner database for a missing one, given that it has not yet been transferred to the gateway. To avoid this, the responded objects from the Query-Retrieve server are first sorted in chronological order by study creation date and time. If the most recent study was created within a predefined time interval of the current time, it is considered to be the study being generated by the scanner.
Figure 8.6 General processing flow diagram of the automatic DICOM query-retrieve image recovery scheme. The scheme starts by the acquisition computer issuing a C-Find command (upper left). The recovery starts by a C-Move command (lower left). Both commands are received by the imaging device Query/Retrieve server.
Thus the image recovery process will not identify the most recent study as a missing study.
8.3.2.3 Results and the Extension of the Recovery Scheme
Figure 8.7 shows an early clinical implementation connecting an acquisition gateway computer to an existing clinical CT and MR network running DICOM communication protocols. Consecutive daily clinical image data were collected before and after the implementation of the recovery scheme. Table 8.1 shows the results of the comparison. It is seen from the table that human errors were common at the image acquisition site (of 259 studies, human intervention errors caused 49 completely missing and 9 partially missing studies) and that with the recovery algorithm implemented, all 58 were automatically recovered successfully. These types of algorithms have been implemented by manufacturers in the PACS acquisition gateway computer as a means of preserving the integrity of the image when it is transmitted from the imaging devices. So far, the discussion of the recovery scheme has concentrated on missing images caused by human intervention at the imaging device. The concept can be extended to missing images caused by hardware and software malfunctions during DICOM communications as well, because missing images from these malfunctions have no characteristic difference from those created by human intervention.
Figure 8.7 Schematic of the network connection for evaluation of the DICOM automatic image recovery scheme. Four MRI, three CT scanners, one 3-D workstation, one Ethernet router, one workstation, and one image acquisition gateway were connected in this network. Data was taken from one MR4 (bold) and validated at the workstation (bold). The recovery algorithm was performed automatically at the gateway computer. Results are shown in Table 8.1.
TABLE 8.1 Performance of the DICOM Automatic Image Recovery Scheme

                                                        Before         After
Total number of studies conducted                         475           259
Missing studies due to human intervention errors:
   Completely missing                                 133 (28%)     49 (19%)
   Partially missing                                  29 (6.1%)     9 (3.5%)
Missing studies manually recovered                        162             0
Missing studies automatically recovered                     0            58
Series missing images due to grouping process               0             0

Comparison of results before and after implementation of the DICOM image recovery scheme in a large MR and CT network.
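The missing-study identification step of the scheme can be sketched as a set comparison between the study UIDs reported by the scanner's Query/Retrieve server and those recorded in the gateway database (an illustrative simplification; the grace interval and data structures are assumptions):

from datetime import datetime, timedelta

GRACE = timedelta(minutes=30)       # assumed "predefined time interval"

def find_missing_studies(scanner_studies, gateway_uids, now=None):
    """scanner_studies: list of (study_uid, creation_datetime) from the C-FIND responses."""
    now = now or datetime.now()
    ordered = sorted(scanner_studies, key=lambda s: s[1])       # chronological order
    if ordered and now - ordered[-1][1] < GRACE:
        ordered = ordered[:-1]      # most recent study may still be acquiring; skip it
    known = set(gateway_uids)
    return [uid for uid, _ in ordered if uid not in known]

# each UID returned would then be fetched with a study-level C-MOVE request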
8.4 INTERFACE OF A PACS MODULE WITH THE GATEWAY COMPUTER

8.4.1 PACS Modality Gateway and HI-PACS Gateway
In Section 8.2 we discussed the DICOM image acquisition gateway. This gateway can be used to interface to a PACS module. A PACS module is loosely defined as a self-contained PACS composed of some imaging devices, a short-term archive, a database, some display workstations, and a communication network linking these components together. In practice, the module can function alone as a self-contained integrated imaging unit in which the workstations can show images from the imaging devices. An example of a PACS module is the ultrasound (US) PACS, in which several US scanners are connected to a short-term archive (several days) of examinations. The display workstations can show images from all US scanners with a display format tailored for US images. There are certain advantages in connecting the US PACS module to a hospital integrated PACS (HI-PACS). First, once connected, US images can be appended to the same patient's PACS image folder (or database) to form a complete file for long-term archiving. Second, US images can be shown with other modality images in the PACS general display workstations for cross-modality comparisons. Third, some other modality images can also be shown in the US module's specialized workstations. In this case, care must be taken because the specialized workstation (e.g., a US workstation) may not have the full capability for displaying images from other modalities.
A preferred method for interfacing a PACS module with the HI-PACS is to treat the module as an imaging device. Two gateways would be needed, one US PACS gateway connected to the US Server and a PACS image acquisition gateway connected to the HI-PACS. These two gateways are connected by using the method described in Section 8.2. Figure 8.8 shows the connection of the two gateways. Each patient image file in the US server contains the full-sized color or black-and-white images (compressed or original), thumbnail (quarter-sized) images for indexing, and image header information for DICOM conversion. In the US gateway computer, several processes run concurrently. The first process is a daemon constantly checking for new US examinations arriving from the scanners. When one is found, the gateway uses a second process to convert the file to the DICOM format. Because a color US image file is normally larger (Section 4.6), a third process compresses it to a smaller file,
Figure 8.8 Connection of an US PACS module to the HI-PACS. Two gateways are used: US PACS gateway and HI-PACS gateway.
normally with a 3 : 1 ratio (Section 5.7.3). The US gateway generates a DICOM send command to transmit the compressed DICOM file to the PACS acquisition gateway computer by using DICOM TCP/IP protocols. In the acquisition gateway computer, several daemons are also running concurrently. The first is a daemon that checks for the DICOM send command from the US gateway. Once the send command is detected, the second daemon checks for the proper DICOM format and saves the information in the PACS gateway computer. The third daemon queues the file to be stored in the PACS controller's long-term archive.
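The daemon behavior described above can be sketched as a simple polling loop (an illustration with hypothetical helper functions, paths, and file patterns, not the vendor's code):

from pathlib import Path

INBOX = Path("/data/us_incoming")     # assumed drop folder for new examinations
seen = set()

def poll_once(convert_to_dicom, compress, queue_for_send):
    for exam in INBOX.glob("*.raw"):              # assumed file pattern for new studies
        if exam in seen:
            continue
        dicom_file = convert_to_dicom(exam)       # manufacturer format -> DICOM
        small_file = compress(dicom_file)         # e.g., about 3:1 for color US
        queue_for_send(small_file)                # hand off to the DICOM send step
        seen.add(exam)

# a daemon would call poll_once() in a loop with a short sleep between passes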
8.4.2 Image Display at the PACS Modality Workstation
To request other modality images from a US workstation, the patient’s image file is first queried/retrieved from the PACS long-term archive. The archive transmits the file to the HI-PACS gateway computer, which sends it to the US PACS gateway. The US gateway computer transmits the file to the US server and then the workstation. Other PACS modules that can be interfaced to the HI-PACS are the nuclear medicine PACS module, the emergency room module, the ambulatory care module, and the teleradiology module. The requirements for interfacing these modules are an
individual specialized DICOM gateway (like the US gateway) in the respective module with DICOM commands for communication and DICOM format for the image file. Multiple modules can be connected to the HI-PACS by using several pairs of acquisition gateways as shown in Figure 8.8.
8.5 DICOM CONFORMANCE PACS BROKER

8.5.1 Concept of the PACS Broker
The PACS broker is an interface between the radiology information system (RIS) and the PACS (or the HIS, when the RIS is a component of the HIS). There are very few direct connections between a RIS and a PACS because most current RIS can only output relevant patient demographic and exam information in HL7 format (Section 7.2), which is a format that most PACS cannot receive and interpret. A PACS broker acts as an interface by processing the HL7 messages received from different RIS systems and mapping the data into easily customizable database tables. It can then process requests made by various PACS components and provide them with the proper data format and content. Figure 8.9 shows the architecture and functions of the PACS broker. It receives HL7 messages from the RIS and maps them into its own customizable database tables. Components can then request specific data and formats from the broker. These include the DICOM worklist for an acquisition modality, scheduled exams for prefetching, patient location for automatic distribution of PACS exams to workstations in the hospital ward areas, and radiology reports for PACS viewing workstations.

8.5.2 An Example of Implementation of a PACS Broker
The following is an example of how a PACS broker is implemented, assuming that the hospital site has an existing RIS. The hospital plans to implement PACS; however, the PACS needs particular information from the RIS. No interface is
Figure 8.9 Functions of a PACS (DICOM) broker.
available between the existing RIS and the new PACS to transfer the data. Therefore, a commercial broker is purchased and implemented to act as an interface between the two systems. The RIS can output HL7 messages triggered by particular events in the radiology work flow (e.g., Exam scheduled, Exam dictated, Exam completed). The broker receives the HL7 messages from the RIS. Following the specifications provided by the RIS, the broker has been preconfigured to map the incoming HL7 message data into particular fields of its own database tables. The PACS components can now communicate directly with the broker to make requests for information. Some important data that are requested by PACS include:
(1) A worklist of scheduled exams for an acquisition modality
(2) Radiology reports and related patient demographic and exam information for PACS viewing workstations
(3) Patient location for automatic distribution of PACS exams to workstations in the wards
(4) Scheduled exam information for prefetching by the PACS server to PACS viewing workstations
These requests are in addition to general patient demographic data and exam data that are needed to populate a DICOM header of a PACS exam.
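For illustration, the broker's mapping step can be sketched as parsing the pipe-delimited HL7 message into segments and copying selected fields into a worklist-style record; the sample message and the choice of fields are assumptions, not a site interface specification.

SAMPLE_HL7 = (
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|200310011200||ORM^O01|123|P|2.3\r"
    "PID|1||P001||DOE^JOHN\r"
    "OBR|1|A123||XR CHEST PA^^LOCAL\r"
)

def hl7_to_worklist(message):
    # index each segment (MSH, PID, OBR, ...) by its three-letter name
    segments = {line.split("|")[0]: line.split("|")
                for line in message.split("\r") if line}
    pid, obr = segments["PID"], segments["OBR"]
    return {
        "patient_id":   pid[3],                    # PID-3 patient identifier
        "patient_name": pid[5].replace("^", " "),  # PID-5 patient name
        "order_number": obr[2],                    # OBR-2 placer order number (illustrative)
        "procedure":    obr[4].split("^")[0],      # OBR-4 universal service ID
    }

print(hl7_to_worklist(SAMPLE_HL7))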
8.6 IMAGE PREPROCESSING
In addition to receiving images from imaging devices, the acquisition gateway computer also performs certain image preprocessing functions before images are sent to the PACS controller server or workstations. There are two categories of preprocessing functions. The first is related to the image format, for example, a conversion from the manufacturer's format to a DICOM-compliant format of the PACS. This type of preprocessing involves mostly data format conversion and is described in Section 7.4.1. The second type of preprocessing prepares the image for optimal viewing at the display workstation. To achieve optimal display, an image should have proper size, good initial display parameters (i.e., a suitable lookup table; see Chapter 11), and proper orientation; any visually distracting background should be removed. Preprocessing is modality specific in the sense that each imaging modality has a specific set of preprocessing requirements. Some preprocessing functions may work well for certain modalities but poorly for others. In the remainder of this section we discuss preprocessing functions according to each modality.

8.6.1 Computed Radiography (CR) and Digital Radiography (DR)
8.6.1.1 Reformatting
A CR image can have three different sizes (given here in inches) depending on the type of imaging plate used: L = 14 × 17, H = 10 × 12, or B = 8 × 10 (high-resolution plate). These plates give rise to 1760 × 2140, 1670 × 2010, and 2000 × 2510 matrices, respectively (or similar matrices, depending on the manufacturer). There are two methods of mapping a CR image matrix size to a given size monitor. First, because display monitor screens vary in pixel sizes, a reformatting of
the image size from these three different plate dimensions may be necessary in order to fit a given monitor. In the reformat preprocessing function, because both the image and the monitor size are known, a mapping between the size of the image and the screen is first established. We use as an example two of the most commonly used screen sizes: 1024 × 1024 and 2048 × 2048. If the size of an input image is larger than 2048 × 2048, the reformatting takes two steps. First, a two-dimensional bilinear interpolation is performed to shrink the image at a 5 : 4 ratio in both directions; this means that an image size of 2000 × 2510 is reformatted to 1600 × 2008. Second, a suitable number of blank lines are added to extend the size to 2048 × 2048. If a 1024 × 1024 image is desired, a further subsampling ratio of 2 : 1 from the 2048 image is performed. For imaging plates that produce pixel matrix sizes smaller than 2048 × 2048, the image is extended to 2048 × 2048 by adding blank lines and then subsampling (if necessary) to obtain a 1024 × 1024 image. The second method is to center the CR image on the screen without altering its size. For an image size smaller than the screen, the screen is filled with blank pixels and lines. For an image size larger than the screen, only a portion of the image is displayed; a scroll function is used to roam the image (see Chapter 11). Similar methods can be used for DR images.
8.6.1.2 Background Removal
The second CR preprocessing function is to remove the image background due to X-ray collimation. In pediatric, extremity, and other special localized body part images, collimation can result in the inclusion of significant white background that should be removed in order to reduce unwanted background in the image during soft copy display. This topic is discussed extensively in Section 3.3.4, with results shown in Figure 3.15. After background removal, the image size will be different from the standard L, H, and B sizes. To center an image such that it occupies the full monitor screen, it is sometimes advantageous to automatically zoom and scroll the background-removed image for an optimal display. Zoom and scroll functions are standard image processing functions and are discussed in Chapter 11. Similar methods can be used for DR images.
8.6.1.3 Automatic Orientation
The third CR preprocessing function is automatic orientation. "Properly oriented" means that when it is displayed on a monitor, the image appears in the conventional way expected by a radiologist about to read the hard copy image on a light box. Depending on the orientation of the imaging plate with respect to the patient, there are eight possible orientations (Fig. 8.10). An image can be oriented correctly by rotating it 90° clockwise, 90° counterclockwise, or 180°, or by flipping it about the y-axis. The algorithm first determines from the image header the body region in the image. Three common body regions are the chest, abdomen, and extremities. Let us first consider the automatic orientation of anterior-posterior (AP) or PA chest images. For AP or PA chest images, the algorithm searches for the location of three characteristic objects: spine, abdomen, and neck or upper extremities. To find the spine and abdomen, horizontal and vertical pixel value profiles (or line scans), evenly distributed through the image, are taken. The average density of each profile is
Figure 8.10 Eight possible orientations of an AP chest image: The automatic orientation program determines the body region shown and adjusts it to the proper orientation for viewing. (Courtesy of Ewa Pietka.)
calculated and placed in a horizontal or a vertical profile table. The two tables are searched for local maxima to find a candidate location. Before a decision is made regarding which of the two possible (horizontal or vertical) orientations marks the spine, however, it is necessary to search for the densest area that could belong to either the abdomen or the head. To find this area, an average density value is computed over two consecutive profiles taken from the top and bottom of both tables. The maximum identifies the area to be searched. From the results of these scans and computations, the orientation of the spine is determined to be either horizontal or vertical. A threshold image (threshold at the image's average gray value) is used to find the location of the neck or upper extremities along the axis perpendicular to the spine. The threshold marks the external contours of the patient and separates them from the patient background (the area that is exposed but outside the patient). Profiles of the threshold image are scanned in a direction perpendicular to the spine (identified earlier). For each profile, the width of the intersection between the scan line and the contour of the patient is recorded. Then the upper extremities for an AP or PA image are found on the basis of the ratios of minimum and maximum intersections for the threshold image profiles. This ratio also serves as the basis for distinguishing between AP (or PA) and lateral views. The AP and PA images are oriented on the basis of the spine, abdomen, and upper extremity location. For lateral chest images, the orientation is determined
by using information about the spine and neck location. This indicates the angle that the image needs to be rotated (0°, 90° counterclockwise, 180°, or y-axis flipped). For abdomen images, again there are several stages. First the spine is located by using horizontal and vertical profiles, as before. The average density of each profile is calculated, and the largest local maximum defines the spine location. At the beginning and end of the profile marking the spine, the density is examined. Higher densities indicate the subdiaphragm region. The locations of the spine and abdomen determine the angle at which the image is to be rotated. For hand images, the rotation is performed on a threshold image: a binary image for which all pixel values are set at zero if they are below the threshold value and at one otherwise. To find the angle (90° clockwise, 90° counterclockwise, or 180°), two horizontal and two vertical profiles are scanned parallel to the borders. The distances from the borders are chosen initially as one-fourth of the width and height of the hand image, respectively. The algorithm then searches for a pair of profiles: one that intersects the forearm and a second that intersects at least three fingers. If no such pair can be found in the first image, the search is repeated. The iterations continue until a pair of profiles (either vertical or horizontal), meeting the abovementioned criteria, is found. On the basis of this profile pair location, it is possible to determine the angle at which the image is to be rotated. DR images do not have the orientation problem because the relative position between the DR receptor and the patient always remains the same. 8.6.1.4 Lookup Table Generation The fourth preprocessing function for CR images is the generation of a lookup table. The CR system has a built-in automatic brightness and contrast adjustment; because CR is a 10-bit image, however, it requires a 10- to 8-bit lookup table for mapping onto the display monitor. The procedure is as follows. After background removal, the histogram of the image is generated. Two numbers, the minimum and the maximum, are obtained from the 5% and 95% points on the cumulative histogram, respectively. From these two values, one computes the two parameters in the lookup table: level = (maximum + minimum)/2 and window = (maximum - minimum). This is the default linear lookup table for displaying the CR images. For CR chest images, preprocessing at the imaging device (or the exposure itself) sometimes results in images that are too bright, lack contrast, or both. For these images, several piecewise-linear lookup tables can be created to adjust the brightness and contrast of different tissue densities of the chest image. These lookup tables are created by first analyzing the image gray level histogram to find several key break points. These break points serve to divide the image into three regions: background (outside the patient but still within the radiation field), soft tissue region (skin, muscle, fat, and overpenetrated lung), and dense tissue region (mediastinum, subdiaphragm, and underpenetrated lung). From these breakpoints, different gains can be applied to increase the contrast (gain or slope of the lookup table >1) or reduce the contrast (gain or slope <1) of each region individually. In this way, the brightness and contrast of each region can be adjusted depending on the application. 
If necessary, several lookup tables can be created to enhance the radiographic dense and soft tissues, with each having different levels of enhancement. These lookup tables can be easily built in and inserted
into the image header and applied at the time of display to enhance different types of tissues. Similar methods can be used for DR images.
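The default window/level computation described in Section 8.6.1.4 can be sketched as follows (a simplified illustration that approximates the 5% and 95% cumulative histogram points by sorting the pixel values):

def default_window_level(pixels, lo_frac=0.05, hi_frac=0.95):
    """pixels: flat list of (background-removed) pixel values, e.g., 10-bit CR data."""
    values = sorted(pixels)
    minimum = values[int(lo_frac * (len(values) - 1))]   # ~5% point of the cumulative histogram
    maximum = values[int(hi_frac * (len(values) - 1))]   # ~95% point
    level = (maximum + minimum) // 2
    window = maximum - minimum
    return window, level

# toy example with a synthetic image region
w, l = default_window_level([100, 200, 300, 400, 500, 600, 700, 800])
print(w, l)    # window ~ spread of the central 90% of values, level ~ its midpoint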
8.6.2 Digitized X-Ray Images
Digitized X-ray images share some preprocessing functions with the CR: reformatting, background removal, and lookup table generation. However, each of these algorithms requires some modifications. Most X-ray film digitizers are 12 bits and allow the user to specify a field in the film to be digitized; as a result, the digitized image differs from the CR image in three aspects. First, the size of the digitized image can have various dimensions instead of just three. Second, there will be no background in the digitized image because the user can effectively eliminate it by positioning the proper window size during digitizing. Third, the image is 12 bits instead of 10 bits. For reformatting, the mapping algorithm therefore should be modified for multiple input dimensions. No background removal is necessary, although the zoom and scroll functions may still be needed to center the image to occupy the full screen size. The lookup table parameters are computed from 12 instead of 10 bits. Note that no automatic orientation is necessary, because the user would have oriented the image properly during the digitizing.

8.6.3 Digital Mammography
A digital mammogram is generally 4K × 5K × 12 bits, with a lot of background outside the breast contour. For this reason, image preprocessing is necessary to optimize both the spatial and the gray-level presentation on the monitor. Three types of preprocessing functions are of importance. The first function is to perform a background removal outside the breast. The concept is described in Section 3.3.4. In this case, the background removal is simpler because the separation between the breast tissues and the background is quite distinct. An edge detection algorithm can automatically delineate the boundary of the breast. The background is replaced by the average gray level of the breast tissues (see Fig. 5.3). The second function is to optimize the default brightness and contrast of the digital mammogram presented on the monitor. A preprocessing algorithm determines the histogram of the image. The 5% and 95% cumulative histogram values are used to generate the initial mean and window for the display of the image. The third function is to automatically correct the orientation of the mammogram based on left versus right side as well as the specific mammographic projection.
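As a much-simplified sketch of the first function (a plain threshold stands in for the edge-detected breast boundary described above, and whether breast pixels lie above or below the threshold depends on how the detector encodes intensity), the background can be replaced by the mean breast gray level as follows:

import numpy as np

def suppress_background(image, threshold):
    """image: 2-D array of pixel values; threshold: assumed breast/background cut."""
    breast = image > threshold                    # crude breast mask (assumption)
    mean_breast = image[breast].mean()            # average gray level of breast tissue
    return np.where(breast, image, mean_breast)   # fill the background with that mean

# toy example with a 3 x 3 "image"
demo = np.array([[5, 5, 80], [5, 90, 100], [5, 85, 95]])
print(suppress_background(demo, threshold=50))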
8.6.4 Sectional Images—CT, MR, and US
In sectional imaging, the only necessary preprocessing function is lookup table generation. The two lookup table parameters, window and level, can be computed from each image (either 8 or 12 bits) in a sectional examination similar to that described for the CR image. The disadvantages of using this approach are twofold: inspection of many images in a sectional examination can turn out to be a very time-consuming process, and a lookup table is needed for each image. The requirement for separate lookup tables will delay the multiple-image display on the screen because the display program must perform a table lookup for each image. A method
to circumvent the drawback of many lookup tables is to search the corresponding histograms for the minimum and maximum gray levels of a collection of images in the examination and generate a lookup table for all images. For US, this method works well because the US signals are quite uniform between images. In CT, several lookup tables can be generated for each region of the body for optimal display of lungs, soft tissues, or bones. This method in general works well for CT but would not for MR. In MR, even though a series of images may be generated from the same pulse sequence, the strength of the body or surface coil can vary from section to section, which creates variable histograms for each image. For this reason, the automatic lookup table generation method would perform poorly for a set of MR images; the development of an optimal display algorithm remains challenging.
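The single-table approach for a sectional examination can be sketched as follows (Python/NumPy; the 5% and 95% cut points follow the CR convention above and are assumptions of this sketch, not part of the method as stated): the gray levels of all images in the series are pooled, one window and level pair is derived, and the same mapping is then applied to every image. As noted, this works well for US and CT but performs poorly for MR.

import numpy as np

def series_window_level(images, low=0.05, high=0.95):
    # Pool the pixel values of every image in the examination
    pooled = np.concatenate([img.ravel() for img in images])
    minimum = float(np.quantile(pooled, low))       # series-wide minimum estimate
    maximum = float(np.quantile(pooled, high))      # series-wide maximum estimate
    level = (maximum + minimum) / 2.0
    window = max(maximum - minimum, 1.0)
    return window, level

def apply_window_level(img, window, level):
    # Map pixel values to 0..255 with the shared window/level
    low_edge = level - window / 2.0
    return (np.clip((img - low_edge) / window, 0.0, 1.0) * 255.0).astype(np.uint8)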
8.7 AN EXAMPLE OF A GATEWAY IN A CLINICAL PACS ENVIRONMENT
8.7.1 Gateway in a Clinical PACS
This section describes a gateway in a clinical PACS at St. John's Healthcare Center, Santa Monica, CA. Figure 8.11 shows the location of the acquisition gateway, which sits behind (in the sense of data flow) a quality control (QC) workstation. The images from the imaging modalities are first sent to the QC workstation, where a technician views the images and releases them for transmission if the quality is acceptable. The QC workstation then sends the images to the acquisition gateway and on to the proper workstations and to the PACS archive for permanent storage. These workstations are located in the main radiology department; the PACS archive is located at the St. John's Data Center. With a network backbone of 100 Mbits/s, data transmissions are fast.

8.7.2 Clinical Operation Conditions and Reliability: Weaknesses and Single Points of Failure

Because the imaging modalities are continuously sending large numbers of images to the gateway, the workload of the gateway is very heavy, especially when multi-
Figure 8.11 A clinical layout of the acquisition gateway in PACS at St. John Healthcare Center, Santa Monica, CA.
Figure 8.12 Manual switching between two gateways in PACS when one fails.
ple modalities send all images to one gateway or the gateway shares its computer with other PACS processes such as image preprocessing. The gateway computer may not have enough power to handle the data flow during peak hours. The gateway is a single-point-of-failure component between the modalities and the PACS archive server. Even though there are multiple gateways in a PACS, switching from a failed gateway to another gateway is manual. Because DICOM communication requires that the proper communication parameters, such as the AE title, IP address, and port number, be set up correctly, an experienced PACS administrator is needed to reconfigure and switch gateways correctly and quickly. Recovering from a gateway failure may therefore take from several minutes to half an hour, which affects the data flow of the PACS. Figure 8.12 shows the manual switching between two gateways in PACS. A built-in fail-safe mechanism in the imaging device that automatically switches from a failing gateway to another would be a better solution.
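A fail-safe mechanism of the kind suggested above could, in principle, keep a prioritized list of gateway destinations (AE title, IP address, port number) at the sending device and switch automatically when the primary gateway stops accepting connections. The Python sketch below is purely illustrative: the gateway addresses are hypothetical, and it only tests TCP reachability of the configured DICOM port rather than performing a real DICOM association (a production implementation would verify with a DICOM C-ECHO).

import socket

# Hypothetical DICOM destinations in priority order: (AE title, IP address, port)
GATEWAYS = [
    ("GATEWAY1", "192.168.1.10", 104),    # primary acquisition gateway
    ("GATEWAY2", "192.168.1.11", 104),    # standby gateway
]

def select_gateway(gateways=GATEWAYS, timeout=2.0):
    # Return the first gateway whose port accepts a TCP connection
    for ae_title, host, port in gateways:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return ae_title, host, port
        except OSError:
            continue                       # unreachable or refused; try the next gateway
    raise RuntimeError("no acquisition gateway is reachable")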
CHAPTER 9
Communications and Networking
Figure 9.0 PACS networks. Generic PACS components and data flow: HIS database and reports, database gateway, imaging modalities, acquisition gateway, PACS controller and archive server, application servers, workstations, and Web server.
This chapter discusses three major topics in PACS networking. The first topic is the basic knowledge of communication and networking related to PACS (Sections 9.1, 9.2, and 9.3). The second topic is PACS network design and examples (Sections 9.4 and 9.5). The third topic is emerging network technologies (Internet 2, wireless, and scalable networks) that may benefit future PACS development (Sections 9.6, 9.7, and 9.8). Figure 9.0 shows the PACS networks connecting all components.
9.1 BACKGROUND
9.1.1 Terminology
Communication is the transfer of information from one place to another, usually by way of media of some type. Media may be either bound (cables) or unbound (broad-
cast). Analog communication systems encode the information into some continuum (wave form video) of signal (voltage) levels. Digital systems encode the information into two discrete states (“0” and “1” called bits) and rely on the collection of these binary states to form meaningful data. A communication standard encompasses detailed specifications of the media, the explicit physical connections, the signal levels and timings, the packaging of the signals, and the high-level software necessary for the transport. A video communication standard describes the characteristics of composite video signals including interlace or progressive scan, frame rate, line and frame retrace times, number of lines per frame, and number of frames per second. In a PACS, the soft copy display on the monitor is video signals; depending on the types of monitor used, these video signals also follow certain standards. In digital communications, the packaging of the bits to form bytes, words, packets, blocks, and files is usually referred to as a communication protocol. Serial data transmission moves digital data, one bit at a time, over a single wire or a channel. This single bit stream is reassembled into meaningful byte/word/packet/block/file data at the receiving end of a transmission. On the other hand, parallel data transmission uses many wires to transmit bits in parallel. Thus, at any moment in time, a serial wire has only one bit present but a set of parallel wires may have an entire byte or word present. Consequently, parallel transmission provides an n-fold increase in transmission speed, where n is the number of wires used. In applications that call for maximum speed, synchronous communication is used. That is, the two communication nodes share a common clock and data are transmitted in a strict way according to this clock. Asynchronous communication, used when simplicity is desired, relies on start and stop signals to identify the beginning and end of data packets. Accurate timing is still required, but the signal encoding allows a wide variance in the timing on the different ends of the communication line. We discuss the asynchronous transfer mode (ATM) technology in Section 9.1.3.2. In digital communication, the most primitive protocol is the RS-232 asynchronous standard for point-to-point communication, promulgated by the Electrical Industry Association (EIA). This standard specifies the signal and interface mechanical characteristics, gives a functional description of the interchange circuits, and lists application-specific circuits. This protocol is mostly used for peripheral devices (e.g., the trackball and mouse in a display workstation). Current digital communication methods are mostly networking. Table 9.1 lists five popular network topologies, and Figure 9.1 shows their architecture. The bus, ring, and star architectures are most commonly used in PACS applications. A network that is used in a local area (e.g., within a building or a hospital) is called a local area network, or LAN. If it is used outside of a local area, it is called a metropolitan area network (MAN) or wide area network (WAN), depending on the area covered. 9.1.2
Network Standards
The two most commonly used network standards in PACS applications are the DOD standard developed by the U.S. Department of Defense and the OSI (Open Systems Interconnect) developed by the International Standards Organization (ISO). As shown in Figure 9.2, the former has four-layer protocol stacks and the
TABLE 9.1 Five Commonly Used Network Topologies

Bus. In PACS applications: Ethernet. Advantages: simplicity. Disadvantages: difficult to trace problems when a channel fails.
Tree. In PACS applications: video broadband headend. Advantages: simplicity. Disadvantages: bottleneck at the upper level.
Ring. In PACS applications: fiber distributed data interface (FDDI), high-speed ATM SONET ring. Advantages: simplicity, no bottleneck. Disadvantages: in a single ring, the network fails if the channel between two nodes fails.
Star (hub). In PACS applications: high-speed Ethernet switch, ATM switch. Advantages: simplicity, simple to isolate the fault. Disadvantages: bottleneck at the hub or switch; single point of failure at the switch.
Mesh. In PACS applications: used in wide area network applications. Advantages: immunity to bottleneck failure. Disadvantages: complicated.
TABLE 9.2 The Seven-Layer Open Systems Interconnect (OSI) Protocols

Layer 7, Application layer: provide services to users.
Layer 6, Presentation layer: transformation of data (encryption, compression, reformatting).
Layer 5, Session layer: control applications running on different workstations.
Layer 4, Transport layer: transfer of data between end points with error recovery.
Layer 3, Network layer: establish, maintain, and terminate network connections.
Layer 2, Data link layer: medium access control (network access such as collision detection and token passing, and network control) and logical link control (send and receive data messages or packets).
Layer 1, Physical layer: hardware layer.
latter has seven-layer stacks. In the DOD protocol stack, the FTP (File Transfer Protocol) and TCP/IP (Transmission Control Protocol/Internet Protocol) are two popular communication protocols used widely in the medical imaging field. The seven layers in the OSI protocols are defined in Table 9.2. We now use an example to explain how data are sent between two nodes in a network using the DOD TCP/IP protocol, which is used by PACS communications. Figure 9.3 shows the procedure (the steps by which a block of data is transmitted, with the protocol information listed on the left). First, the block of data is separated into segments (or packets) of data, whereupon each segment is given a TCP header, then an IP header, and finally a packet header and a packet trailer. The packet of encapsulated data is then sent, and the process is repeated until the entire block of data has been transmitted. The encapsulation procedure is represented by the boxes on the right in Figure 9.3. In the TCP/IP protocols, the storage overheads of the transmission are the TCP header, the IP header, and the packet header and trailer; the time overheads are the encoding and decoding.
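To make the encapsulation concrete, the short Python sketch below transmits a block of data between two nodes with the standard socket interface. The TCP and IP layers add the headers of Figure 9.3 transparently, so the application supplies only the destination address, the port, and the data; the host name and port used here are hypothetical.

import socket

def send_block(data, host="pacs-gateway", port=5000, chunk=8192):
    # The kernel segments the stream into packets and adds the TCP, IP,
    # and link-layer headers; the application never sees them.
    with socket.create_connection((host, port)) as conn:
        for offset in range(0, len(data), chunk):
            conn.sendall(data[offset:offset + chunk])

def receive_block(port=5000, chunk=8192):
    # Accept one connection and reassemble the original byte stream.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            pieces = []
            while True:
                piece = conn.recv(chunk)
                if not piece:
                    break
                pieces.append(piece)
            return b"".join(pieces)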
Figure 9.1 Five commonly used network topology architectures.
9.1.3 Network Technology
Now we turn to two commonly used network technologies in PACS applications: the Ethernet and asynchronous transfer mode (ATM) running on TCP/IP communication protocols.
[Figure 9.2 diagram: the seven-layer OSI protocol stack (application, presentation, session, transport, network, data link, and physical layers, numbered 7 to 1) shown side by side with the four-layer DOD protocol stack: the process layer (FTP, TELNET), the host-to-host layer (TCP), the Internet layer (IP), and the network access layer.]
Figure 9.2 Correspondences between the seven-layer OSI and the four-layer DOD communication protocols.
9.1.3.1 Ethernet and Gigabit Ethernet
Standard Ethernet The standard Ethernet (luminiferous ether), which is based on IEEE Standard 802.3, Carrier Sense Multiple Access with Collision Detection (CSMA/CD), uses the bus or star topology (see Fig. 9.1). It operates from 10 Mbits/s to 1 Gbit/s either on a half-inch coaxial cable, twisted pair wires, or fiber-optic cables. Data are sent out in packets to facilitate the sharing of the cable. All nodes on the network connect to the backbone cable via Ethernet taps or Ethernet switches (taps are seldom used now except in older establishments). New taps/switches can be added anywhere along the backbone, and each node possesses a unique node address that allows routing of data packets by hardware. Each packet contains a source address, a destination address, data, and error detection codes. In addition, each packet is prefaced with signal detection and transmission codes that ascertain status and establish the use of the cable. For twisted pair cables, the Ethernet concentrator acts as the backbone cable. As with all communication systems, the quoted operating speed or signaling rate represents the raw throughput speed of the communication channel—in this case a coaxial cable with a base signal of 10 Mbits/s. The Ethernet protocol calls for extensive packaging of the data. Because a package may contain as many as 1500 bytes or even more for image transmission, a single file is usually broken up into many
[Figure 9.3 diagram: a block of data is divided into segments; each segment carries an application header (AH), then a TCP header (destination port, data sequence number, checksum), then an IP header (destination computer address), and finally a packet header and trailer (subnetwork address) added by the network access layer.]
Figure 9.3 Example of sending a block of data from one network node to another with the TCP/IP protocol.
packets. This packaging is necessary to allow proper sharing of the communication channel. It is the job of the Ethernet interface hardware to route and present the raw data in each packet to the necessary destination computer. Dependent on the type of computer and communication software used, the performance of a node at a multiple-connection Ethernet can deteriorate from 10 Mbits/s to as slow as 50 Kbits/s. Fast Ethernet and Gigabit Ethernet Advances in fast Ethernet (100 Mbit/s) and gigabit Ethernet (1.0 Gbit/s or higher) switches allows nodes to be connected to them with very high-speed performance. Fast Ethernet is in the same performance ranges as the ATM OC-3 (155 Mbits/s), and the gigabit Ethernet has better performance than the ATM OC-12 (622 Mbits/s; see Section 9.1.3.2). High-speed Ethernet technology is a star topology very much like the ATM. Each switch allows for a certain number of connections to the workstations through a standard 100 Mbits/s board or an adapter board in the workstation for higher-speed connection. A gigabit Ethernet switch can be branched out to 100 Mbit/s work-
Figure 9.4 Schematic of gigabit Ethernet switch applications in PACS.
stations and 100 Mbits/s Ethernet switches, which in turn can be stepped down to several 10 Mbit/s switches and 10 Mbit/s workstations connections as shown in Figure 9.4. Because Ethernet is used for LANs, and ATM can be used for both LANs and WANs, it is important to know that the gigabit Ethernet switch can also be used to connect to ATM OC-3 switches by an ATM-Ethernet converter, providing a connection for these two technologies (Fig. 9.4). Currently, a two-node connection in a Gbit/s Ethernet switch can achieve about 500 Mbit/s, sufficient for most medical imaging applications. 9.1.3.2 ATM (Asynchronous Transfer Mode) Ethernet was originally designed for LAN applications, but radiological image communication should have no physical or logical boundaries between LANs and WANs. ATM stepped in as a good technology for both LANs and WANs. ATM is a method for transporting information that splits data into fixed-length cells, each consisting of 5 bytes of ATM transmission protocol header information and 48 bytes of data information. Based on the virtual circuit-oriented packet-switching theory developed for telephone circuit switching applications, ATM systems were designed on the star topology, in which an ATM switch serves as a hub. The basic signaling rate of ATM is 51.84 Mbit/s, called Optical Carrier Level 1 (OC-1). Other rates are multiples of OC-1, for example, OC-3 (155 Mbit/s), OC-12 (622 Mbit/s), OC-48 (2.5 Gbit/s), and
TABLE 9.3 Communication Devices for Internetwork Connection

Repeater: physical layer (1); connects similar networks but different media.
Bridge: data link layer (2); connects similar networks.
Router: network layer (3); connects similar or dissimilar networks.
Gateway: application layer (7); connects different network architectures.

* Numbers in parentheses give the OSI layer as defined in Table 9.2.
up. In imaging applications, the standard TCP/IP can be used for both LAN and WAN, as can application software built on TCP/IP, such as DICOM. In Chapter 14, we discuss a SONET (synchronous optical NETwork) ATM ring that can be used as a high-speed WAN supporting multiple campuses. Combining ATM and Gbit/s Ethernet technologies forms the backbones of the Internet 2 (see Section 9.6).
9.1.4 Network Components for Connectivity
Communication protocols are used to pass data from one node to another node in a network. To connect different networks, additional hardware and software devices are needed; these are repeaters, bridges, routers, and gateways. A repeater passes data bit by bit in the physical layer. It is used to connect two networks that are similar but use different media; for example, a thinnet and a twisted pair Ethernet (see Section 9.2.1) might be connected by means of hardware in layer 1 of the OSI standard. A bridge connects two similar networks (e.g., Ethernet to Ethernet) by both hardware and software in layer 2. A router directs packets by using a network layer protocol (layer 3); it is used to connect two or more networks, similar or not (e.g., to transmit data between WAN, MAN, and LAN). A gateway, which connects different network architectures (e.g., RIS and PACS), uses an application-level (level 7) protocol. A gateway is usually a computer with dedicated communication software. Table 9.3 compares these four communication devices.
9.2 CABLE PLAN
9.2.1 Types of Networking Cables
This section describes several types of cables used for networking. The generic names have the form "10 Base X," where 10 means 10 Mbit/s and X represents a medium type specified by IEEE Standard 802.3, because some of these cables were developed for Ethernet use. 10 Base5, also called thicknet or thick Ethernet, is a coaxial cable terminated with N Series connectors. 10 Base5 is a 10-MHz, 10 Mbit/s network medium with a distance limitation of 500 m. This cable is typically used as an Ethernet trunk or backbone path of the network. Cable impedance is 50 ohms (Ω). 10 Base2, also called thinnet or cheapernet, is terminated with BNC connectors. Also used as an Ethernet trunk or backbone path for smaller networks, 10 Base2 is
a 10-MHz, 10 Mbit/s medium with a distance limitation of 185 m. Cable impedance is 50 Ω. 10 BaseT, also called UTP or unshielded twisted pair, is terminated with AMP 110 or RJ-45 connectors following the EIA 568 standard. With a distance limitation of 100 m, this low-cost cable is used for point-to-point applications such as Ethernet and copper distributed data interface (CDDI), not as a backbone. Category 3, 4, and 5 UTP can all be used for Ethernet, but Category 5, capable of 100 MHz and 100 Mbit/s, is recommended for medical imaging applications. 100 BaseT is used for the fast Ethernet connection to support 100 Mbit/s. Fiber-Optic Cables normally come in bundles of from 1 to 216 fibers. Each fiber can be either multimode (62.5 µm in diameter) or single mode (9 µm). Multimode, normally referred to as 10 BaseF, is used for Ethernet and ATM (see Section 9.1.3.2). Single mode is used for longer-distance communication. 10 BaseF cables are terminated with SC, ST, SMA, or FC connectors, but usually ST. For Ethernet applications, single mode has a distance limitation of 2000 m and can be used as a backbone segment or point to point. 10 BaseF cables are used for networking. Patch cords are used to connect a network with another network or a network with an individual component (e.g., imaging device, image workstation). Patch cords usually are AUI (Attachment Unit Interface, DB 25), UTP, or short fiber-optic cables with the proper connectors. Air-Blown Fiber (ABF) is a recent technology that makes it possible to use compressed nitrogen to "blow" fibers as needed through a tube distribution system (TDS). Tubes come in quantities from 1 to 16. Each tube can accommodate bundles of from 1 to 16 fibers, either single mode or multimode. The advantage of this type of system is that fibers can be blown in as needed once the TDS has been installed. Video Cables are used to transmit images to high-resolution monitors. For 2 K monitors, 50-Ω cables are used: RG 58 for short lengths or RG 214U for distances up to 150 ft. RG 59, a 75-Ω cable used for 1 K monitors, can run distances of 100 ft.
9.2.2 The Hub Room
A hub room contains repeaters, fanouts (one to-many repeaters), bridges, routers, switches, gateway computers, and other networking equipment for connecting and routing/switching information to and from networks. This room also contains the center for networking infrastructure media such as thicknet, thinnet, UTP, and fiberoptic patch panels. Patch panels, which allow the termination of fiber optics and UTP cables from various rooms in one central location, usually are mounted in a rack. At the patch panel, networks can be patched or connected from one location to another or by installing a jumper from the patch panel to a piece of networking equipment. One of the main features of the hub room is its connectivity to various other networks, rooms, and buildings. Air conditioning and backup power are vital to a hub room to provide a fail-safe environment. If possible, semi-dust-free conditions should be maintained. Any large network installation needs segmented hub rooms, which may span different rooms and or buildings. Each room should have multiple network connections and patch panels that permit interconnectivity throughout the campus. The center or main hub room is usually called the network distribution center (NDC). The NDC houses concentrators, bridges, main routers, and switches. It should be
possible to connect via a computer to every network within the communication infrastructure from this room. From the NDC, the networks span to a different building to a room called the building distribution frame (BDF). The BDF routes information from the main subnet to departmental subnets, which may be located on various floors within a building. From the BDF, information can be routed to an intermediate distribution frame (IDF), which will route or switch the network information to the end users. Each hub room should have a predetermined path for cable entrances and exits. For example, four 3-in. sleeves (two for incoming and two for outgoing cables) should be installed between the room and the area the cables are coming from, to allow a direct path. Cable laddering is a very convenient way of managing cables throughout the room. The cable ladder is suspended from the ceiling, which allows the cables to be run from the 3-in. sleeves across the ladder and suspends the cables down to their end locations. Cable trays can also be mounted on the ladder for separation of coaxial, UTP, and fiber optics. In addition to backup power, access to emergency power (provided by external generators typically in a hospital environment) is necessary. The room should also have a minimum of two dedicated 20-A, 120-V and 220-V quad power outlets—more may be required, depending on the room size. Figure 9.5 shows the generic configuration of hub room connections. 9.2.3
Cables for Input Sources
Usually, an imaging device is already connected to an existing network with one of the four media types: thicknet, thinnet, fiber optics, or twisted pair. A tap or switch of the same media type can be used to connect a gateway computer to this network, or the aid of a repeater may be required, as in the following example. Suppose a CT scanner is already on a thinnet network, and the acquisition gateway computer has access only to twisted pair cables in its neighborhood. The system designer might select a repeater residing in a hub room that has an input of thinnet and an output of UTP to connect the network and the computer. When cables must run from a hub room to an image acquisition device area, it is always advisable to lay extra cables with sufficient length to cover the diagonal of the room and from the ceiling to the floor at the time of installation, because it is easier and less expensive than pulling the cables later to clear the way for relocation or upgrade. When installing cables from the IDF or BDF to areas housing acquisition devices planned for Ethernet use at a distance less than 100 m, a minimum of one Category 5 UTP per node and four strands of multimode fiber per imaging room (CT, MR, CR) is recommended. If the fiber-optic broadband video system is also planned for multiple video distribution, additional multimode fiber-optic cables should be allocated for each input device. With this configuration, the current Ethernet technology is fully utilized and the infrastructure still has the capacity to be upgraded to accept any protocols that may be encountered in the future. 9.2.4
Cables for Image Distribution
Cable planning for input devices and image distribution differs in the sense that the former is ad hoc—that is, most of the time the input device already exists and there is not much flexibility in the cable plan. On the other hand, image distribution
Figure 9.5 Generic configuration of hub room connections. NDC is the network distribution center, BDF is the building distribution frame, and IDF is the intermediate distribution frame from which the image acquisition devices and image workstations are connected.
requires planning because the destinations usually do not have existing cables. In planning the cables for image distribution, the horizontal runs and the vertical runs should be considered separately. The vertical runs, which determine several strategic risers by taking advantage of existing telecommunication closets in the building, usually are planned first. From these closets vertical cables are run for the connections to various floors. Horizontal cables are then installed from the closets to different hub rooms, to NDC, to BDF, to IDF, and finally to the image workstation areas. All cables at the workstation areas should be terminated with proper connectors and should have enough cable slack for termination and troubleshooting.
Horizontal run cables should be housed in conduits. Because installing conduits is expensive, it is advisable whenever possible to put in a larger conduit than is needed initially, to accommodate future expansion. Very often, the installation of conduits calls for drilling holes through floors and building fire walls (core drilling). Drilling holes in a confined environment like a small telephone cable closet in a hospital is a tedious and tricky task. Extreme care must be exercised. It is important to check for the presence of any pipes or cables that may be embedded in the concrete. In addition, each hole should be at least three times the size of the diameter of the cables being installed, to allow for future expansion and to meet fire code regulations. The following tips will also be useful when planning the installation of cables. 1. Always look for existing conduits and cables, to avoid duplication. 2. If possible, use Plenum cables (fire retardant) for horizontal runs. 3. When installing cables from a BDF to multiple IDFs or from an IDF to various rooms, use fiber whenever possible. If the distance is long and if future distribution to other remote sites through this route is possible, install at least twice as many fibers as planned for the short term. 4. Label all cables and fibers at both ends with meaningful names that will not change in the near future (e.g., room numbers, building/floor number).
9.3 DIGITAL COMMUNICATION NETWORKS
9.3.1 Background
Within the PACS infrastructure, the digital communication network is responsible for transmission of images from acquisition devices to gateways, the PACS controller, and display workstations. Many computers and processors are involved in this image communication chain; some have high-speed processors and communication protocols and some do not. Therefore, in designing this network, a mixture of communication technologies must be used to accommodate the various computers and processors. The ultimate goal is to have an optimal image throughput in a given clinical environment. It must be cautioned that in PACS, image communication involves megabytes of data per transaction; this is quite different from conventional health care data transmission. Network bottlenecks will occur if the network is not designed properly for large file transmissions during peak hours. Table 9.4 describes the image transmission rate requirements of PACS. Transmission from the imaging modality device to the acquisition gateway computer is slower because the imaging modality device is generally slow in generating images. The medium speed requirement from the acquisition computer to the PACS controller depends on the type of acquisition gateway computer used. High-speed communication between the PACS controller and image display workstations is necessary because radiologists and clinicians must access images quickly. In general, 4 Mbytes/s, which is equivalent to transferring the first 2048 × 2048 × 12-bit conventional digitized X-ray image in an image sequence within 2 s, meets the average tolerable waiting time for the physician.
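The 2-s figure can be checked with a few lines of arithmetic. Assuming (as this sketch does) that each 12-bit pixel is stored in 2 bytes, a 2048 × 2048 image occupies 8 Mbytes, and a 4 Mbytes/s effective throughput delivers it in about 2 s:

def transfer_time_seconds(rows, cols, bytes_per_pixel, throughput_bytes_per_s):
    # Time to move one uncompressed image at a given effective throughput
    image_bytes = rows * cols * bytes_per_pixel
    return image_bytes / throughput_bytes_per_s

# 2048 x 2048 pixels, 12 bits stored in 2 bytes each, at 4 Mbytes/s:
print(transfer_time_seconds(2048, 2048, 2, 4 * 1024 * 1024))    # 2.0 s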
TABLE 9.4 Image Transmission Characteristics Between PACS Components

Imaging modality device to acquisition gateway computer: speed requirement slow (100 Kbytes/s); technology Ethernet; signaling rate 10 Mbits/s; cost per connection 1 unit.
Acquisition gateway computer to PACS controller: speed requirement medium (200–500 Kbytes/s); technology Ethernet/ATM/fast or gigabit Ethernet; signaling rate 100 Mbits/s; cost per connection 1–5 units.
PACS controller to display workstations: speed requirement fast (4 Mbytes/s); technology ATM/fast or gigabit Ethernet; signaling rate 155, 100, 1000 Mbits/s; cost per connection 1–10 units.

9.3.2 Design Criteria
The five design criteria for the implementation of digital communication networks for PACS are speed, standardization, fault tolerance, security, and component cost. 9.3.2.1 Speed of Transmission Table 9.4 shows that standard Ethernet is adequate between imaging devices and acquisition gateway computers. For image transfer between acquisition gateway computers and image archive servers, fast Ethernet or ATM should be used if the acquisition computer supports the technology. For image transfer between the PACS controller and display workstations, ATM, fast, or gigabit Ethernet should be used. 9.3.2.2 Standardization The throughput performance for each of the two networks described above (Ethernet and ATM) can be tuned through the judicious choice of software and operating system parameters. For example, enlarging the TCP send and receive buffer within the UNIX kernel for Ethernet and ATM network circuits can increase the throughput of networks using TCP/IP protocols. Alternatively, increasing the memory data buffer size in the application program may enhance transmission speed. The altering of standard network protocols to increase network throughput between a client-server pair can be very effective. The same strategy may prove disastrous in a large communication network, however, because it interferes with network standardization, making it difficult to maintain and service the network. All network circuits should use standard TCP/IP network protocols with a standard buffer size (e.g., 8192 bytes). 9.3.2.3 Fault Tolerance Communication networks in the PACS infrastructure should have backup. All active fiber-optic cables, twist pairs, Ethernet backbone (thicknet and twisted pair), and switches should have spares. Because the standard TCP/IP protocols are used for all networks, if any higher-speed network fails, the socket-based communications software immediately switches over to the next fastest network, and so forth, until all network circuits have been exhausted. The global Ethernet backbone, through which every computer on the PACS network is connected, is the ultimate backup for the entire PACS network.
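Referring back to the standardization criterion (Section 9.3.2.2), the TCP send and receive buffers are the parameters most often enlarged to raise throughput, although the text recommends keeping the standard size (e.g., 8192 bytes) so the network remains easy to maintain. The Python sketch below simply shows where such a buffer size would be set on a socket; the host, port, and any value larger than the standard buffer are hypothetical, site-specific choices of the kind the text cautions against.

import socket

STANDARD_BUFFER = 8192    # standard buffer size cited in the text

def open_connection(host, port, buffer_size=STANDARD_BUFFER):
    # Set explicit send/receive buffer sizes before connecting;
    # keeping the standard size preserves network standardization.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buffer_size)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buffer_size)
    sock.connect((host, port))
    return sock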
9.3.2.4 Security There are normally two network systems in a PACS network. The first comprises networks leased from or shared with the campus or hospital, managed by the central network authority (CNA). In this case, users should abide by the rules established by the CNA. Once these cables are connected, the CNA enforces its own security measures and provides service and maintenance. The second network system consists of the PACS network; its cables are confined inside conduits with terminations at the hub rooms. The hub rooms should be locked, and no one should be allowed to enter without authorization from PACS officials. The global Ethernet should be monitored around the clock with a LAN analyzer. The ATM is a closed system and cannot be tapped in except at ATM switches. Security should be set up such that authorized hospital personnel can access the network to view patients’ images at workstations, similar to the set up at a film library or film light boxes. Only authorized users are allowed to copy images and deposit information into the PACS database through the network. Refer to Sections 16.4 and 16.5 for DICOM and HIPAA security profiles. 9.3.2.5 Costs PACS communication networks are designed for clinical use and should be built as a very robust system with redundancy. Cost, although of utmost importance, should not be compromised in the selection of network components and the fault tolerance backup plan.
9.4 PACS NETWORK DESIGN
PACS networks should be designed as an internal network (within the PACS) with connections to external networks such as the manufacturer's imaging network, the radiology information network, hospital information networks, or the enterprise networks. All network connections shown in Figure 9.0 are internal networks except those going outside of the PACS domain. External network security is the responsibility of the manufacturer's imaging network manager and the hospital and enterprise network authority. Internal network security is the responsibility of the PACS administration; the network should only be accessible through layers of security measures assigned by the PACS manager.
9.4.1 External Networks
This section describes types of external networks that can be connected to the PACS network. 9.4.1.1 Manufacturer’s Image Acquisition Device Network Major imaging manufacturers have their own networks for connecting several imaging devices like CT and MR scanners in the radiology department for better image management. These networks are external networks in the PACS design. Most of these networks are Ethernet based; some use the TCP/IP protocols, and others use proprietary protocols for better network throughput. If such a network is already in existence in the radiology department, acquisition gateway computers must be connected to this external network first before CT and MR images can be transmitted to the PACS controller. This external network has no security with respect to the PACS infra-
structure because every user from outside with a password can have access to the external network and retrieve all information passing through it. 9.4.1.2 Hospital and Radiology Information Networks A hospital or university campus usually has a campus CNA that administers institutional networks. Among the information systems that go through these networks are the HIS and RIS. PACS requires data from both HIS and RIS, but these networks with connection to the PACS network are maintained by the campus CNA, over which the PACS network has no control. For this reason, as far as the PACS infrastructure design is concerned, the hospital or institutional network is an external network with respect to the PACS. 9.4.1.3 Research and Other Networks One major function of PACS is to allow users to access the wealth of the PACS database. A research network can be set up for connecting research equipment to the PACS. Research equipment should allow access to the PACS database for information query and retrieval but not to deposit information. In the PACS infrastructure design, a research network is considered an external network. 9.4.1.4 The Internet Any network carrying information from outside the hospital or the campus through the Internet is considered an external network in the PACS infrastructure design. Such a network carries supplementary information for the PACS, ranging from electronic mail and library information systems to data files available through FTP. 9.4.1.5 Imaging Workstation Networks Sometimes it is advantageous to have display workstations of similar nature to form a subnetwork for the sharing of information. For example, workstations in intensive care units of a hospital can form an ICU network and neuro workstations can form a neuro network. Networks connecting these display workstations are open to all health care personnel to use, and therefore only a minimum security should be imposed. Too many restrictions would deter the users from logging on. However, certain layers of priority can be imposed; for example, some users may be permitted access to a certain level. But at no time are users of these networks are allowed to deposit information to the PACS archive. These types of networks are sometimes maintained by their respective departments. 9.4.2
Internal Networks
A PACS internal network, on the other hand, has the maximum security. Data inside the internal network are considered to be uncompromised clinical archive. Both image and textual data from acquisition devices and other information systems coming from different external networks just described, except those of display workstations, have gone through gateway computers, where data were checked and scrutinized for authenticity before they were allowed to be deposited in the internal network. Fire wall machines are sometimes incorporated into the gateway computer for this purpose. Only the PACS manager is authorized to allow data to be deposited to the archive through the internal network.
9.5 EXAMPLES OF PACS NETWORKS
9.5.1 An Earlier PACS Network at UCSF
The concept of external and internal PACS networks was conceived in the mid1990s. For historical purpose, the PACS network developed by the author and colleagues in the University of California at San Francisco (UCSF) reviewed in this section is shown in Figure 9.6 (1995–1999). In this architecture, there are several external networks: WAN, campus network, departmental network, the Laboratory for Radiological Informatics (LRI) research network, the Internet, the PACS external network, and workstation networks. 9.5.1.1 Wide Area Network The WAN Gateway (Fig. 9.6, top left) is used to connect the UCSF main campus radiology department with radiology departments from affiliated hospitals and clinics (MZH, Mt. Zion Hospital; VAMC, VA Medical Center) in the San Francisco Bay area. The standard connection is T1 lines (Frame Access) with 1.5 Mbit/s and the ATM OC3 with 155 Mbit/s (ASX 200). 9.5.1.2 The Departmental Ethernet The departmental Ethernet (lower right in Fig. 9.6) connects 150 Macintosh users in the department. This network is mainly for file transfer and electronic mail and serves as a connection to the department
Figure 9.6 Earlier UCSF PACS Network architecture (1995–1999). LRI, Laboratory for Radiological Informatics; MZH, Mount Zion Hospital; VAMC, VA Medical Center.
image file server, which allows Macintosh users access to the PACS image database and the RIS database. This network is connected to the Laboratory for Radiological Informatics research network through a bridge that allows Macintosh users access to images generated from research equipment. Macintosh users can also have access to the Internet through the campus network. HIS, RIS, and digital voice information are transmitted to the PACS controller first through the campus network and then through the departmental network. 9.5.1.3 Research Networks Each radiology department may have its own research network (for example, see Fig. 9.6, bottom middle right; LRI, Lab. for Radiological Informatics), which connects all research equipment in the laboratory, including image acquisition devices, laser film scanners, laser film printers, image processing computers, research image file servers, display workstations, and the PACS image file server. It also connects to the departmental Ethernet through a LRI Gateway. 9.5.1.4 Other PACS External Networks In addition to the networks described above, other PACS external networks are those that connect to all clinical digital image acquisition devices in the department including CTs, MRs, CRs, film digitizers, nuclear medicine PACS, and the US PACS modules. All clinical workstations, either 1 K or 2 K, are also connected through the workstation external network. The WAN ATM gateway and the T1 lines are connected to the external wide area network via a router and an ATM gateway computer. 9.5.1.5 PACS Internal Network The PACS internal network is a secured network that connects the PACS controller and the PACS database to the PACS external network. The router and the fire wall machine protect the internal network by screening all incoming information to the PACS controller. The internal network transmits DICOM image files from the PACS image database to 1 K and 2 K display workstations for clinical use. Macintosh users can also access image files from the PACS controller through the departmental Mac image server. 9.5.2
Network Architecture for Health Care IT and PACS
9.5.2.1 General Architecture In Section 9.5.1, we discussed an earlier PACS network at UCSF setting the stage to the emergence of today’s high-speed network technology for PACS application. This section describes a current large enterpriselevel PACS networks with three topics, health care enterprise level information technology (IT) networks, PACS networks, and integrated IT and PACS networks, with UCLA as an example. For a large enterprise hospital system, the network architecture for health care IT is distributed and spanned across multiple campuses. Back end enterprise servers are distributed in various sites and locations. At the highest level of the network infrastructure hierarchy is the network backbone with the highest available bandwidth (Fig. 9.7, middle). This backbone network is a fiber-optic LAN infrastructure interconnected by various high-bandwidth switches and routers that service the enterprise server farms, mainframe of the HIS, UNIX application servers, enterprise E-mail servers, remote campus clinics’ intranet WAN routers, and intranet depart-
Figure 9.7 Generic health care enterprise-level information technology (IT) network infrastructure. Symbol definitions in this figure are also used in Figs. 9.8 and 9.9.
mental LAN routers and switches. The backbone network also consists of network security/fire wall devices (Fig. 9.7, top), intrusion detection devices, virtual private network device for Internet connections, and network traffic monitor and management devices. In the second tier of the hierarchy, switches and routers connect the remote campuses’ LAN and departmental LAN. These include departmental applications such as RIS, medical supply inventory and distribution information system,
c09.qxd 2/12/04 5:11 PM Page 237
EXAMPLES OF PACS NETWORKS
237
pharmacology information system, pharmaceutical inventory system, and PACS (Fig. 9.7, middle lower). These departmental and remote campus systems also need to be secure from the rest of the enterprise network and its users. Some of these servers reside at the departmental level or in the remote campus location and some server systems reside in the main campus data center. The bandwidth requirements for these systems vary from low-bandwidth, 100-mbps text-based data and messaging web applications to very high-bandwidth time-sensitive medical imaging PACS application. 9.5.2.2 Network Architecture for Health Care IT In an enterprise health care IT, HIS are still based on mainframes at the back end but are slowly migrating to web-based client interfaces in the front end. Communications to and from the mainframe are moving toward Ethernet protocols and away from SNA/token ring-type network protocols. The enterprise E-mail system is comprised of a clustering of servers in the main campus data center, and it is configured in a client/server-type environment. Intranet Ethernet LAN technologies are deployed mostly for personal computers running Windows OS and Mac OS clients. They are becoming the standard client user interface for HIS interaction and E-mail access. These clients reside throughout the main campus, remote campuses, and various departmental levels. Departmental information systems are moving toward clustered UNIX systems and/or clustered NT/Windows base systems. They operate mostly in a client/server-type environment as well and are also using web-based interfaces. Some examples are the radiology information system and the pharmaceutical information system. These departmental applications interact with the HIS as well as between various departments, remote campuses, and clinics. Various WAN and LAN bandwidths are deployed to support various users throughout the hospital campuses. Because these applications are mostly text-based data and messaging using a Web-based interface, various performance bandwidths can be deployed that will not hinder clinical user workflow. Typically, HIS users can be connected by using Ethernet at 10 mbps without hindering performance work flow because the HIS is a text-based system. Although newer applications such as RIS are designed with a web interface, it is more graphically oriented and will require a slightly higher bandwidth to the desktop level. The bandwidth requirements for these desktop clients are sufficient to run at 100 mbps even for heavy usage users. The servers in this environment are typically connected at 100 mpbs full duplex. 9.5.2.3 Network Architecture for PACS PACS is a high-volume image dataintensive system and requires a very high bandwidth. In a peer-to-peer type architecture (Section 6.4.1), image data are not as critically time sensitive because the image data are mostly prefetched to the local workstation’s storage. However, if the PACS is a client/server-type architecture (Section 6.4.2) it is time sensitive because of the expectation of on-demand viewing of images. The image data are not stored locally, but rather they are retrieved on demand over the network. Therefore, this becomes time sensitive, and the network bandwidth must be capable of delivery of the data within a couple of seconds. The network must be designed with Ethernet of gigabits and higher on the backbone with multiple cross-connections to provide
redundancy. All PACS servers must have gigabit Ethernet links to a sophisticated switching environment with high-bandwidth routing capability. In a large PACS with multiple subnets, high-bandwidth routing to multiple subnets minimizes bottleneck at each routing point of image data distribution. All switches connected with PACS workstations must, at a minimum, be a gigabits Ethernet uplink to minimize bottlenecking at the edge switch level. Edge switch is a lower-end switch that serves the purpose of distributing images to PACS workstations, and it has no routing capability. Security is deployed at the top of the switch hierarchy to prevent and protect intrusion of unnecessary network traffic. Network management and traffic detection systems may also be deployed at the backbone level to help isolate potential hazardous conditions. The client workstation must have 100 mbps duplex Ethernet, and high-volume on-demand workstations may even require gigabits Ethernet connections. Figure 9.8 depicts a generic PACS network. 9.5.2.4 UCLA PACS Network Architecture As an example of an enterpriselevel PACS, Figure 9.9 shows the PACS network architecture of the UCLA PACS, which is a client/server model (see Section 6.4.2). There are five high-performance routing switches, and four of the switches are capable of both routing and switching. Two highest-performance switches operate as a redundant core for switching and routing to multiple PACS subnets, and the other two switches operate as switch mode only but are in standby to act as router when the core routing switches fail. All edge switches are of very high performance and have gigabits uplink to the core. All PACS servers are connected to the core switches. Some are connected in gigabits per second, and all high-performance client radiologist workstations are connected to one of the five core switches. The remote hospitals are connected by Metropolitan Gigaman link, which is at a speed of gigabits per second. The hospital established this link for all user levels, and currently it is sufficient for PACS operation as well. At the main campus, the PACS backbone is currently at 2 Gbits/s and can be easily increased up to 10 Gbits/s if desired. The routing switches are supporting the routing of more than six subnets. The total number of devices on the network is nearly one thousand. The PACS workstations retrieve images from the central archive on demand. In this design, the images are stored at the central archive server, the Image Storage Unit (ISU), and are pulled to the client radiologist viewing workstations’ memory in less than a few seconds. Therefore, a Gbit/s bandwidth PACS backbone network infrastructure is a minimal requirement. This is one of the main reasons why the PACS infrastructure is separated from the rest of the hospital network to minimize the cost of operation by providing the needed bandwidth only to high-bandwidth devices in a mostly isolated area/department. In this architecture, the PACS backbone can easily be maintained and bandwidth increased without interfering with the entire hospital network.
9.6 INTERNET 2
In this section, we discuss Internet 2, an emerging WAN communication technology from a federal government initiative. It is as easy to use as the standard Internet, offers high speed, and is low cost to operate once the institution is connected to the Internet 2 backbones.
Figure 9.8 Generic health care enterprise-level PACS network infrastructure. See Fig. 9.7 for symbol definitions.
9.6.1 Image Data Communication
PACS requires high-speed networks to transmit large image files between components. In the case of intranet, that is, PACS within a medical center, or PACS for distance learning within a campus, Gbits/s switches and Mbits/s connections to workstations are almost standard in most hospital and university network
Figure 9.9 UCLA on-demand client/server model PACS network infrastructure. See Fig. 9.7 for symbol definitions.
TABLE 9.5 Transmission Rate of Current Wide Area Network Technology

• DS-0: 56 Kbits/s
• DS-1: 56 to (24 × 56) Kbits/s
• DS-1 (T1): 1.5 Mbits/s
• ISDN: 56 Kbits/s to 1.5 Mbits/s
• DS-3: 28 DS-1 = 45 Mbits/s
• ATM (OC-3): 155 Mbits/s and up
• Internet 2: 100 Mbits/s and up
infrastructures. Their transmission rates, even for large image files, are acceptable for clinical operation. In the case of the Internet, however, image data must be transmitted between hospitals and campuses, and the current high-speed WAN is not adequate for large image file applications. Table 9.5 shows the transmission rates of current WAN technology. Tele-medical imaging applications require low-cost and high-speed backbone WANs to carry large amounts of imaging-related data for rapid-turnaround interpretation. The current low-cost commercial WAN is too slow for medical imaging applications, whereas the high-speed WAN (T1 and up) is too expensive for cost-effective use. Internet 2 technology emerges as a potential candidate for low-cost and high-speed networks for image data transmission.
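The practical gap between the rates in Table 9.5 can be illustrated by estimating how long a single 10-Mbyte CR image (the example size used later in this section) would take at each nominal signaling rate; the sketch ignores protocol overhead, which in practice lowers the effective rate.

# Nominal signaling rates from Table 9.5, in Mbits/s
WAN_RATES_MBITS = {
    "DS-0": 0.056,
    "DS-1 (T1)": 1.5,
    "DS-3": 45.0,
    "ATM (OC-3)": 155.0,
    "Internet 2": 100.0,
}

IMAGE_MBYTES = 10.0                      # a typical CR image

for name, rate in WAN_RATES_MBITS.items():
    seconds = IMAGE_MBYTES * 8.0 / rate  # 8 bits per byte
    print(f"{name}: about {seconds:.0f} s")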
9.6.2 What is Internet 2 (I2)?
Internet 2 is a federal government initiative, begun in 1996, for the integration of higher-speed backbone communication networks (up to 10 Gbits/s) as a means to replace the current Internet for many applications, including medical imaging related data. The nonprofit I2 organization, known as the UCAID (University Corporation for Advanced Internet Development) consortium, was founded in the summer of the same year and is supported by the National Science Foundation. It is a network infrastructure backbone in the U.S. for high-speed data transmission. Members of the consortium, consisting of more than 200 research universities and over 100 nonacademic institutes, are now connecting to the I2 backbones. Currently I2 has three backbones: vBNS (very high-speed backbone network service, Fig. 9.10), Abilene (Fig. 9.11), and CalREN (California Research and Education Network; Fig. 9.12), each of which has a transmission rate of 2.4 to 10 Gbits/s. I2 members in the consortium connect to the backbones through a regional GigaPoP (point of presence). Figure 9.13 shows an example of how to connect an imaging laboratory in the hospital of an academic institute to the I2 backbone. We distinguish I2 from NGI (next generation Internet); the former refers to the infrastructure, whereas the latter refers to applications.
9.6.3 Current I2 Performance
Table 9.6 gives examples of I2 performance between CHLA/USC (Childrens Hospital Los Angeles/University of Southern California) and some I2 members, including UCLA (University of California, Los Angeles), UCSF (University of California, San Francisco), Stanford University, UH (University of Hawaii), and the National
Figure 9.10 The vBNS backbone network and its connectivity. (Courtesy of vBNS.)
Figure 9.11 The Abilene backbone (courtesy of Abilene). Red lines and letters indicate the sites and connections where I2 performance was measured (USC, UCLA, UCSF, Stanford, UH, and NLM). Results are given in Figs. 9.14 and 9.15. (See color insert.)
Figure 9.12 The CalREN 2 backbone. (Courtesy of CalREN 2.)
Figure 9.13 An example demonstrating the connectivity of a hospital (Childrens Hospital Los Angeles, CHLA) of an academic institution (University of Southern California, USC) to CalREN 2, an I2 backbone. (The diagram's components include the IPI Lab and IPI Office on a gigabit LAN, Cisco switches and routers, a PacBell ATM OC3 (155 Mbits/s) WAN link, the USC data center, a CalREN 2 OC48 (2.4 Gbits/s) connection, the Abilene Internet 2 backbone (>10 Gbits/s, now OC192), and a second Internet 2 testbed.)
TABLE 9.6 Current I2 Performance Between Various Sites in the U.S.

Test Sites            Response Time (32 Bytes)    Throughput
CHLA/USC LAN          <1 ms                       9.8 Mbytes/s
CHLA/USC-UCLA         4 ms                        2.7 Mbytes/s
CHLA/USC-Stanford     23 ms                       900 Kbytes/s
CHLA/USC-UCSF         24 ms                       700 Kbytes/s
CHLA/USC-NLM          67 ms                       320 Kbytes/s
CHLA/USC-U Hawaii     76 ms                       700 Kbytes/s

Figure 9.14 T1: FTP and DICOM throughput between CHLA/USC and SJHC (20 miles apart). CHLA, Childrens Hospital Los Angeles; USC, University of Southern California; SJHC, St. John's Healthcare Center. (The chart plots FTP and DICOM throughput in KBytes/s and the percentage DICOM overhead for Mammo 40-MB, CR 7-MB, CT 512-KB, and MR 130-KB images.)
Library of Medicine (NLM), Bethesda, MD (see Fig. 9.11 for the connections of these sites with CHLA/USC). These measurement results are compared with those of the LAN within CHLA/USC, which is about 80 Mbits/s. The table gives the response time and the throughput. The I2 performance ranges from 2.5 to 21 Mbits/s (approximately 32 s down to 4 s for a 10-Mbyte CR image), depending on the distance between the two sites under consideration. The measurements were obtained under normal clinical operation with no special fine-tuning of the networks. Measurements are also given between CHLA/USC and SJHC (St. John's Health Center, Santa Monica, CA). Comparing the T1 and I2 performance given in Figures 9.14 and 9.15 shows that the I2 transmission speed is almost twice that of T1, even though the I2 connection spans a much longer distance between the two nodes (3,000 miles vs. 20 miles). T1 is the most popular WAN technology used for teleradiology applications today. T1 is not expensive within a metropolitan area (between CHLA/USC and SJHC in the greater Los
Figure 9.15 Internet 2 FTP and DICOM throughput between CHLA/USC and NLM (2500 miles). NLM, National Library of Medicine, Bethesda, MD. (The chart plots FTP and DICOM throughput in KBytes/s and the percentage DICOM overhead for Mammo 40-MB, CR 7-MB, CT 512-KB, and MR 130-KB images.)
Angeles area, about $500/M), but it is very expensive between CHLA/USC and NLM, for example. In the measurements, both FTP (File Transfer Protocol) and DICOM were used. It is seen that DICOM does incur a certain overhead in image transmission: the smaller the image file, the higher the overhead.
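The DICOM overhead percentages quoted with Figures 9.14 and 9.15 can be read as the relative throughput lost when an image is sent with DICOM rather than FTP. The sketch below shows that calculation; the sample throughput values are hypothetical placeholders, not the measured CHLA/USC numbers.

```python
# Percentage DICOM overhead relative to FTP for the same image file:
# the fraction of FTP throughput lost when DICOM is used instead.

def dicom_overhead_percent(ftp_kbytes_s: float, dicom_kbytes_s: float) -> float:
    """Overhead (%) = (FTP throughput - DICOM throughput) / FTP throughput."""
    return 100.0 * (ftp_kbytes_s - dicom_kbytes_s) / ftp_kbytes_s

# Hypothetical throughputs (KBytes/s) for two image sizes, illustrating why
# smaller files show a larger relative DICOM overhead.
samples = {
    "Mammo 40 MB": (300.0, 285.0),   # large file: association cost amortized
    "MR 130 KB":   (300.0, 170.0),   # small file: per-association cost dominates
}

for label, (ftp, dicom) in samples.items():
    print(f"{label:<12} overhead = {dicom_overhead_percent(ftp, dicom):.1f} %")
```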
9.6.4 Enterprise Teleradiology
I2 is an ideal tool for two types of application, teleradiology and the use of PACS image data for distance learning, for two reasons. First, its operating cost is low once the institution is connected to the I2 backbones. Second, I2 uses standard Internet technology with which most health care personnel are familiar. In enterprise-level teleradiology applications, if each site in the enterprise is already connected to I2 through its own institution, images can be transmitted very rapidly among the sites in the enterprise, as shown in Table 9.6. Because no T1 or other expensive WAN technology would be needed, the result is a low-cost, efficient teleradiology or distance learning operation.
9.6.5 Current Status
The current I2 technology is a very fast network compared with the commonly used T1 for teleradiology and standard Internet for PACS distance learning applications. I2 requires a high initial capital investment for connecting to the backbones, which is generally absorbed by the institution. Once the site is connected to the backbones, the network runs with minimal operation costs and its operation is identical to that using the standard Internet. These characteristics make I2 an ideal technology for
TABLE 9.7 Current Status of Internet 2
• Very fast, and will be faster
• Expensive during initial investment
• Relatively low cost to use
• Not enough health care IT personnel know-how for connectivity
• Not enough Internet 2 sites
• Lack of communication between IT and health care IT personnel
• Internet security issues
TABLE 9.8 Currently Available Wireless LAN Technology

Name        Frequency    Maximum Bandwidth
802.11a     5 GHz        54 Mbps
802.11b     2.4 GHz      11 Mbps
802.11g     2.4 GHz      54 Mbps
developing enterprise-level PACS teleradiology applications as well as imaging- and animation-based distance learning with innovative teaching media. Table 9.7 summarizes the current status of I2. We discuss I2 again in Chapter 14: Telemedicine and Teleradiology.
9.7 WIRELESS NETWORKS
Wireless networks are another emerging technology for PACS applications. We discuss both the wireless LAN (WLAN) and the wireless WAN (WWAN) in this section.
9.7.1 Wireless LAN (WLAN)
9.7.1.1 The Technology A wireless LAN (WLAN) is a type of LAN that uses high-frequency radio waves rather than wires to communicate between nodes. WLANs are based on IEEE Standard 802.11, also known as Wireless Fidelity (Wi-Fi). IEEE Standard 802.11 was introduced as a standard for wireless LANs in 1997 and has since been updated to IEEE 802.11-1999. The current standard is also accepted by ANSI/ISO. Table 9.8 shows the different standards available in this technology. IEEE 802.11b is probably the most widely implemented and routinely used wireless LAN today. It operates in the 2.4-GHz band and uses direct sequence spread spectrum (DSSS) modulation. The 802.11b-compliant devices can operate at 1, 2, 5.5, and 11 Mbps. The next improvement on 802.11b is 802.11g, which increases the bandwidth to 54 Mbps within the 2.4-GHz band. Standard 802.11a is a LAN standard operating in the 5-GHz band and using orthogonal frequency division multiplexing (OFDM). Supported data rates include 6, 9, 12, 18, 24, 36, 48, and 54 Mbps. 802.11a has a range similar to that of 802.11b in a typical office environment, up to 225 ft.
Twelve separate nonoverlapping channels are available for 802.11a; therefore, data rates higher than 54 Mbps can be reached by combining channels. Aspects such as security, mobility, data throughput, and site materials are relevant to making sound decisions when implementing a WLAN. It should be noted that applications using 802.11b and 802.11a wireless LANs require security consideration. Current approaches to improving the security of WLANs include allowing connectivity to the access points only from valid Media Access Control (MAC) addresses on the wireless interfaces and activating the Wired Equivalent Privacy (WEP) feature. Both approaches have been proved insufficient to provide security (Borisov and Wagner, 2001; Arbaugh et al., 2001). Mobility permits users to switch from one access point to another transparently, without modifying any configuration. This is critical to users who have to keep connections active for long periods of time. In a clinical environment there could be a need to transfer high volumes of data to a mobile device, for example, a laptop, personal digital assistant (PDA), or tablet PC; thus a permanent connection must be assured regardless of the location of the user. Data throughput is closely related to the number of expected users attached to an access point: the greater the number of users, the slower the throughput per user. Building materials, other signals in the same frequency, and the location and type of the antennas affect signal quality. Table 9.9 shows the maximum coverage radius depending on the type of space to cover (Gast, 2002).

9.7.1.2 Performance Some preliminary performance results with an IEEE 802.11a-compliant consumer-quality product, obtained in an assessment of its potential for medical imaging, are available, as shown in Figure 9.16. Two parameters of interest are signal strength and transfer rate. Figure 9.17 shows results obtained with 32-Kbyte data transfers. At 175 ft between two test nodes in an open environment (that is, no obstruction between nodes), a transfer rate of 18 Mbits/s with 20% signal strength was achieved. This translates to approximately 9 MR images. In a closed environment, where lead walls and heavy doors are in the transmission path, the performance is far less reliable. Results are shown in Table 9.10: over 120 ft, the performance became erratic, with a long response time. Table 9.10 is difficult to plot because the measured data are extremely variable. The signal strength, data rate, and response time flicker from one extreme to the other. Research on the use of WLAN in closed environments is in progress at various laboratories.
TABLE 9.9 "Rule-of-Thumb" Coverage Radius for Different Types of Space

Type of Space                      Maximum Coverage Radius
Closed office                      Up to 50–60 ft
Open office (cubicles)             Up to 90 ft
Hallways and other large rooms     Up to 150 ft
Outdoors                           Up to 300 ft
Figure 9.16 Experimental setup in the measurement of the IEEE 802.11a-compliant WLAN performance: a laptop with an 802.11a-compliant CardBus wireless card, an access point (AP; in this case a D-Link Access Point DWL-5000AP), and a test computer.
Figure 9.17 Performance of IEEE 802.11a-compliant WLAN between two nodes in an open environment: signal strength (%) and data transfer rate (18-54 Mbps) plotted as a function of distance (35-210 ft).
TABLE 9.10 Measured Performance of IEEE 802.11a-Compliant WLAN Between Two Nodes in a Closed Environment

Distance, ft    Signal Strength, %    Transfer Rate, Mbps    Response Time, ms
0–35            60–80                 36–48                  81–101
35–120          60–80                 36–48                  90–120
120–140         20–40                 18–36                  88–
140–200         20                    18–36                  No response
200–225         20–0                  0–18                   No response
Figure 9.18 Estimated timeline of availability of WWAN protocols and devices (GSM, GPRS, EDGE, and UMTS over the period 2002-2005).
9.7.2 Wireless WAN (WWAN)
9.7.2.1 The Technology For WWAN, the currently available technologies are GSM (Global System for Mobile Communication), GPRS (General Packet Radio Service), EDGE (Enhanced Data Rates for GSM Evolution), and UMTS (Universal Mobile Telecommunications Service). WWAN devices are much slower than the previously introduced WLAN but are available almost everywhere in the United States. Figure 9.18 shows the estimated timeline of availability of these four technologies. Currently, GSM using the circuit-switched mode can achieve 14.4 Kbits/s, and GPRS with 4 slots can achieve 56 Kbits/s download and 28.8 Kbits/s upload.

9.7.2.2 Performance The WWAN communication class was measured using GSM 9.6-Kbits/s and 14.4-Kbits/s circuit-switched data transfer rates by sending a single 12-MB file package of 50 functional MRI brain volumes (64 × 64 × 30 matrix, 2 bytes/voxel) between a workstation and an FTP server on the Internet. The average transfer rates are 8.043 Kbits/s (9.6-Kbits/s connection) and 12.14 Kbits/s (14.4-Kbits/s connection), respectively. Further research and development will be required before WWAN can be used for medical image applications.
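A quick back-of-the-envelope calculation, sketched below, shows why these rates are marginal for imaging: at the measured 8.043 Kbits/s, the 12-MB fMRI package needs on the order of three hours. The arithmetic is my own illustration of the figures quoted above.

```python
# Transfer time of the 12-MB fMRI file package at the measured WWAN rates.

def hours_to_send(file_mbytes: float, rate_kbits_s: float) -> float:
    """Hours needed at a sustained rate, assuming 1 Mbyte = 1024 x 8 Kbits."""
    kbits = file_mbytes * 1024 * 8
    return kbits / rate_kbits_s / 3600.0

for label, rate in [("GSM 9.6-Kbits/s link", 8.043),
                    ("GSM 14.4-Kbits/s link", 12.14)]:
    print(f"{label}: {hours_to_send(12.0, rate):.1f} hours for 12 MB")
```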
9.8 SELF-SCALING NETWORKS

9.8.1 Concept of Self-Scaling Networks
In this section, we discuss the potential of using the three emerging network technologies, Internet 2, WLAN, and WWAN, as an integrated image self-scaling network (ISSN) for medical image management during a local disaster, such as an earthquake or a data center failure, during which many communication networks would be out of service. The selection of Internet 2 for broadband communication is based on flexibility, widespread availability, as well as cost-benefit factors: High speeds are achievable at a very low cost, representing the most likely and appropriate means of future broadband medical networking for many sites. A network self-scalar can be designed to automatically scale the three networks based on the local environment for a given clinical application. Network self-scaling is defined in this context as a mechanism for selecting the proper networking technology for an application. A disparate grouping of technologies including compression, image
security, image content indexing, and display will be variously applied to augment the ISSN with the goal to identify and deliver “just enough” information to the user during a disaster. The full self-scaling mechanism is therefore defined as the combination of network self-scaling and information self-scaling. Figure 9.19 shows the three network technologies, and Figure 9.20 depicts the design concept of the network self-scalar.
Figure 9.19 Three emerging network technologies, Internet 2, WLAN, and WWAN, can be used for the design of a self-scaling network. (Current performance shown: Internet 2, 100-155 Mbits/s drop; WLAN, 6-54 Mbits/s in an open field and about 1.5 Mbits/s in a closed field at 100 ft; WWAN, 9.6-14.4 Kbits/s.)
Figure 9.20 Design of an image self-scaling network (ISSN) using the concept of a self-scalar: the self-scalar selects among Internet 2, WLAN (802.11a/b/g), and WWAN (GSM, GPRS, EDGE, UMTS) for a given application.
Figure 9.21 Setup of the ISSN for health care applications. ICMP, Internet Control Message Protocol. (A self-scalar at the local application side, e.g., a PDA, tablet, notebook, or workstation running a DICOM TCP/IP application, and a self-scalar at the DICOM application server side each connect through three paths, WLAN 802.11b/a/g, Internet 2, and WWAN GSM/GPRS/EDGE/UMTS, using DICOM and ICMP.)
9.8.2 Design of the Self-Scaling Network in the Health Care Environment
9.8.2.1 Health Care Application The ISSN utilizes three different communication classes: Internet 2, WLAN, and WWAN. Figure 9.21 shows the integration of the three classes of network using the concept of a self-scalar for health care applications. A tandem system is used to provide transparent and fault-tolerant communications between health care providers. The self-scalar schematic shown in Figure 9.20 is used as the self-scaling mechanism. Using the Internet Control Message Protocol (ICMP), the ISSN can automatically determine the fastest available network connection between the medical image application site (left) and the image server (right). Continuous network monitoring and transfer-speed measurements ensure that the best available service is provided. In addition, each self-scalar can be modified to initially evaluate any one specific path (e.g., Internet 2 first) in each network availability test cycle, before the final selection is made.

9.8.2.2 The Self-Scalar The key component of the ISSN is the self-scalar (see Fig. 9.21). The DICOM-compliant self-scalar determines automatically which communication classes are available and automatically routes to the fastest available connection. Because the self-scalar itself is based on a modular concept, new communication classes or devices can be added seamlessly when new technology emerges. Additional WLAN or WWAN ports can be added to increase the transmission rates. The ISSN has a broad base of applications in health care delivery, especially in mission-critical environments.
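A minimal sketch of the self-scalar's selection step is given below. It probes each candidate path with ICMP by calling the system ping utility (Linux-style flags) and picks the lowest round-trip time; the gateway host names and function names are hypothetical placeholders, not part of the ISSN implementation described here.

```python
# Sketch of network self-scaling: probe each candidate path with ICMP
# (via the system "ping" command, Linux-style flags) and pick the path
# with the lowest round-trip time.  Gateway addresses are placeholders.
import subprocess
from typing import Optional

CANDIDATE_PATHS = {
    "Internet2": "i2-gateway.example.org",
    "WLAN":      "192.168.1.1",
    "WWAN":      "wwan-gateway.example.org",
}

def probe_rtt_ms(host: str, timeout_s: int = 2) -> Optional[float]:
    """Return an approximate RTT in ms from one ping, or None if unreachable."""
    try:
        out = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            capture_output=True, text=True, timeout=timeout_s + 1,
        )
    except subprocess.TimeoutExpired:
        return None
    if out.returncode != 0:
        return None
    for token in out.stdout.split():
        if token.startswith("time="):          # e.g. "time=23.4"
            return float(token.split("=")[1].rstrip("ms"))
    return None

def select_path() -> Optional[str]:
    """Choose the reachable path with the lowest RTT (the self-scaling step)."""
    results = {name: probe_rtt_ms(host) for name, host in CANDIDATE_PATHS.items()}
    reachable = {k: v for k, v in results.items() if v is not None}
    return min(reachable, key=reachable.get) if reachable else None

if __name__ == "__main__":
    print("Selected network:", select_path())
```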
CHAPTER 10
PACS Controller and Image Archive Server
Figure 10.0 PACS controller and archive server, shown in the context of the generic PACS components and data flow: HIS database and reports, database gateway, imaging modalities, acquisition gateway, PACS controller and archive server, application servers, Web server, and workstations.
The PACS central node, the engine of the PACS, has two major components: the PACS controller and the archive server. The former, consisting of the hardware and software architecture, directs the data flow in the entire PACS by using interprocess communication among major processes. The latter provides a hierarchical image storage management system for short-, medium-, and long-term image archiving. Section 10.1 describes the design concept and implementation strategy of the PACS central node, Section 10.2 presents the archive server functions, and Section 10.3 discusses the archive server system operation. Sections 10.4 and 10.5 give the design of a DICOM-compliant PACS archive server and its basic hardware and software. Section 10.6 presents the concept of the backup archive server. Figure 10.0 shows the relative logical position of the PACS controller (unshaded) in the PACS data flow.
10.1 IMAGE MANAGEMENT DESIGN CONCEPT
Two major aspects should be considered in the design of the PACS image storage management system: data integrity, which ensures that no images are lost once they are received by the PACS from the imaging systems, and system efficiency, which minimizes the access time of images at the display workstations. We discuss only the DICOM-compliant PACS controller and image archive server.

10.1.1 Local Storage Management via PACS Intercomponent Communication

To ensure data integrity, the PACS always retains at least two copies of an individual image on separate storage devices until the image has been archived successfully to the long-term storage device (e.g., an optical disk or tape library). Figure 10.1 shows the various storage subsystems in the PACS. This backup scheme is achieved via PACS intercomponent communication, which can be broken down as follows:
Figure 10.1 Hierarchical storage subsystems in PACS ensuring data integrity (imaging devices; acquisition gateway computer with magnetic disks; PACS controller and archive server with magnetic disks, RAID, and DLT library; workstations with magnetic disks). Until an individual image has been archived in the permanent storage (e.g., digital linear tape), two copies of it are retained in separate storage subsystems.
• At the radiological imaging device. Images are not deleted from the imaging device's local storage until technologists have verified the successful archiving of individual images via the PACS connections. In the event of failure of the acquisition process or of the archive process, images can be re-sent from the imaging devices to the PACS.
• At the acquisition gateway computer. Images acquired from the imaging device remain on the gateway computer's local magnetic disks until the archive subsystem has acknowledged to the gateway computer that a successful archive has been completed. These images are then deleted from the magnetic disks residing in the gateway computer so that storage space on these disks can be reclaimed (a minimal sketch of this acknowledge-then-delete rule follows this list).
• At the PACS controller and archive server. Images arriving in the archive server from the various acquisition gateways are not deleted until they have been successfully archived to permanent storage. On the other hand, all archived images are stacked in the archive server's cache magnetic disks and will be deleted based on aging criteria (e.g., the number of days since the examination was performed; discharge or transfer of the patient).
• At the display workstation. In general, images stored in the designated display workstation remain there until the patient is discharged or transferred in the stand-alone PACS model (see Section 6.4.1). In the client/server model (Section 6.4.2), images are deleted after review. Images in the PACS archive can be retrieved from any display workstation via the DICOM query/retrieve command.
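The acknowledge-then-delete rule at the acquisition gateway can be stated compactly in code. The sketch below is a simplified illustration of the policy only; the class and method names are invented and do not correspond to the gateway software described in this book.

```python
# Sketch of the gateway-side data integrity rule: an image is removed from
# the gateway's local disk only after the archive subsystem has acknowledged
# that the image reached permanent storage.  Names are illustrative only.

class GatewayStore:
    def __init__(self):
        self.local_images = {}        # image UID -> file path on gateway disk
        self.archive_acked = set()    # UIDs acknowledged by the archive server

    def receive_from_modality(self, image_uid: str, path: str) -> None:
        self.local_images[image_uid] = path

    def on_archive_ack(self, image_uid: str) -> None:
        """Called when the archive server confirms permanent archiving."""
        self.archive_acked.add(image_uid)

    def reclaim_space(self) -> list:
        """Delete only images that have been acknowledged; keep the rest."""
        deleted = []
        for uid in list(self.local_images):
            if uid in self.archive_acked:
                deleted.append(self.local_images.pop(uid))
        return deleted
```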
10.1.2 PACS Controller System Configuration
The PACS controller and the archive server consist of four components: an archive server, a database, a digital linear tape (DLT) library, and a communication network (Fig. 10.2). Attached to the archive system through the communication network are the acquisition computers and the display workstations. Images acquired by the acquisition computers from the various radiological imaging devices are transmitted to the archive server, from which they are archived to the DLT library (or another type of library) and routed to the appropriate display workstations.

10.1.2.1 The Archive Server The archive server consists of multiple powerful central processing units (CPUs), small computer systems interface (SCSI) data buses, and network interfaces (Ethernet and ATM). With its redundant hardware configuration, the archive server can support multiple processes running simultaneously, and image data can be transmitted over different data buses and networks. In addition to its primary function of archiving images, the archive server acts as a PACS controller, directing the flow of images within the entire PACS from the acquisition gateway computers to various destinations such as the archive, workstations, or print stations. The archive server uses its large-capacity RAID (redundant array of inexpensive disks) as a data cache (Section 10.3), capable of storing several weeks' worth of images acquired from different radiological imaging devices. As an example, a small 20-Gbyte disk storage, without using compression, can hold simultaneously up to 500 computed tomography (CT), 1000 magnetic resonance (MR), and 500 computed
Figure 10.2 The configuration of the archive system and the PACS network. A digital linear tape (DLT) library for permanent storage is used as an example. The archive server is connected to the DLT library and a pair of mirrored database servers. Patient, study, and image directories are stored in the database; images are stored in the DLT library. A local Ethernet network connects all PACS components, and a high-speed Gbit switch (as an example) connects the archive server to 1K and 2K display workstations, providing fast image display. In addition, the archive server is connected to remote sites via T1 and ATM (as examples), and to the hospital information system (HIS) and the radiology information system (RIS) via the departmental and campus Ethernet.
radiography (CR) studies. In this example, each CT or MR study consists of a complete sequence of images from one examination, and each CR study consists of one exposure. The calculation is based on the average study sizes in the field, in megabytes: CT, 11.68; MR, 3.47; and CR, 7.46. Nowadays, a very large RAID (terabytes) is available in the archive server, especially in the client/server model discussed in Section 6.4.2. The magnetic cache disks configured in the archive server should sustain high data throughput for read operations, which provides fast retrieval of images from the RAID.

10.1.2.2 The Database System The database system comprises redundant database servers running identical, reliable commercial database systems (e.g., Sybase, Oracle) with structured query language (SQL) utilities. A mirrored database with two identical databases can be used to duplicate the data during every PACS transaction (not images) involving the server. The data can be queried from any PACS computer via the communication networks. The mirroring feature of the
system provides the entire PACS database with uninterruptible data transactions that guarantee no loss of data in the event of system failure or a disk crash. Besides its primary role of image indexing to support the retrieval of images, the database system is necessary to interface with the radiology information system (RIS) and the hospital information system (HIS), allowing the PACS database to collect additional patient information from these two health care databases (Chapter 12).

10.1.2.3 The Archive Library The archive library consists of multiple input/output drives (usually DLT, although some older PAC systems may still use erasable optical disk, WORM disk, optical tape, or CD-ROM) and disk controllers, which allow concurrent archival and retrieval operations on all of its drives. The library must have a large storage capacity of terabytes and support mixed storage media. A redundant power supply is essential for uninterrupted operation. The average overall throughput for read and write operations between the magnetic disks of the archive server and the DLT library should be at least 1.0 Mbytes/s.

10.1.2.4 Backup Archive To build fault tolerance into the PACS server, a backup archive system can be used. Two identical copies of images can be saved through two different paths in the PACS network to two archive libraries. Ideally, the two libraries should be in two different buildings in case of natural disaster. To reduce the cost of redundant archiving, the redundant unit can be another DLT library. The backup archive has become more important as many health care providers rely on PACS for their daily operation; Section 10.6 discusses a backup archive model.

10.1.2.5 Communication Networks The PACS archive system is connected to both the PACS local area network (LAN) and the wide area network (WAN). The PACS LAN can have a two-tiered communication network composed of Ethernet and ATM or high-speed Ethernet networks. The WAN provides connection to remote sites and can consist of T1 lines, ATM, and fast Ethernet. The PACS LAN uses the high-speed ATM or Ethernet switch to transmit high-volume image data from the archive server to the 1K and 2K display workstations. Ten- or hundred-Mbits/s Ethernet can be used to interconnect slower-speed components to the PACS server, including acquisition gateway computers, the RIS, and the HIS, and serves as a backup of the ATM or the Gbit/s Ethernet. Failure of the high-speed network automatically triggers the archive server to reconfigure the communication network so that images can be transmitted to the 1K and 2K display workstations over the slower Ethernet.
10.2 PACS CONTROLLER AND ARCHIVE SERVER FUNCTIONS
In the controller and archive server, processes of diverse functions run independently and communicate simultaneously with other processes using client-server programming, queuing control mechanisms, and job prioritizing mechanisms. Figure 10.3 shows the interprocess communications among the major processes running on the archive server, and Table 10.1 describes the functions of these processes. Because
Figure 10.3 Interprocess communications among the major processes running on a PACS archive server (recv, image_manager, send, stor, arch, arch_ack, retrv, pre-fetch, ris_recv, wsreq, and display, together with the acq_del process on the acquisition computer), connecting the acquisition computers, the display workstations, HL7 messages from HIS/RIS, and the digital linear tape library. Compare it with the work flow in a DICOM-compliant server as shown in Fig. 10.7. HL7, Health Level 7. Other symbols are defined in Table 10.1.
the functions of the controller and the archive server are closely related, we sometimes use the term archive server to represent both. Major tasks performed by the archive server include image receiving, image stacking, image routing, image archiving, studies grouping, platter management, RIS interfacing, PACS database updating, image retrieving, and image prefetching. The following subsections describe the functionality carried out by each of these tasks. Whenever appropriate, the DICOM standard is highlighted in these processes.
10.2.1 Image Receiving
Images acquired from the various imaging devices in the gateway computers are converted into the DICOM data format if they are not already in DICOM. DICOM images are then transmitted to the archive server via Ethernet or ATM by using client-server applications over standard TCP/IP protocols. The archive server can accept concurrent connections for receiving images from multiple acquisition computers. DICOM commands take care of the send and receive processes.
10.2.2 Image Stacking
Images arriving in the archive server from the various gateway computers are stored on its local magnetic disks or RAID (temporary archive) based on the DICOM data model and managed by the database. The archive server holds as many images on its several-hundred-gigabyte disks as possible and manages them on the basis of aging criteria. During a hospital stay, for example, images belonging to a given patient remain in the archive server's temporary archive until the patient is discharged or transferred.
TABLE 10.1 Major Processes and Their Functions in the Archive Server (See Fig. 10.3)

Process          Description
arch             Copy images from magnetic disks to temporary archive and to permanent archive (at patient discharge); update PACS database; notify stor and arch_ack processes of successful archiving (DICOM)
arch_ack         Acknowledge gateway computers of successful archiving (DICOM)
acq_del          A process at the gateway computer to delete images from the gateway computer's local magnetic disk
image_manager    Process image information; update PACS database; notify send and arch processes
pre-fetch        Select historical images and relevant text data from PACS database; notify retrv process
recv             Receive images from acquisition computers; notify image_manager process (DICOM)
ris_recv         Receive HL7 messages (e.g., patient admission, discharge, and transfer; examination scheduling; impression; diagnostic reports) from the RIS; notify arch process to group and copy images from temporary archive to permanent archive (at patient discharge), notify pre-fetch process (at scheduling of an examination), or update PACS database (at receipt of an impression or a diagnostic report) (DICOM broker)
retrv            Retrieve images from permanent archive; notify send process
send             Send images to destined workstations (DICOM)
stor             Manage magnetic storage of the archive server (DICOM)
wsreq            Handle retrieve requests from the display process at the display workstations (DICOM)
display          Acknowledge archive server for images received (DICOM)
Thus all recent images that are not already in a display workstation's local storage can be retrieved from the archive server's high-speed short-term archive instead of the lower-speed DLT library. This feature is particularly convenient for radiologists or referring physicians who must retrieve images from different display workstations. In the client/server PACS model, the temporary archive is very large, sometimes terabytes in capacity.
10.2.3 Image Routing
In the stand-alone (or peer-to-peer) PACS model, images that have arrived in the archive server from various acquisition computers are immediately routed to their destination workstations. The routing process is driven by a predefined routing table composed of parameters including examination type, display workstation site, radiologist, and referring physician. All images are classified by examination type (1-view Chest, CT-Head, CT-Body, etc.) as defined in the DICOM standard. The
destination display workstations are classified by location (Chest, Pediatrics, CCU, etc.) as well as by resolution (1K or 2K). The routing algorithm performs a table lookup based on the aforementioned parameters and determines an image's destination(s). Images are transmitted to the 1K and 2K workstations over either the Ethernet LAN or ATM, and to remote sites over dedicated T1 lines, ATM, or a high-speed WAN.
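A minimal sketch of this table-driven lookup is shown below. The routing entries, field names, and fallback destination are hypothetical examples, not the routing table of an actual PACS.

```python
# Sketch of table-driven image routing: look up the destination display
# workstations for an exam based on predefined parameters.  The table
# contents and key fields are illustrative only.

ROUTING_TABLE = [
    {"exam_type": "1-view Chest", "section": "Chest", "destinations": ["WS-CHEST-2K"]},
    {"exam_type": "CT-Head",      "section": "Neuro", "destinations": ["WS-NEURO-1K", "WS-ICU-1K"]},
    {"exam_type": "CT-Body",      "section": "Body",  "destinations": ["WS-BODY-1K"]},
]

def route(exam_type: str, section: str) -> list:
    """Return the destination workstation list for an incoming image."""
    for entry in ROUTING_TABLE:
        if entry["exam_type"] == exam_type and entry["section"] == section:
            return entry["destinations"]
    return ["WS-UNREAD-DEFAULT"]      # fallback destination (assumption)

print(route("CT-Head", "Neuro"))      # ['WS-NEURO-1K', 'WS-ICU-1K']
```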
10.2.4 Image Archiving
Images arriving in the archive server from the gateway computers are copied from temporary storage to the DLT library for longer-term storage. When the copy process is complete, the archive server acknowledges the corresponding acquisition gateway, allowing it to delete the images from its local storage and reclaim its disk space. In this way, the PACS always has two copies of an image on separate magnetic disk systems until the image is archived to permanent storage. Images that belong to a given patient with multiple examinations during a hospital stay are scattered temporarily across the DLT library.
10.2.5 Study Grouping
During a hospital stay, a patient may have different examinations on different days. Each of these examinations may consist of multiple studies. On discharge or transfer of the patient, images from these studies are regrouped from the DLT library and copied contiguously to a single tape for permanent storage. The study grouping function thus allows all images belonging to a patient during a hospital stay to be archived contiguously on a single tape (about 20 to 40 Gbytes). In addition to saving library storage space, logical grouping of consecutive examinations into one volume can reduce tape-swapping time, hence speeding up retrieval of images residing on different tapes.
10.2.6 RIS and HIS Interfacing
The archive server accesses data from HIS/RIS through a PACS gateway computer. The HIS/RIS relays a patient admission, discharge, and transfer (ADT) message to the PACS only when a patient is scheduled for an examination in the radiology department or when a patient in the radiology department is discharged or transferred. Forwarding ADT messages to PACS not only supplies patient demographic data to the PACS but also provides information the archive server needs to initiate the prefetch, image archive, and studies grouping tasks. Exchange of messages among these heterogeneous computer systems can use the Health Level Seven (HL7) standard data format running TCP/IP communication protocols on a client/server basis as described in Section 7.2. In addition to receiving ADT messages, PACS receives examination data and diagnostic reports from the RIS. This information is used to update the PACS database, which can be queried and reviewed from any display workstation. Chapter 12 presents the RIS, HIS, and PACS interface in more detail.
10.2.7 PACS Database Updates
Data transactions performed in the archive server, such as insertion, deletion, selection, and update, are carried out by using SQL utilities in the database. Data in the PACS database are stored in predefined tables, with each table describing only one kind of entity. The design of these tables should follow the DICOM data model for operational efficiency. For example, the patient description table consists of master patient records, which store patient demographics; the study description table consists of study records describing individual radiological procedures; the archive directory table consists of archive records for individual images; and the diagnosis history table consists of diagnostic reports of individual examinations. Individual PACS processes running in the archive server, using information extracted from the DICOM image header and from the RIS interface, update these tables to reflect any changes in the corresponding records.
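As an illustration of this table design, a minimal relational schema following the DICOM patient-study-image hierarchy might look like the sketch below, created here with SQLite purely for demonstration (a production server would use a commercial database such as Oracle or Sybase). The table and column names are hypothetical.

```python
# Minimal sketch of a PACS database schema following the DICOM
# patient -> study -> image hierarchy, created with SQLite for illustration.
# Table and column names are hypothetical examples.
import sqlite3

DDL = """
CREATE TABLE patient (
    patient_id    TEXT PRIMARY KEY,      -- DICOM Patient ID
    patient_name  TEXT,
    birth_date    TEXT
);
CREATE TABLE study (
    study_uid     TEXT PRIMARY KEY,      -- DICOM Study Instance UID
    patient_id    TEXT REFERENCES patient(patient_id),
    exam_type     TEXT,
    study_date    TEXT
);
CREATE TABLE archive_directory (
    sop_uid       TEXT PRIMARY KEY,      -- one record per archived image
    study_uid     TEXT REFERENCES study(study_uid),
    media_volume  TEXT,                  -- e.g., DLT tape label
    file_offset   INTEGER
);
CREATE TABLE diagnosis_history (
    report_id     INTEGER PRIMARY KEY,
    study_uid     TEXT REFERENCES study(study_uid),
    report_text   TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute("INSERT INTO patient VALUES (?, ?, ?)", ("P0001", "DOE^JANE", "19600101"))
print(conn.execute("SELECT COUNT(*) FROM patient").fetchone()[0])   # -> 1
```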
10.2.8 Image Retrieving
Image retrieval takes place at the display workstations. The display workstations are connected to the archive system through the communication networks. The archive library, configured with multiple drives, can support concurrent image retrievals from multiple tapes. The retrieved data are then transmitted from the archive library to the archive server via the SCSI data buses. The archive server handles retrieve requests from display workstations according to the priority level of the individual requests. Priority is assigned to individual display workstations and users based on different levels of need. For example, the highest priority is always granted to a display workstation that is used for primary diagnosis, is in a conference session, or is at an intensive care unit. Thus a workstation used exclusively for research and teaching purposes is given lower priority to allow "fast service" to radiologists and referring physicians in the clinic for immediate patient care. The archive system supports image retrieval from 2K workstations for on-line primary diagnosis, 1K stations for ICU and review stations, and PC desktops for personal usage throughout the hospital. To retrieve images from the DLT library, the user at a display workstation can activate the retrieval function and request any number of images from the archive system. Image query/retrieval is mostly done with the DICOM commands described in Section 7.4.5. Image retrieval is discussed in more detail in Chapter 11.
10.2.9 Image Prefetching
The prefetching mechanism is initiated as soon as the archive server detects the arrival of a patient via the ADT message from HIS/RIS. Selected historical images, patient demographics, and relevant diagnostic reports are retrieved from the DLT library and the PACS database. Such data are distributed to the destination workstation(s) before the completion of the patient’s current examination. The prefetch algorithm is based on predefined parameters such as examination type, disease category, radiologist, referring physician, location of the workstation, and the number
and age of the patient’s archived images. These parameters determine which historical images should be retrieved, when, and to where.
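The prefetch decision can be viewed as a rule lookup triggered by the HIS/RIS scheduling message. The short sketch below illustrates the idea with an invented rule table; it is not the prefetch algorithm of any particular PACS.

```python
# Sketch of rule-based prefetching: when an examination is scheduled, pick
# which historical studies to pull from long-term storage.  The rule table
# and record fields are illustrative assumptions.

PREFETCH_RULES = {
    # scheduled exam -> (relevant prior exam types, max priors, max age in days)
    "1-view Chest": (["1-view Chest", "CT-Body"], 2, 730),
    "CT-Head":      (["CT-Head", "MR-Head"],      3, 365),
}

def select_priors(scheduled_exam: str, history: list) -> list:
    """history: list of dicts with 'exam_type' and 'age_days', newest first."""
    exam_types, max_count, max_age = PREFETCH_RULES.get(scheduled_exam, ([], 0, 0))
    priors = [h for h in history
              if h["exam_type"] in exam_types and h["age_days"] <= max_age]
    return priors[:max_count]

history = [{"exam_type": "1-view Chest", "age_days": 30},
           {"exam_type": "CT-Body", "age_days": 400},
           {"exam_type": "1-view Chest", "age_days": 900}]
print(select_priors("1-view Chest", history))   # the two qualifying priors
```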
10.3 PACS ARCHIVE SERVER SYSTEM OPERATIONS
The PACS server operates on a 24 hours a day, 7 days a week basis. All operations in a well-designed PACS should be software driven and automatic and should not require any manual operational procedures. The only nonautomatic procedures are removal of old or insertion of new storage media in the off-line archive operation. The DLT library archive software first disables an active tape in the library for off-line archive, the operator then manually inserts a new tape, the software activates the inserted tape as a new volume, and the archive server resumes normal operation. A fault-tolerant mechanism in the archive system is used to ensure data integrity and minimize system downtime. Major features of this mechanism include the following:

1. The uninterruptible power supply (UPS) system, which protects all archive components, including the archive server, the database servers, and the archive library, from power outages
2. A mirrored database system, which guarantees the integrity of the data directory
3. Multiple tape drives and robotic arms, which provide uninterrupted image archival and retrieval in the event of the failure of a tape drive or robotic arm
4. A central monitoring system, which automatically alerts quality control staff via wireless mobile phone or pagers to remedy any malfunctioning archive components or processes
5. Spare parts for immediate replacement of any malfunctioning computer components, which include network adapter boards, SCSI controllers, and the multi-CPU system board (archive server)
6. A 4-hour turnaround manufacturer's on-site service, which minimizes system downtime due to hardware failure of any major archive component

In Chapter 15, we discuss the concept of fault-tolerant PACS operation in further detail.
10.4 DICOM-COMPLIANT PACS ARCHIVE SERVER

10.4.1 Advantages of a DICOM-Compliant PACS Archive Server
The purpose of the Digital Imaging and Communications in Medicine (DICOM) standard described in Section 7.4 is to promote a standard communication method for heterogeneous imaging systems, allowing the transfer of images and associated information among them. By using the DICOM standard, a PACS is able to interconnect its individual components and allow the acquisition gateways to link to imaging devices. However, imaging equipment vendors often select different
DICOM-compliant implementations (Section 7.4.4) for their own convenience, which may lead to difficulties in interoperation among these systems. A well-designed DICOM-compliant PACS server can use two mechanisms to ensure system integration. One mechanism is to connect to the acquisition gateway computer with DICOM, providing reliable and efficient processes for acquiring images from imaging devices. The other mechanism is to develop specialized server software allowing interoperability of multivendor imaging systems. Both mechanisms can be incorporated in the DICOM-compliant PACS server. We described the basic principles of these two mechanisms in Chapters 6, 7, 8, and 9 at the component level. In this section, we integrate these mechanisms at the system level based on the knowledge learned in those chapters.
10.4.2 DICOM Communications in PACS Environment
In Section 7.4.5 we discussed two major DICOM communication Service Object Pair (SOP) classes for image communications: the Storage Service Class and the Query/Retrieve (Q/R) Service Class:

• Storage Service Class allows a PACS application running on system A (i.e., a CT scanner) to play the role of a Storage Service Class User (SCU) that initiates storage requests and transmits images to system B (i.e., an acquisition gateway computer), which serves as a Storage Service Class Provider (SCP), accepting images to its local storage device.
• Q/R Service Class allows PACS applications running on system A (i.e., a display workstation) to play the role of a Q/R SCU that queries and retrieves images from system B (i.e., an archive server), which serves as a Q/R SCP, processing query and retrieval requests.
Figure 10.4 illustrates the communication of images utilizing the Storage Service Class and Q/R Service Class in a PACS environment. These two service classes can be used to develop a DICOM-compliant PACS server.
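For readers who want to experiment with these two roles, the sketch below uses the open-source pydicom and pynetdicom Python libraries (which are not part of this book or of any vendor's PACS) to run a toy Storage SCP and push one image to it as a Storage SCU. The AE titles, host name, and port are placeholders.

```python
# Toy DICOM Storage SCP (archive side) and Storage SCU (gateway side),
# sketched with the open-source pynetdicom/pydicom libraries.
# AE titles, host, and port are illustrative placeholders.
from pydicom import dcmread
from pynetdicom import AE, evt, StoragePresentationContexts

# ---- Storage SCP: accept C-STORE requests and save the images -----------
def handle_store(event):
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000                     # success status

def run_archive_scp():
    ae = AE(ae_title="ARCHIVE_SCP")
    ae.supported_contexts = StoragePresentationContexts
    ae.start_server(("0.0.0.0", 11112),
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])

# ---- Storage SCU: push one image file to the SCP -------------------------
def send_image(path: str):
    ae = AE(ae_title="GATEWAY_SCU")
    ae.requested_contexts = StoragePresentationContexts
    assoc = ae.associate("archive.example.org", 11112, ae_title="ARCHIVE_SCP")
    if assoc.is_established:
        status = assoc.send_c_store(dcmread(path))
        print("C-STORE status:", status.Status if status else "failed")
        assoc.release()
```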
10.4.3 DICOM-Compliant Image Acquisition Gateways
A DICOM-compliant acquisition gateway can be used to provide a reliable and efficient process for acquiring images from imaging devices. The DICOM-compliant software running on a gateway should support two types of image acquisition, the push-mode and the pull-mode operations (see also Section 8.2.2).

10.4.3.1 Push Mode Push-mode operation utilizes DICOM's Storage SOP service. An imaging device such as a CT scanner takes the role of a Storage SCU, initiating storage requests. The receiving gateway (Storage SCP) accepts these requests and receives the images.

10.4.3.2 Pull Mode Pull-mode operation, on the other hand, utilizes DICOM's Q/R SOP service. The gateway plays the role of a Q/R SCU, initiating query requests, selecting desired images, and retrieving images from an imaging device (Q/R SCP). The pull-mode operation requires the image acquisition process to
Figure 10.4 Image communication utilizing DICOM SOP services in PACS. The acquisition gateway computer via the Storage SOP service acquires images generated from imaging devices. These images are then transmitted to the archive server, where they are routed to the permanent archive subsystem and workstations. The archive server supports the Query/Retrieve (Q/R) SOP service, handling all Q/R requests from workstations. SOP, Service Object Pair; SCU, Service Class User; SCP, Service Class Provider.
work with the local database to perform data integrity checks in the gateway computer. This checking mechanism ensures that no images are lost during the acquisition process. Figure 10.5 summarizes the characteristics of these two modes of operation. In the pull mode, the ImgTrack process in the gateway performs data integrity checks with the following procedures:

(1) Query study information from the scanners
(2) Generate an acquisition status table
(3) Periodically check the acquisition status of individual image sequences
(4) Invoke the DcmPull process to retrieve images from the scanners
The DcmPull process, when invoked by the ImgTrack process, retrieves the desired images from the scanner and updates the acquisition status table accordingly. Both push and pull modes are used in the acquisition gateway; the choice depends on the operating conditions.
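A highly simplified rendering of this pull-mode bookkeeping is sketched below. The function names echo the book's process names (ImgTrack, DcmPull), but the code, including the status-table layout and the query/retrieve stubs, is an invented illustration rather than the actual gateway software.

```python
# Sketch of pull-mode acquisition bookkeeping: build an acquisition status
# table from the scanner's study list, then pull any series not yet complete.
# The query/pull functions are stubs standing in for DICOM Q/R operations.
import time

def query_scanner_series():
    """Stub for a scanner query (C-FIND): series UID -> images reported."""
    return {"1.2.840.1111.1": 120, "1.2.840.1111.2": 80}

def dcm_pull(series_uid):
    """Stub for DcmPull (retrieve): returns the number of images received."""
    print("pulling", series_uid)
    return query_scanner_series()[series_uid]

def img_track(poll_seconds=5, max_cycles=3):
    status = {}                                   # series UID -> images received
    for _ in range(max_cycles):                   # periodic integrity check
        for uid, expected in query_scanner_series().items():
            if status.get(uid, 0) < expected:     # incomplete -> invoke DcmPull
                status[uid] = dcm_pull(uid)
        time.sleep(poll_seconds)
    return status
```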
10.5 DICOM PACS ARCHIVE SERVER HARDWARE AND SOFTWARE
This section discusses the system architecture with generic hardware and basic software components of a DICOM PACS archive server.
Figure 10.5 Interprocess communication among the major processes running in a DICOM-compliant acquisition gateway that supports both push and pull operations for acquiring images from the scanners: DcmRcv (Storage SCP) receives pushed images, DcmPull (Q/R SCU) queries and retrieves images, ImgTrack and the local database perform the data integrity check, and ImgRoute and DcmSend (Storage SCU) send images to the PACS controller. The DcmPull process incorporates the gateway's local database to perform the data integrity check, ensuring no images are missing from any image sequences during the acquisition process. Dcm, DICOM.
10.5.1 Hardware Components
The generic hardware of the PACS archive server consists of the PACS archive server computer, peripheral archive devices, and fast Ethernet and SCSI interfaces. For large PAC systems, the server computer used is mostly a UNIX-based machine. The fast Ethernet interface connects the PACS archive server to the fast Ethernet network, where the acquisition gateways and display workstations are connected. The SCSI integrates the peripheral archive devices with the PACS archive server. The main archive devices for the PACS server include magnetic disk, RAID, DLT, and CD/DVD (Digital Versatile Disk) jukeboxes. RAID, because of its fast access speed and reliability, is extensively used as the short-term archive device in PACS. Because of its large data storage capacity and lower cost than magnetic disks, DLT is mostly used for long-term archiving. Figure 10.6 shows an example of the server connection to the DLT and RAID with SCSI and to other PACS components with fast Ethernet. Many different kinds of storage devices are available for PACS applications; in the following we describe the two most popular ones, RAID and DLT.

10.5.1.1 RAID RAID is a disk array architecture developed for fast and reliable data access. A RAID groups several magnetic disks (e.g., 8 disks) as a disk array and connects the array to one or more RAID controllers. The size of a RAID is usually several hundred gigabytes (e.g., 320 gigabytes for 8 disks) to terabytes. As individual disk sizes increase, the size of the RAID can also be increased. The RAID controller has a SCSI interface to connect to the SCSI interface in the PACS server. Multiple RAID controllers with multiple SCSI interfaces can avoid a single point of failure in the RAID device. This is discussed further in Chapter 15.
Figure 10.6 Basic hardware components in a PACS archive server with fast Ethernet and SCSI connections: acquisition gateways and display workstations on the fast Ethernet network; RAID controllers and DLT drives on the SCSI connection.
10.5.1.2 DLT DLT uses a multiple magnetic tape and drive system housed inside a library or jukebox for large-volume, long-term archiving. With current tape drive technology, the data storage size can reach 40 to 200 Gbytes per tape. One DLT library can hold from 20 to hundreds of tapes. Therefore, the storage size of a DLT library can be from one to tens of Tbytes, which can hold PACS images for one to several years. A DLT library usually has multiple drives to read and write tapes. The tape drive is connected to the server through a SCSI or fiber-optic connection. The data transmission speed is several megabytes per second for each drive. The tape loading time and data locating time together are about several minutes (e.g., 3 min). Hence, in general, it takes several minutes to retrieve one CR image from DLT. PACS image data in the DLT library are usually prefetched to the RAID for fast access.
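Using the average study sizes quoted in Section 10.1.2.1 (CT 11.68, MR 3.47, CR 7.46 Mbytes), the short calculation below confirms that the 20-Gbyte cache example and a multi-terabyte DLT library are consistent with weeks and years of storage, respectively. The daily study counts are illustrative assumptions, not figures from the book.

```python
# Back-of-the-envelope PACS storage estimates using the average study sizes
# from Section 10.1.2.1.  Daily study counts are illustrative assumptions.

STUDY_MBYTES = {"CT": 11.68, "MR": 3.47, "CR": 7.46}

# Example from Section 10.1.2.1: 500 CT + 1000 MR + 500 CR studies.
cache_mbytes = (500 * STUDY_MBYTES["CT"] + 1000 * STUDY_MBYTES["MR"]
                + 500 * STUDY_MBYTES["CR"])
print(f"Cache example: {cache_mbytes / 1024:.1f} Gbytes (fits in a 20-Gbyte disk)")

# Hypothetical daily workload used to size a long-term DLT library.
daily_studies = {"CT": 40, "MR": 30, "CR": 300}
daily_gbytes = sum(n * STUDY_MBYTES[m] for m, n in daily_studies.items()) / 1024
print(f"Assumed daily volume: {daily_gbytes:.1f} Gbytes")
print(f"One year (365 days): {daily_gbytes * 365 / 1024:.2f} Tbytes")
```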
Archive Server Software
PACS archive server software is DICOM compliant and supports DICOM Storage Service Class and Query/Retrieve Service Class. Through DICOM communication, the archive server receives DICOM studies/images from the acquisition gateway, appends study information to the database, and stores the images in the archive device, including the RAID and DLT. It receives the DICOM query/retrieve request from display workstations and sends out the query/retrieve result (patient/study information or images) back to workstations. The DICOM services supported in PACS archive server are C-Store, C-Find, and C-Move (see Section 7.4.5). All software implemented in the archive server should be coded in standard programming languages—for example, C and C++ on the UNIX open systems architecture. PACS archive server software is composed of at least six independent components (processes), including receive, insert, routing, send, Q/R-server, and RetrieveSend. It also includes a PACS database. In the following we describe the
Ch10.qxd 2/12/04 5:16 PM Page 269
DICOM PACS ARCHIVE SERVER HARDWARE AND SOFTWARE
269
Figure 10.7 PACS archive server software components and data flow. The six components are receive, insert, routing, send, Q/R-server, and RetrieveSend; they communicate through Queue 1 to Queue 4 and the PACS database and connect the acquisition gateways (Storage SCU) to the workstations (Storage SCP and Q/R SCU).
six most common processes. All of these processes run independently and simultaneously and communicate with the other processes through Queue control mechanisms. Figure 10.7 shows the PACS archive software components and data flow (compare with Fig. 10.3, which shows the general interprocess communications in a PACS archive server).

10.5.2.1 Image Receiving DICOM studies/images are transmitted from the acquisition gateway to the PACS archive server receive process through the DICOM Storage Service Class. The receive process acts as a Storage SCP. The receive process can simultaneously accept multiple connections from several DICOM Storage SCUs at different acquisition gateways.

10.5.2.2 Data Insert and PACS Database The patient/study information in the DICOM header is extracted and inserted into the PACS database by the insert process. The PACS database is a series of relational tables based on the DICOM information object model. The database is built on a reliable commercial database system such as Oracle. The database provides patient/study information management and an image index function to support the display workstations' query/retrieve.

10.5.2.3 Image Routing Images from different modalities are sent to different display workstations for radiologists' or physicians' review. The routing process is designed to determine where to send each image. The routing is performed based on a preconfigured routing table. The table includes several parameters, such as modality type, examination body part, display workstation site, and referring physician. The routing process passes the result of routing (the destination workstation information) to the send process, which sends the images out, through a Queue control mechanism.
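The queue-controlled handoff among receive, insert, routing, and send can be modeled with ordinary thread-safe queues, as sketched below using Python's standard queue and threading modules. This is a schematic of the data flow only; it does not reproduce the server's actual implementation.

```python
# Schematic of the queue-controlled pipeline: receive -> insert -> routing -> send.
# Each stage takes a job from its input queue and posts it to the next queue.
import queue
import threading
import time

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()

def insert_stage():
    while True:
        image = q1.get()                       # from receive (Queue 1)
        image["db_record"] = True              # extract header, update database (stub)
        q2.put(image)                          # to routing (Queue 2)

def routing_stage():
    while True:
        image = q2.get()
        image["destination"] = "WS-CHEST-2K"   # routing table lookup (stub)
        q3.put(image)                          # to send (Queue 3)

def send_stage():
    while True:
        image = q3.get()
        print("C-STORE", image["uid"], "->", image["destination"])

for stage in (insert_stage, routing_stage, send_stage):
    threading.Thread(target=stage, daemon=True).start()

# The receive process (Storage SCP) would put each stored image on Queue 1:
q1.put({"uid": "1.2.840.2222.1", "modality": "CR"})
time.sleep(0.5)                                # let the daemon threads drain the queues
```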
10.5.2.4 Image Send Most PACS servers have two ways to distribute images to display workstations: auto push to the workstations or manual query/retrieve from the workstations. In auto push mode, the images are automatically sent out to the workstations based on the result of routing. The auto push function is performed by the send process, which acts as a DICOM Storage SCU.

10.5.2.5 Image Query/Retrieve The other way to send images to workstations is through image query/retrieve. The Q/R-server process receives the DICOM Query/Retrieve request from the workstations, searches the PACS database, and returns the matching result to the workstations. The Q/R-server here acts as a Query/Retrieve SCP, and the process can simultaneously support multiple query/retrieves from different workstations. If the query/retrieve result is an image, the image information is passed through the Queue control mechanism to the RetrieveSend process, which sends the images to the display workstation.

10.5.2.6 Retrieve/Send In the DICOM Query/Retrieve Service, the images are not sent out over the Query/Retrieve association; instead, this is done by a second DICOM storage association. Therefore, the images retrieved from the PACS server are not sent directly through the Q/R-server process. The RetrieveSend process performs the transmission function: it receives the image information from the Q/R-server and sends the images to the display workstation.
10.5.3 An Example
We use the example shown in Figure 10.8 to illustrate the image data flow by sending a CR image from the acquisition gateway through the PACS archive server.

1. A CR chest image is sent from the acquisition gateway to the receive process through the DICOM CR Image Storage Service Class. The receive process receives the image and stores the image file on the local computer disk.
2. The receive process adds the image file information to Queue 1 through the Queue control operation.
3. The insert process takes out the CR image file information from Queue 1 and reads in the image file from the local disk.
4. The insert process extracts the patient/study information from the DICOM header of the image and inserts the information into the patient/study tables of the PACS database.
5. The insert process adds the new image file information to Queue 2 through the Queue control operation.
6. The routing process reads the image file information, searches the routing information for this CR image, and finds the destination workstation information.
7. The routing process adds the image file information and destination workstation information to Queue 3.
8. The send process reads the image file information and destination workstation information out from Queue 3.
Figure 10.8 Workflow in a PACS archive server. A CR image is used as an example, which is sent from the acquisition gateway, through the server, to the workstation. Numerals (1-14) in the figure are explained in the text.
9. The send process sends the image file to the specified display workstation through the DICOM CR Image Storage Class.
10. The display workstation can also query/retrieve the same CR image by the Query/Retrieve Service Class. The Q/R-server process receives the Query/Retrieve request from the display workstation.
11. The Q/R-server process searches the PACS database and finds the matching result.
12. The Q/R-server process sends back the query/retrieve result to the display workstation. If the query/retrieve result is the CR image, the Q/R-server process adds the image file information and destination workstation information to Queue 4.
13. The RetrieveSend process reads the image file information and destination workstation information from Queue 4.
14. The RetrieveSend process sends the image file to the destination workstation.
10.6 BACKUP ARCHIVE SERVER
10.6.1 Backup Archive Using an Application Service Provider (ASP) Model
10.6.1.1 Concept of the Backup Archive Server The PACS archive server is the most important component in a PACS; even though it may have fault-tolerant features, it can still fail occasionally. A backup archive server is therefore necessary to guarantee uninterrupted service. The backup archive can be short term (3 months) or long term. The functions of a backup archive server are twofold: maintaining continuous PACS operation and preventing loss of image data. Data loss is especially troublesome because, if a major disaster
occurs, it is possible to lose an entire hospital's PACS data. In addition, scheduled downtime of the main PACS archive also greatly impacts a filmless institution. Few current PACS archives feature disaster recovery or a backup archive, and the available designs are limited at best. Furthermore, current general disaster recovery solutions vary in their approach to creating redundant copies of PACS data. One novel approach is to provide a short-term fault-tolerant backup archive server using the application service provider (ASP) model at an off-site location (Liu et al., 2002). The ASP backup archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost because it utilizes the ASP business model. In addition, should a downtime event render the network communication inoperable, a portable solution is available with a data migrator. The data migrator is a portable laptop with a large-capacity hard disk that contains DICOM software for exporting and importing PACS exams. The data migrator can populate PACS exams that were stored on the backup archive server directly onto the clinical PACS within hours, allowing the radiologists to continue to read previous PACS exams until new replacement hardware arrives and is installed or until a scheduled downtime event has been completed.
10.6.2 General Architecture
Figure 10.9 shows the general architecture of the ASP backup archive integrated with a clinical PACS. The sites are connected via a T1 line; if Internet 2 becomes available (see Section 9.6), the network should be upgraded accordingly. At the hospital site, any new exam acquired and distributed in PACS is sent to a DICOM gateway via the clinical PACS server. The DICOM gateway is crucial to maintaining the clinical work flow because it provides a buffer and manages the network transfers of the PACS exams by queuing the network transfer jobs. The DICOM gateway transfers an exam through the T1 router across the T1 line to a receiving T1 router at the off-site ASP. At the off-site PACS storage site, another gateway receives the PACS exams and queues them for storage in a fault-tolerant backup archive server. The backup archive should be designed to be fault tolerant (see Chapter 15). All PACS data transmitted throughout this architecture conform to the DICOM protocol standard.
Figure 10.9 General architecture of the ASP backup archive server. One DICOM gateway and one PACS gateway are used as the buffers between the two sites. T1 is used for the WAN.
10.6.3 Recovery Procedure
Figure 10.10 shows the recovery procedure of the PACS during a scheduled downtime or an unscheduled downtime such as a disaster. There are two scenarios: network communication between the two sites is still functioning, or there is no network communication. If connectivity between the two sites is live, the backup PACS exams can be migrated back to the hospital site and imported directly into the PACS with DICOM-compliant network protocols. In most disaster scenarios, however, there is a high likelihood that connectivity between the two sites is not functional. In this case, the backup PACS exams are imported into the hospital PACS with a portable data migrator. The data migrator exports PACS exams from the backup archive and is then physically brought to the hospital (this scenario works best if the off-site location is in the same metropolitan area as the hospital), where the PACS exams are imported directly into a workstation or a temporary server. The data migrator is DICOM compliant, which means that the PACS exams can be imported without any additional software or translation. In addition, the data migrator contains up-to-date PACS data because it is always synchronized with the clinical PACS work flow. In either scenario, the radiologist will have the previous and current PACS exams to continue normal clinical reading until replacement hardware is installed and the hospital PACS archive storage and server are brought back on-line.
10.6.4 Key Features
Figure 10.10 Disaster recovery procedure. In a disaster event, the components shown marked can be considered unavailable for use. In this scenario, a data migrator can be used to physically export PACS exams and import them directly into the PACS with the appropriate DICOM protocols.
For a backup archive server to be successful, the following key features are necessary:
• A copy of every PACS exam is created and stored automatically and in a timely manner
• The backup archive server is CA fault tolerant with 99.999% availability
• No operator intervention is needed
• Backup storage capacity is easily configured and expanded based on requirements and needs
• Data can be recovered and imported back into the hospital PACS within 2 h if the communication network is intact, or within 1 day if a portable data migrator is used
• The system is DICOM compliant and based on the ASP model
• The system does not impact normal clinical work flow
• Radiologists can read with previous exams until the hospital PACS archive is recovered
10.6.5 General Setup Procedures of the ASP Model
The next few paragraphs describe a general step-by-step procedure for building this particular system. Included are suggestions of potential resources within the hospital as well as any ancillary personnel that should be involved in each of the procedures.
An important first step in implementing this system is to determine the configuration that will have the least impact on the hospital clinical work flow. The PACS system administrator and a clinical staff representative should be involved because they are familiar with the day-to-day operations of the hospital PACS.
The next few procedures focus on establishing communication network connectivity between the two sites. First, the best connectivity bandwidth and configuration should be determined on the basis of cost and availability. Some connectivity solutions involve a continuous operational fee, whereas others may involve a one-time initial cost for installation. The cost and availability also depend on the distance between the two sites. It is important to include anyone who can provide information on the options, such as hospital information technology (IT) staff members and the radiology administrator, because budget constraints may factor into the decision-making process. The next step involves ordering hardware and installation. Most likely, a vendor will be responsible for this step, but it is always beneficial to involve IT and telecommunications staff. Once connectivity has been established, testing should be performed for verification. The vendor performs this with additional input from both IT and telecommunications staff.
At this point, the hospital and the off-site locations have connectivity established. Testing of the clinical PACS work flow and transmission of PACS exams to the backup archive server is performed to verify that the PACS exams are received without any data loss and to observe the impact on the clinical PACS. This step is crucial because the backup archive procedure should appear seamless to the clinical PACS user. The PACS system administrator plays a key role in determining the effects of the system on clinical work flow. If the new work flow is successfully integrated with the hospital PACS, the final step is to perform trial runs of disaster scenario simulations and PACS data recovery. Coordination between the two sites is key to ensure that results are observed as well
as documented. Once all tests are completed and verified, the new work flow is ready for clinical production. The system described has been implemented successfully, and PACS exams from St. John's Health Center (Santa Monica, CA) are backed up to an off-site fault-tolerant archive server located within a 30-mile radius of the hospital at the IPI Laboratory, Marina del Rey, CA. The current backup storage capacity is short term, configured for 2 months. Section 15.8.4.2 and Chapter 18 describe the clinical operation and pitfalls of the backup archive server.
CHAPTER 11
Display Workstation
Figure 11.0 Workstations in a generic PACS.
11.1 BASICS OF A DISPLAY WORKSTATION
The display workstation is the interactive component of PACS that health care providers use to review images and relevant patient information. The interpreted result becomes the diagnostic report, which is fed back to the hospital and radiology information systems (HIS, RIS) as a permanent patient record along with the images. In this chapter the terms soft copy workstation, display workstation, image workstation, and workstation are used interchangeably. Figure 11.0 shows the logical position of workstations (unshaded) in the PACS workflow. The conventional method of reviewing radiological images uses films hung on an alternator or a light box. Table 11.1 shows the characteristics of a typical alternator. Because the advantages of an alternator are its large surface area, high luminance, and convenience of use, the design of a soft copy display workstation should incorporate the functions and convenience of the alternator whenever possible.
TABLE 11.1 Characteristics of a Typical Light Alternator
Dimension:
  Width: 72-100 in.
  Height: 78 in.
  Table height: 30-32 in.
Viewing capability:
  No. of panels: 20-50
  No. of visible panels: 2 (upper and lower)
  Viewing surface per panel (height × width): 16 × 56 to 16 × 82 in.²
  Average luminance: 500 ft-L
  Viewing height (from table top to top of lower panel): 32 + 16 in.
  Time required to retrieve a panel: 6-12 s
An image workstation consists of four major hardware components: a host computer, an image display board, display monitors, and local storage devices. A communication network and application software connect these components with the PACS controller, as described previously in Figures 10.3, 10.4, 10.7, and 10.8. The computer and the image display board are responsible for transforming the image data for visualization on the monitors. Magnetic disks and RAID are used as local storage devices. The communication network is used for transmitting images into and out of the display workstation. Figure 11.1 shows the schematic of a typical two-monitor display workstation based on a PC. The following two subsections describe the image display board and the display monitor in more detail.
11.1.1 Image Display Board
The image display board has two components: a processing unit and image memory. The image memory supplements the host computer memory to increase the storage capacity and to speed up image display. There are two types of computer memory, random access memory (RAM) and video RAM (VRAM). RAM usually comes with the computer and is less expensive than VRAM. VRAM has a very high input/output rate and is used to display images or graphics. A display workstation usually has more RAM than VRAM; typical numbers are 512 Mbytes of RAM and 16 Mbytes of VRAM. An image file in the display workstation, coming either from the PACS archive server or from the internal disk, is first stored in the RAM. If the RAM is not large enough to store the entire image file, the file is split between the RAM and the disk, and disk I/O and RAM swapping are needed; in this case, the image display speed is slow. It is therefore advantageous to have a larger RAM to increase the display speed. After some operations, the processed image is moved to the VRAM before it is shown on the monitor. Figure 11.2 shows the data flow of an image from the magnetic disk to the display memory. Sometimes we use the term "4 mega pixel" or "5 mega pixel" for a display board, which represents its capability of displaying a 2K × 2K or a 2.5K × 2K image, respectively. Color images require 24 bits/pixel, and graphic overlay requires 1 extra bit/pixel.
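As a back-of-the-envelope check of these numbers, the sketch below estimates the display memory needed for one frame on a "5 mega pixel" (2.5K × 2K) board; the function and its defaults are illustrative, not a vendor specification.

    def frame_memory_mb(width, height, bits_per_pixel=8, overlay_bits=1):
        # Grayscale display uses 8 bits/pixel (24 bits/pixel for color)
        # plus 1 extra bit/pixel for the graphic overlay, as noted in the text.
        total_bits = width * height * (bits_per_pixel + overlay_bits)
        return total_bits / 8 / 2**20

    # One 2.5K x 2K grayscale frame with overlay is roughly 5.6 MB:
    print(round(frame_memory_mb(2560, 2048), 1))
    # A 24-bit color frame of the same size with overlay needs about 15.6 MB of VRAM:
    print(round(frame_memory_mb(2560, 2048, bits_per_pixel=24, overlay_bits=1), 1))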
Figure 11.1 Schematic of a typical two-monitor display workstation (WS). The WS is composed of a host computer (PC), a display board with image processing capability and image memory (RAM and VRAM), dual display monitors (1600 × 1280 each), a magnetic disk, and a network connection.
Figure 11.2 Data flow of an image from the magnetic disk to the display monitor. D/A: digital-to-analog conversion; LUT: lookup table.
11.1.2 Display Monitor
The cathode ray tube (CRT) monitor used to be the de facto display in the workstation. Over the past several years, the quality of the gray scale liquid crystal display (LCD) has improved to the degree that it matches the quality of the best CRT video monitors, and more and more PACS workstations are now changing to LCD displays. In locations where physical constraints prevent the use of a CRT, an LCD allows medical images to be displayed where it would otherwise be impossible; hallways, rooms with minimal cooling, and very bright rooms are inhospitable to CRT displays. Table 11.2 compares the advantages and disadvantages of CRT and LCD for displaying medical images. The data in Table 11.2 were collected in late 2001; by now, the disadvantages of the LCD have gradually disappeared. In this context, we use the words display, monitor, and screen interchangeably.
TABLE 11.2 LCD vs. CRT: Advantages and Disadvantages
LCD Advantages vs. CRT Disadvantages:
  LCD: Small (thin); CRT: Large
  LCD: Light; CRT: Heavy
  LCD: Consumes 4.2 A for a four-head system; CRT: Consumes 7 A for a four-head system
  LCD: Maximum luminance 500 cd/m2 (nice in bright rooms); CRT: Maximum luminance 300-450 cd/m2
  LCD: Flat display; CRT: Display surface is not flat
LCD Disadvantages vs. CRT Advantages:
  LCD: Contrast only 500:1 (narrow viewing angle); CRT: Contrast 2000:1 (narrow viewing angle)
  LCD: Contrast only 45:1 (45° viewing angle); CRT: Contrast 1000:1 (45° viewing angle)
  LCD: Screen artifact due to black between pixels; CRT: Smooth transition between pixels
  LCD: Only 15,000 hours until back light replacement (although replacement costs less than a new unit); CRT: 30,000 hours until CRT replacement
11.1.3 Resolution
The resolution of a display monitor is most commonly specified in terms of the number of lines. For example, a "1K monitor" has 1024 lines; "2K" means 2048 lines. In the strict sense of the definition, however, it is not sufficient to specify spatial resolution simply in terms of the number of lines, because the actual resolving power of the monitor may be less. Consider a digitally generated line pair pattern (black and white lines in pairs; see Fig. 2.3). The maximum displayable number of these line pairs on a 1K monitor is 512. However, the monitor may not be able to resolve 1024 alternating black and white lines in both the vertical and horizontal directions because of instrumentation limitations. Several techniques are available for the measurement of resolution. The simplest and most commonly used method employs a test pattern that consists of varying widths of line pair objects in vertical, horizontal, and sometimes radial directions (see Fig. 2.3). It should be noted that this visual approach measures the resolution of the total display-perception system, including the visual acuity of the observer, and is prone to subjective variations. Other techniques include the shrinking raster test, the scanning spot photometer, the slit analyzer, and measurement of the modulation transfer function (MTF) (Section 2.5.1.4). One additional factor worthy of attention is that resolution is a function of location on the monitor. Therefore, the resolution specification must describe the location on the monitor as well as the luminance uniformity of the monitor.
11.1.4 Luminance and Contrast
Luminance measures brightness in candelas per square meter (cd/m2) or in foot-lamberts (ft-L): 1 ft-L = 3.426 cd/m2. There is more than one definition of contrast and contrast ratio. The definition most often used for contrast C is the ratio of the difference between two luminances to the larger of the two luminances:
C = (LO - LB)/LO    (11.1)
where LO is the luminance of the object and LB is the luminance of the background. The contrast ratio Cr is frequently defined as the ratio of the luminance of an object to that of the background:
Cr = Lmax/Lmin    (11.2)
where Lmax is the luminance emitted by the area of greatest intensity and Lmin is the luminance emitted by the area of least intensity in the image. The contrast ratio depends not only on the luminance of the image but also on the intensity of the ambient light. For instance, in bright sunlight the display surface can have an apparent luminance of 3 × 10^4 cd/m2. To achieve a contrast ratio of 10, the luminance of the monitor must be 3 × 10^5 cd/m2.
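A small numerical check of Eqs. (11.1) and (11.2), including the bright-sunlight example above (an illustrative sketch, not part of the original text):

    def contrast(l_object, l_background):
        # Eq. (11.1): C = (LO - LB) / LO
        return (l_object - l_background) / l_object

    def contrast_ratio(l_max, l_min):
        # Eq. (11.2): Cr = Lmax / Lmin
        return l_max / l_min

    # Bright sunlight: an ambient reflection of 3e4 cd/m2 requires a monitor
    # luminance of 3e5 cd/m2 to reach a contrast ratio of 10.
    print(contrast_ratio(3e5, 3e4))    # 10.0
    print(contrast(3e5, 3e4))          # 0.9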
11.1.5 Human Perception
The luminance of the monitor affects the physiological response of the eye in perceiving image quality (spatial resolution and subject contrast). Two characteristics of the visual response are acuity, the ability of the eye to detect fine detail, and the detection of luminance differences (threshold contrast) between the object and its background. Luminance differences can be measured by using an absolute parameter, the just-noticeable difference (JND), or a relative parameter, the threshold contrast (TC), related by:
TC = JND/LB    (11.3)
where LB is the background luminance. The relationship between the threshold contrast and the luminance can be described by the Weber-Fechner law. When the luminance is low (1 ft-L), the threshold contrast in the double-log plot is a linear function of luminance with a slope of -0.5, sometimes referred to as the Rose model.* When the luminance is high, the threshold contrast is governed by the Weber model, which is a constant function of the luminance, again in the double-log plot. In the Rose model region, when the object luminance LO is fixed, the JND and LB are related by
JND = k1 LO (LB)^(1/2)    (11.4)
* In the double-log plot, TC versus LB has a slope of -0.5; in the standard plot, TC = k(LB)^(-1/2), where k is a constant.
In the Weber model region, we write
JND = k2 LO (LB)    (11.5)
where k1 and k2 are constants. In general, the detection of small luminance differences by the visual system depends on the presence of various noises, measurable by their standard deviations, in particular:
• The fluctuations in the light photon flux
• The noise from the display monitor
• The noise in the visual system
Thus the JND depends on LO and LB, which in turn are affected by the environment, the state of the display monitor, and the conditions of the human observer.
11.1.6 Color Display
Although the majority of radiographical images are monochromatic, Doppler US, nuclear medicine, and PET images use colors for enhancement. Also, recent developments in image-guided therapy and minimally invasive surgery use extensive color graphics superimposed on monochromatic images. To display a color image, three image memories (R, G, B) are needed. As discussed in Section 4.8.1.7, the composite video controller combines these three memories to form a color display (see Fig. 4.24). Color LCD monitors are of excellent quality for color medical image display.
11.2 ERGONOMICS OF IMAGE WORKSTATIONS
A human observer may perceive the image on the monitor with different quality in different viewing environments. Three major factors that may affect the viewing environment are glare, ambient light, and acoustic noise due to hardware. These factors are independent of the workstation design and quality; therefore, they must be understood in the ergonomic design of an image workstation.
11.2.1 Glare
Glare, the most frequent complaint among workstation users, is the sensation produced within the visual field by luminance that is sufficiently greater than the luminance to which the eyes are adapted to cause annoyance, discomfort, or loss in visual performance and visibility. Glare can be caused by reflections of electric light sources, windows, and light-colored objects, including furniture and clothing. The magnitude of the sensation of glare is a function of the size, position, and luminance of a source, the number of sources, and the luminance to which the eyes are adapted at the moment. It may be categorized according to its origin: direct or reflected glare. Direct glare may be caused
by bright sources of light in the visual field of view (e.g., sunlight and lightbulbs). Reflected glare is caused by light reflected from the display screen. If the reflections are diffuse, they are referred to as veiling glare. Image reflections are both distracting and annoying, because the eye is induced to focus alternately between the displayed and reflected images. The reflected glare can be reduced by increasing the display contrast, by wearing dark-colored clothing, by correctly positioning the screen with respect to lights, windows, and other reflective objects, and by adjusting the screen angle.
11.2.2 Ambient Illuminance
An important issue related to the problem of glare is the proper illumination of the workstation area. Excessive lighting can increase the readability of documents but can also increase the reflected glare, whereas insufficient illumination can reduce glare but can make reading of source documents at the display workstation difficult. Ergonomic guidelines for the traditional office environment recommend a high level of lighting: 700 lux (an engineering unit for lighting) or more. A survey of 38 computer-aided design (CAD) operators who were allowed to adjust the ambient lighting indicated that the median illumination level is around 125 lux (measured at the keyboard), with 90% of the readings falling between 15 and 505 lux (Heiden, 1984). These levels are optimized for CRT viewing but certainly not for reading written documents. An illumination of 200 lux is normally considered inadequate for an office environment. Another study suggests a lower range (150-400 lux) for tasks that do not involve information transfer from paper documents. At these levels, lighting is sufficiently subdued to permit good display contrast in most cases. The higher range (400-550 lux) is suggested for tasks that require the reading of paper documents. Increasing ambient lighting above 550 lux reduces display contrast appreciably. If the paper documents contain small, low-contrast print, 550 lux may not provide adequate lighting. Such cases may call for supplementary special task lighting directed only at the document surface. This recommendation is based on the conditions needed to read text, not images, on a screen. Another recommendation specifies the use of a level of ambient illumination equal to the average luminance of an image on the display workstation screen.
11.2.3 Acoustic Noise Due to Hardware
An imaging workstation often includes components like RAID, image processors, and other arrays of hardware that produce heat and require electric fans for cooling. These fans often produce an intolerably high noise level. Even for a low-noise host computer attached to the display workstation, it is recommended that the computer be separated from the display workstation area to isolate the noise that would affect human performance. As personal computers become more and more common, the computer, the terminal, and the display monitor become an integrated system insofar as they are connected by very short cables. Most imaging workstations utilize a personal computer as the host, however, and because of the short cabling, the host computer and the image processor wind up in the same room as the terminal, display monitors, and
the keyboard. Failure of the image workstation designer to consider the consequences of having all these units together creates a very noisy environment at the imaging workstation, and it is difficult for the user to sustain concentration during long working hours. Care must be exercised in designing the workstation environment to avoid problems due to acoustic noise from the hardware.
11.3 EVOLUTION OF MEDICAL IMAGE DISPLAY TECHNOLOGIES
This section reviews the evolution of medical imaging display technologies. Basically, there are three phases: the early period, 1986-1994; the middle period, 1995-1999; and the modern period, 1999-2002. The early period contributions were the Sun workstation with 2K video monitors and the ICU and pediatric 1K workstations. The middle period accomplishments were the production of competitive 2K monitors by several manufacturers, the beginning of the LCD display screen, and PC-based workstations. The modern period highlights are DICOM-compliant PC-based workstations and several manufacturers competing for shares of the LCD display market. Tables 11.3, 11.4, and 11.5 summarize the specifications of these three periods of image display technology evolution.
TABLE 11.3 Display Technology Evolution Phase 1: The Early Period, 1986-1994
Typical applications: Ultrasound acquisition and review; cardiocatheterization; PACS; X ray
Displays & resolutions: 10-12 in. 640 × 480 Barco, DataRay; RS170 DataRay; 1280 × 1024 Siemens; 2048 × 2560 MegaScan
Display controllers: Color: VCR (for ultrasound), VGA; Custom: direct from fluoroscope, etc.; Grayscale: VGA, Cemax-Ikon, Dome; 256 shades of gray chosen from 1000 now possible
Performance: Image download 20 MB/s on S-bus; real-time window/level introduced on Dome RX16
Operating system: UNIX, DOS, Macintosh, Windows (starting)
Luminance: 100-220 cd/m2
Reliability: 10-15% annual failure rate
Video quality: 2- to 5-ns rise time, 2-ns jitter; focus good in center, poor at edges
Calibration: Most equipment calibrated at factory, then some feed-forward correction for common aging mechanisms
TABLE 11.4 Display Technology Evolution Phase 2: The Middle Period, 1995-1999
Typical applications: Ultrasound acquisition and review; cardiocatheterization; PACS; X ray
Displays & resolutions: 12-15 in. 640 × 480 Barco, DataRay, SonQ; 1200 × 1600 Image Systems, DataRay, Nortech; 1280 × 1024 Siemens, Image Systems; 2048 × 2560 MegaScan (declining), Image Systems, DataRay, Siemens, Barco; first LCD from dpiX shown, never achieved volume production
Display controllers: Color: VCR (for ultrasound), VGA; Grayscale: Dome, Metheus, Cemax; 1000 simultaneous shades of gray possible (Metheus, Astro); first digital controllers shown: Dome, Metheus
Performance: Image download 55 MB/s on 2 MP, 90 MB/s on 5 MP
Operating system: UNIX, Macintosh (in decline), Windows
Spot size: <2.0 lp/mm
Contrast: CRT contrast ratio 800:1
Luminance: 350 cd/m2 center, 280 cd/m2 typical
Reliability: 5-10% annual failure rate
Video quality: 0.7- to 2.3-ns rise time, 0.5- to 2-ns jitter; focus good all over screen when equipment is new
Calibration: Luminance calibration (Dome), Equalizer (Metheus), MediCal (Barco); some field sites starting to calibrate displays routinely, most don't
11.4 TYPES OF IMAGE WORKSTATION
Image workstations can be loosely categorized into six types based on their applications: diagnostic, review, analysis, digitizing and printing, interactive teaching, and desktop workstations.
11.4.1 Diagnostic Workstation
A diagnostic workstation is used by radiologists for making the primary diagnosis. The components in this type of workstation are of the best quality and the easiest to use. "Best quality" is used here in the sense of display quality, rapid display time (1-2 s for the first image), and the availability of user-friendly and useful display functions. If the workstation is used for displaying projection radiographs, multiple 2K monitors are needed. On the other hand, if the workstation is used for CT and MR images, multiple 1K monitors are sufficient. A diagnostic workstation requires a digital Dictaphone to report the findings. The workstation provides software to append the digital voice report to the images. If the radiologist inputs the report himself/herself, the DICOM structured report function should be available in the workstation. Figure 11.3 shows a generic 2K display workstation with two LCD monitors showing P-A and lateral views of two CR chest images.
TABLE 11.5 Display Technology Evolution Phase 3: The Modern Period, 1998-2002
Typical applications: Ultrasound acquisition and review; cardio and angio; PACS; X ray; 3-D reconstructions; mammography (starting)
Displays & resolutions: 640 × 480 CRT Barco; 1200 × 1600 CRT Barco, Siemens, Image Systems, DataRay; 1280 × 1024 CRT Siemens, Image Systems, Barco; 2048 × 2560 CRT Barco, Siemens, Clinton; 1200 × 1600 gray scale LCD Totoku, Eizo; 1536 × 2048 gray scale LCD Dome, Barco; 2048 × 2560 gray scale LCD Dome
Display controllers: Color: VCR (for ultrasound), VGA; Gray scale analog: Barco (Metheus), Dome (Astro, Matrox, Wide all starting); Gray scale digital: Dome 256 shades chosen from 766, Barco 1000 simultaneous shades, RealVision 256 shades chosen from 766
Performance: Image download 100 MB/s on 2 MP, 120 MB/s on 3 MP, 120 MB/s on 5 MP; ability to synchronize image download with vertical refresh added; real-time window/level operation on Barco 5 MP
Operating system: UNIX (in decline), Windows
Spot size: ~2.1 lp/mm CRT; ~2.5 lp/mm LCD
Contrast: CRT contrast ratio 2000:1 over wide viewing angle; LCD contrast ratio 500:1 over narrow viewing angle
Luminance: CRT 350 cd/m2 center, 280 cd/m2 typical; luminance uniformity introduced on Barco CRT (300 cd/m2 over entire screen); luminance noise of P104 CRT phosphor noted, P45 becomes dominant; LCD initially 700 cd/m2, calibrated
Reliability & lifetime: <3% annual failure rate; CRT 30,000-hour calibrated life at full luminance; LCD 15,000-hour calibrated life for back light; LCDs have ~10% cost for back light replacement
Video quality: Analog 0.5- to 1.8-ns rise time, 0.2- to 1-ns jitter; analog focus typically good over entire screen; digital bit errors exist, limited to 1 per billion
Calibration: TQA (Planar-Dome), MediCal Pro (Barco), Smfit (Siemens); many field sites calibrate displays routinely, many still feel it takes too long; intervention-free front calibration introduced for LCD panels (Barco); remote network monitoring of conformance introduced (Barco, Dome)
11.4.2 Review Workstation
A review workstation is used by radiologists and referring physicians to review cases in the hospital wards. The dictation or the transcribed report should already be available with the corresponding images at the workstation. A review workstation may not require 5 mega pixel monitors, because the images have already been read by the radiologist at the diagnostic workstation. With the report available, the referring physicians can use 3 mega pixel or even 1K monitors to visualize the pathology. Diagnostic and review workstations can be combined as a single workstation sharing both diagnostic and review functions, like an alternator. Figure 11.4 shows a generic two-monitor 1K (1600 lines × 1024 pixels) video workstation used in the intensive care unit.
11.4.3 Analysis Workstation
An analysis workstation differs from the diagnostic and review workstations in that it is used to extract useful parameters from images. Some parameters are easy to extract with a simple region of interest (ROI) operation, which can be done from a diagnostic or review workstation; others (e.g., blood flow measurements from DSA, 3-D reconstruction from sequential CT images) are computationally intensive and require an analysis workstation with a more powerful image processor and high-performance software (see, for example, Fig. 4.21B, an analysis workstation displaying a 3-D rendering of fused MRI and fMRI images).
Figure 11.3 A generic 2K display workstation with two LCD monitors showing PA and lateral views of CR chest images.
Figure 11.4 Two-monitor 1K (1600 lines) ICU display workstation showing two CR images. The left-hand CRT monitor shows the current image; all previous images can be accessed within one second on the right-hand monitor by clicking the two lower icons (Previous and Next). Image processing functions are controlled by the icons located at the bottom of the screens.
11.4.4 Digitizing and Printing Workstation
The digitizing and printing workstation is for radiology department technologists or film librarians who must digitize historical films and films from outside the department. The workstation is also used for printing soft copy images to hard copy on film or paper. In addition to the standard workstation components already described, this workstation requires a laser film scanner (see Section 3.2.3), a laser film imager, and a good-quality paper printer. The paper printer is used for pictorial report generation from the diagnostic, review, and editorial and research workstations. A 1K display monitor for quality control purposes would be sufficient for this type of workstation.
11.4.5 Interactive Teaching Workstation
A teaching workstation is used for interactive teaching. It emulates the role of teaching files in the film library but with more interactive features. Figure 11.5 shows a digital mammography teaching workstation for breast imaging.
Figure 11.5 Four mammograms shown on a 2K two-CRT monitor digital mammography teaching workstation. Left: left and right craniocaudal views. Middle: left and right mediolateral oblique views. Right: text monitor with icons for image display and manipulation at the workstation.
11.4.6 Desktop Workstation
The desktop workstation is for physicians or researchers to generate lecture slides and teaching and research materials from images and related data in the PACS database. This workstation uses standard desktop computer equipment to facilitate the user's daily workload. The desktop workstation can also be used as a web client to access images and related information from a web server connected to the PACS controller and archive server, and it is well suited for viewing the ePR with image distribution. Image workstations that directly interact with radiologists and physicians are the most important and visible components in a PACS. To design them effectively, a thorough understanding of the clinical operation environment and its requirements is necessary.
11.5 IMAGE DISPLAY AND MEASUREMENT FUNCTIONS
This section discusses some commonly used image display and measurement functions in the display workstations described in Section 11.4.
11.5.1 Zoom and Scroll
Zooming and scrolling are interactive commands manipulated via a trackball or a mouse. The operator first uses the trackball to scroll about the image, centering the region of interest (ROI) on the screen. The ROI can then be magnified by pressing a designated button to perform the zoom. The image becomes more blocky as the zoom factor increases, reflecting the greater number of times each pixel is replicated.
Although it is useful to magnify and scroll the image on the screen, the field of view decreases in proportion to the square of the magnification factor. Magnification is commonly performed via pixel replication or interpolation. In the former, each pixel value repeats itself several times in both the horizontal and vertical directions; in the latter, the pixel value is replaced by an interpolation of its neighbors. For example, to magnify an image by a factor of 2 by replication is to replicate each pixel 2 × 2 times.
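A minimal NumPy sketch of magnification by pixel replication follows (an illustration assuming an integer zoom factor; not the workstation's actual code).

    import numpy as np

    def zoom_by_replication(image, factor):
        # Each pixel is repeated factor x factor times, so a zoom of 2
        # replicates every pixel 2 x 2 times, as described in the text.
        return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

    tile = np.array([[0, 1], [2, 3]])
    print(zoom_by_replication(tile, 2))
    # [[0 0 1 1]
    #  [0 0 1 1]
    #  [2 2 3 3]
    #  [2 2 3 3]]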
11.5.2 Window and Level
The window and level feature allows the user to control the interval of gray levels to be displayed on the monitor. The center of this interval is called the level value, and the range is called the window value. The selected gray level range is distributed over the entire dynamic range of the display monitor; thus using a smaller window value causes the contrast in the resulting image on the screen to increase. Gray levels present in the image outside the defined interval are clipped to either black or white (or both), according to the side of the interval on which they fall. This function is also controlled by the user via a trackball or mouse. For example, moving the trackball in the vertical direction typically controls the window value, whereas the horizontal direction controls the level value. Window and level operations can be performed in real time by using an image processor with a fast access memory called a lookup table (LUT). A 256-value LUT inputs an 8-bit address (a 4096-value LUT, a 12-bit address) whose memory location contains the value of the desired gray level transformation (linear scaling with clipping). The memory address for the LUT is provided by the original pixel value. Figure 11.6 illustrates the concept of the LUT. In the figure, the original pixel value 5 is mapped to 36 via the LUT. If the window value is fixed to 1 and the level value is changed by 1 from 0 to 255 continuously, and if the image is displayed during each change, the monitor continuously displays all 256 values of the image, one gray level at a time.
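The LUT-based window/level operation can be sketched as follows (a simplified illustration assuming 12-bit stored pixels mapped to an 8-bit display; the function and parameter names are illustrative, not a vendor API).

    import numpy as np

    def window_level_lut(window, level, bits_stored=12):
        # Build a LUT that maps every possible stored pixel value to an
        # 8-bit display value using linear scaling with clipping.
        values = np.arange(2**bits_stored, dtype=np.float64)
        low = level - window / 2.0
        lut = np.clip((values - low) / max(window, 1) * 255.0, 0, 255)
        return lut.astype(np.uint8)

    # Applying the LUT: the original pixel value is the memory address, e.g.,
    # displayed = window_level_lut(window=400, level=40)[ct_image]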
11.5.3 Histogram Modification
A function that is useful for enhancing the displayed image is histogram modification. A histogram of the original image is first obtained and then modified by rescaling each pixel value in the original image. The new enhanced image that is formed will show the desired modification. An example of histogram modification is histogram equalization, in which the shape of the modified histogram is adjusted to be as uniform as possible over all gray levels. The rescaling factor (or the histogram equalization transfer function) is given by:
g = (gmax - gmin) P(f) + gmin    (11.6)
where g is the output (modified) gray level, gmax and gmin are the maximum and minimum gray levels of the modified image, respectively, f is the input (original) gray level, and P(f) is the cumulative distribution function (or integrated histogram) of f.
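Equation (11.6) translates directly into a small LUT-based routine (a sketch for 8-bit integer images using NumPy; not the workstation implementation).

    import numpy as np

    def histogram_equalize(image, bits=8):
        # Eq. (11.6): g = (gmax - gmin) * P(f) + gmin, where P(f) is the
        # cumulative distribution (integrated histogram) of the input image.
        gmin, gmax = 0, 2**bits - 1
        hist, _ = np.histogram(image, bins=2**bits, range=(0, 2**bits))
        cdf = hist.cumsum() / image.size              # P(f)
        lut = ((gmax - gmin) * cdf + gmin).round().astype(np.uint8)
        return lut[image]                             # rescale every pixel via the LUT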
Figure 11.6 Concept of a lookup table (LUT). In this case, the pixel value 5 is mapped to 36 through a preset LUT.
Figure 11.7 shows an example of modifying a chest X-ray image overexposed in the lungs with the histogram equalization method. In this example, the frequency of occurrence of some lower gray level values in the modified histogram has been changed to zero to enforce uniformity. It is seen that some details in the lung have been restored.
11.5.4 Image Reverse
An LUT can be used to reverse the dark and light pixels of an image. In this function, the LUT is loaded with a reverse ramp such that, for an 8-bit image, the value 255 becomes 0 and 0 becomes 255. Image reverse is used to locate external objects, for example, intrathoracic tubes in ICU X-ray examinations.
11.5.5 Distance, Area, and Average Gray Level Measurements
Three simple measurement functions are important for immediate quantitative assessment, because they allow the user to perform physical measurement with the image displayed on the monitor by calibrating the dimensions of each pixel to some physical units or the gray level value to the optical density.
Figure 11.7 Concept of histogram equalization. (A) Chest X ray with lung region overexposed, showing relatively low contrast. (B) The histogram of the chest image. (C) Lung region in the image enhanced with histogram equalization. (D) The modified histogram.
The distance calibration procedure is performed by moving a cursor over the image to define the physical distance between two pixels. Best results are obtained when the image contains a calibration ring or other object of known size. To perform optical density calibration, the user moves a cursor over many different gray levels and makes queries from a menu to determine the corresponding optical densities. Finally, an interactive procedure allows the user to trace a region of interest from which the area and average gray level can be computed.
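Once the pixel size has been calibrated, these measurement functions reduce to a few lines. The sketch below assumes NumPy arrays, a boolean ROI mask, and a (row, column) pixel spacing in millimeters (e.g., obtained from the DICOM Pixel Spacing attribute); it is an illustration, not the workstation code.

    import numpy as np

    def distance_mm(p1, p2, pixel_spacing):
        # Physical distance between two pixel coordinates (row, col).
        dr = (p1[0] - p2[0]) * pixel_spacing[0]
        dc = (p1[1] - p2[1]) * pixel_spacing[1]
        return float(np.hypot(dr, dc))

    def roi_measurements(image, mask, pixel_spacing):
        # Area of the traced region and its average gray level.
        area_mm2 = mask.sum() * pixel_spacing[0] * pixel_spacing[1]
        mean_gray = float(image[mask].mean())
        return area_mm2, mean_gray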
11.5.6 Optimization of Image Perception in Soft Copy
There are three sequential steps to optimize an image for soft copy display: remove the unnecessary background, determine the anatomical regions of interest, and correct for the gamma response of the monitor. These steps can be implemented through a properly chosen LUT.
11.5.6.1 Background Removal The importance of background removal for displaying CR or DR images is discussed in Section 3.3.4. Figures 11.8A and B show a CR image with background and the corresponding histogram, respectively. After the background of Figure 11.8A is removed, the new histogram (Fig. 11.8D) has no values above 710. The new LUT based on the new histogram produces Figure 11.8C, which has better visual quality than Figure 11.8A.
11.5.6.2 Anatomical Regions of Interest It is necessary to adjust the display based on the anatomical regions of interest because tissue contrast varies in different body regions. For example, in CT chest examinations, there are lung, soft tissue, and bone windows (LUTs) to highlight the lungs, heart tissue, and bone, respectively. This method has been used since the dawn of body CT imaging. By the same token, in CR and DR there are also specially designed LUTs for different regions of the body. Figure 11.9 shows four transfer curves used to adjust the pixel values in the head, bone, chest, and abdomen regions in CR.
11.5.6.3 Gamma Curve Correction The relationship between a pixel value and its corresponding brightness on a monitor is the gamma curve, which is nonlinear and differs from monitor to monitor. Adjusting this gamma curve to a linear curve improves the visual quality of the image. For a new monitor, a calibration procedure is necessary to determine this gamma curve, which is then used to modify the LUT. Figure 11.10 shows the gamma curves from two different monitors and their linear correction. A monitor must be recalibrated periodically to maintain its performance.
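The gamma calibration step can be sketched as follows: given the luminance measured (for example, with a photometer) at each driving level, build a LUT that linearizes the monitor response. This is an illustrative sketch under those assumptions, not a vendor calibration procedure.

    import numpy as np

    def gamma_correction_lut(measured_luminance):
        # measured_luminance: one photometer reading per driving level
        # (e.g., 256 values for an 8-bit display), monotonically increasing.
        lum = np.asarray(measured_luminance, dtype=np.float64)
        lum = (lum - lum.min()) / (lum.max() - lum.min())    # normalize to 0..1
        target = np.linspace(0.0, 1.0, lum.size)             # desired linear response
        # For each requested (linear) luminance, find the driving level producing it.
        levels = np.interp(target, lum, np.arange(lum.size))
        return np.round(levels).astype(np.uint16)

    # displayed = gamma_correction_lut(photometer_readings)[image_8bit]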
11.5.7 Montage
A montage represents a selected set of individual images from a CT, MR, US, or any other modality image series. Such groupings are useful because only a few images from most series show the particular pathology or features of interest to the referring physicians or radiologists. For example, an average MR examination may contain half a dozen sequences with an average of 30 images per sequence, which gives rise to 180 images in the study. A typical montage would have 20 images, containing the most significant features representing the exam. So, typically, only 10% of the images taken in an examination are essential and the rest are supplemental. A montage image collection selected by the radiologist would allow the capture of the most important images in a single display screen. Each image selected in the montage can be tagged in its own DICOM header for future quick reference and display.
Figure 11.8 Results after background removal. (A) Original pediatric CR image with background (arrows, white area near the borders). (B) Corresponding histogram. (C) Same CR image after background removal, displayed with a different LUT based on the new histogram shown in (D). (D) Corresponding histogram of the background removed CR image shown in (C). All pixels with values greater than 710 have been removed.
Figure 11.9 Pixel value adjustment for CR images in different body regions: head (H), bone (B), chest (C), and abdomen (A). (Courtesy of Dr. J. Zhang.)
Figure 11.10 (A) and (B) Gamma curves of two monitors. (C) The linear response curve of both monitors after the gamma correction. (Courtesy of Dr. J. Zhang.)
11.6 WORKSTATION USER INTERFACE AND BASIC DISPLAY FUNCTIONS
11.6.1 Basic Software Functions in a Display Workstation
Some of the basic software functions described in Section 11.5 are necessary in a display workstation to facilitate its operation. These functions should be easy to use with a single click of the mouse through the patient directory, study list, and image processing icons on the various monitors. The keyboard is used only for retrieving information not stored on the workstation's local disks; in this case, the user must input either the patient's name or ID or a disease category as the key for searching the long-term archive. Table 11.6 lists some basic software functions required in a display workstation. Among these functions, the most often used are: select patient, sort patient directory, library search, select image, cine mode, zoom/scroll, and window and level. Results of using some of these functions are shown in Figures 11.11-11.14. Figure 11.11 shows the Patient Directory with the patient list (with fictitious names), ID, date and time of the study, modality, procedure, and physician's name. The leftmost column is a read icon that indicates whether the study has been read by a radiologist. Figure 11.12A depicts a single monitor displaying three views of an MRI study; Figure 11.12B shows a two-monitor workstation displaying a transverse image sequence on the left monitor and a coronal view on the right monitor. The bottom of each screen shows the user interface icons described in Section 11.6.2. Figure 11.13 shows a two-monitor view of a CT chest exam with the soft tissue window. Figure 11.14 shows a two-monitor view of an obstetrics US exam showing a fetus.
TABLE 11.6 Important Software Functions in a Display Workstation
Directory
  Patient directory: Name, ID, age, sex, date of current exam
  Study list: Type of exam, anatomical area, date studies taken
Display
  Screen reconfiguration: Reconfigures each screen for the convenience of image display
  Monitor selection: Left, right
  Display: Displays images according to screen configuration and monitor selected
Image manipulation
  Dials: Brightness, contrast, zoom, and scroll
  LUT: Predefined lookup tables (bone, soft tissue, brain, etc.)
  Cine: Single or multiple cine on multimonitors for CT and MR images
  Rotation: Rotates an image
  Negative: Reverses gray scale
Utilities
  Montage: Selects images to form a montage
  Image discharge: Deletes images of discharged patients (a privileged operation)
  Library search: Retrieves historical examinations (requires keyboard operation)
  Report: Retrieves reports from RIS
  Measurements: Linear and region of interest
Figure 11.11 Patient Directory with the patient list (with fictitious names), ID, date and time of the study, modality, procedure, and physician’s name.
11.6.2 Workstation User Interface
Most PACS manufacturers have implemented the aforementioned display and measurement functions in their workstations in the form of a library. The user can use a pull-down menu to customize the user interface at the workstation. The 12 icons at the bottom of each display window in Figures 11.11-11.14 are examples of a customized user interface icon tool bar designed by the user via a pull-down menu. Figure 11.15 shows these 12 icons; their descriptions are given in Table 11.7.
Figure 11.12 (A) A single LCD monitor displaying three views of an MRI study. (B) Two-LCD monitor workstation displaying a transverse view on the left monitor and a coronal view on the right monitor.
Figure 11.13 Two-LCD monitor workstation showing a CT chest exam with the soft tissue window.
Figure 11.14 Two-LCD monitor workstation showing an obstetrics US exam of a fetus.
Figure 11.15 User interface display toolbars and icons.
TABLE 11.7 Description of the User Interface Icons and Tool Bars Shown in Figure 11.15
1. Print the selected image
2. Save the selected image
3. Zoom in and out of the image
4. Show a list of display layouts and set the layout
5. Set the bone window/level
6. Set the soft tissue window/level
7. Set the auto display window/level
8. Edge enhancement filter
9. Image measurement functions
10. Invert the image
11. Reset the image display
12. Select the text level in the image (a lot of text, some, minimum)
11.7 DICOM PC-BASED DISPLAY WORKSTATION
Most PACS workstations nowadays are personal computer (PC) based, running Windows 98 and later or Windows XP as the operating system. This trend is very natural for the integration of PACS and the electronic patient record (ePR), because the latter is strictly a PC-based system. In this section we give a more precise architecture of the DICOM PC-based display workstation in both hardware and software. There may be some duplication of material from previous sections, but, from a system integration point of view, this section serves as a self-contained guideline.
11.7.1 Hardware Configuration
The hardware configuration of the workstation is shown in Figure 11.1.
11.7.1.1 Host Computer The host computer of the workstation can be a PC with an Intel Pentium 4 processor at over 1 GHz, 256-512 Mbytes of RAM, a PCI bus structure, and 40-GB disks as local storage.
11.7.1.2 Display Devices Display devices include single or multiple LCD display monitors and video boards. The display monitor can range from a 24-in. 1600 × 1280
to 2.5K × 2K resolution in portrait mode. The display board should be 4 mega pixels or 5 mega pixels.
11.7.1.3 Networking Equipment In the display workstation, asynchronous transfer mode (ATM) or fast Ethernet technology can be used as the primary means of communication, with conventional Ethernet as the backup.
11.7.2 Software System
The software can be developed on a Microsoft Windows or Windows XP platform in the Visual C/C++ programming environment. WinSock communication over TCP/IP, Microsoft Foundation Class (MFC) libraries, a standard image processing library, the UC Davis DICOM library, and Windows-based PACS API libraries can be used as development tools. The user interface of the display workstation is icon/menu-driven with user-friendly graphics.
11.7.2.1 Software Architecture The architecture of the software system is divided into four layers: the application interface layer, the application libraries layer, the system libraries layer, and the operating system (OS) driver layer, which sits above the hardware layer. Figure 11.16 shows the software architecture of the display workstation.
Figure 11.16 Software architecture of a PC-based display workstation. (Courtesy of Lei, Zhang, and Wong).
The application interface layer is the top layer of the software system and interfaces with the end user of the display workstation. This layer is composed of four modules: (1) the image communication software package, (2) patient folder management, (3) the image display program, and (4) the DICOM query and retrieve software package. This layer directly supports any application that requires access to PACS and radiological images. In the application library layer, the PACS API libraries provide all library functions needed to support the four modules in the application interface layer. Here the UC Davis DICOM Network Transport libraries and DIMSE-C libraries provide the DICOM communication protocols and functions, and the specific vendor's image processing library supplies library functions for image display on the workstation. The system library layer is responsible for providing the Windows system libraries, API functions, and MFC to serve as a development platform. The OS driver layer provides the Windows OS and its drivers for connecting with the hardware components, which include the vendor's driver for its image board, the ATM driver for a 155-Mbps ATM adapter, and the 100 Base-T Ethernet driver for an Ethernet adapter. All data flow between the layers of the software is shown in Figure 11.17.
Figure 11.17 Data flow in the DICOM-Compliant PC-based display workstation. (Courtesy of Lei, Zhang, and Wong.)
11.7.2.2 Software Modules in the Application Interface Layer In the workstation, the user has access only at the application interface layer, which is composed of the four modules described below.
Image Communication This module is responsible for supporting DICOM services with DICOM communication protocols over TCP/IP to perform two DICOM storage roles: Storage Service Class Provider (SCP) and Storage Service Class User (SCU). The DICOM services include C-Echo for verification, C-Store for storage, C-Find for querying, and C-Move for retrieving.
Patient Folder Management This module manages the local storage with hierarchical, or tree-structured, directories to organize patient folders within the display workstation. The DICOM decoder is used to extract patient demographic data and examination records from the header of a DICOM image. The reformatter of the module changes the image from DICOM format to the vendor's image board format for display. The data extracted by the DICOM decoder and the image produced by the reformatter are inserted into an individual patient folder. A patient folder contains three hierarchical levels: patient level, study level, and series level. The hierarchy starts with a root directory in the local storage system, that is, the hard disks of the display workstation. Figure 11.18 is a diagram of the patient folder structure.
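As a rough illustration of the decoder/filing step, the sketch below uses the open-source pydicom library (not the UC Davis library mentioned in the text) to read the header and place an incoming image in the patient/study/series hierarchy of Figure 11.18; the root path is a made-up example.

    from pathlib import Path
    import shutil

    import pydicom

    def file_into_patient_folder(dicom_file, root="/pacs/local_storage"):
        # Decode the DICOM header (without loading pixel data) and file the
        # image under root/<patient>/<study>/<series>/.
        ds = pydicom.dcmread(dicom_file, stop_before_pixels=True)
        target = (Path(root) / str(ds.PatientID)
                  / str(ds.StudyInstanceUID) / str(ds.SeriesInstanceUID))
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy(dicom_file, target / Path(dicom_file).name)
        return target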
Figure 11.18 Three-level hierarchy of the patient folders managed by the display workstation. (Courtesy of Lei, Zhang, and Wong.)
A patient folder is automatically created in the workstation on receipt of the first image of the patient. Subsequent images from individual studies and series are inserted into the patient folder accordingly. The patient folder can be automatically deleted from the workstation based on aging criteria such as the number of days since the folder was created or the discharge or transfer of the patient. Figure 11.19 presents the interface to the three hierarchical levels of the patient folders.

Image Display Program The image display program supports both single and dual 24-in., 1600 × 1280 (up to 2.5K × 2K) resolution portrait LCD monitors to display patient information and radiological images on the workstation. Images with the header format in a patient folder can be displayed via the image display program. The screen layout of the workstation is user adjustable: one image on one monitor, two on one, four on one, and so forth. The display program supports multimodality display for CT, MR, US, and CR/DR in the sense that one monitor can display one modality while the second monitor displays images from another modality. Image manipulation functions such as zoom, pan, rotation, flip, window and level adjustment, and invert should be available. An automatic default window and level preset is applied during image loading to minimize manipulation time. Real-time zoom and contrast adjustment should be easily performed with the mouse.
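Window and level adjustment maps a chosen range of stored pixel values onto the display gray scale. The sketch below shows the standard linear mapping; the window/level values are illustrative, not presets taken from the text.

    import numpy as np

    def window_level(image, window, level):
        """Map pixel values in [level - window/2, level + window/2] to 0-255."""
        low = level - window / 2.0
        high = level + window / 2.0
        out = (image.astype(np.float32) - low) / (high - low)  # normalize to 0-1
        out = np.clip(out, 0.0, 1.0)
        return (out * 255).astype(np.uint8)

    # Example: a typical CT soft-tissue setting (window 400, level 40, in HU).
    ct_slice = np.random.randint(-1000, 1000, (512, 512), dtype=np.int16)
    display = window_level(ct_slice, window=400, level=40)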
[Figure 11.19 panels: a Patient List (patient name, patient ID, birth date), a Study List for a selected patient (patient name, study ID, study date), and a Series List for a selected study (patient name, study ID, series number).]
Figure 11.19 Patient folders in the diagnostic workstation. Each folder contains three hierarchical levels: patient level, study level, and series level. (Courtesy of Lei, Zhang, and Wong.)
Query and Retrieve This module is a DICOM query/retrieve service class user (Q/R SCU) that queries and retrieves patient studies from the PACS long-term archive or directly from the radiological imaging systems. The query and retrieve module supports the DICOM C-Echo, C-Store, C-Find, and C-Move services. With this module, the workstation can access query/retrieve service class providers that use the patient root and study root Q/R information models.
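The following sketch shows what such a study-level query looks like in practice. It uses the open-source pynetdicom library rather than the workstation's own libraries, and the archive host name, port, and AE titles are hypothetical.

    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

    # Query keys for the patient root information model at the STUDY level.
    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = "1234567"
    query.StudyDate = ""          # empty value = return this attribute
    query.StudyInstanceUID = ""

    ae = AE(ae_title="DISPLAY_WS")
    ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

    # Associate with the PACS archive (hypothetical host, port, and AE title).
    assoc = ae.associate("pacs-archive", 104, ae_title="PACS_QR_SCP")
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(
                query, PatientRootQueryRetrieveInformationModelFind):
            if status and status.Status in (0xFF00, 0xFF01):  # pending = a match
                print(identifier.PatientID, identifier.StudyDate)
        assoc.release()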
CHAPTER 12
Integration of HIS, RIS, PACS, and ePR
[Figure 12.0 components: HIS/RIS database and database gateway, imaging modalities, acquisition gateway, PACS controller and archive server, application servers, workstations, and web server, with reports flowing through the generic PACS data flow.]
Figure 12.0 Integration of HIS, RIS, and PACS.
PACS is an imaging management system that requires pertinent data from other medical information systems for effective operation. Among these systems, data from the hospital information system (HIS) and the radiology information system (RIS) are the most important. Many functions in the PACS controller and archive server described in Chapter 10 (image routing, prefetching, automatic grouping, etc.) rely on data extracted from both the HIS and the RIS. This chapter presents some HIS and RIS data that are important to the PACS operation and discusses how to interface the HIS and RIS with PACS to obtain these data. Another topic of importance related to PACS image data distribution is the electronic patient record (ePR), or electronic medical record (eMR), discussed in Section 12.5. Figure 12.0 illustrates the key components (unshaded) in the system integration of HIS, RIS, PACS, and ePR.
12.1 HOSPITAL INFORMATION SYSTEM
The HIS is a computerized management system for handling three categories of tasks in a health care environment:

1. Support clinical and medical patient care activities in the hospital.
2. Administer the hospital's daily business transactions (financial, personnel, payroll, bed census, etc.).
3. Evaluate hospital performance and costs, and project long-term forecasts.

Many clinical departments in a health care center, such as radiology, pathology, pharmacy, clinical laboratories, and other units, have their own specific operational requirements that differ from the general hospital operation. For this reason, special information systems may be needed in these departments. Often these information systems are under the umbrella of the HIS, which maintains their operations. Other departments may have their own separate information systems, and interface mechanisms are built to integrate data between these systems and the HIS. For example, the RIS was originally a component of the HIS; later, independent RIS were developed because of the limited support offered by the HIS for the special information required by the radiology operation. However, the integration of these two systems is still extremely important if the health care center is to operate as a total functional entity. Large-scale HIS mostly use mainframe computers. These can be purchased from a manufacturer with certain customization software, or grown in-house through the integration of many commercial health information systems progressively over the years. A home-grown system may contain many reliable legacy components but have out-of-date technology. Therefore, to interface the HIS to PACS, caution must be taken to circumvent the legacy problem. Figure 12.1 shows the major components in a typical HIS. Most HIS are an integration of many information systems, started the day the health care data center was established, with older components replaced by newer ones over many years of operation. In addition to taking care of the clinical operation, the HIS also supports hospital and health care center business and administrative functions. It provides automation for such events as patient registration and admissions, discharges, and transfers (ADT), as well as patient accounting. It also provides on-line access to patient clinical results (e.g., laboratory, pathology, microbiology, pharmacy, radiology). The system broadcasts the patient demographic and encounter information in real time to the RIS using the HL7 standard; an example of such a message is sketched below. Through this path, ADT and other pertinent data can be transmitted to the RIS and the PACS (see Fig. 12.2).
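The sketch below shows the general shape of an HL7 version 2 ADT (admission) message and how its pipe-delimited segments can be picked apart. The message content (facility names, patient ID, dates) is illustrative only and is not taken from any system described in the text.

    # A hypothetical HL7 v2.x ADT^A01 (patient admission) message.
    hl7_adt = (
        "MSH|^~\\&|HIS|MAINHOSP|RIS|RADIOLOGY|20040112103000||ADT^A01|MSG0001|P|2.3\r"
        "EVN|A01|20040112103000\r"
        "PID|1||1234567^^^MAINHOSP||SMITH^JOHN||19450212|M\r"
        "PV1|1|I|3WEST^302^1\r"
    )

    # Parse segments and fields with plain string splitting.
    segments = {line.split("|")[0]: line.split("|")
                for line in hl7_adt.split("\r") if line}
    patient_id = segments["PID"][3].split("^")[0]   # "1234567"
    name = segments["PID"][5].replace("^", ", ")    # "SMITH, JOHN"
    event = segments["MSH"][8]                      # "ADT^A01"
    print(event, patient_id, name)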
12.2 RADIOLOGY INFORMATION SYSTEM
The RIS is designed to support both the administrative and clinical operations of a radiology department, to reduce administrative overhead, and to improve the quality of radiological examination delivery. The RIS therefore manages general radiology patient demographics and billing information, procedure descriptions and scheduling, diagnostic reports, patient arrival scheduling, film location, film movement, and examination room scheduling. The RIS configuration is very similar to that of the HIS, except that it is on a smaller scale. RIS equipment consists of a computer system with peripheral devices such as RIS workstations (normally with no image display), printers, and bar code readers. Most independent RIS are autonomous systems with limited access to the HIS. However, some HIS offer an embedded RIS as a subsystem with a higher degree of integration.

[Figure 12.1 components: a hospital business and administration component (material services, accumulated payments, recharges, budgeting, general ledger, patient ADT/billing/accounts receivable, payroll, cost accounting) and a hospital operation component (OR scheduling, nursing management, clinical appointments, dietary, doctor ID system, employee health system, medical records system, medical record abstracting, patient ADT, pharmacy, radiology, pathology, referring doctor, cancer registry, and STOR), with a path from STOR to the RIS and PACS.]
Figure 12.1 Major components in a typical HIS. The software package STOR (a trade name; other HIS may use a different name) provides a path for the HIS to distribute HL7-formatted data to the outside world.

[Figure 12.2 components: an RIS with its RIS workstation, and a PACS whose workstation emulates the RIS workstation.]
Figure 12.2 A PACS workstation emulates a RIS workstation. It can access RIS data, but the RIS data cannot be deposited into the PACS.

The RIS maintains many types of patient- and examination-related information, including medical and administrative data, patient demographics, examination scheduling, diagnostic reporting, and billing information. The major tasks of the system include:

• Process patient and film folder records.
• Monitor the status of patients, examinations, and examination resources.
• Schedule examinations.
• Create, format, and store diagnostic reports with digital signatures.
• Track film folders.
• Maintain timely billing information.
• Perform profile and statistics analysis.
The RIS interfaces to the PACS through the HL7 standard over TCP/IP on Ethernet, using a client/server model with the trigger mechanism shown in Figure 12.6 (Section 12.3.7.1).
Events such as examination scheduling, patient arrival, and examination begin and end times trigger the RIS to send previously selected information associated with the event (patient demographics, examination description, diagnostic report, etc.) to the PACS in real time.
12.3 INTERFACING PACS WITH HIS AND RIS
There are three methods of transmitting data between information systems: workstation emulation, database-to-database transfer, and the interface engine.

12.3.1 Workstation Emulation
This method allows a workstation of one information system to emulate a workstation of a second system, so that data from the second information system can be accessed by the first. For example, a PACS workstation can be connected to the RIS with a simple computer program that emulates a RIS workstation. From the PACS workstation, the user can then perform any RIS function, such as scheduling a new examination, updating patient demographics, recording a film movement, or viewing diagnostic reports. This method has two disadvantages: first, there is no data exchange between the RIS and the PACS; second, the user is required to know how to use both systems. Also, a RIS or HIS workstation cannot be used to emulate a PACS workstation, because the latter is too specialized for the HIS and RIS to emulate. Figure 12.2 shows the mechanism of the workstation emulation method.

12.3.2 Database-to-Database Transfer
The database-to-database transfer method allows two or more networked information systems to share a subset of data by storing the data in a common local area. For example, ADT data from the HIS can be reformatted to the HL7 standard and broadcast periodically to a local database in the HIS. A TCP/IP communication protocol can then be set up between the HIS and the RIS, allowing the HIS to initialize the local database and broadcast the ADT data to the RIS through either a pull or a push operation. This method is most often used to share information between the HIS and the RIS, as shown in Figure 12.3; a minimal sketch of such a broadcast appears below.

[Figure 12.3 components: the HIS with STOR, connected by HL7 over TCP/IP to the RIS and its local HL7 database.]
Figure 12.3 Database-to-database transfer using a common data format (HL7) and communication protocol (TCP/IP). Data from the HIS are accumulated periodically at STOR and broadcast to the RIS.
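The broadcast itself is commonly carried over TCP/IP using the HL7 minimal lower layer protocol (MLLP), which frames each message between a vertical-tab byte and a file-separator/carriage-return pair. The sketch below sends one ADT message this way; the host, port, and message content are hypothetical.

    import socket

    START, END = b"\x0b", b"\x1c\x0d"   # MLLP frame delimiters

    def broadcast_hl7(message: str, host: str = "ris-gateway", port: int = 2575):
        """Push one HL7 message to a receiving system over TCP/IP (MLLP framing)."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(START + message.encode("ascii") + END)
            ack = sock.recv(4096)        # the receiving system returns an HL7 ACK
            return ack

    adt = "MSH|^~\\&|HIS|MAINHOSP|RIS|RADIOLOGY|20040112103000||ADT^A01|MSG0002|P|2.3\r"
    # broadcast_hl7(adt)   # commented out: requires a live listener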
12.3.3 Interface Engine
The interface engine provides a single interface and language to access distributed data in networked heterogeneous information systems. In operation, it appears to users that they are operating on a single integrated database from their workstations. In the interface engine, a query protocol is responsible for analyzing the requested information, identifying the required databases, fetching the data, assembling the results in a standard format, and presenting them at the workstation. Ideally, all these processes are done transparently to the user and without affecting the autonomy of each database system. Building a universal interface engine is not a simple task; most currently available commercial interface engines are tailored to a limited set of specific information systems. Figure 12.4 illustrates the concept of an interface engine for HIS, RIS, and PACS integration. The DICOM Broker/IE shown in Figure 6.1 is an example of using the interface engine concept to integrate the RIS with the PACS.
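To make the idea concrete, the sketch below routes a single patient-centered query to several back-end adapters and merges their answers into one result, which is roughly what an interface engine does internally. All class and function names here are hypothetical; they do not correspond to any commercial product.

    from typing import Callable, Dict

    # Each adapter hides one heterogeneous system behind the same call signature.
    Adapter = Callable[[str], dict]

    def his_adapter(patient_id: str) -> dict:
        return {"demographics": {"patient_id": patient_id, "name": "SMITH, JOHN"}}

    def ris_adapter(patient_id: str) -> dict:
        return {"reports": [{"study_id": "4402", "impression": "No acute disease."}]}

    def pacs_adapter(patient_id: str) -> dict:
        return {"studies": [{"study_id": "4402", "images": 120}]}

    class InterfaceEngine:
        def __init__(self, adapters: Dict[str, Adapter]):
            self.adapters = adapters

        def query_patient(self, patient_id: str) -> dict:
            """Fan the query out to every system and assemble one result."""
            result = {"patient_id": patient_id}
            for name, adapter in self.adapters.items():
                result[name] = adapter(patient_id)
            return result

    engine = InterfaceEngine({"HIS": his_adapter, "RIS": ris_adapter, "PACS": pacs_adapter})
    print(engine.query_patient("1234567"))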
12.3.4 Reasons for Interfacing PACS with HIS and RIS
In a hospital environment, interfacing the PACS, RIS, and HIS has become necessary to enhance the diagnostic process, PACS image management, RIS administration, and research and training, as described in the following subsections.

12.3.4.1 Diagnostic Process The diagnostic process at the PACS display workstation includes the retrieval not only of the images of interest but also of pertinent textual information describing the patient history and studies. Along with the image data and the image description, a PACS also provides all related text information acquired and managed by the RIS and the HIS in a way that is useful to a radiologist during the diagnostic process. RIS and HIS information such as the clinical diagnosis, radiological reports, and patient history is necessary at the PACS workstation to complement the images from the examination under consideration.
[Figure 12.4 layout: the interface engine sits at the center; the RIS, pathology, pharmacy, and other information systems connect on the left with HL7 over TCP/IP; the PACS, DICOM visible light images, and digital microscopy imaging systems connect on the right with DICOM over TCP/IP; the electronic patient record connects at the bottom.]
Figure 12.4 The principle of an interface engine. Left: HL7 textual data. Right: DICOM image data. Bottom: the electronic patient record (ePR, Section 12.5) can carry image and textual data and messages. Message standards: LOINC, logical observation identifier names and codes; NDC, national drug codes; UMDNS, universal medical device nomenclature system; IUPAC, International Union of Pure and Applied Chemistry; HOI, health outcomes institute; UMLS, unified medical language system; SNOMED, systemized nomenclature of medicine; ICD (ICD-9-CM), the International Classification of Diseases, ninth edition, clinical modification.
12.3.4.2 PACS Image Management Some information provided by the RIS can be integrated into the PACS image management algorithms to optimize the grouping and routing of image data on the network to the requesting locations (see Section 10.2). In the PACS database, which archives huge volumes of images, a sophisticated image management system is required to handle the deposit and distribution of these image data.

12.3.4.3 RIS Administration Planning a digital-based radiology department requires the reorganization of some administrative operations carried out by the RIS. For example, the PACS can provide the image archive status and the image data file information to the RIS. RIS administration operations would also benefit from the HIS by gaining knowledge about patient ADT.

12.3.4.4 Research and Training Much research and teaching in radiology involves mass screening of clinical cases and determining what constitutes normal versus abnormal conditions for a given patient population. The corresponding knowledge includes diverse types of information that need to be correlated, such as image data, results from analyzed images, medical diagnoses, patient demographics, study descriptions, and various patient conditions. Mechanisms are needed to access and retrieve data from the HIS and the RIS during a search for detailed medical and patient information related to image data. Standardization and cooperation among diverse medical database systems such as HIS, RIS, and PACS are therefore critical to the successful management of research and teaching in radiology.

12.3.5 Some Common Guidelines
To interface the HIS, RIS, and PACS, some common guidelines are necessary:

(1) Each system (HIS, RIS, PACS) remains unchanged in its configuration, data, and functions performed.
(2) Each system is extended in both hardware and software to allow communication with the other systems.
(3) Only data are shared; functions remain local. For example, RIS functions cannot be performed at the PACS or the HIS workstation.

Keeping each system specific and autonomous simplifies the interface process, because database updates are not allowed at a global level. Following these guidelines, successfully interfacing HIS, RIS, and PACS requires the following steps:

(1) Identify the subset of data that will be shared by the other systems, and set up access rights and authorization.
(2) Convert the subset of data to the HL7 standard form. This step, which consists of designing a high-level presentation, resolving data inconsistencies, and agreeing on naming conventions, can be accomplished by using a common data model and data language and by defining rules of correspondence between the various data definitions.
(3) Define the protocol of data transfer (e.g., TCP/IP or DICOM).

12.3.6 Common Data in HIS, RIS, and PACS
The system software in the PACS archive server described in Section 10.2 requires certain data from the HIS and RIS in order to archive images and associated data in permanent storage and to distribute them to workstations properly and in a timely manner. Figure 12.5 illustrates the data common to the HIS, RIS, and PACS. Table 12.1 describes the data definitions, their origins and destinations, and the actions that trigger the system software functions.

[Figure 12.5 flow: HIS events (patient admission, patient transfer, patient discharge, order entry) pass to the RIS (insert patient demographics, order received and scheduling, patient arrival, examination canceled, procedure complete, report approved) and on to the PACS, which responds with actions such as prefetch/table lookup/schedule retrieval, insert procedure description and images, insert report text, update or clean up patient information, forward or clean up the image folder, migrate the case, and make the report impression accessible. The event numbers (1) through (9) correspond to the events in Table 12.1.]
Figure 12.5 Information transfer from the HIS to the RIS and from the RIS to the PACS.

12.3.7 Implementation of the RIS-PACS Interface
The RIS-PACS interface can be implemented by either the trigger mechanism or the query protocol described below.

12.3.7.1 Trigger Mechanism Between Two Databases The PACS is notified in HL7 format when the following events occur in the RIS: ADT, order received, patient arrival, examination canceled, procedure complete, and report approved.
TABLE 12.1 Information Transferred Between HIS, RIS, and PACS Triggered by the PACS Server

(1) Admission. Message: previous images/reports. From: HIS/RIS. To: PACS server. Action: preselect images and reports, transfer from permanent archive to workstations. Location: WS at FL, RR.
(2) Order entry. Message: previous images/reports. From: RIS. To: PACS server, scanner. Action: check event (1) for completion. Location: WS at FL, RR.
(3) Arrival. Message: patient arrival. From: RIS. To: PACS server, scanner. Action: check events (1) and (2) for completion. Location: WS at FL, RR.
(4) Exam complete. Message: new images. From: scanner. To: RIS, PACS server. Action: new images to folder manager and WS. Location: temporary archive; WS at FL, RR.
(5) Dictation. Message: "wet" reading. From: RR. To: digital Dictaphone. Action: dictation recorded on DD, digital voice to folder manager and to WS. Location: DD; WS at FL, RR.
(6) Transcript. Message: preliminary report. From: RR. To: RIS, PACS server. Action: preliminary report to RIS, temporary archive, and WS; dictation erased from DD. Location: RIS; temporary archive; WS at FL, RR.
(7) Signature. Message: final report. From: RR. To: RIS, PACS server. Action: final report to RIS, to WS, and to temporary archive; preliminary report erased. Location: RIS; temporary archive; WS at FL, RR.
(8) Transfer. Message: patient transfer. From: HIS/RIS. To: PACS server. Action: transfer image files. Location: WS at new location.
(9) Discharge. Message: images, report. From: HIS/RIS. To: PACS server. Action: patient folder copied from temporary to permanent storage; patient folder erased from WS. Location: WS at FL, RR; temporary and permanent storage.

DD, digital Dictaphone; FL, floors in the ward; RR, reading rooms in the radiology department; WS, workstations.
The application level of the interface software waits for the occurrence of one of these events and triggers the corresponding data to be sent. The communication level transfers the HL7 file to the PACS server with the processes send (to PACS) and ris_recv. The PACS server receives this file and archives it in database tables for subsequent use. Figure 12.6 shows the trigger mechanism interface; a minimal sketch of such a handler follows the figure captions below. The trigger mechanism is used in a systematic and timely fashion when a small amount of predefined information from the RIS is needed in the PACS. In addition to requiring additional storage allocation in both databases, this method is tedious for information updating and is not suitable for user queries.

12.3.7.2 Query Protocol The query protocol allows access to information from the HIS, RIS, and PACS databases through application-layer software running on top of these heterogeneous database systems. From a PACS workstation, users can retrieve information uniformly from any of these systems and automatically integrate it into the PACS database. Figure 12.7 illustrates the query protocol. The DICOM query/retrieve service class described in Section 7.4.5.2 is one method of implementing such a query mechanism. The application-layer software utilizes the following standards:

• SQL as the global query language
• The relational data model as the global data model
• TCP/IP communication protocols
• The HL7 data format

[Figure 12.6 flow: a new RIS event invokes the triggering mechanism, which retrieves the RIS data related to the event from the RIS database, reformats it to HL7, and sends it over Ethernet TCP/IP (send-to-PACS, receive-from-RIS) to the PACS database application programs, which read the RIS/HIS data and store it in the PACS database.]
Figure 12.6 RIS-PACS interface architecture implemented with a database-to-database transfer using a trigger mechanism.

[Figure 12.7 flow: a PACS query to the RIS is sent over TCP/IP (send-to-RIS, receive-from-PACS), translated to the RIS query program, and executed against the RIS database; the results are formatted and packaged into HL7, returned over TCP/IP (send-to-PACS, receive-from-RIS), and used by the PACS program.]
Figure 12.7 RIS-PACS interface with a query protocol.
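A minimal sketch of the RIS-side trigger mechanism is shown below: when an event of interest occurs, the corresponding HL7 message is built and pushed to the PACS server. The event names, the formatter, and the send function are hypothetical placeholders for the processes named in the text (send to PACS, ris_recv).

    # Events that the RIS reports to the PACS (Section 12.3.7.1).
    TRIGGER_EVENTS = {"ADT", "ORDER_RECEIVED", "PATIENT_ARRIVAL",
                      "EXAM_CANCELED", "PROCEDURE_COMPLETE", "REPORT_APPROVED"}

    def on_ris_event(event_type: str, record: dict) -> None:
        """Application level: react to a new RIS event."""
        if event_type not in TRIGGER_EVENTS:
            return                                         # not relevant to the PACS
        hl7_message = format_as_hl7(event_type, record)    # hypothetical formatter
        send_to_pacs(hl7_message)                          # communication level

    def format_as_hl7(event_type: str, record: dict) -> str:
        # Placeholder: a real implementation would emit the proper HL7 segments.
        return f"MSH|^~\\&|RIS|RAD|PACS|ARCHIVE|...||{event_type}|...\r"

    def send_to_pacs(message: str) -> None:
        # Placeholder for the TCP/IP (e.g., MLLP) transfer shown in Section 12.3.2.
        print("sending", len(message), "bytes to the PACS server")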
12.3.8 An Example—The IHE Patient Information Reconciliation Profile
We use the IHE patient information reconciliation profile as an example to explain how the work flow benefits from the integration of HIS, RIS, and PACS (Fig. 12.8). This integration profile extends the scheduled work flow profile by providing the means to match images acquired for an unidentified patient (for example, during a trauma case in which the patient ID is not known) with the patient's registration and order history. In the trauma example, this allows subsequent reconciliation of the patient record with images acquired (either without a prior registration or under a generic registration) before the patient's identity could be determined. Enabling this after-the-fact matching greatly simplifies these exception-handling situations. The information systems involved in this integration profile are:

• Enterprise-wide information systems that manage patient registration and services ordering (ADT/registration system, HIS)
• Radiology departmental information systems that manage department scheduling (RIS) and image management/archiving (PACS)
• Acquisition modalities

[Figure 12.8 work flow. Goal: add to the scheduled work flow the ability to reconcile trauma patients and other exception cases. Steps: (1) the patient is entered as "John Doe" at emergency registration in the HIS; (2) the ER physician's radiology orders are placed in the RIS; (3) the order is broken down into procedure steps; (4) the modality retrieves the patient and procedure information from the modality worklist, the patient is positioned, and images are acquired; (5) the images are stored in the PACS image manager and archive; (6) the diagnostic images and demographic information are presented to the radiologist at the diagnostic workstation; (7) the report and images are stored in the report repository for network access; (8) when the patient is later identified, the proper data are entered; (9) the updated demographic information is sent to the order placer and order filler, and the demographic data stored with the images are updated.]
Figure 12.8 The IHE patient information reconciliation profile as an example of the HIS-RIS-PACS interface (Carr, 2003). Unidentified patient examination: Steps 1–7. Reconciliation of the patient record: Steps 8, 9, 2, . . . , 7.
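After the patient is identified (steps 8 and 9), the demographic data stored with the already-acquired images must be corrected. The sketch below shows the idea for a folder of DICOM files using the pydicom library; in a real IHE environment this reconciliation is driven by HL7/DICOM update messages rather than by editing files directly.

    from pathlib import Path
    import pydicom

    def reconcile_patient(study_dir: str, new_id: str, new_name: str) -> int:
        """Replace the 'John Doe' demographics in every image of a study."""
        updated = 0
        for path in Path(study_dir).glob("*.dcm"):
            ds = pydicom.dcmread(path)
            ds.PatientID = new_id          # e.g., the registration system's real ID
            ds.PatientName = new_name      # e.g., "SMITH^JOHN"
            ds.save_as(path)
            updated += 1
        return updated

    # reconcile_patient("/pacs/local_storage/JOHNDOE/1.2.840.99.1", "1234567", "SMITH^JOHN")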
12.4 INTERFACING PACS WITH OTHER MEDICAL DATABASES

12.4.1 Multimedia Medical Data
The many consultation functions of a radiology specialist in a health care center include consulting with primary and referring physicians on prescribing the proper radiological examinations for patients, performing procedures, reading images from procedures, dictating and confirming diagnostic reports, and reviewing images with referring physicians and providing consultation. On the basis of these radiological images, reports, and consultations, the referring physicians prescribe the proper treatment plans for their patients. Radiologists also use images from examinations and reports to train fellows, residents, and medical students. In their practice, radiologists often request necessary patient information (e.g., demographic data, laboratory tests, and consultation reports from other medical specialists) from medical records. Radiologists also review literature from library information systems and give formal rounds to educate colleagues on radiological procedures and new radiological techniques. Thus the practice of radiology requires integrating various types of information—voice, text, medical records, images, and video recordings—into proper files and databases for easy and quick retrieval. These various types of information exist on different media and are stored in data systems of different types. Advances in computer and communication technologies make it possible to integrate these various types of information to facilitate the practice of radiology. We have already discussed two such information systems, namely, the HIS and the RIS.

12.4.2 Multimedia in the Radiology Environment
“Multimedia” has different meanings depending on the context. In the radiology environment, the term refers to the integration of the medical information related to radiology practice. This information is stored in various databases and media, either in voice form or as text records, images, or video loops. Among these data, patient demographic information, clinical laboratory test results, pharmacy information, and pathology reports are stored in the HIS. The radiological images are stored in the PACS permanent archive system, and the corresponding reports are in the reporting system of the RIS. Electronic mail and files are stored in the personal computer system database. Digital learning files are categorized in the learning laboratory or the library of the department of radiology. Some of these databases may exist in primitive legacy systems; others, like the PACS, can be very advanced. Thus the challenge of developing multimedia in the radiology environment is to establish an infrastructure for the seamless integration of these medical information systems by blending different technologies, while providing an acceptable data transmission rate to the various parts of the department and to the various sites in the health care center. Once the multimedia infrastructure is established, different medical information can exist as modules with common standards and be interfaced with this infrastructure. In the multimedia environment, radiologists (or their medical colleagues) can access this information through user-friendly, inexpensive, efficient, and reliable interactive workstations. RIS, HIS, electronic mail, and files involve textual information requiring from 1 to 2 Kbytes per transaction. For such small data file sizes, although developing an interface with each information system is tedious, the technology involved is manageable. On the other hand, the PACS contains image files that can be on the order of 20–40 Mbytes. The transmission and storage requirements for the PACS are many times those of the text information; a rough comparison is sketched below. For this reason, the PACS becomes central in developing multimedia in the radiology environment.
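A back-of-the-envelope comparison makes the point. The sketch below compares the nominal transfer time of a 2-Kbyte HL7 transaction with that of a 40-Mbyte image study over a 100-Mbit/s link; the link speed is illustrative and protocol overhead is ignored.

    LINK_MBPS = 100                      # illustrative network speed, megabits/s

    def transfer_seconds(size_bytes: int, link_mbps: int = LINK_MBPS) -> float:
        return size_bytes * 8 / (link_mbps * 1_000_000)

    text_transaction = 2 * 1024          # ~2 Kbytes of RIS/HIS text
    image_study = 40 * 1024 * 1024       # ~40 Mbytes of PACS image data

    print(f"text:  {transfer_seconds(text_transaction) * 1000:.2f} ms")
    print(f"image: {transfer_seconds(image_study):.1f} s")   # ~3.4 s, >20,000x longer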
12.4.3 An Example—The IHE Radiology Information Integration Profile
We use the IHE radiology information integration profile as an example to explain how an IHE profile can be used to access various types of radiology information. The access to radiology information integration profile (Fig. 12.9) specifies support of a number of query transactions providing access to radiology information, including images and related reports. Such access is useful both to the radiology department and to other departments such as pathology, surgery, and oncology. Nonradiology information (such as laboratory reports) may also be accessed if made available in DICOM format.
[Figure 12.9 layout: the electronic patient record at the center connects referring physicians, the emergency department, remote clinics, and other departments (oncology, pediatrics, neurology, surgery, etc.).]
Figure 12.9 The IHE access to radiology information integration profile as an example of multimedia in the radiology environment.
This profile includes both enterprise-wide and radiology department imaging and reporting systems, such as:

• Review or diagnostic display workstations (stand-alone or integrated with a HIS, RIS, PACS, or modality)
• Reporting workstations (stand-alone or integrated with a HIS, RIS, PACS, or modality)
• Picture archiving and communication systems (PACS)
• Report repositories (stand-alone or integrated with a HIS, RIS, or PACS)
12.4.4
Integration of Heterogeneous Databases
12.4.4.1 Other Related Databases For multimedia to operate effectively in the radiology environment, at least six heterogeneous databases must be integrated, namely the HIS, the RIS, the PACS, electronic mail and files, the digital voice dictation system, and the electronic patient record (ePR). In Section 12.3, we described the HIS/RIS interface. We describe the digital voice system in this section and the ePR in Section 12.5.

12.4.4.2 Interfacing Digital Voice with PACS Typically, radiological reports are archived and transmitted independently of the image files. They are first dictated by the radiologist and recorded on an audiocassette recorder, from which a textual report is transcribed and inserted into the RIS several hours later. The interface between the RIS and the PACS allows these reports to be sent and inserted into the PACS database, from which the report corresponding to a set of images can be displayed on the PACS workstation at the user's request. This process is not efficient, because the delay imposed by transcription prevents the textual report from reaching the referring physician in a timely manner. The ideal method is to use a voice recognition system that automatically translates voice into text. Although several such systems are available on the market, their acceptance has not yet been widespread because of the quality of the translation. Figure 12.10 shows a method of interfacing a digital voice system directly to the PACS database. The concept is to associate the digital voice database with the PACS image database; thus, before the written report becomes available, the referring physician can look at the images and listen to the report simultaneously. Following Figure 12.10, the radiologist views images from the PACS workstation and uses the digital Dictaphone system to dictate the report; the system converts the dictation from analog signals to digital format and stores the result in the voice message server. The voice message server in turn sends a message to the PACS data server, which links the voice with the images. The referring physician at a workstation, for example in an intensive care unit, can request certain images and at the same time listen to the voice report through the voice message server linked to the images. Later, the transcriber transcribes the voice by using the RIS. The transcribed report is inserted into the RIS database server automatically, and the RIS server sends a message to the PACS database server. The latter appends the transcribed report to the PACS image file and signals the voice message server to delete the voice message. Note that although the interface is between the voice database and the PACS database, the RIS database also comes into the picture. When the DICOM structured report standard becomes available (see Section 7.4.6), radiologists will be able to enter the report directly in the structured report format while reviewing the images. Thus the digital voice dictation system may see less use once the DICOM structured report becomes more acceptable to radiologists.

[Figure 12.10 work flow: the radiologist (1) views images at a PACS workstation and (2) dictates the report with the Dictaphone system; the Dictaphone (1) transmits the digital voice report to the voice message server and (2) informs the PACS that the voice report is available; the PACS links the voice to the images so that the clinician can view images at the ICU or ward workstations with voice or typed reports; the transcriber (1) types the voice report at a RIS station, and (2) the RIS links the report with the images and instructs the Dictaphone system to delete the voice report.]
Figure 12.10 The operational procedure of a digital voice system connected with the PACS, starting with the radiologist at the primary diagnostic workstation dictating a report with the digital Dictaphone system.
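The linking step can be pictured as a small association table in the PACS database: one row ties a study to its current voice message and, later, to the transcribed report. The schema and calls below are a hypothetical sketch using SQLite, not the database design of any product described here.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE report_link ("
        " study_uid TEXT PRIMARY KEY,"
        " voice_file TEXT,"        # set when the dictation arrives
        " report_text TEXT)"       # set when the transcription arrives
    )

    def link_voice(study_uid: str, voice_file: str) -> None:
        db.execute("INSERT OR REPLACE INTO report_link(study_uid, voice_file) VALUES (?, ?)",
                   (study_uid, voice_file))

    def attach_transcription(study_uid: str, text: str) -> None:
        # Append the written report and drop the voice pointer (message deleted).
        db.execute("UPDATE report_link SET report_text = ?, voice_file = NULL WHERE study_uid = ?",
                   (text, study_uid))

    link_voice("1.2.840.99.1", "voice/msg_001.wav")
    attach_transcription("1.2.840.99.1", "CT chest: no acute disease.")
    print(db.execute("SELECT * FROM report_link").fetchall())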
12.5
ELECTRONIC PATIENT RECORD (ePR)
12.5.1
Current Status of ePR
The electronic medical record (eMR) or electronic patient record (ePR) is the ultimate information system in a health care enterprise. In an even broader sense, if the information system includes the health record of an individual, it is called an eHR (electronic health record). In this context, we concentrate on the ePR. Currently, only small subsets of the ePR are actually in clinical operation. One can consider the ePR as the big picture of the future health care information system. Although the development of a universal ePR as a commercial product is still years away, its eventual impact on the health care delivery system should not be underestimated. An ePR provides five major functions:

1. Accepts direct digital input of patient data.
2. Analyzes across patients and providers.
3. Provides clinical decision support and suggests courses of treatment.
4. Performs outcome analysis and patient and physician profiling.
5. Distributes information across different platforms and health information systems.
HIS and RIS, which deal with patient nonimaging data management and hospital operation, can be considered components of the eMR. An integrated HIS-RIS-PACS system, which extends the patient data to include imaging, forms a cornerstone of the ePR. Existing ePRs have certain commonalities: they have large data dictionaries with time-stamped contents, and they can query and display data flexibly. Examples of successfully implemented eMRs are COSTAR (Computer-Stored Ambulatory Record), developed at Massachusetts General Hospital (in the public domain); the Regenstrief Medical Record System at Indiana University; the HELP (Health Evaluation through Logical Processing) system, developed at the University of Utah and the Latter-Day Saints (LDS) Hospital; and the VAHE (Department of Veterans Affairs Healthcare Enterprise) information system (see Section 1.6.1). Among these systems, the VAHE is probably the most advanced, in the sense that it is used daily in many of the VA Medical Centers and it includes images in the ePR. For these reasons, we describe the VAHE in more detail in Section 12.5.2 as an example of an ePR integrated with images. Like any other medical information system, the development of the ePR faces several obstacles:

• A common method to input patient examinations and related data into the system
• Development of an across-the-board data and communication standard
• Buy-in from manufacturers to adopt the standards
• Acceptance by health care providers

An integrated HIS-RIS-PACS system provides solutions for some of these obstacles:

• It has adopted the DICOM and HL7 standards for imaging and text, respectively.
• Images and patient-related data are entered into the system almost automatically.
• The majority of imaging manufacturers have adopted DICOM and HL7 as de facto industrial standards.

Therefore, in the course of developing an integrated PACS, one should keep in mind the big picture, the ePR. Anticipated future connections of the integrated PACS as a subsystem of the ePR should be considered thoroughly. Figure 12.4 illustrates the concept of using an interface engine as a possible connection of the integrated PACS to the ePR.

12.5.2 Integration of ePR with Images—An Example
The United States Department of Veterans Affairs Healthcare Enterprise (VAHE) information system, VistA, is probably the most advanced enterprise-level ePR integrated with images in the field (Dayhoff, 2000). In this section we describe its basic system architecture, data flow, and operation.

12.5.2.1 The VistA Information System Architecture The VAHE consists of a nationwide network of 172 VA Medical Centers (VAMCs) and numerous outpatient clinics serving a patient population of 25 million veterans. The VAHE is organized geographically into VISNs (Veterans Integrated Service Networks), each an area-level VA entity consisting of several VA hospitals, clinics, outpatient centers, and so forth; there are 22 VISNs. The VAHE health care information system consists of four major components:

• VistA, the hospital information system
• VistA, the radiology information system (tightly coupled with the HIS)
• VistARad, for diagnostic radiology
• VistA Imaging
The VistA hospital and radiology information systems (HIS/RIS) are used at all VAHE sites. VistA is an internally developed, comprehensive HIS that supports all the clinical services and uses a client/server architecture that allows clinical workstation clients to communicate with the HIS servers. Its database is a MUMPS B-tree database; the database files are all in MUMPS, some stored on the hospital side to enable access from anywhere in the facility and others kept locally. VistARad is the radiology PACS for primary image diagnosis. Some VAMCs purchase a PACS from vendors; other, smaller centers rely on the VAHE IT department to support a home-grown PACS. VistA Imaging is an ePR integrated with images; it is a multimedia imaging distribution system for clinicians and health care providers. The VAHE IT department supports all centers in the integration of these four components. The connectivity of the four components is shown in Figure 12.11.

12.5.2.2 VistA Imaging

Current Status The VistA Imaging System is an extension of the Veterans Health Information System Technology Architecture (VistA) hospital information system that captures images and makes them part of the patient's electronic patient record (ePR).
[Figure 12.11 topology: the main VistA HIS and clinical workstations (CWS) sit on the main hospital backbone; switches and Fast Ethernet connect the image file servers, diagnostic workstations (DxWS), DICOM gateway(s), background processor, NT file servers, jukebox, and a commercial EKG system; the DICOM devices connect through a Gigabit Ethernet switch.]
Figure 12.11 The connectivity of the four components in the VistA information system. (Courtesy of Dr. H. Rutherford; Dayhoff, 2000.)
VistA Imaging is the multimedia access and display component of VistA. There are about 60 VistA Imaging centers, of which about 50 are fully implemented; the number of VistARad implementations is about 40. The jukebox in VistA Imaging, archiving more than 1,000,000 images, holds about 10 terabytes. All images come through the DICOM gateways, are placed on the server's RAID, and are archived via the background processor to the jukebox (Fig. 12.11). Figure 12.12, A and B, shows a typical workstation and its connectivity used in the VistA HIS and VistA Imaging, respectively. Figure 12.13 depicts the diagnostic and clinical disciplines supported by VistA Imaging. Figure 12.14A shows a VistA Imaging workstation displaying images with the patient record; Figure 12.14B displays thumbnail images, microscopic images, MRI, and EKGs.

VistA Imaging Work Flow The window used to input patient information in VistA Imaging is the CPRS (Computerized Patient Record System), the main workstation application used by medical support personnel for logging, scheduling procedures, and viewing data, reports, and images in the "patient folder." VistA Imaging is launched from a menu item in CPRS, and all patient images, reports, and notes are available through the VistA Imaging clinical display workstation thus launched. When the medical provider has completed the review of the data, VistA Imaging may be closed, and control reverts to CPRS to access another patient. Each VistA Imaging clinical workstation may be used to address the patient information for images, reports, and notes. Furthermore, CPRS may be launched from the clinical workstation, enabling a link to the medical data about a patient in addition to the images and reports. The imaging system is operable from a workstation through access to the patient information in the HIS and RIS, or it is accessible through the general CPRS hospital capabilities, where orders, chart records, laboratory results, documents, and other items connected with the patient may or may not be associated with images. The text information utility (TIU) is the primary method of connecting a patient with a medical order and the results from that order.
[Figure 12.12 panels A and B: the VistA multimedia workstation network architecture, showing the remote site, background processor, NT file servers, jukebox, DICOM gateway and devices, and commercial EKG system, and the workstation's connections to the VistA HIS via remote procedure calls and to the image and EKG file servers via a commercial API.]
Figure 12.12 A typical clinical workstation and its connectivity used in the VistA HIS and VistA Imaging. (Courtesy of Dr. H. Rutherford; Dayhoff, 2000.)
[Figure 12.13 inputs: image and data sources such as cardiology (heart), endoscopy (colon), bronchoscopy (lungs), dental X ray, wave forms, ultrasound, digital cameras (outpatient and home care), off-site pathology, and documents, feeding VistARad and the VistA Imaging clinical and radiology workstations (CWS, DxWS) over the WAN/LAN.]
Figure 12.13 Clinical disciplines supported by VistA Imaging. (Courtesy of Dr. H. Rutherford; Dayhoff, 2000.)
Figure 12.15 depicts the multimedia object database schema, with an example showing multiple accesses to three database files. Prefetching and routing are facilities that react when a patient has checked in for a procedure: the RIS automatically sends an HL7 message that is processed by the DICOM gateways. As a result, the prior cases are prefetched and made available on the appropriate workstation for the user, and the appropriate cases are routed to a predetermined remote site to be read as required. Both of these operations are accomplished by placing an entry in the background processor queue. Additionally, the background processor runs a purge process to detect images on hard disk with a time stamp older than a user-determined date; the person maintaining the system must start this process. As a result, there is a schedule whereby in the morning certain parameters are examined and potential problems regarding space and image availability for the user are isolated.

VistA Imaging Operation VistA Imaging is a mechanism that enables physicians and other care providers to have complete and immediate access to all patient data, including images. Documents can also be scanned in under document imaging. It is an ePR data model architecture.
[Figure 12.14 panel titles: A, "VistA combines images with the traditional patient record components"; B, "VistA provides clinicians with access to all images and EKGs."]
Figure 12.14 A: VistA Imaging displays the patient record with images. B: VistA Imaging displays thumbnail images, microscopic images, MRI, and EKG. (Courtesy of Dr. H. Rutherford; Dayhoff, 2000.) (See color insert.)
[Figure 12.15 schema: the multimedia object table holds image files A, B, and C (each recording the file, type, patient, and multimedia group N); the radiology report table, medicine procedure table, and progress notes table all reference the multimedia group, so the same image group can be reached from any of the three clinical files.]
Figure 12.15 VistA Imaging multimedia object database schema. (Courtesy of Dr. H. Rutherford; Dayhoff, 2000.)
The patient record includes the notes that schedule a procedure, the reports on findings in the images, and the attached images that result from the procedure. The HIS has CPRS as the GUI front end and the VistA database (with the RIS database for radiology) to provide lookup of patient information. Access is via a few letters of the patient's last name or the unique patient Social Security (SS) number. There is the usual problem of duplicate names when the patient communicates verbally rather than with an SS card, driver's license, or birth certificate; even with one of these forms of identification, the clerk may click on an adjacent patient in the list. One method of confirmation is to verify that the patient with the presented identification has coverage in the system. Radiology images are pulled or pushed from the radiology modalities through the DICOM text and image gateways. HL7 messaging is used to "listen" on the gateway end, and the HIS initiates the TCP/IP channel communication to send the images from the modalities. As in all other PAC systems, one of the ongoing problems is determining when a study is complete; this arises when the technologist wants to send "just one more" image, for whatever reason. Timing out can be used, but there are occasions when even a sensible timeout is exceeded (see Section 8.2.2.2). Modality dictionary file entries and modality worklist entries are set up for each DICOM modality; this also includes the "-oscopies" (endoscopy, etc.), ophthalmology, dental, and others as the DICOM communications are put into place. Once images are acquired in the DICOM gateway, a task called "image processing" assigns an image identifier in the master image file maintained on the HIS to make the image available from anywhere in the enterprise. A queue entry is generated in the background processor queue to (1) archive the image to the jukebox, (2) generate a thumbnail image as a preview (abstract) image, and (3) place the images and the created abstract on a server for viewing access.
When images are part of a group, an entry is made to associate an abstract image with the group. Because the system was designed when DICOM was not a common capability, the images are separated into a text file (the DICOM header) and an image with a limited header attached. The image is subsampled to support viewing on standard 1280 × 1024 monitors (1024 × 768 monitors are also used on physician desktops).
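The background processor behaves like a simple work queue: each new image generates jobs to archive it, build an abstract (thumbnail), and publish it for viewing. The sketch below captures that structure only; the job names and handlers are hypothetical and do not reproduce the VistA implementation.

    import queue

    jobs = queue.Queue()

    def enqueue_new_image(image_id: str) -> None:
        """Mimic the queue entry generated when an image arrives at the gateway."""
        for task in ("archive_to_jukebox", "generate_abstract", "publish_to_server"):
            jobs.put((task, image_id))

    HANDLERS = {
        "archive_to_jukebox": lambda i: print(f"archiving {i} to long-term storage"),
        "generate_abstract":  lambda i: print(f"creating thumbnail for {i}"),
        "publish_to_server":  lambda i: print(f"placing {i} on the viewing server"),
    }

    def run_background_processor() -> None:
        while not jobs.empty():
            task, image_id = jobs.get()
            HANDLERS[task](image_id)

    enqueue_new_image("1.2.840.99.2.55")
    run_background_processor()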
PART III
PACS OPERATION
CHAPTER 13
PACS Data Management and Web-Based Image Distribution
[Figure 13.0 components: HIS database and database gateway, imaging modalities, acquisition gateway, PACS controller and archive server, application servers, workstations, and web server, with reports flowing through the generic PACS data flow.]
Figure 13.0 PACS data management and Web-based image distribution. This chapter discusses three topics: PACS data management, patient folder management, and the Web-based server. Figure 13.0 illustrates the logical positions of these concepts in the PACS work flow (unshaded).
13.1 PACS DATA MANAGEMENT
In Section 10.2 we discussed the PACS controller and archive server functions, including image receiving, stacking, routing, archiving, study grouping, RIS and HIS interfacing, PACS database updates, image retrieving, and prefetching. These software functions are needed for optimal image and information management, distribution, and retrieval at the display workstation. The existence of IHE has facilitated the integration of these functions into the work flow of the PACS operation. This section describes methods and essentials for grouping patients' images and data effectively with the patient folder manager concept.
13.1.1
Concept of the Patient Folder Manager
Folder Manager (FM) is a software package residing in the PACS controller that manages the PACS by means of event trigger mechanisms. The concept emphasizes standardization, modularity, and portability. The infrastructure of FM includes the following components:

• HIS-RIS-PACS interface (Sections 10.2.6, 12.3)
• Image routing (Section 10.2.3)
• Study grouping (Section 10.2.5)
• Online radiology reports
• Patient folder management

The first three functions are presented in previous sections. We discuss the on-line reports and folder management here and supplement them with the IHE integration profiles discussed in Section 7.5.3, listed again below:

1. Scheduled work flow profile
2. Patient information reconciliation profile
3. Consistent presentation of images profile
4. Presentation of grouped procedures profile
5. Access to radiology information profile
6. Key image note profile
7. Simple image and numeric report (Web-based image distribution) profile
8. Basic security profile
9. Charge posting profile
10. Postprocessing work flow profile
11. Reporting work flow profile
12. Evidence documents profile

13.1.2 Online Radiology Reports
The PACS receives radiology reports and impressions from the RIS via the RIS-PACS interface; it either stores these text files in the PACS database or the PACS controller has access to them. These reports and impressions can be displayed instantaneously at a PACS display workstation along with the associated images from a given examination. The PACS also supports on-line queries of reports and impressions from a display workstation. In Chapter 12, we discussed three on-line radiology reporting mechanisms:

• Digital voice dictation as a temporary reporting system for the PACS until the written report becomes available (Section 12.4.4.2)
• The DICOM structured report standard as a simple reporting mechanism (Section 12.4.4.2)
• Access to the IHE radiology information integration profile, which highlights the reporting stations and report repositories (Section 12.4.3)

These reporting systems form a component of the patient folder manager, and their implementation can be facilitated by using some of the IHE profiles described in later sections.

13.2 PATIENT FOLDER MANAGEMENT
The PACS controller and archive server manage patient studies in folders. Each folder consists of a given patient's demographics, examination descriptions, images from current examinations, selected images from previous examinations, and the relevant reports and impressions. A patient's folder remains on-line at specific display workstations during the patient's entire hospital stay or visit. On discharge or transfer of the patient, the folder is automatically deleted from the display workstations. Readers should see the patient folder manager as a prelude to the ePR discussed in Section 12.5, except that FM emphasizes images whereas the ePR is more global in the nature of its information. Three basic software modules make up the patient folder manager:

• Archive management
• Network management
• Display/server management
Table 13.1 describes these three modules and their associated submodules and gives their essential level for the PACS operation.

TABLE 13.1 Summary of Patient Folder Manager Software Modules

Archive manager (submodule: essential level)
  Image archive: 1
  Image retrieve: 1
  HL7 message parsing: 2
  Event trigger: 2
  Image prefetch: 2

Network manager
  Image send (DICOM): 1
  Image receive (DICOM): 1
  Image routing: 1
  Job prioritizing and recovery mechanism: 1

Display/server manager
  Image selection: 2
  Image sequencing (IHE profile): 3
  Window/level preset: 2
  Report/impression display: 1

Essential level (1, highest; 3, lowest): 1, minimum requirements to run the PACS; 2, requirements for an efficient PACS; 3, advanced features.
13.2.1 Archive Management
The archive manager module provides the following functionality:

• Manages the distribution of images across multiple storage media
• Optimizes the archiving and retrieving operations for the PACS
• Prefetches historical studies and forwards them to the display workstations
Mechanisms supporting these functions include event triggering, image prefetching, job prioritization, and storage allocation.

13.2.1.1 Event Triggering Event triggering can be achieved by means of the following algorithm. Events occurring at the RIS are sent to the PACS in HL7 format over TCP/IP; these trigger the PACS controller to carry out specific tasks such as image retrieval and prefetching, PACS database updates, storage allocation, and patient folder cleanup. Events sent from the RIS include patient admission, discharge, and transfer (ADT); patient arrival; examination scheduling, cancellation, and completion; and report approval. The components essential for the algorithm to function are the HIS-RIS-PACS interface, HL7 message parsing, image prefetching, PACS database updates, patient folder management, and storage allocation in memory, disk, and tape.

13.2.1.2 Image Prefetching The PACS controller and archive server initiates the prefetch mechanism as soon as it detects the arrival of a patient from the ADT message sent by the RIS. Selected historical images, patient demographics, and relevant radiology reports are retrieved from the permanent archive and the PACS database based on an algorithm predetermined by the user. These data are distributed to the destination display workstation(s) before the patient's current examination. The prefetch algorithm is based on predefined parameters such as examination type, section code, radiologist, referring physician, location of the display workstation, and the number and age of the patient's archived images. These parameters determine which historical images should be retrieved. Figure 13.1 shows a four-dimensional prefetching table: the current examination type, disease category, section radiologist, and referring physician. This table determines which historical images should be retrieved from the central archive system. For example, for a patient scheduled for a chest examination, the n-tuple entries in the chest column (2, 1, 0, 2, 0, . . .) represent an image folder consisting of two single-view chest images, one two-view chest image, no CT head scan, two CT body studies, no angiographic image, and so on. In addition to this lookup table, the prefetch mechanism also utilizes the patient origin, referring physician, location of the display workstation, number of archived images of this patient, and age of the individual archived images in determining the number of images of each examination type to be retrieved.
Figure 13.1 Example of a four-dimensional prefetching table with examination type, disease category, section radiologist, and referring physician. The table's axes cover the current examination type and specialty, the disease category, the section (pediatric radiology, neuroradiology, chest, bone), and the referring physician; its rows list examination types such as 1-view chest, 2-view chest, CT head, CT body, angiography, conventional radiography, MR, US, and fluoroscopy. The column under Chest (2, 1, 0, 2, 0, . . .) is used for illustration, presenting the retrieved images in the patient image folder.
The Prefetch Algorithm. Description. The prefetch mechanism is carried out by several processes within the archive server (see Section 10.2.9). Each process runs independently and communicates simultaneously with other processes, using client-server programming, queuing control mechanisms, and job prioritizing mechanisms. The prefetch algorithm can be described as follows: the prefetching mechanism is triggered when any of the examination scheduled, examination canceled, or patient arrived events occurs at the RIS. Selected historical images, patient demographics, and relevant radiology reports are retrieved from the long-term archive and the PACS database. These data are distributed to the destination display workstation(s) before the patient's current examination starts. The prefetch algorithm is based on predefined parameters including examination type, section code, radiologist, referring physician, location of the display workstation(s), and the number and age of the patient's archived images.

13.2.1.3 Job Prioritization The PACS controller manages its processes by prioritizing job control to optimize the archiving and retrieving activities. For example, a request from a display workstation to retrieve an image set from the permanent archive has the highest priority and is processed immediately; on
completion of the retrieval, the image is queued for transmission with a priority higher than that of images that have just arrived from the image acquisition nodes and are waiting for transmission. By the same token, an archive process yields if any retrieval job is executing or pending. Automatic job prioritizing and yielding between PACS processes dramatically decreases the delay in servicing radiologists and referring physicians for their urgent needs.

13.2.1.4 Storage Allocation During a hospital stay or visit, the patient's current images from different examinations are copied to a temporary disk storage device. On discharge or transfer of the patient, these images are grouped from the temporary storage and copied contiguously to the permanent storage.
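Returning to the job prioritization of Section 13.2.1.3, the sketch below orders archive-server jobs so that workstation retrieval requests are always served before transmission of newly acquired images, and archiving yields to any retrieval work. The four job types and their numeric levels are assumptions chosen to mirror the rules described above, not the controller's actual scheduling policy.

```python
import heapq
import itertools

# Illustrative priority levels (lower number = served first).
PRIORITY = {
    "workstation_retrieve": 0,   # a radiologist is waiting for the image set
    "send_retrieved_image": 1,   # forward what was just retrieved
    "send_new_acquisition": 2,   # images arriving from acquisition nodes
    "archive_to_permanent": 3,   # yields to any retrieval job
}

class JobQueue:
    """A priority queue of PACS controller jobs."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # keeps FIFO order within a level

    def submit(self, job_type, payload):
        heapq.heappush(self._heap, (PRIORITY[job_type], next(self._counter), payload))

    def next_job(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```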
13.2.2 Network Management
The network manager handles the image/data distribution from the PACS controller and the archive server. This module, which controls the image traffic across the entire PACS network, is a routing mechanism based on predefined parameters (see Section 10.2.3). It includes a routing table composed of predefined parameters and a routing algorithm driven by the routing table. The routing table is also designed to facilitate image/data updating as needed; any change of parameters in the routing table should be possible without modification of the routing algorithm. In addition to routing image/data to their designated display workstations, the network manager also performs the following tasks:

• Queues images in the PACS controller for future processing should the network or a workstation fail
• Switches the network circuit from the primary network (e.g., ATM) to the secondary network (e.g., conventional Ethernet) should the primary network fail
• Distributes image/data based on different priority levels
The image-sending mechanism can be described by the following algorithm.

Image Distribution Algorithm: Description. The send process catches a ready-to-send signal from the routing manager, establishes a TCP/IP connection to the destination host, and transmits the image data to the destination host. On successful transmission, the send process dequeues the current job and logs a SUCCESS status. Otherwise, it requeues the job for a later retry and logs a RETRY status. Essential components in the image distribution algorithm are TCP connect, dequeuing, requeuing, and event logging. The DICOM communication standard with its DIMSE services can handle the image send very efficiently.
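A minimal sketch of the send process described above follows: it opens a TCP connection to the destination, transmits the image, and either drops the job (SUCCESS) or requeues it for a later retry (RETRY). The transmit_dicom placeholder stands in for the C-STORE request a real DICOM toolkit would issue, and the queue argument is assumed to behave like the JobQueue sketched in Section 13.2.1.3.

```python
import logging
import socket

log = logging.getLogger("network_manager")

def transmit_dicom(sock, dataset_bytes):
    # Placeholder for the DICOM C-STORE transfer a real toolkit would perform.
    sock.sendall(dataset_bytes)

def send_job(job, queue):
    """One pass of the send process for a queued image distribution job."""
    try:
        with socket.create_connection((job["host"], job["port"]), timeout=30) as sock:
            transmit_dicom(sock, job["dataset"])
        log.info("SUCCESS %s -> %s", job["study_uid"], job["host"])
        # A successful job is simply dropped, i.e., dequeued.
    except OSError as exc:
        log.warning("RETRY %s -> %s (%s)", job["study_uid"], job["host"], exc)
        queue.submit("send_retrieved_image", job)   # requeue for a later retry
```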
13.2.3 Display Server Management
Display server management includes the following tasks:

• Image sequencing
• Image selection
• Window/level preset
• Coordination with reports and impressions
Window/level preset and coordination with reports and impressions have been described in the IHE profiles. Image sequencing is one of the most difficult tasks in display server management because it involves users' habits and subjective preferences, and there are no universal rules to govern them. Algorithm development requires a degree of artificial intelligence, and implementation is application specific and requires heavy customization. Image selection can be handled by the display server management using basic rules and user interaction, following the algorithm described below.

Image Selection Algorithm: Description. The image selection process allows a user to select a subset of images from a given image sequence (as in an MR or CT study) on the display monitor(s). These selected images are then extracted from the original sequence and grouped into a new sequence (e.g., a montage; see Section 11.5.7) for future display. The essential components in the image selection algorithm are the image display system description, the montage function, and PACS database updates (Section 10.2.7).

The following subsections describe some IHE integration profiles that can be used to facilitate display server management. In particular, Integration Profiles 3, "Presentation of grouped procedures," 4, "Access to radiology information" (presented in Section 12.4.3), 6, "Simple image and numeric report," and 7, "Key image note," give detailed descriptions of the data flow requirements for displaying images. For more details, refer to Carr (2003).

13.2.3.1 IHE Presentation of Grouped Procedures (PGP) Profile The presentation of grouped procedures (PGP) integration profile shown in Figure 13.2 addresses the complex information management problems entailed when information for multiple procedures is obtained in a single acquisition step (for example, CT of the chest, abdomen, and pelvis). For some clinical applications (requested procedures), only subsets of the images in this single acquisition step are needed. The PGP provides the ability to view image subsets resulting from a single acquisition and relates each image subset to a different requested procedure. A single acquired image set is produced, but the combined use of the scheduled work flow and consistent presentation of images transactions allows separate viewing and interpretation of subsets of images related to each clinical application. Among other benefits, this allows generating reports that match assigned radiologist reading and local billing policies without additional intervention. The PGP integration profile requires the scheduled work flow integration profile for the selection of image subsets. Subsystems in the PACS involved include:

• Image acquisition modalities
• PACS acquisition gateways
• Radiology information system (RIS)
• Image display workstations
Figure 13.2 IHE presentation of grouped procedures (PGP) profile. The example shows a single CT acquisition procedure used to acquire both chest and abdominal scans. The scheduled work flow profile provides information for the PGP to split the acquired images into two subsets, one for the chest and the other for the abdomen (Carr, 2003).
13.2.3.2 IHE Key Image Note Profile Recall that in Section 11.5.7 we discussed the montage function to tag images of interest for future review. The key image note integration profile (Fig. 13.3) provides a work flow that allows the montage to function by specifying a transaction that enables the user to tag images in a study by referencing them in a note linked with the study. This note includes a title stating the purpose of the tagged images and a user comment field. These notes are properly stored, archived, and displayed as the images move among systems that support the profile. Physicians may attach key image notes to images for a variety of purposes: referring physician access, teaching file selection, consultation with other departments, and image quality issues, to name a few. This integration profile requires the following resources:

• Review or diagnostic display workstations
• PACS acquisition gateways
• Acquisition modalities
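The essential content of a key image note can be pictured as a small structure that references, rather than copies, the tagged images. The sketch below is an assumption for illustration only; in DICOM terms the profile is realized with coded note titles and referenced SOP instances, not with these field names.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative note purposes; the profile defines a coded list of titles.
NOTE_TITLES = ("For Referring Provider", "For Teaching File",
               "For Surgery", "Quality Issue")

@dataclass
class KeyImageNote:
    """A note that tags selected images of a study by reference."""
    study_uid: str
    title: str                      # purpose of the tagged images
    comment: str                    # free-text remark from the radiologist
    referenced_image_uids: List[str] = field(default_factory=list)

    def tag(self, image_uid: str):
        # Images are referenced by UID; they are never copied or modified.
        self.referenced_image_uids.append(image_uid)
```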
Figure 13.3 IHE key image note profile. The radiologist selects the key images (e.g., cerebellar tumor, hydrocephalus) and creates notes; the referring physician views the key images and notes prepared by the radiologist, improving communication between the two. A radiologist can use this profile to tag images with notes for future review or for communicating with referring physicians (Carr, 2003).
13.2.3.3 IHE Simple Image and Numeric Report Profile The simple image and numeric report integration profile shown in Figure 13.4 can be used to facilitate the use of digital dictation, voice recognition, and specialized reporting packages by separating the functions of reporting into discrete actors for creation, management, storage, and viewing. Designated numeric codes are used to identify anatomical structures, locations, diseases, and so on. Separating these functions while defining transactions to exchange the reports between them enables the user to include one or more of these functions in an actual system. The reports exchanged have a simple structure: a title; an observation context; and one or more sections, each with a heading, text, image references, and, optionally, coded measurements. Some elements can also be coded to facilitate computer searches. Such reports can be input to the formal radiology report, thus avoiding reentry of information. This integration profile requires the following resources:

• Review or diagnostic display workstations
• Reporting stations
• RIS with the reporting and repository functions
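The report skeleton just described (a title, an observation context, and one or more sections with heading, text, image references, and optional coded measurements) can be pictured with the following sketch. The class and field names are assumptions for illustration, not the profile's actual encoding, which is a DICOM structured report.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReportSection:
    heading: str
    text: str
    image_references: List[str] = field(default_factory=list)          # referenced image UIDs
    coded_measurements: Dict[str, float] = field(default_factory=dict)  # e.g. {"mean diameter (mm)": 12.0}

@dataclass
class SimpleImageNumericReport:
    title: str
    observation_context: str        # patient, procedure, and observer information
    sections: List[ReportSection] = field(default_factory=list)
```

A measurements page such as the one in Figure 13.4 would then correspond to a single ReportSection whose coded measurements map names (mean diameter, short axis, long axis, area) to their values.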
13.3 DISTRIBUTED IMAGE FILE SERVER
PACS was first developed to meet the needs of image management in the radiology department. As the PACS concept evolved, the need for using PACS for other applications increased. For this reason, the hospital-integrated PACS design includes
Figure 13.4 Sample page from the display of a PACS workstation using the IHE simple image and numeric report profile, showing a measurements section (measurement name and abbreviation, mean diameter, short axis, long axis, area) and the best illustrations of the finding (Carr, 2003).
distributed image file servers to provide integrated image and textual data for other departments in the medical center and for the entire health care enterprise. The PACS components and data flow diagrammed in Figure 13.0 represent an open architecture design: the display component can accommodate many types of display workstations, as described in Section 11.4. However, when the number of display workstations (e.g., physician desktop workstations) increases, each with its own special applications and communication protocols, the numerous queries and retrievals generated by the more active workstations will affect the performance of the PACS controller (see Section 10.1). Under such circumstances, distributed image servers linked to the PACS controller should be designed into the PACS infrastructure to alleviate the workload of the controller. As an example, Figure 13.5, an extension of Figure 13.0, shows a multiple-image-server design connected to the controller, with one server dedicated to physicians' desktop computers. During the past several years, web-based image distribution has become very popular because of the ease of use of web technology and its potential integration with the ePR infrastructure. In the next section, we discuss the web-based image/data distribution concept, its infrastructure design, and its applications.
Figure 13.5 Several distributed image file servers connected to the PACS controller. Each server (e.g., the physician's desktop server, departmental image file servers, e-mail server, and teaching file server) provides specific applications for a given cluster of users; the physician desktop server is used as the example for illustration. The concept of the web server is described in Figure 13.6.
13.4 WEB SERVER
Section 13.3 described the concept of the distributed image file server for easy access to PACS image/data for different applications. In this section we present an image file server design based on World Wide Web (web) technology that allows access to PACS image/data for both inter- and intrahospital applications.

13.4.1 Web Technology
The Internet was developed by the U.S. government originally for military applications. Through the years, its utilization has been extended to many other applications. The Internet can be loosely defined as a set of computers connected together by various wiring methods that transmit information among each other through TCP/IP network protocols (Section 9.1.2) using a public communication network. The intranet, on the other hand, is a private internet that transmits information within an administrative entity through a secured network environment. The World Wide Web is a collection of Internet protocols that provide easy access to many large databases through Internet connections. The web is based on the hypertext transfer protocol (HTTP), which supports the transmission of hypertext documents on all
computers accessible through the Internet. The two most popular languages for web applications that allow the display of formatted and multimedia documents independent of the computers used are the hypertext markup language (HTML) and the Java language (Sun Microsystems, Mountain View, CA). More recently, XML (Section 7.6.5) has been used extensively as a standard language for both image and text data. Currently, most web browsers, such as Microsoft's Internet Explorer and Netscape Navigator, support JPEG (Joint Photographic Experts Group, 24 bits for color images) or GIF (graphics interchange format, 8 bits) image rendering. In web terminology, there are the web server and the clients (also called sites or browsers). The web client can use trigger processes to access information on the web server through HTTP.

During the past several years, applications of web technology have been extended to health care information systems. Some web sites now support access to textual information from electronic patient record (ePR) systems. These web-based ePR systems can be categorized according to characteristics such as the completeness and detail of the information model, the method of coupling between the web-based and legacy hospital information systems, the quality of the data, and the degree of customization. The use of the web server as a means to access PACS image/data has been implemented by both academic centers and manufacturers. In Section 13.4.2, we present the design of a web-based image file server in the PACS environment as a means for accessing PACS image/data for both intra- and interhospital applications.

13.4.2 Concept of the Web Server in PACS Environment
Consider a web-based image file server as shown in Figure 13.6. It must:
Figure 13.6 The basic architecture of a web server allowing web browsers to query and retrieve image/data from PACS through the web-based server. The PACS side of the server speaks DICOM, the browser side speaks HTTP, and the DICOM/HTTP interpreter is the key component.
1. Support web browsers connected through the Internet
2. Interpret queries from the browser written in HTML or Java and convert the queries to the DICOM and HL7 standards
3. Support the DICOM query/retrieve SOP classes to query and retrieve image/data from the PACS controller
4. Provide a translator to convert DICOM images and HL7 text to HTTP

Figure 13.6 shows the basic architecture of a web server allowing web browsers to query and retrieve PACS image/data. Figure 13.7 illustrates the eight basic steps involved in a typical query/retrieve session from the web browser to the web server for image/data stored in the PACS controller.

The web server is a good concept: it uses existing Internet technology, available on everybody's desktop computer, to access PACS image/data. It works very well in intranet environments because most intranets now use well-designed high-speed gigabit Ethernet switches or ATM LANs, which have sufficient bandwidth for image transmission. However, there are drawbacks to using the current Internet for WAN applications, for two reasons. First, the response time for image transmission over the Internet is too slow because of the constraint of the WAN speed (Section 9.6); as a result, it is useful only if the number and size of images retrieved are both small. Second, web technology is not designed for high-resolution gray scale image display, especially when real-time lookup table operation is required; in this case, the waiting time for image display and manipulation would be too long to be tolerated. In Section 13.5 we give an example of a web-based image distribution in the clinical PACS environment.
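To make requirement 2 of the web server list concrete, the sketch below maps the parameters of a browser query onto a study-level DICOM C-FIND identifier using the pydicom library. The parameter names, and the assumption that the web server then issues the C-FIND itself, are for illustration only; actual web servers use their own query interpreters and toolkits.

```python
from pydicom.dataset import Dataset

def http_query_to_cfind_identifier(params):
    """Translate browser query parameters into a study-level C-FIND identifier.

    `params` is assumed to look like {"patient_id": "12345", "study_date": "20031001"}.
    """
    ds = Dataset()
    ds.QueryRetrieveLevel = "STUDY"
    ds.PatientID = params.get("patient_id", "")
    ds.StudyDate = params.get("study_date", "")
    # Empty (universal match) keys ask the archive to return these attributes.
    ds.StudyInstanceUID = ""
    ds.StudyDescription = ""
    return ds
```

The identifier would then be sent to the PACS controller through the C-FIND DIMSE service, and the matching responses translated back to HTML or XML for the browser.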
13.5 COMPONENT-BASED WEB SERVER FOR IMAGE DISTRIBUTION AND DISPLAY

In this section we discuss a method of developing a web-based distribution of PACS image/data with software component technologies. This method, combining both web-based and PACS technologies, can preserve the 12 bits/pixel image quality and display full-resolution DICOM images by using a display and processing component.

13.5.1 Component Technologies
Traditional software development requires application executables to be compiled and linked with their dependencies. Every time a developer wants to use different processing logic or add new capabilities, the primary application must be modified and recompiled to support them. This requires additional software development resources, which slows product turnaround and raises cost. Component software technologies, now widely used in software engineering and especially in enterprise-level software development, were introduced to alleviate this problem. Component technologies usually consist of two parts: the component software architecture and the component software itself.
Figure 13.7 A typical query/retrieve session from the web browser through the web server requesting images and related data from the PACS controller. The session requires eight steps involving both web and PACS technologies: the web server acts as a query/retrieve broker (query interpreter, query requester, result forwarder, list displayer, retrieval interpreter, retrieval requester, image forwarder, and temporary storage), and the PACS archive server provides the C-FIND, C-MOVE, and C-STORE services. The resources required in the web server (Fig. 13.6) for such a session are detailed in the web server with a Q/R broker.
Component software architecture is a static framework or skeleton (a structure or set of conventions) that provides the foundation for the component software to build on. The architecture defines how the parts relate to each other, including constraints governing how they can relate. The software components are generally any software (or subsystems) that can be factored out and that expose a potentially standardized or reusable interface. They usually use their interfaces
(importing/exporting) to provide special functions and services or to interact with other components or software modules. Three component software technologies are widely used: CORBA (Common Object Request Broker Architecture), JavaBeans (Java), and COM (Component Object Model)/ActiveX [www.Microsoft.com/technet]. The latter two are best supported and most often used on Microsoft Windows platforms, such as Windows 98, NT, 2000, and XP.

13.5.2 The Architecture of Component-Based Web Server Let us consider a web-based PACS image/data distribution server based on the component architecture and the DP (display and processing) component described in Section 13.5.4. XML is used for exchanging text messages between the browsers and the web server. Other standard components are Microsoft Internet Information Server (IIS) as the web server, Internet Explorer (5.0 or higher) as the default browser supported by the component-based web server, and ActiveX Plugin Pro from Ncompass. Figure 13.8 shows the component-based web server architecture (see also Fig. 13.6 for the general PACS-web interface). In this architecture, the image processing and manipulation functions are moved to the browser side (Fig. 13.6, right; see Section 13.5.4). The ASP (Active Server Pages) objects, acting as web access interfacing objects, bridge the components distributed in the web server (DICOM services and image database) and the browser display and processing (DP) component, and make them interoperate with each other.

13.5.3 The Data Flow of the Component-Based Web Server The ActiveX control component for display and processing initially resides in the web server as a cabinet file. The first time a client uses a browser to access the web pages containing ASP objects on the web server, the component is downloaded and registered as a plug-in object on the client computer. Thereafter, the user can display and process DICOM images managed by the web server the same way as on a component-based diagnostic display workstation (Section 13.5.4).
Figure 13.8 The architecture of the component-based web server for image/data distribution and display: the PACS server connects to the web server (IIS) through DICOM storage and query/retrieve services; within the web server, the DICOM services, the image database, and the ASP objects interoperate; and the browsers receive DICOM files and XML over HTTP and run the display and processing component (Zhang et al., 2003). IIS, Microsoft Internet Information Server; ASP, Active Server Pages.
13.5.3.1 Query/Retrieve of DICOM Image/Data Residing in the Web Server When the user wants to use the browser to access DICOM images managed by the image database in the web server, the browser sends an HTTP request to the server with a MIME (multipurpose Internet mail extension) message in XML format, encoded by the DP component. The web server decodes the XML-formatted message, queries its local database with SQL based on the content of the XML message, compiles the query results (including patient/study/series/image/file information) into XML format, and sends the HTTP response with the XML-formatted MIME message to the browser. The browser then requests that the web server send the DICOM files, pointed to by URLs (uniform resource locators) in the request message, based on the user's selection from the patient list. The web server decodes the XML message sent from the browser and uploads the DICOM files to the browser through the HTTP response, in which the type and content of the MIME message are binary DICOM files. The browser's DP component receives the DICOM files, decodes them, and displays the images. Note that the full resolution of the image pixels is preserved, which guarantees that the user can manipulate the images the same way as on a single-monitor diagnostic workstation.

13.5.3.2 Query/Retrieve of DICOM Image/Data Residing in the PACS Archive Server Users can also query and retrieve the DICOM images stored in the PACS archive server by using the browser. The operation and the communications between the browser and the web server for PACS image query and retrieval are similar to those for images residing in the web server. However, the interoperation between the web server and the PACS server is through DICOM services. For example, when an XML-formatted query message sent from a client browser is received by the web server, the message is posted to the DICOM communication service component, converted to a DICOM query object, and sent to the PACS server through the DICOM C-FIND service by the DICOM component. When the DICOM component receives the query result from the PACS server, it converts it to an XML message and sends the XML message to the browser through the web server. For the retrieval operation, the data flow is similar to the query, but the DICOM communication services between the web server and the PACS server are C-MOVE and C-STORE.
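What such an XML-encoded MIME exchange might look like is sketched below with Python's standard xml.etree module. The element names (Query, PatientID, Study, FileURL, and so on) are assumptions for illustration; the schema actually used by the system in Zhang et al. (2003) is not reproduced here.

```python
import xml.etree.ElementTree as ET

# Browser side: the DP component encodes the user's query as XML.
query = ET.Element("Query")
ET.SubElement(query, "PatientID").text = "12345"
ET.SubElement(query, "Modality").text = "CT"
xml_body = ET.tostring(query, encoding="unicode")
# -> '<Query><PatientID>12345</PatientID><Modality>CT</Modality></Query>'

# Browser side again: decode the server's XML response into a worklist;
# each <Study> carries the URL the browser later uses to fetch the DICOM file.
def parse_result(xml_text):
    results = []
    for study in ET.fromstring(xml_text).iter("Study"):
        results.append({"uid": study.findtext("StudyUID"),
                        "url": study.findtext("FileURL")})
    return results
```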
Figure 13.9 The data flow of the query/retrieve operation in the component-based web server, which preserves the 12 bits/pixel DICOM image in the browser. The DICOM images can be displayed at their full resolution by the display and processing component in the browser (Zhang et al., 2003).
Figure 13.9 shows the data flow of the query/retrieve operation between the browser, the web server, and the PACS server. In Figure 13.9, (1)-(1)'-(1)'' denotes the query procedure, and (2)-(2)'-(2)''-(3)''-(3)'-(3) denotes the retrieval procedure. There is a certain similarity between the query/retrieve operation by the browser using the DICOM protocol and format and that using the web protocol and format. The difference is that in the former, discussed in this section, 12 bits/pixel are preserved, whereas in the latter the web image format allows only 8 bits/pixel (see Fig. 13.7, where Step 8 can return only 8 bits/pixel because of the web technology used).
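The practical benefit of keeping 12 bits/pixel in the browser is that window/level operations can be recomputed locally over the full dynamic range. The sketch below shows the standard linear window/level mapping with NumPy; the window center and width values are arbitrary examples.

```python
import numpy as np

def apply_window(pixels_12bit, center=1024, width=2048):
    """Map 12-bit pixel values to the 8-bit display range by window/level."""
    lo = center - width / 2.0
    scaled = (pixels_12bit.astype(np.float32) - lo) / width
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# Example: re-window a synthetic 12-bit image without returning to the server.
image = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)
display = apply_window(image, center=1500, width=800)
```

An 8-bit web image, by contrast, has already collapsed the 12-bit range at the server, so a new window/level setting either requires another round trip or simply cannot recover the lost gray levels.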
13.5.4 Component-Based Architecture of Diagnostic Display Workstation
The purpose of the component-based diagnostic display workstation is to augment web-based technologies for displaying and processing full-resolution DICOM images, which web-based technology alone cannot do. The component software architecture of the web-based image display workstation consists of four components: the DICOM communication component, the image database component, the image processing and display component, and the GUI (graphical user interface) component. All these components are integrated into one computer with image cache storage on hard disks and in memory, as well as a powerful CPU and a high-speed network interface. They interoperate through standardized or privately defined component interfaces, as shown in Figure 13.10. The example shown here uses the Active Template Library (ATL) to develop the components for DICOM communication services and the image database, and Microsoft Foundation Classes (MFC) to develop the GUI component and the key display and processing component. The standardized interfaces used in the display workstation are compliant with the DICOM standard and ODBC (Open Database Connectivity). Three software interfaces, iViewer, iProcessor, and iController, are developed for image processing and display.
Figure 13.10 The component architecture of a diagnostic display workstation for displaying and processing DICOM images in a web-based server: the graphical user interface, the DICOM communication services (connected to the network), and the display and processing component (image processor, viewer manager, memory managers) interoperate through the iViewer, iProcessor, and iController interfaces and ODBC APIs (Zhang et al., 2003). API, application program interface; ODBC, Open Database Connectivity.
There are three kinds of object arrays and one bulk memory object inside the display and processing (DP) component: the array of processing objects, the array of viewing objects, and the array of window objects, as well as bulk memory storing image objects, which is managed by the memory manager. The interoperation and data exchange among these object arrays are done through the iController, iProcessor, and iViewer interfaces shown in Figure 13.11. There is also a message queue, which collects the messages generated from the different windows and dispatches them to the proper image processing pipelines, each formed by the three objects of processing, viewing, and window. Internally, the DP component creates an event-driven window environment that lets users manipulate the displayed images with input devices, such as a mouse or keyboard, through the windows and pipelines in the component, with multithreaded processing capability. Because the window objects inside the component can be attached to different display devices through proper software run-time configuration, this component-based display architecture can be implemented on different display systems.

The DP component uses ActiveX control component technology. It can perform all major display functions found in a PACS diagnostic display workstation, supports multimonitor display systems, and can be reused in different Windows platforms and applications, both Windows based and web based. The DP component can be plugged into any web browser to interact with the component-based web server described in the previous sections to display and manipulate DICOM images.
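The window/pipeline organization can be pictured with the following sketch, in which each display window owns a pipeline of a processing step and a viewing step, and a shared queue routes window messages to the owning pipeline. The class names and the shape of the event dictionaries are assumptions; they only echo the roles of the iProcessor and iViewer interfaces, not their actual COM definitions.

```python
import queue

class Pipeline:
    """One display window's pipeline: process the image, then render it."""

    def __init__(self, window_id, processor, viewer):
        self.window_id, self.processor, self.viewer = window_id, processor, viewer

    def handle(self, event):
        # e.g. event = {"op": "window_level", "center": 1500, "width": 800}
        pixels = self.processor(event)          # iProcessor-like role
        self.viewer(self.window_id, pixels)     # iViewer-like role

class Dispatcher:
    """Collects window messages and routes each to the proper pipeline."""

    def __init__(self):
        self.messages = queue.Queue()
        self.pipelines = {}                     # window_id -> Pipeline

    def post(self, window_id, event):
        self.messages.put((window_id, event))

    def run_once(self):
        window_id, event = self.messages.get()
        self.pipelines[window_id].handle(event)
```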
13.5.5 Performance Evaluation
Because the browser’s DP component preserves full-resolution DICOM images and provides most required diagnostic functions, the client workstation can be treated
Figure 13.11 The workflow of the display and processing (DP) component in a web-based server for display and processing of DICOM images: the iController, iProcessor, and iViewer interfaces link the memory manager of image objects with the arrays of processing objects, viewing objects, and windows, while a message queue dispatches window messages to the proper pipeline (Zhang et al., 2003).
as a regular PACS workstation, except that images are queried and retrieved through web-based technology. The question raised is: how does such a DP component in the client's browser, built on top of web-based technology, perform in terms of image loading and display time compared with that of a standard web client display? Two experiments were performed (Zhang et al., 2003), with results shown in Figure 13.12, A and B, to answer this question. The experiments used two workstations, one a PACS diagnostic display workstation and the other a component-based web display
Figure 13.12 (A) Averaged speeds (MB/s) of image loading and display for one to six CR images on one diagnostic display workstation compared with the web server distributing images to one client. (B) Averaged speeds (MB/s) of distributing different modality images (CT, MR, CR) from the web server to one to four clients. All clients requested the web server at the same time (Zhang et al., 2003).
system. Both used the same DP component with 100-Mbit/s Ethernet connected to a gigabit Ethernet switch under a controlled research environment. Figure 13.12A shows the loading and display speed for one to six CR images on both workstations. Figure 13.12B depicts the loading and display speed for the web server distributing different modality images (CT, MR, CR) to one to four clients.

From Figure 13.12A, we see that the speed of image loading and display on the diagnostic workstation decreased from 15 MB/s to 8 MB/s as more CR images were selected. On the other hand, the speed of image loading and display from the web server to the web clients remained at about 4 MB/s, with only a slight decrease. Figure 13.12B reveals two interesting observations. One is that the shapes of the declining speed curves for loading and displaying different modality images with an increasing number of clients were almost the same, and that they are somewhat similar to those measured from the diagnostic display workstation shown in Figure 13.12A. The second observation is that the speed of loading CT and MR images declined a little faster than that of CR images. These results can be used to estimate the relative performance of image loading and display between the PACS workstation and the web workstation with full DICOM viewing capability. For details of the explanation, see Zhang et al. (2003).

Currently, there are two operation models in radiology departments: one is organ based, and the other is modality based. Organ-based radiology departments often use the store-forward image delivery method, whereas modality-based departments prefer the query/retrieve method to obtain images from PACS. There are also two types of PACS image display workstations: one for primary diagnosis and the other for various medical image applications. Using web technologies and web servers to access, view, and manipulate PACS images is a good alternative solution for medical image applications in intranet and Internet environments, especially when images are to be integrated with the ePR. The component-based web server for image distribution and display enables users with web browsers to access, view, and manipulate PACS DICOM images just as on a typical PACS display workstation.
CHAPTER 14
Telemedicine and Teleradiology
14.1 INTRODUCTION
Telemedicine and teleradiology have become increasingly important as our country's health care delivery system gradually changes from fee-for-service to managed, capitated care. During the past several years, we have seen the trend of primary care physicians joining health maintenance organizations (HMOs). HMOs purchase smaller hospitals and form hospital groups under their umbrella. Also, academic institutions form consortia to compete with other local hospitals and HMOs. This consolidation allows the elimination of duplication and the streamlining of health care services among hospitals. As a result, costs are reduced, but at the same time, because of the downsizing, the number of experts available for service also decreases. Utilization of telemedicine and teleradiology is a method of saving costs and compensating for the loss of expertise.

One of the most comprehensive reviews assessing telecommunications in health care was a study reported by the Institute of Medicine Committee on Evaluating Clinical Applications of Telemedicine in 1996. Various issues in telemedicine, including the technical and human context, policy, and methods of evaluation, are discussed in detail in this report, which is an excellent source for reference.

Telemedicine, in terms of applications, can be simply defined as the delivery of health care using telecommunications, computer, and imaging technologies. The well-established consultation by means of telephone conversation alone would not qualify as telemedicine because it uses only telecommunications and not computer and imaging technologies. There are two models in telemedicine and teleradiology: the referring physician or health care provider can either consult with specialists at various places through a network or request opinions from a consolidated expert center where different types of consultation services are provided. In this chapter, we concentrate on the expert center model because this is the trend in the health care delivery system.

In the expert center consultation process, three modes are possible: telediagnosis, teleconsultation, and telemanagement. In telediagnosis, the patient's examination results and imaging studies are done at the referring physician's site, and data and images are transmitted to the expert center for diagnosis. The urgency of this service is nominal, and the turnaround time can take from four hours to a day. In teleconsultation, the patient may be still waiting at the examination site while the
referring doctor requests a second opinion or diagnosis from the expert center; the turnaround time is in the neighborhood of half an hour. In telemanagement, the patient may still be on the gantry or in the examination room at the remote site, and the expert is required to provide immediate management care to the patient in situ. Because of these three different operational modes, the technology requirements in telemedicine and teleradiology differ.
14.2 TELEMEDICINE
Teleradiology is a subset of telemedicine dealing with the transmission and display of images, in addition to other patient-related information, between a remote site and an expert center. The technology requirement for teleradiology is more stringent than that of general telemedicine because the former involves images. Basically, telemedicine without teleradiology requires only very simple technology: a computer gathers all necessary patient information, examination results, and diagnostic reports, arranges them in proper order, with or without a standard format, at the referring site, and transmits them through telecommunication technology to a second computer at the expert center, where the information is displayed as soft copy on the monitor. In modern hospitals or clinics, the information gathering and the arrangement of the information in proper order can be handled by the hospital information system (HIS). In a private practice group or an individual physician's office, these two steps can be contracted out to a computer application vendor.

Another requirement of telemedicine is to design communication protocols for sending this prearranged information to the expert center. This requirement needs special hardware and software components. Hardware and telecommunication choices vary according to the required data throughput. The simplest hardware component includes a pair of communication boards and modems and a phone line connecting the two computers, one at the referring site and the other at the expert center. The type and cost of such hardware depend on which telecommunication service is selected. Depending on the transmission speed required, the line can be a regular telephone line, a DS-0 (Digital Service, 56 Kbits/s), an ISDN (Integrated Services Digital Network, 56 Kbits/s to 1.544 Mbits/s) line, or a DS-1 or private line (T-1) with 1.544 Mbits/s. The cost of these lines is related to the transmission speed and the distance between sites. For telemedicine applications without images, a regular telephone line, DS-0, or a single ISDN or DSL line would be sufficient. A local ISDN or DSL line in a large metropolitan area costs $30-$40 per month.

The software component includes information display with a good GUI (graphical user interface) and some quality assurance and communication protocol programs. All software programs can be supported by either the HIS department or a vendor. Efficient information gathering and selection of proper subsets of information for timely transmission are critical to the success of the telesystem. Once diagnostic results are available from the expert center, they can be transmitted back to the referring site through the same communication chain or through a standard FAX machine. Figure 14.1 shows a generic telemedicine communication chain. In teleradiology, because images are involved, the technologies required are more demanding; we discuss this topic in more detail in Section 14.3.
Figure 14.1 A generic telemedicine communication chain. At the referring site, software gathers and selects patient information (patient record, physical examination, clinical results, chemistry, pharmacy) and sends it through a telecommunication protocol over a telephone line, DS-0, ISDN, DSL, or T-1 connection to the expert center, where it is displayed as soft copy. Without images, a T-1 line is sufficient for most applications.
14.3 TELERADIOLOGY

14.3.1 Background
As discussed in Section 14.1, the managed, capitated care trend in the health care industry creates an opportunity for forming expert centers in radiology practice. In this model, radiological images and related data are transmitted between examination sites and diagnostic centers through telecommunications. This type of radiology practice is loosely called teleradiology. Figure 14.2 shows an expert center model in teleradiology. In this expert model, rural clinics, community hospitals, and HMOs rely on radiologists at the center for consultation. Figure 14.2 clearly shows that in teleradiology the turnaround time requirement differs depending on the mode of service, which, in turn, determines the technology required and the cost involved.

14.3.1.1 Why Do We Need Teleradiology? The managed care trend in health care delivery expedites the formation of teleradiology expert centers. However, even without health care reform, teleradiology is still an extremely important component in radiology practice for the following reasons. First, teleradiology secures
Figure 14.2 The expert center model in teleradiology. Primary care physicians and patients from rural clinics, community hospitals, and HMOs send radiological examinations, images, and information to the expert center, where radiologists make the diagnosis. The three modes of operation are telediagnosis (4-24 hours), teleconsultation (about half an hour), and telemanagement (almost real time).
images for radiologists to read so that no images will be accidentally lost in transit. Second, teleradiology reduces the reading cycle time from when the image was formed to when the report is completed. Third, because radiology is subdivided into many subspecialties, a general radiologist requires an expert's second opinion on occasion; the availability of teleradiology facilitates seeking a second opinion. Fourth, teleradiology increases radiologists' income because no images are accidentally lost and subsequently left unread. Health care reform adds two more reasons: (1) it saves health care costs because an expert center can serve multiple sites, reducing the number of radiologists required; and (2) it improves the efficiency and effectiveness of health care because the turnaround time is faster and there is no loss of images.

14.3.1.2 What Is Teleradiology? Generally speaking, teleradiology means that an image is sent from the examination site to a remote site where an expert radiologist makes the diagnosis. The report is sent back to the examination site, where a primary physician can then prescribe the patient's treatment immediately. Teleradiology can be very simple or extremely complicated. In the simple case, an image is sent from a CT scanner, for example, in the evening to the radiologist's home, with low-quality teleradiology equipment and slow-speed communication technology, for a second opinion. During off-hours, evenings, and weekends, there may not be a radiologist at the examination site to cover the service; a radiology resident is normally in charge and occasionally requires consultation from the radiologist at home during these off-hours. This type of teleradiology does not require highly sophisticated equipment: a conventional telephone and a simple desktop personal computer with a modem connection and display software are sufficient to perform the teleradiology operation.
TABLE 14.1 Four Models in Teleradiology

                     Historical Images/RIS    Archive
Most simplistic      No                       No
Simplistic           Yes                      No
Complicated          No                       Yes
Most complicated     Yes                      Yes

TABLE 14.2 Differences Between Teleradiology and PACS

Function             Teleradiology        PACS
Image capture        Digitizer, DICOM     DICOM
Display technology   Same                 Same
Networking           WAN                  LAN
Storage              Short                Long
Compression          Yes                  Maybe
This type of application originated in the early 1970s. The second type of teleradiology is more complicated, with four different models ranging from simple to complicated, as shown in Table 14.1. Complications occur when the current examination requires historical images for comparison and when the expert needs information from the radiology information system (RIS) to make a diagnosis. In addition, complications arise when the examination and dictation are required to be archived and appended to the patient's image data file. Teleradiology is relatively simple to operate when no archiving is required. However, when archiving and retrieval of the same patient's previous information are required, the operation becomes extremely complicated.

14.3.1.3 Teleradiology and PACS When the teleradiology service requires a patient's historical images as well as related information, teleradiology and PACS become very similar. Table 14.2 shows the differences between teleradiology and PACS. The major difference between them is in the method of image capture. Some current teleradiology operations still use a digitizer as the primary method of converting a film image to digital format, although the trend is moving toward the DICOM standard; in PACS, direct digital image capture using DICOM is mostly used. In networking, teleradiology uses slower-speed wide area networks (WAN), compared with the higher-speed local area networks (LAN) used in PACS. In teleradiology, image storage is mostly short term, whereas in PACS it is long term. Teleradiology relies heavily on image compression, whereas PACS may or may not.

In Table 2.1, the second column gives the sizes of some common medical images. In clinical applications, a single image is not sufficient for making a diagnosis; in general, a typical examination generates between 10 and 20 Mbytes. The fourth column of Table 2.1 shows the average size of one typical examination in each of these image modalities. One high extreme is digital mammography, which
requires 160 Mbytes. Transmitting 160 Mbytes of information through a WAN requires very high-bandwidth communication technology. One urgent topic to be resolved in telemammography is how to transmit this large file through a WAN with acceptable speed and cost; telemammography is discussed in more detail in Section 14.4.
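As a back-of-the-envelope illustration of why the 160-Mbyte figure is problematic, the snippet below computes nominal, uncompressed transmission times over the WAN services discussed in this chapter; real links rarely sustain their rated speed, so actual times would be longer.

```python
# One 160-Mbyte digital mammography examination = 1280 Mbits, uncompressed.
EXAM_MBITS = 160 * 8
for name, mbits_per_s in [("DS-0", 0.056), ("T-1", 1.544), ("DS-3", 45.0)]:
    minutes = EXAM_MBITS / mbits_per_s / 60
    print(f"{name:>5}: {minutes:6.1f} minutes")
# DS-0: about 381 minutes; T-1: about 14 minutes; DS-3: about half a minute.
```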
14.3.2 Teleradiology Components
Table 14.3 lists the teleradiology components, and Figure 14.3 shows a generic schematic of their connections. Among these components, reporting and billing are common knowledge and are not discussed here.

TABLE 14.3 Teleradiology Components
• Imaging acquisition device
• Image capture
• Data reformatting
• Transmission
• Storage
• Display
• Reporting
• Billing

Figure 14.3 A generic teleradiology setup. Left: Referring site (imaging devices, film digitizer, HIS/RIS, formatted image data, patient-related information, management software, fax/phone). Right: Expert center (workstation, fax/phone), connected by DS-0, DSL, DS-3, ATM, or Internet 2.

Devices generating images in teleradiology applications include computed tomography (CT), magnetic resonance
imaging (MRI), computed and digital radiography (CR, DR), ultrasound imaging (US), nuclear medicine (NM), digital subtraction angiography and digital fluorography (DSA, DF), and the film digitizer. Images from these acquisition devices are generated at the examination site and then, if they are already in digital format, sent through the communication network to the expert center; if the images are stored on film, they must first be digitized by a film scanner at the examination site.

14.3.2.1 Image Capture In image capture, if the original image data are on film, either a video frame grabber or a laser film digitizer is used to convert them to digital format. A video frame grabber produces low-quality digital images but is faster and cheaper. Laser film digitizers, on the other hand, produce very high-quality digital data but take longer and cost more than the video frame grabber. During the past several years, direct DICOM output images from CR, DR, CT, and MR have been used extensively in teleradiology.

14.3.2.2 Data Reformatting After images are captured, it is advantageous to convert these images and related data to industry standards, because equipment from multiple vendors can be used in the teleradiology chain. The two common standards used in the imaging industry are Digital Imaging and Communications in Medicine (DICOM) for images and Health Level Seven (HL7) for textual data. The DICOM standard includes both the image format and the communication protocols, whereas HL7 is only for textual data; the communication of textual information generally uses TCP/IP communication protocols. The description of these two standards is given in Chapter 7.

14.3.2.3 Image Storage At the receiving end of the teleradiology chain, a local storage device is needed before the image is displayed. The capacity of this device can range from several hundred megabytes to many gigabytes. At the expert center, a long-term archive, such as a small DLT tape library, is needed for teleradiology applications that require historical images, related patient information, and the current examination and diagnosis. The architecture of the long-term storage device is very similar to that used in PACS, as discussed in Section 10.1.

14.3.2.4 Display Workstation For an inexpensive teleradiology system, a low-cost 512-line single monitor can be used for displaying images. However, sophisticated multimonitor display workstations are needed for primary diagnosis. The state-of-the-art technology in image workstation design is presented in Chapter 11 and is revisited in Section 14.3.3.2.

14.3.2.5 Communication Networking An important component in teleradiology is the communication network used for the transmission of images and related data from the acquisition site to the expert center for diagnosis. Because most teleradiology applications are not within the same hospital complex but connect health care facilities in metropolitan areas or at longer distances, WAN technology is required. A WAN can be wireless (Section 9.7) or cabled. In wireless WAN, the available technologies include microwave transmission and communication satellites. Wireless WAN has not been used extensively
because of its cost. We have discussed WAN technology in great detail in Chapter 9; Table 9.5 shows the cable technologies available in the WAN. In the following, we summarize them for convenient reference in teleradiology applications.

WAN services range from the low-rate DS-0 at 56 Kbits/s, through DSL (digital subscriber line, 144 Kbits/s to 8 Mbits/s, depending on data traffic and the subscription), to the very high-rate DS-3 at 45 Mbits/s. These WAN technologies are available through a long-distance or local telephone carrier, or both. The cost of using a WAN is a function of transmission speed and the distance between sites. Thus, within a fixed distance, a DS-0 line with its low transmission rate costs fairly little compared with DS-3, which is faster but very expensive. Most private lines, for example T-1 and T-3, are point to point; the cost depends on the distance between connections. Table 14.4 gives an example showing the relative cost of DSL and T-1 between the University of Southern California and St. John's Health Center, about 20 miles apart in the greater Los Angeles metropolitan area.

Table 14.4 reveals that the initial investment for DSL is minimal because the WAN carrier pays for the DSL modem for the network connection; the lowest monthly cost is about $40 per month. On the other hand, for T-1 service the up-front investment is $4000 for the two T-1 routers and $1000 for installation, and the monthly cost is $600. The up-front investment for T-1 is thus much higher than for DSL, and for longer distances its monthly charge is expensive; for example, the charge between Los Angeles and Washington, DC, for a T-1 line could be as high as $10,000 per month. However, T-1 is a point-to-point private line, and it guarantees its 1.5-Mbits/s specification. The disadvantages of DSL are that (1) it runs over shared networks and hence provides no security, (2) its performance depends on the load on the DSL carrier at that moment, and (3) it is not available everywhere. Using T-1 and DSL for teleradiology is very popular; some larger IT (information technology) companies lease several T-1 lines from telephone carriers and sublease portions of them to smaller companies for teleradiology applications.

14.3.2.6 User Friendliness One component not listed in Table 14.3 is the user friendliness of a teleradiology system. User friendliness includes both the connection of the teleradiology equipment at the examination site and the expert center and the simplicity of using the display workstation at the expert center.
TABLE 14.4 Wide Area Network Cost Using DSL (144 Kbits/s-8 Mbits/s) and T-1 (1.5 Mbits/s) between USC and St. John's Hospital (20 Miles)

DSL
  Up-front investment: Minimal
    Modems (2): None (provided by the carrier)
    Installation (2): Minimal
  Monthly charge (the lowest rate): $40

T-1
  Up-front investment: $5000
    T1 DSU/CSU* WAN interface (2), Router (2): $4000
    T-1 installation: $1000
  T-1 monthly charge: $600

* DSU/CSU: Data service unit/channel service unit. Costs as of June 2003.
User friendliness means that the complete teleradiology operation should be as automatic as possible, requiring only minimal user intervention. For a user-friendly image workstation there are three criteria:

(1) Automatic image and related data prefetch
(2) Automatic image sequencing at the monitors
(3) Automatic lookup table, image rotation, and removal of unwanted background from the image

Image and related data prefetch means that, for the same patient's examination, all historical images and related data required for comparison by the radiologist should be prefetched from the patient folder before image transmission and display. When the radiologist at the expert center starts to review the case, these prefetched images and related data are already available. Automatic image sequencing at the display workstation means that all these images and related data are sequentially arranged so that, at the touch of the mouse, properly arranged images and information are immediately shown on the monitors. Prearranged data minimize the time the radiologist at the expert center spends searching for and organizing data, which translates to an effective and efficient teleradiology operation. The third criterion, automatic lookup table, rotation, and background removal, is necessary because images acquired at the distant site might not have the proper lookup table set up for optimal visual display, might not be generated in the proper orientation, and may have some unwanted white background because of radiographic collimation. All these parameters affect the proper diagnosis of the image. These factors are discussed in detail in Chapter 11. The IHE Integration Profiles 3, 4, 5, 6, and 7, discussed in Chapters 7 and 13, facilitate the setup of user friendliness.
14.3.3 State-of-the-Art Technology
In Section 14.3.2 we discussed the components in a teleradiology operation. In this section, we present the state-of-the-art technology in teleradiology, especially in communication technology, display workstations, image compression, and security.

14.3.3.1 Wide Area Network—Asynchronous Transfer Mode (ATM), Broadband DSL, Internet 2, and Wireless Technology ATM technology, discussed in Section 9.1.3, Internet 2 (I2), discussed in Section 9.6, and broadband DSL are emerging communication technologies that are being used for WAN teleradiology communications. ATM is a component of the Internet backbone and is also used as the backbone of the SONET ring for metropolitan multiple-campus connection. The current commercially available ATM drop in the SONET ring is the OC3 with 155 Mbits/s. Using ATM for data communication between two nodes requires one adapter board at the computer of each node and an ATM switch connecting both adapters with fiber-optic cables. The use of ATM technology still has the obstacle of expensive long-distance carriers. Table 14.5 shows a comparison of T-1 and ATM for communication of images between Mt. Zion Hospital and UCSF.
TABLE 14.5 Time Required to Send a 10-MB X Ray or a 40-MB CT Study in the San Francisco Bay Area with T1 and ATM (OC3) (No Compression)

                                      One X-Ray Exam, 2K × 2.5K × 2 bytes (10 MB)      One CT Study (40 MB)
                                      One image          Two images                    One study          One current + one historical
T1 (1.5 Mbits/s),
  realization 100 KB/s                100 s (1.7 min)    200 s (3.3 min)               400 s (6.7 min)    800 s (13.3 min)
ATM (155 Mbits/s),
  realization 60 Mbits/s (7.5 MB/s)   1.3 s              2.7 s                         5.3 s              10.7 s
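The entries in Table 14.5 follow directly from the realized transfer rates. The short calculation below reproduces them; the file sizes and effective rates are those quoted in the table, not measured values.

```python
def transfer_time_s(size_mb: float, rate_mb_per_s: float) -> float:
    """Transmission time in seconds for an uncompressed image file."""
    return size_mb / rate_mb_per_s

# Effective (realized) rates quoted in Table 14.5, not the nominal line rates.
T1_RATE = 0.1    # T-1: ~100 KB/s realized
ATM_RATE = 7.5   # ATM OC-3: ~60 Mbits/s realized = 7.5 MB/s

for label, size_mb in [("one X-ray image", 10), ("two X-ray images", 20),
                       ("one CT study", 40), ("current + historical CT", 80)]:
    print(f"{label:25s}  T-1: {transfer_time_s(size_mb, T1_RATE):6.0f} s"
          f"   ATM: {transfer_time_s(size_mb, ATM_RATE):5.1f} s")
```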
The results demonstrate that ATM is almost two orders of magnitude faster than T-1. Using ATM technology is expensive for long-distance connections, not because of technical difficulty but because of the charges by the long-distance carrier. One way to make ATM an affordable WAN communication method is to have the carrier lower the fiber-optic cable utility cost. Figure 14.4 shows a teleradiology testbed between UCSF and the SFVA Medical Center using a CR system, a two-monitor display workstation, and the ATM SONET ring.

DSL using broadband WAN and Internet technology is becoming very popular for web-based teleradiology applications because of its flexible bandwidth and affordable cost. The current DSL specification ranges from 144 Kbits/s to 8 Mbits/s and can be improved with multiple DSL connections.

I2 technology, discussed in Section 9.6, is ideal for teleradiology applications because of its speed of transmission and low cost of operation after the site is connected to the I2 backbone. I2 is an infrastructure of high-speed communication (over 10 Gbits/s), currently consisting of the vBNS (very high-performance Backbone Network Service), the CalREN-2 (California Research and Education Network), and the Abilene. At the global level, vBNS, Abilene, and CalREN-2 provide readily available high-speed backbones and administrative infrastructure. At the local level, the users have to learn how to connect the hospital and clinic environments to these backbones. The advantages of using I2 for teleradiology are its high speed and low operational cost once the local site is connected to the backbones. The disadvantages are (1) the local site must upgrade its conventional Internet infrastructure to be compatible with the high-speed I2 performance, which is costly; (2) not enough experts know how to connect from the radiology department to the backbone; and (3) I2 is not yet open for commercial use.

The wireless network, as discussed in Section 9.7, is a potential technology for radiology application because of its flexibility without the burden of wire connection. At the moment, wireless for imaging applications is in its infancy because of performance and security issues.

14.3.3.2 Display Workstation Table 14.6 shows the specifications of a 2000-line and a 1600-line workstation used for teleradiology primary readings. These state-of-the-art diagnostic workstations, discussed in Chapter 11, use two monitors with over 2 Gbytes of local storage and display images and reports in 1–2 s.
Figure 14.4 A teleradiology testbed for interhospital operation between UCSF and the San Francisco VA Medical Center (SFVA) using the ATM SONET ring with OC-3 drops (155 Mbits/s). (A) FCR 9000 at UCSF. (B) ATM switch. (C) 1600-line, two-monitor workstation at SFVA. Radiographs generated by the CR system are transmitted through the ATM at UCSF to the ATM main switch at Pacific Bell in Oakland and then to the ATM switch at the SFVA Medical Center, where they are displayed at the 1600-line workstation. The complete process takes less than 2 s after the generation of the images. (Courtesy of Dr. Gretchen Gooding.)
TABLE 14.6 Specifications of High-End 2000- and 1600-Line Workstations for Teleradiology
• Two LCD monitors
• 1–2 week local storage for current + previous exam
• 1–2 s display of images and reports from local storage
• DICOM conformance
• Simple image processing functions
A single 2000-line LCD monitor workstation costs $20,000, and a 1600-line workstation starts from $10,000. User-friendly software is required for easy and convenient use by the radiologist at the workstation.

14.3.3.3 Image Compression Teleradiology requires image compression (see Chapter 5) because of the slow speed and high cost of using WAN. For lossless image compression, current technology can achieve compression ratios between 2:1 and 3:1, whereas in lossy compression using cosine transform-based MPEG and JPEG hardware or software, compression ratios of 10:1 to 20:1 can be obtained with acceptable image quality. The latest advance in image compression technology is the wavelet transform, which has the advantages over the cosine transform of a higher compression ratio and better image quality; however, hardware wavelet compression is not yet available. Some web-based teleradiology systems use a progressive wavelet image compression technique. In this technique, image reconstruction from the compressed file is performed in a progressive manner: a lower-resolution image is first reconstructed almost instantaneously and displayed on request, giving the user the impression that the image is transmitted through the network almost in real time. Higher-quality images are continuously reconstructed to replace the previous ones until the original image is reconstructed and displayed (see Section 5.6.4.7). Another method is to reconstruct only a region of interest instead of the full image. With advances in communications technology, image workstation design, and image compression, teleradiology has become an integrated diagnostic tool in daily radiology practice.

14.3.3.4 Image/Data Privacy, Authenticity, and Integrity Image transmission in teleradiology is mostly through public networks; for this reason, trust in image data becomes an important issue. Trust in image data is characterized in terms of privacy, authenticity, and integrity of the data. Privacy refers to denial of access to information by unauthorized individuals. Authenticity refers to validating the source of the image. Integrity refers to the assurance that the image has not been modified accidentally or deliberately during the transmission. Privacy is the responsibility of the public network provider and is based on firewall and password technologies, whereas authenticity and integrity are the responsibility of the end user. Authenticity and integrity are mostly based on the concept of a public and private key digital signature encrypted with mathematical algorithms after image generation and before its transmission. In general, the public and private key digital signature concept consists of seven steps:

(1) Private and public keys: to set up a method of assigning public and private keys between the examination site and the expert center
(2) Image preprocessing: to segment the object of interest in the image from the background (for example, the head in a CT image is the object of interest) and extract patient information from the DICOM image header at the examination site while the image is being generated
(3) Image digest: to compute the image digest (digital signature) of the object of interest in the image based on its characteristics using mathematical algorithms
(4) Data encryption: to produce a digital envelope containing the encrypted image digest and the corresponding patient information from the image header
(5) Data embedding: to embed the digital envelope into the background of the image as a further security measure
(6) Transmission: the image with the embedded digital envelope is sent to the expert site
(7) Verification: the expert center receives the image from step (6) and decrypts the image and the signature. It then compares two digital signatures: one comes with the image, and the second is computed from the received image, to validate the image integrity.

Chapter 16 describes the image/data security issue in more technical detail.
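A minimal sketch of steps (2)–(4) and (7) is given below, using a SHA-256 hash as the image digest. The segmentation of the object of interest and the data-embedding step (5) are omitted, and sign, encrypt, decrypt, and verify are placeholders for whatever key scheme step (1) establishes; this illustrates the general flow only, not the specific algorithm of any particular system.

```python
import hashlib
import json

def build_digital_envelope(object_pixels: bytes, patient_info: dict, sign, encrypt) -> bytes:
    """Steps (2)-(4): digest the segmented object of interest, bundle it with
    selected header information, and seal the result as a digital envelope.

    sign    : callable applying the sender's private-key signature.
    encrypt : callable encrypting with the receiver's public key.
    """
    image_digest = hashlib.sha256(object_pixels).hexdigest()   # digest of the object of interest
    payload = json.dumps({"digest": image_digest, "patient": patient_info}).encode()
    return encrypt(sign(payload))

def verify_at_expert_center(received_pixels: bytes, envelope: bytes, decrypt, verify) -> bool:
    """Step (7): recompute the digest from the received image and compare it with
    the digest recovered from the envelope."""
    payload = json.loads(verify(decrypt(envelope)))
    recomputed = hashlib.sha256(received_pixels).hexdigest()
    return payload["digest"] == recomputed
```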
14.3.4 Teleradiology Models
We present three teleradiology models that are common in current practice.

14.3.4.1 Off-Hour Reading The purpose of the off-hour reading model is to take care of off-hour reading, including evenings, weekends, and holidays, when most radiologists are not available at the examination sites. In this setup, image acquisition devices at different examination sites, including hospitals and clinics, are connected to an off-hour reading center with medium- or low-grade transmission speed (like DSL) because the turnaround time is not critical except for emergency cases. The connections are mostly direct digital with the DICOM standard. The reading center is equipped with network switches and various types of workstations compatible with the images generated by the imaging devices at the examination sites. Staffing includes technical personnel who take care of the communication networks and workstations and radiologists who come in during the evening, weekend, and holiday shifts and perform on-line digital reading. They provide a preliminary impression and transmit it to the examination site immediately after reading. The regular radiologists at the examination sites verify the reading and sign off on the reports the next day. This type of teleradiology setup is low technology, but it serves its purpose of solving the shortage of radiologists during off-hours.

14.3.4.2 ASP Model The ASP (application service provider) model is a business venture taking care of radiological image diagnosis for examination sites where on-site radiology interpretations are not available. This model can be used for equipment only or for both equipment and radiologists. In the former case, an ASP entity sets up a technical center housing network equipment and workstations.
It also provides turnkey connectivity for the examination site, where images are transmitted to the center. The examination site can hire its own radiologists to perform reading at the center. In the latter case, the center provides both technical support and radiologists for reading.

14.3.4.3 Web-Based Teleradiology Web-based teleradiology is mostly used by hospitals or larger clinics to distribute images to various parts of a hospital or clinic or outside the hospital. A web server is designed in which filtered images from PAC systems are either pushed from the PACS server or pulled by the web server. Filtered means that the web server has a predetermined directory that manages image distribution based on certain criteria, such as which types of images go where and to whom. The clients can view these filtered images from workstations connected to the web server. The clients can be referring physicians who just want to take a look at the images or radiologists who need to make a remote diagnosis. Web-based teleradiology is very convenient and inexpensive to set up because most technologies are readily available, especially within the hospital intranet environment. The drawback is that, because web technology is a general technology, the viewing capability and conditions are not as good as those of a regular PACS workstation, where the setup is geared for radiology diagnosis.
14.3.5 Some Important Issues in Teleradiology
14.3.5.1 Teleradiology Trade-Off Parameters There are two sets of trade-off parameters in teleradiology. The first set consists of image quality, turnaround time, and cost; the second set is data security, including patient confidentiality, image authenticity, and image integrity. Table 14.7 shows the teleradiology trade-off parameters among image quality, turnaround time, and cost. These three parameters are affected by four factors: the method of image capture, the type of workstation used, the amount of image compression, and the selected communication technology.

In terms of data security, we must consider patient confidentiality as well as image authenticity. Because teleradiology uses a public communication method, which has no built-in security, to transmit images, the question arises as to what type of protection one should provide to ensure patient confidentiality and to authenticate the sender. The second issue is image integrity. After the image is created in digital form, can we ensure that the image has not been altered either intentionally or unintentionally during transmission? To guard patient confidentiality and image authenticity, methods such as firewalls can be set up. To protect image integrity, data encryption and digital signatures can be used. These techniques have been in the domain of defense research for many years and can be modified for teleradiology applications. If high security is imposed on image data, it will increase the cost of decryption and decrease ease of access because of the many layers of passwords.
TABLE 14.7 Teleradiology Trade-Off Parameters

                    Image Capture    Workstation    Compression    Communication Technology
Quality                   X               X              X
Turnaround time           X                              X                   X
Cost                      X               X              X                   X
Figure 14.5 Left: A CT scan of the chest. Right: The same scan with a digitally inserted tumor (arrow). The insertion process requires minimal image processing.
The trade-off between cost and performance, confidentiality, and reliability has become a major socioeconomic issue in teleradiology. Figure 14.5 shows a CT image that has been altered digitally by inserting a tumor on the lung (see arrow). Because altering a digital image is fairly easy, developing methods to protect the integrity of image data is essential in teleradiology applications. We discuss data integrity and patient confidentiality in more detail in Chapter 16.

14.3.5.2 Medical-Legal Issues There are four major medical-legal issues in teleradiology: privacy issues, licensure issues, credentialing issues, and malpractice liability issues. The ACR Standard for Teleradiology adopted in 1994 defines guidelines for "qualifications of both physician and nonphysician personnel, equipment specifications, quality improvement, licensure, staff credentialing, and liability." Guidelines on these topics, although much is still uncertain, have been discussed extensively by James, Berger and Cepelewicz, and Kamp. Readers are encouraged to review the ACR Standard and these authors' publications for the current status of these issues.
14.4 TELEMAMMOGRAPHY
14.4.1 Why Do We Need Telemammography?
Breast cancer is the fourth most common cause of death among women in the United States. There is no known means of preventing the disease, and available
therapy has been unsuccessful in reducing the national mortality rate over the past 60 years. Current attempts at controlling breast cancer concentrate on early detection by means of mass screening, using periodic mammography and physical examination, because ample evidence indicates that such screening can indeed be effective in lowering the death rate.

Performing mass screening requires digital mammography (Section 3.4) and telemammography. Full-field direct digital mammography (FFDDM, Section 3.4.2) can overcome many inherent problems of the screen/film combination detector and at the same time provide better spatial and density resolutions and a higher signal-to-noise ratio in the digital mammogram than that of a digitized mammogram. Real-time telemammography adds to these advantages the utilization of expert mammographers (rather than general radiologists) as interpreters of the mammography examinations at the expert center. Setting up a quality telemammography service requires the FFDDM at the examination site, a high-speed teleimaging network connecting the examination site with the mammography expert center (because of the large size of digital mammograms), and high-resolution digital mammogram display workstations for interpretation.
14.4.2 Concept of the Expert Center
Telemammography is built on the concept of an expert center. This allows those radiologists with the greatest interpretive expertise to manage and read in real time all mammography examinations, thereby adding to the advantages of the FFDDM. Specifically, telemammography increases efficiency, facilitates consultation and second reading, improves patient compliance, facilitates operation, supports computer-aided detection and diagnosis, allows centralized distributive archives, and enhances education through telemonitoring.

Real time is defined in this context as a very rapid turnaround time between examination and interpretation. In addition, mammography screening in mobile units will be made more efficient, not only by overcoming the need to transport films from the site of examination to the site of interpretation, but also by permitting image interpretation while patients are still available for repeat or additional exposures. Furthermore, telemammography can be used to facilitate second opinion interpretation, in effect making world-class mammography expertise immediately accessible to community practice radiologists.

There are three protocols in telemammography: telediagnosis, teleconsultation, and telemanagement. Telediagnosis uses experts to interpret digital mammograms sent from a remote site. Teleconsultation is used to improve the efficacy in problem cases. Without telemammography, consultations are tedious, time consuming, and logistically complex. Telemanagement is used to replace the on-site general radiologist with remotely located expert mammographers for patient management.
14.4.3 Technical Issues
Telemammography services require FFDDM at the examination site, an image compression algorithm, a high-speed teleimaging WAN connecting the examination
site with the mammography expert center, and a high-resolution and efficient digital mammography display workstation for interpretation. Present technologies available for telemammography applications include the FFDDM with 50-μm spatial resolution and 12 bits/pixel (Section 3.4), ATM or Internet 2 for both LAN and WAN connections, a digital linear tape library for long-term archive, and RAID for rapid image retrieval and display. The telemammograms can be displayed with multiple 2K × 2.5K resolution LCD monitors with 80–100 ft-L brightness. Figure 14.6 depicts the schematic of a telemammography workstation with possible viewing formats and examples of digital mammograms on the 2K display workstation. Figure 14.7 shows the schematic of a telemammography system used for telediagnosis, teleconsultation, and telemanagement applications. Telemammography is still in the experimental stage; issues to be considered are the image quality at the expert's workstation, the speed of communication, image/data security, and the cost of assembling a high-quality and efficient digital mammography workstation.
14.5 TELEMICROSCOPY
14.5.1 Telemicroscopy and Teleradiology
Telemicroscopy is the transmission of digital microscopic images (Section 4.8) through a WAN. Under the umbrella of telemedicine applications, telemicroscopy and teleradiology can be combined into one system. Figure 14.8 shows an example of a generic combined telemicroscopy and teleradiology (low resolution) system; its major components and their functions are described as follows:

(1) A 1K/2K film scanner for digitization
(2) A 512-line display system for quality control
(3) A personal computer (PC) with four functions:
    (a) For teleradiology applications, software for image acquisition, patient data input, and data communication
    (b) For telemicroscopy applications, software for automatic focusing, x-y stage motion, color filter switching, and frame grabbing. These tools are needed for converting slides to digital images.
    (c) A database for managing images and textual data
    (d) A standard communication protocol for LAN and WAN
(4) A light microscope with automatic focusing, an x-y stepping motor controlled stage, R-G-B color filters, and a CCD camera
(5) Standard communication hardware for LAN and WAN
(6) A communication carrier (e.g., T-1 or DSL)
(7) A display workstation at the expert site that can display 1K × 1K gray level images and 512 color images with a standard graphic user interface. It needs a database to manage the local data. The display workstation should also be able to control the microscopic stage motion as well as the automatic focusing.
Figure 14.6 Schematic of a 2000-line two-monitor workstation showing the display format of digital mammograms: (A) the four-on-one format; (B) the two-on-one format; (C) eight mammograms displayed on two monitors with the four-on-one format; (D) the one-on-one (left) and two-on-one (right) formats. CC: craniocaudal view; MLO: mediolateral oblique view.
Figure 14.7 A telemammography testbed system for telediagnosis, teleconsultation, and telemanagement (1997–1999). The expert site was located at the breast imaging center, Mt. Zion Hospital (MZH), and the remote site was at the Ambulatory Care Center (ACC), UCSF. The Data Center was located at LRI. 1. Telediagnosis: Images sent from the FFDDM at MZH or ACC to the 2K WS at MZH for interpretation. 2. Teleconsultation: Images sent from the FFDDM at ACC to the 2K WS at both MZH and ACC. The referring physician at ACC and the expert at MZH used the WS for consultation. 3. Telemanagement: Images sent from the FFDDM at ACC to the 2K WS at MZH. The expert at MZH telemanaged the patient, who was still at ACC, based on the reading. LRI: Lab for Radiological Informatics, not shown in this drawing.
14.5.2 Telemicroscopy Applications
Two major applications of telemicroscopy are in surgical pathology and laboratory medicine. The former deals with samples from surgical specimens at the tissue level, whereas the latter considers samples from peripheral fluid and biopsy at the cell level. Telemicroscopy requires both static and dynamic images. For this reason, an automatic x-y moving stage and automatic focusing are necessary during a teleconsultation process in which both the referring physician and the expert can move the microscopic slide during discussion. In Section 4.8, we presented the microscopic image acquisition and display components of a digital microscopic system. The telecommunication component in telemicroscopy uses technology similar to that of teleradiology, except for dynamic images. In this case, because of the rapid transmission requirement of dynamic imaging, a high-bandwidth WAN and/or image compression are required. Figure 4.23A shows a telemicroscopy system; the left-hand side is the image acquisition component, and the right-hand side is the expert image workstation.
Figure 14.8 A conceptual combined teleradiology and telemicroscopy system for image and textual data transmission.
14.6 REAL-TIME TELECONSULTATION SYSTEM
14.6.1 Background
Real-time consultation between referring physicians or general radiologists and an expert is critical for timely and adequate management of problem cases. During consultation, both sides need to (1) synchronously manipulate high-resolution DR images (over 16 MB/exam) or large-volume MR/CT/US images (over 8–20 MB/exam) in real time, (2) perform interpretation interactively, and (3) converse with audio. An even more complex teleconsultation model is used when historical images are required for comparison and when information from the radiology information system (RIS) is needed by the expert. In this situation, neither conventional teleconferencing technology nor PACS workstations would be sufficient to handle the complex model requirements, for two reasons. First, most off-the-shelf teleconferencing technology can only handle "store" and "forward" operations, which have no direct connection to the PACS server for inputting images. Second, PACS workstations do not have the necessary capability for interactive teleconsultation, which requires real-time dual-cursor telecommunication technology.

In this section we present the design and implementation of a real-time teleconsultation system. The teleconsultation system is designed to operate in a DICOM PACS clinical environment with bidirectional remote control technology to meet critical teleconsultation applications with high-resolution and large-volume medical images in a limited-bandwidth network environment. We first give the system design and implementation methods and then describe the teleconsultation procedures and protocols. Such a teleconsultation system has been used in a neuroradiology PACS clinical environment.
14.6.2 System Requirements
To provide real-time consultation services for serious or difficult cases, real-time teleconsultation systems for high-resolution and large-volume medical images should meet the following requirements:

(1) Provide real-time teleconsultation services for high-resolution and large-volume medical images (MR, CT, US, CR, DR, and mammography)
(2) Synchronously manipulate images at both local and remote sites, including remote cursor, window/level, zoom, cine mode, overlay, and measurement
(3) Support multimedia, including audio and (optionally) video communications
(4) Interface with PACS through the DICOM standard
(5) Be usable in intranet (LAN) and Internet (WAN) environments with the TCP/IP protocol
(6) Operate over scalable network connections, including Internet 2, ATM, Ethernet, modem, and wireless
(7) Have a graphic user interface and image manipulation functions similar to those of a standard PACS workstation, to minimize user retraining
(8) Be cost-effective

In Section 14.6.3, we describe a teleconsultation system designed to satisfy these conditions.

14.6.3 Teleconsultation System Design
14.6.3.1 Image Display Work Flow Real-time teleconsultation relies on the synchronization of the local (referring site) and remote (expert site) workstations. In a clinical PACS environment, images are first acquired by the acquisition computer, sent to the PACS server through a DICOM gateway, and then delivered to different workstations according to routing schedules. In the stand-alone PACS model, after images arrive at the workstation, they are first stored in the local database, accessed by the user through the graphic user interface, and displayed on the monitors. Generally speaking, the image display procedure is a collection of sequential operations, or a pipeline process as shown in Figure 14.9, including loading image data from disk storage to computer memory, processing the data, and rendering the data on the screens. In an event-driven window environment, the user's manipulations of the graphic user interface (GUI) windows or of displayed images using the input devices (mouse or keyboard) are translated into events. The procedure to display images can be described as a flow of image data through the pipeline that is controlled by these events.

14.6.3.2 Communication Requirements During Teleconsultation Teleconsultation involves a referring site and an expert site. Between the two sites, three kinds of communications are required: (1) images transmitted from the referring physician or general radiologist site to the expert site; (2) synchronization of the manipulations of displayed images at both sites; and (3) voice conversation between the referring physician and the expert.
Figure 14.9 A pipeline process showing image display procedure in a display workstation.
Of these three types of communications, transmitting images involves a large amount of data, but the images normally can be preloaded before the consultation session starts unless it is an emergency case (emergency service requires a high-speed network connection between the two sites). Image preloading can be done with various kinds of networks, for example, a high-speed ATM network, a T1 line, or Internet 2. To synchronize the manipulation operations on displayed images at both sites, the generated events are required to be exchanged in real time between the two sites. Because the events are usually very short messages, even conventional telephone lines (a DS-0 line with 56 Kbits/s) can be used to transmit these messages in real time if the maximum message length is less than 2200 bits [~(56 Kbits/s)/(25/s)] and the image repainting rate after processing is less than 25/s. Although audio conversation is a real-time on-line procedure, it also can be carried out through the telephone line.

14.6.3.3 Hardware Configuration of the Teleconsultation System Based on the communication analysis, the concept of distributed image data storage, and the expert model of teleradiology, a real-time teleconsultation system for the two sites, the expert site and the general physician or radiologist site, can be designed. The hardware configuration of the teleconsultation system at each site includes:

(1) One workstation linked with the other through an Ethernet, ATM, or modem connection
(2) High-resolution display boards that can be configured to support single or dual monitors
(3) A high-resolution gray scale monitor (dual monitors as an option)
(4) One telephone for audio communication

Figure 14.10 shows a simplified teleconsultation connection between two sites.
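The message-length bound quoted in Section 14.6.3.2 is easy to reproduce; the snippet below assumes only the 56-Kbits/s DS-0 rate and the 25/s repaint rate used in the text.

```python
DS0_BPS = 56_000          # DS-0 telephone line, 56 Kbits/s
MAX_EVENT_RATE = 25       # image repaints per second assumed in the text

max_message_bits = DS0_BPS / MAX_EVENT_RATE
print(f"Maximum synchronization message length: {max_message_bits:.0f} bits "
      f"({max_message_bits / 8:.0f} bytes) at {MAX_EVENT_RATE}/s over DS-0")
```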
Figure 14.10 A simplified teleconsultation system connecting a referring site and an expert site.
14.6.3.4 Software Architecture Two key software features in the teleconsultation system are the synchronization of user operations and the ability to exchange messages in real time between the local and remote sites. To achieve these features, the consultation workstation needs message routing, an interpretation function module, an authoring function, and standard display and manipulation functions similar to those of a PACS workstation. The following are the basic modules:

(1) Image display graphic user interface for display and teleconsultation
(2) Database manager for image display and teleconsultation authoring
(3) Memory manager for image data memory management
(4) Image processor for image processing
(5) Image viewer for image rendering and display
(6) Event interpreter for local and remote message dispatching
(7) Remote control manager for routing messages between the local and remote sites
(8) An event/message queue used to transfer the events and remote messages from the GUI and the remote manager to the display module
(9) A queue used to transfer the encoded messages from the event interpreter to the remote manager
(10) Teleconsultation database used to store the information of selected patients, studies, series, and images, as well as expert names, hospitals, and hosts
(11) DICOM communication services for DICOM image receiving and sending between the teleconsultation system, the scanners, and the PACS server
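A minimal sketch of how modules (6), (7), and (9) might cooperate is shown below. The JSON event encoding and the length-prefixed TCP framing are assumptions for illustration, not the system's actual message format.

```python
import json
import queue
import socket

class EventInterpreter:
    """Module (6): encode local display events into short messages and place them
    on the encoded-message queue (module 9) for the remote control manager."""
    def __init__(self, encoded_queue: "queue.Queue[bytes]"):
        self.encoded_queue = encoded_queue

    def handle(self, event: dict) -> None:
        # e.g. {"type": "window_level", "window": 400, "level": 40} or a cursor move
        self.encoded_queue.put(json.dumps(event).encode())

class RemoteControlManager:
    """Module (7): exchange encoded messages with the peer workstation so that both
    sites repaint their displays identically (hypothetical length-prefixed framing)."""
    def __init__(self, peer: socket.socket, encoded_queue: "queue.Queue[bytes]"):
        self.peer, self.encoded_queue = peer, encoded_queue

    def send_pending(self) -> None:
        while not self.encoded_queue.empty():
            msg = self.encoded_queue.get()
            self.peer.sendall(len(msg).to_bytes(4, "big") + msg)

    def receive_one(self) -> dict:
        # Simplified: assumes recv returns the complete frame in one call.
        length = int.from_bytes(self.peer.recv(4), "big")
        return json.loads(self.peer.recv(length))   # then posted to the display module
```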
Figure 14.11 shows the architecture and data flow of the teleconsultation system. In this design, the operation of the workstation is very similar to that of a generic PACS workstation, to minimize the user relearning process. Function modules (1), (2), (3), (4), and (5) are designed for normal display as described in Figure 14.9. Modules (6) and (7) are specifically designed for bidirectional remote control for teleconsultation. Module (6), the event interpreter, collects the events coming from the display module, encodes them for transmission, and puts them into a queue in Module (9), which transfers the messages to Module (7).
Figure 14.11 Software architecture and data flow in a teleconsultation workstation. Numbers refer to the software modules described in the text. The remote control manager links the workstation to the other workstation.
Module (7), the remote control manager, picks up the encoded messages, sends them to the remote site, and receives the remote messages and posts them to the display module for remote control of the display behavior. Module (10) is a teleconsultation database for image display and consultation, and Module (11) provides the DICOM communication services for transferring DICOM images between the teleconsultation workstations and the PACS or imaging modalities.
14.6.4 Teleconsultation Procedure and Protocol
A teleconsultation session proceeds as follows. When a case requires a consultation, a general physician or radiologist first reviews the images from the imaging modality or the PACS workstations and pushes them with the DICOM protocol to the teleconsultation workstation located in the local reading room. At the teleconsultation workstation, the referring physician then performs data authoring, supported by the teleconsultation system, and sends the images to the teleconsultation workstation located at the expert site. This could be done through either the local area
network if it is within the same building or campus, or through the wide area network if it is distant. Later, either the general physician or an expert can call the other site to start the consultation session. The teleconsultation protocol has three steps:

Data formatting: The first step is to convert all input images for teleconsultation to the DICOM standard if they are not already DICOM. This is done at the acquisition gateway or at the PACS server.

Data authoring: The authoring procedure prepares the image data for teleconsultation. The authoring module, "TeleConApp," is an integrated component of the software package (inside the database manager module). Figure 14.12 shows the data authoring procedure. There are three steps in data authoring:

(1) The general physician or radiologist uses the TeleConApp program to create an object called the virtual envelope, which includes information on the selected patients, studies, and series, as well as the host name of the expert site and the name of the consultant, from the teleconsultation local database. Note that the virtual envelope does not yet contain any images. The virtual envelope is sent to the expert site through a network with DICOM communication services.
(2) After the expert site receives the envelope, either site can create a consultation session, and the general physician site sends the DICOM image objects related to the patient, as dictated by the virtual envelope, to the expert site.
(3) After sending the images, the general physician site automatically performs a DICOM query to the expert site, and the expert site verifies the receipt of the virtual envelope and image objects to the general physician site. The consultation session can start once the data are verified.

Data presentation: Before the data presentation, both the expert and the general physician site have the virtual envelope and image data. After the consultation session is initiated, the data presentation begins. During data presentation, either site can operate the TeleConApp program to display and manipulate images and related information. TeleConApp synchronizes the operations of image display and manipulation at both sites. The challenge is the software synchronization of a dual-cursor system at each workstation so that both sites can have equal control during the teleconsultation.
Figure 14.12 Data authoring procedure for teleconsultation.
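The three authoring steps can be summarized in a short sketch such as the one below. The VirtualEnvelope fields and the three callables standing in for the DICOM send and query services are hypothetical; they simply mirror the send-envelope, send-images, verify sequence of Figure 14.12.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualEnvelope:
    """Step (1): the consultation request sent ahead of the images.
    It carries only identifiers, no pixel data."""
    patient_id: str
    study_uids: List[str]
    expert_host: str
    consultant: str

def author_consultation(envelope: VirtualEnvelope, send_envelope, send_study, query_remote) -> bool:
    """Steps (1)-(3) of data authoring, with the DICOM services represented by
    three callables (hypothetical wrappers around C-STORE / C-FIND)."""
    send_envelope(envelope.expert_host, envelope)                        # step (1)
    for uid in envelope.study_uids:                                      # step (2): image objects
        send_study(envelope.expert_host, uid)
    received = query_remote(envelope.expert_host, envelope.patient_id)   # step (3): verify receipt
    return set(envelope.study_uids) <= set(received)
```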
Figure 14.13 WAN network connection of the teleconsultation system with the referring site and the expert site and a PACS workstation in a PACS environment.
Figures 14.13 and 14.14 show the connection of such a teleconsultation system in the PACS environment. Other applications derived from this teleconsultation system are used for teleconferencing with dynamic images and for intravascular US and cardiac angiography.
14.7 TRENDS IN TELEMEDICINE AND TELERADIOLOGY
The concept of telemedicine and teleradiology originated in the 1970s; however, the technology was not ready for real clinical applications until several years ago. As telemedicine and teleradiology are being integrated into daily clinical service, the associated socioeconomic issues discussed in Section 14.3.5.2 also surface. The trends in telemedicine and teleradiology are to balance cost with the requirements of data integrity, image quality, and turnaround time for the service. We see that teleradiology will become a necessity in medical practices of the twenty-first century and will be an integral component of telemedicine as the future method of health care delivery.
Figure 14.14 A teleconsultation session in neuroradiology showing the remote site and the expert center. (Courtesy of Drs. J. Stahl and J. Zhang.)
Telemedicine and teleradiology use web and Internet technologies heavily. Issues that must be resolved immediately are how to lower the communication cost in telemedicine and teleradiology applications and how to bundle textual information with image information effectively and efficiently. For the former, Internet 2 and wireless networks appear to be excellent candidates, and for the latter, the ePR will evolve as a potential winner.
CHAPTER 15
Fault-Tolerant PACS
PACS is a large computer network and system integration of medical images and databases. The system is mission critical because it runs around the clock, 24/7. For this reason, its operational reliability is vital. The purpose of fault-tolerant PACS design is to maintain continuously available operation of the system. This chapter first reviews the concepts of system fault tolerance, the definitions of high availability (HA) and continuous availability (CA), and the possible causes of PACS system failure. No loss of image data and no interruption of PACS data flow are the two main criteria of success in a PACS operation. The chapter presents these criteria in detail and also delineates methods of satisfying them with current PACS technology. A case study based on clinical experience with various failures in the PACS controller archive server, along with methods of remedy, is discussed. The chapter concludes with the current concept of a fault-tolerant PACS design.
15.1 INTRODUCTION
HA and CA are the two commonly used terms categorizing the degree of system reliability. There is a continuum of HA solutions, ranging from a 99% system availability rate (88 h of downtime/yr), achieved by using the simple "hot spare" technology, to 99.99% availability (1 h of downtime/yr), achieved with clustered server-based solutions using hardware and failover software. Fully hardware "fault tolerance" (FT) systems are known as CA solutions, with the highest availability rate of 99.999% and a maximum downtime of 5 min/year [http://www.tandem.com, http://IBM.com, http://www.sun.com].

PACS is a mission-critical operation, continuing 24 hours a day and seven days a week, and it requires continuous availability. In PACS, three operational criteria are vital to the system's success: no loss of data, continuous availability of the system, and acceptable system performance. We consider these topics in this chapter.

PACS is a system integration of medical images and patient databases. The integration involves many components, including imaging modalities, computers, display workstations, communication devices, network switches, and computer servers housing health care databases. Figure 15.1 shows a generic PACS architecture, and Figure 15.2 shows the logical connection of some basic components in PACS and teleradiology.
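The downtime figures quoted above for HA and CA systems follow from the availability percentages; a one-line conversion such as the sketch below reproduces them.

```python
HOURS_PER_YEAR = 24 * 365   # 8760 h

for availability in (0.99, 0.9999, 0.99999):
    downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability * 100:.3f}% availability -> "
          f"{downtime_h:.1f} h/yr ({downtime_h * 60:.0f} min/yr) of downtime")
```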
Figure 15.1 Generic PACS components and data flow. CA, Continuously available components replacing single point of failure components—database gateway, PACS controller, and application servers.
PACS and teleradiology are mission-critical systems running around the clock and have no tolerance for failure. In Figures 15.1 and 15.2, each component is a single point of failure of the system and should have a fault-tolerant design built in to ensure CA operation. Section 15.7 will elaborate further on this concept. For convenience, we will not distinguish between PACS and teleradiology in this discussion of fault-tolerant design.
15.2 CAUSES OF A SYSTEM FAILURE
Causes of an integrated computer and network system failure can come from human error, natural disaster, software, and component hardware. Table 15.1 shows a survey of the causes of system downtime. Although no formal data on the causes of PACS downtime have been documented, on the basis of field experience we can assume that PACS follows the same trend of downtime causes as the general computer and network systems in Table 15.1. Among these causes, natural disaster is not predictable and hence cannot be avoided, but its impact on PACS downtime can be minimized by choosing proper locations for installation, taking proper natural disaster precautions, and using off-site backup. Properly chosen locations and special precautions can minimize system shutdown due to natural disaster, and off-site backup can prevent loss of data. In terms of human error and software, the reliability of current PACS software and the refinement of system design to minimize human error have been greatly improved.
Figure 15.2 Physical connection of PACS and teleradiology. CA, continuously available components replacing single point of failure components—network switch, HIS/RIS/CMS gateway, PACS controller, and application servers.
TABLE 15.1 Causes of Computer and Network System Downtime

Computer hardware                24%
Hard disk drive                  26%
Communication processor          11%
Data communication network       10%
Software                         22%
Human error                       6%
Other                             1%
Total                           100%
Redundancy in software architectural design allows the system to be recovered gracefully after a software failure. Minimizing human intervention during PACS operation in the system design can lower the rate of human error. However, hardware failure in the computer, disk, communication processor, or network is unpredictable and hence difficult to avoid. In this chapter, we will not discuss natural disaster and human error (see Chapter 18). We first assume that the entire PACS software is fully debugged and is immune
to failure. Therefore, in fault-tolerant PACS design, we first address the issue of what would happen if some hardware components fail and how to take care of it so that the PACS operation will not be compromised. We then discuss a fault-tolerant image server design combining both hardware and software for continuously available system operation.

If a hardware component involved is the single point of failure (SPOF) of the system (for example, the PACS controller, which is the main server of PACS), its failure can render the entire system inoperable until the problem is diagnosed and resolved. Other critical hardware components that cannot tolerate failure are enterprise health care system cluster servers, application servers, web servers, network switches, imaging acquisition gateways, and database gateways. A failure of one of these components may cripple that branch of operation, which in turn may interrupt the complete system operation. Often, a component failure can also result in loss of data. Both interruption of operation and loss of data are unacceptable in a totally digital clinical operation. On the other hand, the failure of a display workstation is not critical because there are duplicate workstations; the failure may cause inconvenience to the user, but it does not cripple the system. Smith et al. described a painful experience during a major PACS server failure and how a method was derived to revert the operation back to film-based operation.

Often, a software remedy is implemented in various PACS components to minimize the impact of hardware component failure on system operation. Software remedy of hardware failure is elaborate in design, difficult in implementation, and tedious in data and state recovery. In the following sections, we categorize some of these failures and the methods of remedy provided by current PACS technology. A case study of PACS controller archive server downtime and its remedy based on clinical experience is given.
15.3 NO LOSS OF IMAGE DATA
15.3.1 Redundant Storage at Component Levels
To ensure no loss of image data, a PACS always retains at least two copies of an image on separate storage devices until the image has been archived successfully to a long-term storage device (e.g., an optical disk or a digital tape library). Figure 10.1 shows the various storage subsystems in PACS. This backup scheme is achieved via the PACS intercomponent communication system, which can be broken down as follows:

• At the imaging device. Images are not deleted from the imaging device's local storage until technologists have verified the successful archiving of individual images to other storage devices via the PACS connections. In the event of failure of the acquisition gateway process or of the archive process, images can be re-sent from the imaging device to the PACS.
• At the acquisition gateway computer. Images acquired in the acquisition gateway computer remain in its local magnetic disks until the archive subsystem has acknowledged to the acquisition gateway computer that a successful archive has been completed. These images are then deleted from the magnetic disks residing in the acquisition computer so that storage space on these disks can be reclaimed.
• At the PACS controller. Images arriving in the archive server from various acquisition gateways are not deleted until they have been successfully archived to permanent storage. On the other hand, all archived images are stacked in the archive server's cache magnetic disks or RAID and will be deleted based on their aging criteria (e.g., the number of days since the examination was performed; discharge or transfer of a patient).
• At the display workstation. Images stored in the designated display workstation will remain there until the patient is discharged or transferred. Images in the PACS archive can be retrieved from any display workstation via PACS intercomponent communication.
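A sketch of the acquisition gateway rule (delete a local copy only after the archive acknowledges success) is shown below; the archive_acknowledged callable is a hypothetical stand-in for the PACS database lookup that confirms permanent archiving.

```python
import os

def reclaim_gateway_storage(local_dir: str, archive_acknowledged) -> int:
    """Delete images from the acquisition gateway's local disk only after the
    archive subsystem has acknowledged a successful permanent archive.

    archive_acknowledged : callable returning True once the image is safely in
                           long-term storage (a database lookup in a real PACS).
    """
    reclaimed = 0
    for name in os.listdir(local_dir):
        if archive_acknowledged(name):     # at least one other copy is guaranteed to exist
            os.remove(os.path.join(local_dir, name))
            reclaimed += 1
        # otherwise the local copy is kept: two copies must exist until archiving completes
    return reclaimed
```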
15.3.2 The Archive Library
The archive digital linear tape (DLT) library consists of multiple input/output drives and controllers housed inside the jukebox, which allow concurrent archival and retrieval operations on all of its drives. A redundant power supply is essential for uninterrupted operation. To build fault tolerance into the PACS archive, a backup archive system can be used: two copies of identical images can be saved through two different paths in the PACS network to two archive libraries. Ideally, the two libraries should be in two different buildings, in case of natural disaster. The secondary DLT library requires only a computer for control and does not need a PACS controller. Figure 15.3 shows a low-cost backup archive design for no loss of image data.
Figure 15.3 A low-cost backup TANDEM design for no loss of image data.
15.3.3 The Database System
The database system comprises redundant database servers running identical, reliable commercial database systems with structured query language (SQL) utilities. A mirrored database (i.e., two identical databases) can be used to duplicate the data directory during every PACS transaction involving the server. The backup database can reside in a different computer or in a different partition of the same disk of the same computer. The former configuration provides better fault tolerance. The mirroring feature of the database system provides the entire PACS database with uninterrupted data transactions that guarantee no loss of data in the event of a system failure or a disk crash. Although the second method, using a different partition of the same disk, is not fault tolerant during a disk crash, it guarantees the continuous availability of the PACS database during a disk partition failure. The benefits of the second configuration are lower cost and easier implementation and management of the mirrored database.
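A minimal sketch of the mirrored-write idea is given below, with SQLite standing in for the commercial SQL database; the table name and fields are assumptions for illustration only.

```python
import sqlite3

SCHEMA = "CREATE TABLE IF NOT EXISTS image_directory (exam_uid TEXT PRIMARY KEY, location TEXT)"

def mirrored_insert(primary: sqlite3.Connection, mirror: sqlite3.Connection,
                    exam_uid: str, location: str) -> None:
    """Write an image-directory entry to both the primary and the mirror database,
    so either copy can rebuild the data directory after a failure."""
    for conn in (primary, mirror):
        conn.execute(SCHEMA)
        conn.execute("INSERT OR REPLACE INTO image_directory VALUES (?, ?)",
                     (exam_uid, location))
        conn.commit()

# The two connections may point to different computers (better fault tolerance)
# or to different partitions of the same disk (lower cost), as discussed above.
primary = sqlite3.connect("pacs_primary.db")
mirror = sqlite3.connect("pacs_mirror.db")
mirrored_insert(primary, mirror, "1.2.840.10008.999.1", "RAID:/cache/exam001")
```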
15.4 NO INTERRUPTION OF PACS DATA FLOW
15.4.1 PACS Data Flow
PACS is a networked and component-integrated system (Figs. 15.1 and 15.2). To summarize the functions of the major components and networks discussed in previous chapters:

• Imaging modalities: Generate medical images
• Acquisition gateways: Acquire images, process images, and send images to the PACS controller. The PACS controller has three components—the controller, database server, and archive units
• Controller: Receive images from acquisition gateways, intelligently route images to display workstations, access the PACS database to update/query image-related records, access the archive units to store or retrieve images, and provide query/retrieve services for display workstations
• PACS database server: Perform PACS image-related information management
• PACS archive units: Store images for short-term and long-term archive
• Display workstations: Display PACS images for diagnosis
• PACS networks: Connect the PACS components
An image in a PACS component is usually processed by a sequential processing chain with a first-in-first-out (FIFO) model, as shown in Figure 15.4. For example, in a DICOM-compliant CR acquisition gateway computer, "Process 1" of Queue 1 in Figure 15.4 is the Storage SCP (Service Class Provider) process receiving DICOM CR images from the CR reader; "Process i + 1" is the Storage SCU (Service Class User) process sending the image to the PACS controller. The other processes (2, . . . , i) perform image processing tasks necessary for downstream image archive and display in the PACS controller and display workstations. The queues "Queue 1" to "Queue i" are structured files (if they are located on disk drives) or tables (if they are in a database) used for transferring image processing tasks between the processes. In the following, we use the collective term "image processing" to represent these processes.
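The FIFO chain can be pictured with a few lines of Python. The in-memory queues and the trivial processing step below are placeholders; in a real gateway the processes are separate programs communicating through structured files or database tables, as described above.

```python
import queue

def storage_scp(inbox: "queue.Queue[bytes]", image: bytes) -> None:
    """Process 1: receive a DICOM image (here just a byte string) into Queue 1."""
    inbox.put(image)

def processing_stage(src: "queue.Queue[bytes]", dst: "queue.Queue[bytes]", work) -> None:
    """Processes 2..i: take the next image from one queue, apply an image-processing
    step, and hand the result to the next queue (first in, first out)."""
    dst.put(work(src.get()))

def storage_scu(outbox: "queue.Queue[bytes]", send) -> None:
    """Process i+1: forward the processed image to the PACS controller."""
    send(outbox.get())

# Minimal chain with a single, trivial processing step.
q1, q2 = queue.Queue(), queue.Queue()
storage_scp(q1, b"CR image bytes")
processing_stage(q1, q2, lambda img: img)          # placeholder for a real processing task
storage_scu(q2, lambda img: print(f"sent {len(img)} bytes to the PACS controller"))
```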
Figure 15.4 The first-in-first-out (FIFO) image processing model in a PACS component.
Figure 15.5 Key devices in a component computer.
15.4.2 Possible Failure Situations in PACS Data Flow
The hardware of any component shown in Figure 15.5 can fail, and the failure will interrupt the PACS data flow through that component. If the component is the SPOF, like a PACS controller, a network switch, or an application server, it may render the entire system inoperable. By the same token, any key hardware device in a component computer, as shown in Figure 15.5, can fail, which in turn can crash the component computer and stop any image processing procedures in that computer. The result is the interruption of data flow or even the loss of images.

15.4.3 Methods to Protect Data Flow Continuity from Hardware Component Failure

15.4.3.1 Hardware Solutions and Drawbacks There are two possible solutions. The first is hardware redundancy, in which a backup computer or a tandem system is used to replace an on-line failed computer. Figure 15.6 shows the design of a manually activated tandem PACS controller.
Figure 15.6 Design of a tandem PACS controller with manual activation of the secondary controller.
The second solution is to use a cluster server as the main PACS server; the PACS controller is composed of multiple servers controlled by a cluster manager.

Hardware solutions have two drawbacks. First, they are expensive because of the hardware redundancy and the software involved in designing the failover. Second, it is tedious and labor intensive to recover the current state after a component failure. In the case of hardware redundancy, if the primary component fails, the secondary will take over; after the primary is fixed and on-line again, it will take tremendous effort to shift the operation from the secondary back to the primary. In the case of a cluster server, if one clustered server fails, the recovery process and the return to the original operational state are also very tedious.

15.4.3.2 Software Solutions and Drawbacks Two solutions are offered. The first solution is to design the image data flow in such a way that any image in a transit state has at least two soft copies in two different components. The second solution is to utilize the principle that the chance of simultaneous failure of two hardware devices is much less than that of either one of them. This principle can be used to minimize the loss of image data during an image processing procedure due to a device failure in a component. A snake road design in the image processing software can be used to partition the FIFO model (described in Fig. 15.4). In this design, the data pass alternately through different hardware devices, for example, between the motherboard (CPU) module (or the network card module if the process is for communication) and the hard disk drive, as shown in Figure 15.7.
Figure 15.7 The snake road design to minimize loss of data due to hardware failure during an image processing procedure.
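A sketch of the snake road idea is shown below: each processing stage reads its input from disk and writes its output back to disk, so the image alternates between the CPU/memory module and the disk drive. The file naming and the toy stages are assumptions for illustration.

```python
import os
import tempfile

def snake_road_chain(image: bytes, stages, work_dir: str) -> str:
    """Run an image through a chain of processing stages, writing the intermediate
    result to disk after every stage so that the data alternate between the
    CPU/memory module and the hard disk drive (the snake road of Figure 15.7)."""
    path = os.path.join(work_dir, "stage_0.img")
    with open(path, "wb") as f:
        f.write(image)                   # park the raw image on disk first
    for i, stage in enumerate(stages, start=1):
        with open(path, "rb") as f:
            data = stage(f.read())       # CPU/memory step
        path = os.path.join(work_dir, f"stage_{i}.img")
        with open(path, "wb") as f:
            f.write(data)                # back to disk, so a single-device failure
                                         # still leaves a recoverable copy
    return path

with tempfile.TemporaryDirectory() as tmp:
    final = snake_road_chain(b"raw CR image", [lambda d: d, lambda d: d.upper()], tmp)
    print("last intermediate file:", final)
```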
The drawback of the first software solution is that it makes the whole PACS software architecture complicated and also increases the costs of storage, administration, and software maintenance. The drawback of the second software solution is that it slows down the execution of the image processing procedure because of the many disk I/O operations required for reading and writing a large image data file. The software architecture is also very weak because any failure in a software process will break the image processing chain in the snake road. As a result, it decreases the whole PACS performance and may even produce an image stacking effect in some processing steps, which can eventually overflow the capacity of the local disks, leading to loss of image data.
15.5 CURRENT PACS TECHNOLOGY TO ADDRESS FAULT TOLERANCE
Current PACS technology utilizes a mix of software and hardware approaches in addressing PACS fault tolerance. Multiple copies of an image are always in existence in various components at one time until the image is archived in the long-term storage. A second copy of the archived image is stored off-line but not necessarily in a library system because of the expense involved. To ensure that there is no interruption of PACS data flow, larger PAC systems use a distributed server architecture, and almost all systems have a mirrored database design. Most PAC systems have certain software designs to avoid the interruption of data flow due to hardware component failure. However, the most frequent hardware failure is in the hard disk drive. In this case, the data flow in the branch of the PACS operation related to the disk drive will be interrupted. If the disk is a component in the PACS controller, the data flow in the complete system will be interrupted; only local functions in the image acquisition workstations can remain in operation. No effective method has been derived to circumvent this hardware failure to date.
In Section 15.7, we discuss a fault-tolerant PACS architecture as a means to provide a systemwide solution.
15.6 CLINICAL EXPERIENCES WITH ARCHIVE SERVER DOWNTIME: A CASE STUDY
We use the Saint John's Health Center (St. John's), Santa Monica, CA, as a case study to describe the possible component failures in the PACS archive server that can cause operation downtime. Methods of remedy are discussed. In this description, current PACS fault-tolerant technology is also described.
15.6.1 Background
St. John's is a 224-bed community hospital that performs about 120,000 radiology exam procedures each year. Of those exams, approximately 90% are digitally acquired and stored in a PACS archive server. The archive server is also responsible for prefetching and distribution of any current and previous exams. It also provides patient location information for automatic image distribution to specific review workstations located on the hospital clinical floors. Like all traditional LAN PACS, the archive server is the command center of St. John's PACS. This case study represents the PACS at St. John's as of 2001; since that time, the PACS has been upgraded to a different server and archive configuration. The hardware configuration for the archive server consists of a Sun Ultra2 workstation, two mirrored hard disks, a RAID (redundant array of independent disks) 4 server, and an optical disk jukebox with two drives and a single robotic arm. The Sun Ultra2 workstation has one 4.1-GB hard disk that contains the system software and the application software for the archive server. Attached to the workstation is a separate unit that contains two 4.1-GB mirrored hard disks. This mirrored hard disk device contains mirrored data of the patient and image directories of all the exams stored in the RAID and the MOD (magneto-optical disk) jukebox. A third copy of the databases is stored on a separate optical disk off-line, and backup is performed on a daily basis. The RAID 4 system is connected to the Ultra2 workstation via the SCSI (small computer system interface) port and is a 90-GB array of hard disks. This RAID system holds about 2 weeks' worth of current exams for fast retrieval. In addition, these exams are archived long term in the MOD jukebox. The MOD jukebox contains a single robotic arm that retrieves the platters and inserts them into one of two drives for exam retrievals. The jukebox has the capacity to contain 255 platters for a total of 3.0 TB of data losslessly compressed at a ratio of 2.5:1. The SPOFs for the archive server are the single system hard disk and the single CPU motherboard of the Sun Ultra2 workstation. However, the database, which is a crucial part of the server, is mirrored on two hard disks with a third off-line daily backup.
15.6.2 PACS Server Downtime Experience
During a period of 1.5 years before September 2001, in a 24-hour, 7-days-a-week operation, the archive server encountered a total of approximately 4 days of downtime
spread across three separate incidents. The first two incidents were due to system hard disk failures, and the third was due to a CPU motherboard failure (see Fig. 15.5). A description of the downtime procedures and events and the ensuing uptime procedures is presented to give a snapshot of how archive server downtime affects the clinical workplace. 15.6.2.1 Hard Disk Failure The first two archive server downtimes involved the failure of the server system hard disk. Because the system software and the application software reside on this hard disk, the archive server was considered completely down. The initial diagnosis was made during the early evening hours. A new hard disk was ordered by service, and the service representative arrived early the next morning to begin bringing the server back up. This was accomplished by replacing the hard disk with a new one and then installing the system software. Once the system software had been installed and properly configured, the next step was to install the server application software. The application setup was complex, especially during the second downtime, because the application software had just undergone a new software upgrade and only two service personnel within the organization had a complete understanding of the installation procedures for the new software upgrade. In both cases, the server was brought up by the end of the day, making a total of 1.5 days of downtime for each incident. 15.6.2.2 Motherboard Failure The third archive server downtime involved the CPU motherboard. The NVRAM (nonvolatile random access memory) on the motherboard failed and needed replacement. The failure occurred once again during the early evening, and the initial diagnosis was made. A new CPU motherboard was ordered, and the service representative arrived early the next morning. The hardware was replaced, and tests were made to ensure that all hardware components within the archive server workstation were fully functioning. Because the hard disk was not corrupted or damaged, the server was brought up around noontime, after a downtime of approximately 1 day.
15.6.3 Effects of Downtime
15.6.3.1 At the Management Level During the archive server downtime, normal operations and routines that were performed automatically had to be adjusted and then readjusted once the server was brought back up. These adjustments had a major impact on the clinical workflow and are described further below. Under normal conditions, all new exams were automatically routed to the review workstations on the hospital clinical floors based on the location of the patient, which was obtained from the archive server, which in turn obtained it from the Radiology Information System (RIS) interface. Prefetched exams were distributed to the corresponding workstations along with the current exams for comparison. In addition, radiologists and referring physicians could query and retrieve on demand any previous exams that were not located on the local workstation. Once the archive server experienced downtime, the workflow changed considerably. Communication among all staff personnel was crucial to relay the current downtime status and implement downtime procedures. The technologists had to manually distribute the
exams to the specific reading workstation along with the corresponding review workstation on the floors. Therefore, what was normally a one-mouse-click action to begin autorouting became a series of manual sends to multiple separate workstations. For the clerical staff, hard copy reprints for off-site clinics were not possible, because most exams requested for hard copy reprint did not reside on the local workstations and could not be queried and retrieved. For the radiologists, any new exams acquired during the downtime did not have prefetched exams at the local reading workstation for comparison. Also, they could not query and retrieve any exams from the archive. The same applied to the referring physicians, who also queried and retrieved exams. 15.6.3.2 At the Local Workstation Level Exam data management procedures were implemented on the local workstations as well during the downtime of the archive server. At the onset of downtime, some workstations had exams that were in the process of being sent to the archive. These sending jobs needed to be cleared from the queue before they started to affect the performance of the workstations. In addition, any automatic archiving rules had to be turned off on the local workstations so that no additional exams would be sent to the archive while it was down.
15.6.4 Downtime for Over 24 Hours
To prepare for possible downtime of longer than 24 hours, some exams that have already been read and archived may need to be deleted from the local workstations to make room for incoming new exams. This process can only be performed manually and thus can be very time consuming, depending on the total number of workstations that receive new exams. Once the archive server is up and running again, the uptime procedures for the reading workstations must be implemented. All automatic archiving rules must be activated. Any new exams that have not been archived must be manually archived. Finally, query and retrieve tests must be performed on the workstations to ensure that the PACS is fully operational again. These final uptime procedures for the workstations consume about two hours of extra time (depending on the number of workstations), in addition to the server downtime, to bring the system to full operational status.
15.6.5 Impact of the Downtime on Clinical Operation
In these three downtime experiences, St. John’s did not lose any image data. In addition, there was no damage to the mirrored database disk system. In fact, with all the downtime procedures implemented, the end users only experienced the inconvenience of not having the query/retrieve functionality or previous exams for comparison studies.
15.7 CONCEPT OF CONTINUOUSLY AVAILABLE PACS DESIGN
The case study given in Section 15.6 describes an example of current fault-tolerant PACS design in clinical practice. The concept of continuously available PACS design is to minimize manual intervention and changes to management
and workstation daily operation routines, because these recovery procedures are tedious and labor intensive in clinical operation, not to mention the steps required to bring the PACS engineering operation back. Therefore, an ideal design is to build fault tolerance into each SPOF in the system. Thus the PACS controller and each application server are SPOFs of the system and should have fault tolerance built in. The question is how to provide fault tolerance for these SPOFs so that the PACS remains continuously available in operation if any of them fails. If we use hardware solutions alone, the PACS as a whole will be very expensive and difficult to maintain. If we use software solutions alone, the software development effort will be extensive, and the system performance will suffer, as witnessed in the case study. The current concept in continuously available PACS design is to replace every SPOF in the complete system by a new continuous availability (CA) component with four characteristics. First, the CA component has an uptime of 99.999% (downtime: 5 min/year) to satisfy the requirement of continuous availability. Second, in case the CA component fails because of a failure in any of its hardware devices, its recovery should be automatic, without human intervention, and the recovery time should be within seconds. Third, the CA component is a one-to-one replacement of the corresponding existing PACS component without any modification of the rest of the system. And fourth, the replacement is easy to install and affordable. Figures 15.1 and 15.2 are a version of the CA PACS design, which is identical to a generic PACS architecture except that all possible SPOFs (PACS controller, application servers, database gateway computer, and network switch) must be replaced by CA components. Section 15.8 describes such a CA PACS server.
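The 99.999% figure translates directly into a downtime budget; the short calculation below (illustrative arithmetic only) reproduces the roughly 5 minutes per year quoted above.

```python
# Downtime budget implied by an availability level (illustrative arithmetic).
def downtime_minutes_per_year(availability):
    minutes_per_year = 365.25 * 24 * 60          # about 525,960 minutes
    return (1.0 - availability) * minutes_per_year

for a in (0.999, 0.9999, 0.99999):
    print(f"{a} availability -> {downtime_minutes_per_year(a):.1f} min/year of downtime")
# 0.999 availability -> 526.0 min/year of downtime
# 0.9999 availability -> 52.6 min/year of downtime
# 0.99999 availability -> 5.3 min/year of downtime
```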
15.8 CA PACS SERVER DESIGN AND IMPLEMENTATION
In this section, we present a design of a CA server using both hardware and software for PACS application and use the terms fault tolerance (FT) and CA interchangeably. We first discuss the hardware components in an image server and the CA design criteria. The architecture of the CA image server is then presented with system evaluation criteria and results. This CA PACS server has been implemented as a 24/7 backup archive server for a clinical PACS since 2002.
15.8.1 Hardware Components and CA Design Criteria
15.8.1.1 Hardware Components in an Image Server The basic hardware components in an image server consist of the CPUs and memory, I/O ports and devices, and storage devices shown in Figure 15.8. Any of these hardware components can fail, and if the failure is not addressed immediately, operation of the server will be compromised. The design of the CA image server is to develop both hardware and system software redundancy to automatically detect and recover from any hardware failure instantaneously. Table 15.1 shows a current survey of the causes of computer and network system downtime, in which computer hardware, the hard disk drive, the communication processor, and the data communication network account for about 71% of system failures.
Figure 15.8 General computer system architecture with short-term (RAID) and long-term (DLT, digital linear tape) library archive. (Components shown include CPUs, a CPU bus, RAM and EEPROM memory, I/O interfaces, an Ethernet board, a serial device interface, a storage disk, and RAID/DLT.)
This design addresses this 71% of hardware-related failures in the complete image server system and the methods of circumventing such failures with failover to achieve the definition of a CA system. PACS application software failures and human errors are not considered in this CA image server. 15.8.1.2 Design Criteria Fault tolerance is the implementation of redundant hardware components with software control in the server such that, in the event of a component failure, maximum availability and reliability can be achieved without the loss of transactional data. The requirements for a FT image server are as follows.
(1) Highly reliable server system operations
• No loss of data
• No workflow interruptions
Image server reliability means server uptime and fault tolerance. In the event of a server hardware component failure, users might notice a minimal performance impact during the server failover process, but all server functions, such as image archive, retrieval, distribution, and display, must be continuously available. No loss of data and no work interruptions are permitted. All current server processes and transactions should automatically resume with no interruptions.
(2) Acceptable system performance
• No performance degradation in daily routine operations
• Occasional glitches resolved in an acceptable amount of time
Performance inclusively measures how well the hardware, operating systems, network, and application software perform together. Ultimately, the image server performance affects the end users and the response time of the applications. For FT image servers, the redundant server hardware, once it takes over from the failed hardware, should
be able to handle the same workload in network speed, CPU power, and archive storage as in normal server operations, so that the user will not notice a performance degradation. Normally, server operations and user sessions halt momentarily (about 30 s; see Table 15.2) until the FT image server successfully fails over and resumes in the event of a system glitch. A longer delay (in minutes) is acceptable for noninteractive background processes such as image archiving.
(3) Low cost and easy implementation
• Portable
• Scalable
• Affordable
Portability means that existing server software should be able to run on the FT image server without any major changes. Scalability tests how additional hardware, system software, and application software work with the FT server and affect its ability to handle the workload. High-end, million-dollar FT machines such as the Tandem system, which utilizes a sophisticated system design and can recover from system failure in milliseconds, are too expensive for most large-scale medical imaging applications and are often used for short-transaction types of applications in the banking, security and stock exchange, and telecommunication industries. The design of the FT server using the concept of the triple modular redundancy (TMR) UNIX server discussed here is affordable; the prototype costs about three times as much as a comparable UNIX machine but has much longer failover times (in seconds) than the Tandem system.
15.8.2 Architecture of the CA Image Server
15.8.2.1 The Triple Modular Redundant Server
Triple Modular Redundant (TMR) Server Concept and Architecture The CA image server uses TMR to achieve fault tolerance at the CPU/memory level. Figure 15.9 shows the core of the TMR server, which is made up of three identically configured UltraSPARC-based modules (Sun Microsystems). The three modules are tightly synchronized and interconnected through a high-speed backplane for intermodule communications. Each module is a complete, operational computer running Sun's Solaris UNIX operating system, with its own UltraSPARC CPU, memory, I/O interfaces, bridge logic, and power supply. Each module runs all software applications independently and synchronously under the standard Solaris operating environment (Kanoun and Ortalo-Borrel, 2000; http://www.resilience.com). Around the UltraSPARC core, programmable ASIC (application-specific integrated circuit) technology is used to build the TMR bridge logic that keeps the three modules synchronized, continuously monitors and compares their operation, and exchanges I/O data among the three modules. The bridge logic reads Sbus transactions within each module and compares them across the modules. If the logic detects a variation between transactions, it assumes an error. The system then pauses (typically for 5–30 s, depending on memory size) while the diagnostic software determines the faulty module and disables it. Next, the memories of the two remaining modules are synchronized by performing a full memory copy. Once the copy is complete, the system resumes processing with the remaining two modules.
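As a back-of-the-envelope illustration (not from the book; it assumes independent module failures and ignores the bridge and voter hardware), the benefit of needing only two of the three modules can be quantified as follows.

```python
# Availability of a two-out-of-three (TMR) configuration, assuming independent
# module failures (illustrative arithmetic; voter/bridge failures are ignored).
def tmr_availability(module_availability):
    a = module_availability
    return a**3 + 3 * a**2 * (1 - a)     # P(all three up) + P(exactly two up)

for a in (0.99, 0.999, 0.9999):
    print(f"module availability {a} -> TMR availability {tmr_availability(a):.8f}")
# module availability 0.99 -> TMR availability 0.99970200
# module availability 0.999 -> TMR availability 0.99999700
# module availability 0.9999 -> TMR availability 0.99999997
```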
Figure 15.9 (A) Fault-tolerant (CA) image server with the triple modular redundancy (TMR) architecture. Each module (A, B, C) contains the basic computer system architecture shown in Fig. 15.8; the bridges are used to synchronize the three modules, which share an asynchronous I/O bus and I/O devices. (B) The prototype CA TMR with three modules (Ultra Sparc II-like architecture) with three separate power supplies (bottom). The console switch (middle) is for managing the three module displays during the debugging stage.
The Bridge Voting Logic The hardware in the bridge compares all data that come into or go out of the synchronous part of the system to diagnose any problems. The hardware voting ASIC built into the bridge unit provides real-time fault detection
and masking functions that are transparent to the application program. The TMR voting system uses the simple majority voter shown in Figure 15.10. The data that go into or out of the three modules are correlated and voted on to find the most correct output (Z). A disagreement detector compares the voter's output (Z) with the input (I1, I2, and I3) of each active module. When a disagreement occurs, the detector flags the corresponding module as having failed, and the module is taken off-line automatically. The rationale is that the probability of all systems containing the same error (producing the same bad data given some input) is extremely remote. If only two modules were used, the mere fact that the two modules behave differently would not identify which module is faulty. This is a conceptually simple model, because the logic that compares the operation of the three modules need not know what faulty behavior looks like, only that a deviant module should be regarded as faulty. The fundamental premise of this architecture is that corrupted data are not a problem until they are sent through the system. The hardware in the bridge compares all data that come into or go out of the synchronous part of the system to diagnose any problems. Because the comparisons occur within the hardware, there is no overhead on the operating system, and performance is not affected. Although the voting unit in the configuration shown in Figure 15.10A provides adequate protection against hardware faults in a module, it does not protect against a fault within the voter itself. To provide total protection, the replicated voters shown in Figure 15.11 are used in the TMR server. Here the voter has been triplicated, and the link from each module is passively fanned out through the backplane to each of the voting units. Because each voter built into the bridge unit is logically grouped with the module itself, a voter fault is equivalent to a module fault and will be masked by further voter action at the next stage. Passive fan-out of the link connections used in the TMR backplane is necessary to avoid the possible introduction of Byzantine (intermittent) faults that could arise from the use of an active fan-out mechanism.
Figure 15.10 TMR majority voting system and logic. (A) TMR voter (V) with a feedback loop; after the vote, the feedback loop disables the minority module if one exists. (B) The voting logic: one-bit majority voting using three AND gates and one OR gate, with output Z = 1 if two or all three of the inputs are 1, and Z = 0 if two or all three of the inputs are 0.
Figure 15.11 TMR system with replicate voters: each module (A, B, C) has its own voter (V) producing outputs ZA, ZB, and ZC. Three voters are used to avoid a single point of failure in the voter, and the replicate voters produce a correct output even if one voter is faulty. The replicate-voter design is used in the TMR prototype.
15.8.2.2 The Complete System Architecture (Avizienis, 1997)
Other Server Components—FT I/O and Storage Subsystem Figure 15.12 shows the complete architecture of the TMR and the I/O buses and devices. Other server components in addition to the TMR in the server system are the I/O buses and devices. Not only is operating I/O devices such as SCSI or Ethernet interfaces in TMR synchronization extremely difficult, doing so would not achieve the required systemwide CA. Each I/O subsystem must be considered individually, with CA implemented in a manner appropriate to that subsystem. Ethernet Each of the three modules contains its own 100BaseT Ethernet interface, each of which is connected, via independent paths, to the local network backbone. The three interfaces form a single software interface with one IP and MAC (media access control, a unique hardware number) address. One interface acts as the active interface, while the others stand by. Should the module containing the active interface fail, or some element of its connection to the backbone fail, that interface is disabled and a standby unit becomes active in its place. Normal network retry mechanisms hide the failure from applications. Fault-Tolerant Storage System There are two storage subsystems: RAID for short-term and the DLT library for long-term storage. Because the former is mission critical in a complete CA image server system, the RAID system needs a FT design as well. DLT in the CA image server is used as a peripheral storage subsystem; FT can be designed and implemented up to the connectivity level with the CA image server. RAID as short-term storage. Although RAID has its own built-in FT mechanism for handling disk failure, its SPOFs are the RAID controller and its connection to the CA image server. In this design, a dual-controller RAID is used for short-term storage. TMR modules A and B are connected to each of the two RAID controllers, as shown in Figure 15.13. Disk drive or RAID controller failures are handled by the RAID mechanism. The two redundant connections shown in Figure 15.13 provide a fully FT short-term storage solution, which guarantees system survival in the event of failure of one module, one RAID controller, or any combination of both at the same time. A Hitachi 9200 (325-Gbyte) dual-controller RAID (Hitachi Data Systems, Santa Clara, CA) with Veritas Volume Manager software (Mountain View, CA) is implemented with the CA image server, as shown in Figure 15.13.
Figure 15.12 (A) The complete CA image server system architecture. Three Ethernet interfaces, one from each of the three modules, connect to the LAN, while only one Ethernet interface is active and forms only one IP address for the application. Two mirrored disks (Mirrored Disk0 and Mirrored Disk1) are connected to module A and module B through UW (ultrawide) SCSI interfaces with failover mode. Two RAID controllers (RAID Cotl0 and RAID Cotl1) are connected to module A and module B through UW SCSI interfaces with failover mode. Two DLT drives (DLT drive0 and DLT drive1) are connected to module A and module B through UW SCSI interfaces with failover mode. Section 15.8.3 (System Evaluation) explains how this fault-tolerant architecture performs failover under various conditions. UPS, uninterruptible power supply. (B) The CA image server connected to the PACS simulator as demonstrated during RSNA 2001 and SPIE 2002. The simulator consists of the modality simulator (left), gateway, TMR (see Fig. 15.9B), and two workstations (right). The RAID and DLT library are not shown in the figure.
The dual hardware controllers, connected to modules A and B, respectively, provide two paths to the TMR server, while the Veritas software dynamically monitors the two paths and switches automatically from one to the other in case one path is disconnected. DLT library for long-term archive. For long-term archiving in the CA image server, a StorageTek L40 DLT library with 3.2-TB capacity (Storage Technology Corp., Louisville, CO) is used.
Figure 15.13 A RAID with two hardware (HW) controllers. The module A SCSI port connects to one controller, and the module B SCSI port connects to the other controller, providing redundancy for system reliability.
The library has two drives, each of which is connected to one of the TMR modules, providing redundant paths to the CA image server and hence FT connectivity to the server. Meanwhile, Veritas Storage Migrator and NetBackup software are installed to automatically migrate and back up the data from the short-term RAID to the long-term tape library archive. The Veritas Migrator software has a built-in feature to monitor the multiple paths and to fail over from one to another in case of a failure of any single CPU module or tape drive. The DLT library itself, however, is not designed to be fault tolerant; its controller and robot arm are still SPOFs. Because the tape library is used for secondary archive, this lack of FT can be tolerated: the main purpose is preservation of the data rather than real-time recovery from a library system failure.
15.8.3 System Evaluation
15.8.3.1 Testbed for System Evaluation To evaluate the robustness and effectiveness of the CA image server design, two key components are crucial to the process. First, a testbed environment is needed to allow for observation and results gathering. Second, the key SPOFs have to be identified and defined as targets for replacement within the new design. In the specific case of a CA image server, these SPOFs are manifested in system components or devices. The Image Server Simulator System An image server simulator with the following components and functions was used to perform the CA image server evaluation (see Section 22.1.4.2):
(1) Acquisition modality: Simulates a device that acquires medical images
(2) Gateway: Receives images from the acquisition modality and verifies no image loss
(3) Image server: Receives images from the gateway for storage and distribution (Sun UltraSPARC II workstation running Solaris v.2.6)
(4) Two workstations: Receive images from the image server and display them (Cedara display software, Toronto, Canada)
(5) Network infrastructure
The simulator is a closed network using TCP/IP Fast Ethernet consisting of CAT5 cables and two portable 8-port 100-Mbit/s switches. Each of the system components resides on a separate computer device (see Fig. 15.14). The modality, gateway, and two workstations each run on a separate Pentium III Windows 2000 PC workstation. The image server runs on a Sun UltraSPARC II Solaris v.2.6 workstation. In addition, the simulator has an automatic software package in which images are sent from the gateway to the image server and then deleted and re-sent in a continuous loop. This function aids the burn-in evaluation discussed below. Chapter 22 discusses this image server simulator in more detail. The CA Image Server Simulator System (Huang et al., 2000a; Huang and Liu, 2000; Huang et al., 2000b) Initially, the testbed simulator was designed with the image server running on a Sun UltraSPARC II Solaris v.2.6 workstation. The image server is the main engine for the system and is crucial for image data flow; therefore, it is considered a SPOF component within the system. Before the CA image server was implemented, all components in the simulator were tested to ensure that the image data flow was complete. This was accomplished by executing the image data flow multiple times, and the results were verified at the image server. To evaluate the CA image server, the image server in the simulator system was replaced with the CA image server described in Section 15.8.2.1 (Fig. 15.9). Software identical to that running on the image server was installed on the CA image server. The same testing procedures and data used to verify the simulator were applied to the CA image server simulator to ensure that there were no software, setup, or configuration differences while implementing the evaluation protocol. 15.8.3.2 CA Image Server—Testing and Performance Measurement Two sets of tests are used to evaluate the reliability and the performance of the CA image server during failover. Details and results are described in this section. Evaluation Protocol for CA Image Server Reliability and Functionality Tests Define and create clinical test scenarios. Development of an evaluation protocol is anchored around creating operational situations in which hardware failure can occur. Simulations of hardware failures are created within the operation scenarios to test the abilities of the CA image server.
Figure 15.14 Image server simulator system components and data flow used in the CA image server system evaluation: acquisition modality → gateway → image server → display workstation.
Operation scenarios simulated in the evaluation can be broken down into two types from the perspective of the end user: 1) operational background or "passive" scenarios and 2) operational on-demand or "active" scenarios. Operational background or "passive" scenarios are automatic functions of the CA image server. These include storage and archiving of image data and automatic distribution of these data to workstations. An effective operational simulation is to have the CA image server perform these background functions on a continuous basis, 24 hours a day, 7 days a week. An automatic loop was created within the simulator testbed. The loop involves the gateway, which automatically sends image data to the CA image server, which in turn automatically distributes the image data to the workstations. After a set amount of time elapses, the CA image server automatically deletes the image data from its storage and database, and the data are re-sent. This loop can be executed continuously for extended periods of time to simulate a true operational environment. This scenario was performed in the evaluation protocol of the CA image server design, and any hardware failure occurrences were noted and recorded. Operational on-demand or "active" scenarios are functions of the image server executed on requests made by another device. For example, a workstation can use a DICOM query to ask the image server for specific image data, and a retrieve request is initiated for the image data to be sent to the requesting device. This scenario was also executed with the CA image server. Failover procedures are described in greater detail in the following paragraphs. Failover procedures for evaluation. Three types of hardware failures were simulated on the CA image server: 1) network device/component failure, 2) CPU, memory, or entire motherboard failure, and 3) hard disk/storage failure. Scenario 1. Ethernet connection failure. The first failover procedure was evaluated during both receiving and transmitting of image data by the CA image server. During transmission of image data to the workstation, the Ethernet cable connecting the CA image server to the network switch was removed, simulating a network component failure. Figure 15.15 shows this failover procedure. The image data should continue to transfer after a few seconds, once the CA image server has completed its Ethernet failover automatically. Successful transmission of image data during this failover procedure was verified by displaying the image data on the workstation and confirming that all data were present and not corrupted.
Figure 15.15 Ethernet failover procedure on module A: the CA image server marks the port as bad, and the arrows signify that the selected data transmission goes through module B.
Successful receiving of image data was verified by sending the image data to the workstation for verification. Scenario 2. CPU, memory, or motherboard failure. The same failover procedure was applied to the simulated hardware failure of the CPU, the memory, the entire motherboard, or the power supply. Power to one of the three modules was shut off during the transmitting or receiving of image data, either by turning the power switch off or by pulling the power plug. Figure 15.16 shows this failover procedure. The image data should continue to transfer once the CA image server has completed its CPU module failover. Again, successful transmission of image data during this failover procedure was verified by displaying the image data on the workstation and confirming that all data were present and not corrupted. Successful receipt of image data was verified by sending the image data to the workstation for verification. Scenario 3. Hard disk or storage device failure. The same failover procedure was applied to the simulated hardware failure of the external hard disk or other storage device. Power to one of the hard disks was shut off during the transmitting or receiving of image data, either by turning the power switch off or by pulling the power plug on the external hard disk. Figure 15.17 shows this failover procedure. The image data should continue to transfer once the CA image server has completed its hard disk failover. An additional step was involved: if the hard disks were mirrored, the data had to be resynchronized once the hard disk was recovered after failover. The same verification procedures were performed as above. Fault-Tolerant Server Performance Interactive test scenarios. The performance test measures the failover time of the CA image server in the event of system component failures that occur during standard image server operations (receiving and distributing image data). The same three scenarios described previously were used.
Test Scenario 1: Ethernet failure while receiving and distributing image data
Test Scenario 2: CPU module failure while receiving and distributing image data
Test Scenario 3: Disk failure while receiving and distributing image data
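As an aside, the per-scenario summary statistics reported in Table 15.2 (range, average, and standard deviation over 182 trials) can be produced with a few lines of code; the sample values below are made-up placeholders, not the study's raw measurements.

```python
# Summarizing measured failover (recovery) times per scenario (illustrative).
# The sample values are placeholders, not the measurements behind Table 15.2.
import statistics

recovery_times = {                      # seconds, one list per failure scenario
    "Ethernet": [5.1, 6.0, 4.8, 7.2, 5.5],
    "CPU module": [28.0, 31.5, 27.9, 33.0, 29.1],
    "Disks": [36.0, 74.5, 38.2, 70.1, 40.0],   # bimodal: immediate vs SCSI timeout
}

for scenario, samples in recovery_times.items():
    print(f"{scenario:12s} range {min(samples):.1f}-{max(samples):.1f} s, "
          f"mean {statistics.mean(samples):.1f} s, "
          f"SD {statistics.stdev(samples):.2f} s")
```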
Figure 15.16 CPU failover procedure on module B: the CA image server compares data and marks module B as failed, and the arrows signify that data transmission goes through module A.
Figure 15.17 Hard disk failover procedure. In normal operation, the CA image server writes simultaneously to SCSI disks A and B. When the disk attached to module B fails, the write fails and the software marks disk B as bad but keeps working; data are written instead to the hard disk connected to module A only.
TABLE 15.2 Average System Recovery Time for Common Faults

Common Faults              Ethernet    CPU Module    Disks
Range of recovery time     3–15        20–40         35–75
Average                    5.8         29.8          42.2
Standard deviation         1.05        4.66          32.86

Total tests: 182 for each scenario. Times are expressed in seconds.
Table 15.2 summarizes the recovery times for the scenarios listed above. The image data flow continues automatically after the system recovery. No data loss and no interruption of data flow were observed; all image data were successfully transferred. The effect from the user's perspective is a delay equal to the system failover time. The three scenarios were each tested 182 times using test data of 111 images totaling 28 MB in size, and the recovery time was measured. Standard deviations (SD) were calculated based on the sample size of the 182 executed failures for each scenario. Note that for Scenario 3, the high SD is due to two different types of failure times: 1) SCSI failures captured by the system immediately, and 2) SCSI failures captured by a SCSI ping timeout, which gives the longer failover time. Background burn-in test. The automatic feedback distribution loop described in Section 15.8.3.1 between the CA image server, gateway, and workstation was activated for a continuous period of 3 months (2,160 hr). This is the burn-in period that simulates an image server in a clinical setting. No system failure was observed. With the CA image server, applications execute concurrently on highly replicated hardware. In the event of a failure, work continues on the remaining and still fully functional hardware. The system and network performance are not affected except during the failover process, when users experience a delay equal to the system recovery time.
Although fault recovery may be effectively addressed by the CA design of the hardware and system, the time required to recover from errors can vary from a couple of seconds to 75 seconds, depending on one or more of the following:
(1) The nature of the failure
(2) The elapsed time to discovery of the failure
(3) The time required to resynchronize the functional hardware
(4) The time required to reestablish network communications with switches
(5) The time waiting for a SCSI timeout and then taking the alternative path to the external storage device
The CA image server with the simulator (Figs. 15.12B and 15.14) was demonstrated live at the Annual Conference of the RSNA (Radiological Society of North America), Chicago, during 2000, 2001, and 2002 and at the SPIE (The International Society for Optical Engineering) Medical Imaging Conference, San Diego, in 2001 and 2002.
15.8.4 Applications of the CA Image Server
15.8.4.1 PACS and Teleradiology The CA image server described in this section can be used to replace SPOFs in the image components of the PACS operation. Figures 15.1 and 15.2 show the generic PACS and teleradiology architecture, where each SPOF marked by "CA" can be replaced by the CA image server design. The CA image server simulator described in Section 15.8.2.1 is actually a generic PACS designed for the PACS training described in Chapter 22. 15.8.4.2 Off-Site Backup Archive—Application Service Provider Model (ASP) Off-Site Backup Archive—ASP Model for Disaster Recovery The CA image server can be used as a fault-tolerant solution for disaster recovery of short-term image data using an application service provider (ASP) model. The ASP short-term image archive provides instantaneous, off-site automatic backup of acquired image data and instantaneous recovery of stored image data with CA quality and low operational cost. Such an application has been implemented to support an off-site backup archive for a clinical PACS. The CA image server with both RAID and DLT library, as shown in Figure 15.12A and B, located at the Image Processing and Informatics Laboratory (IPI) at USC, serves as a short-term off-site backup archive server for St. John's Health Center in Los Angeles. One hundred percent of the clinical image data are sent to this ASP CA image server in parallel with the exams being acquired from the modalities and archived in the main PACS server at St. John's. Currently, connectivity between the main archive and the ASP storage server is established via a T-1 (1.5 Mbits/s) connection. In the near future, Internet 2 can be used to replace the T-1 connection, and a 155-Mbits/s transfer rate can be realized.
During the implementation stage, a disaster scenario was initiated, and the disaster recovery process using the ASP archive server was successful in repopulating the clinical system on-site within a short period of time (a function of the data size and data transfer rate). The ASP archive was able to recover 2 months of image data with no complex operational procedures. Furthermore, no image data loss was encountered during the recovery. Table 15.3 shows the amount of image data, in terms of clinical image exams, that was tested on this system. A total of approximately 447 exams comprising 29,000 images of various types, or 9 GB of data in total, was tested. The average T-1 performance bandwidth was measured as 179 Kbytes/s using the FTP protocol and 168 Kbytes/s using the DICOM transfer protocol. This ASP off-site backup archive has been integrated with the St. John's PACS for daily clinical operation. Off-Site Backup Image Archive—Scheduled Downtime Service The CA image server can also be used as a backup archive solution for scheduled downtime events that affect the main PACS server located on-site. Routinely, the main image server within the hospital undergoes software upgrades as well as preventive maintenance. Although these downtime events are scheduled, they still affect normal clinical workflow because the main image servers are mission-critical systems. Recently, such an event was arranged at St. John's in which the main image server on-site was scheduled for software upgrades. The CA image server, which was off-site, was used to provide image data for emergency operation during this downtime period. A total of 100 images (87.5 MB) were transmitted to St. John's clinical system directly from the IPI over a 4-h period.
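As a rough feasibility check (illustrative arithmetic only; it assumes 1 GB = 10^9 bytes and 1 Kbyte = 10^3 bytes and ignores any overhead beyond the measured rates), the time needed to move the 9-GB test set over the measured T-1 throughput can be estimated as follows.

```python
# Rough transfer-time estimate for the 9-GB recovery test set (illustrative).
DATA_GB = 9.0                                          # total test data quoted above
RATES_KBYTES_PER_S = {"FTP": 179.0, "DICOM": 168.0}    # measured T-1 throughput

for protocol, rate in RATES_KBYTES_PER_S.items():
    seconds = (DATA_GB * 1e9) / (rate * 1e3)
    print(f"{protocol}: about {seconds / 3600:.1f} hours for {DATA_GB:.0f} GB")
# FTP: about 14.0 hours for 9 GB
# DICOM: about 14.9 hours for 9 GB
```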
TABLE 15.3 Results of Image Data Stored Using the ASP Backup Archive During a Disaster Recovery Operation

                               CT Exams   MR Exams   CR Exams   US Exams   RF/Angio Exams   Total
No. of exams                   111        110        109        61         56               447
Total images                   10,385     15,977     164        1,981      497              29,004
Avg. no. of images per exam    93.6       145        1.5        32.5       8.9
Data size (MB)                 5,192.5    998.6      1,312      495.3      994              8,992.4

15.8.5 Summary of Fault Tolerance and Failover
15.8.5.1 Fault Tolerance and Failover In this section, we summarize the design and implementation of a CA image server that can achieve 99.999% hardware uptime. The design concept is based on a TMR server with three redundant server modules. The fault tolerance and failover are based on coupling the TMR with a majority vote mechanism and a failover software architecture. The majority
votes detect the faulty component in cycle time, and the software takes care of the automatic failover. Depending on the type of hardware failure and the state of execution in the component, the failover can take from 3 to 75 s (SD = 32.86 s). The longer failover times and large SD occur for the computer disks, where a SCSI failure can be captured either by the system immediately or by a SCSI ping timeout, which yields the large SD. If a mirrored disk fails, the CA image server continues to run smoothly, but reconstructing the mirrored disk in the background does require more time, depending on the size of the disk under consideration. 15.8.5.2 Limitation of the TMR Voting System The TMR voting system currently implemented is a passive (static) one that uses fault masking to hide the occurrence of a fault without further (active) system action to diagnose the failed module. A hybrid approach, which is more expensive but better for achieving higher availability, could be used. It implements a feature that automatically distinguishes between hard (repeatable) and transient (not repeatable) failures and then responds appropriately. The current system responds to all errors as though they were hard, by disabling the affected module. A hard error in a computer system is reproducible and will occur every time the same component attempts the same operation; the module with a hard error must be replaced. If the failure is transient, the system should automatically resynchronize the module back into operation. However, if the number of transient errors exceeds a set threshold, the module is taken out of service for replacement. The hardware-based voting approach to FT does not address software failures that are due to bugs in the application program or the operating system itself, because the data from the three modules are the same and will be voted to be without error. If the operating system fails, all execution entities fail simultaneously, and the event requires a reboot. Application failures need a restart. Automatic restart of failed applications may be implemented on FT systems with the addition of event management software. In PACS and teleradiology applications, such FT management software should be implemented to improve the reliability of the total system, hardware and software. The TMR configuration with three modules is the minimum redundancy in which the majority voting logic can identify a faulty module. After a single module fails, a TMR configuration falls back to operate as a DMR (double modular redundancy) configuration until the third module is repaired. In a DMR configuration, the system can still detect a fault by comparison, but voting cannot determine which of the two modules is faulty. Therefore, a DMR configuration requires, in general, additional algorithms to perform self-diagnosis and determine the faulty module. The current TMR server does not support self-diagnosis in the DMR configuration. 15.8.5.3 The Merit of the CA Image Server Although the concept of TMR logic has been used in other types of FT design, the TMR CA image server described and implemented here is a technology innovation in its handling of failover. The CA image server connected with the FT storage as a total system for medical image applications has several main advantages over other current FT server designs. It is truly continuously available, lower in cost to implement, portable, scalable, affordable, easy to install without extensive changes in the application software, and not
manpower intensive during failover and system recovery. The TMR CA image server is very suitable for large-scale image database applications. In addition to replacing the SPOF components in an operational PACS, the CA image server can be used as an ASP model for a short-term backup archive for disaster recovery and scheduled downtime service. Although the CA server described is based on a UNIX Solaris system, similar concepts can be extended to other computer architectures and operating systems.
CHAPTER 16
Image/Data Security
16.1 INTRODUCTION AND BACKGROUND
16.1.1 Introduction
Data security is a very important issue when digital images and pertinent patient information are transmitted through public networks in telemedicine and teleradiology applications. Generally, trust in digital data is characterized in terms of the privacy, authenticity, and integrity of the data. Privacy refers to denial of access to information by unauthorized individuals. Authenticity refers to validating the source of a message, that is, that it was transmitted by a properly identified sender. Integrity refers to the assurance that the data were not modified accidentally or deliberately in transit, by replacement, insertion, or deletion. Conventional Internet security methods are not sufficient to guarantee that image/data have not been compromised during transmission. Techniques including network fire walls, data encryption, and data embedding are used for additional data protection in other fields of application, such as financial, banking, and reservation systems. However, these techniques have not been systematically applied to medical imaging, partly because of the lack of urgency until the recently proposed HIPAA (Health Insurance Portability and Accountability Act) requirements for patient data security. In this chapter we present a digital envelope (DE) method to ensure image data security during transmission through public communication networks and when the data are archived in permanent storage. This method can also be used within the institution if image data security is needed. In this chapter, a medical image Digital Signature (DS) is defined as the encrypted message digest of the image obtained using existing public domain hashing algorithms. A medical image Digital Envelope (DE) is defined as the DS plus the encrypted relevant patient information in the DICOM (Digital Imaging and Communication in Medicine) image header. The Sending Site (SS) is where the image originates, and the Receiving Site (RS) is where the image is received. Image/data means the image plus the relevant patient information.
16.1.2 Background
Security is an extremely critical issue when image/data is transmitted across a public network. With current technology and know-how, it is not difficult to get access to
the network and to insert artifacts in the image/data that defy detection. As a result, image/data could be compromised during transmission. We give two examples, in digital mammography (projection image) and chest CT (sectional image), to illustrate how easy it is to alter medical digital images. Figure 16.1 shows a digital mammogram with 2-D artificial calcifications inserted: A is the original mammogram, B is the mammogram with artificial calcifications added, C is the magnification of a region containing the added artifacts, and D is the subtraction image between the original and modified mammograms. Calcifications are very small, subtle objects within a mammogram; if inserted, such artifacts would create confusion during diagnosis. Figure 14.5A shows a CT scan of the chest, and Figure 14.5B shows a 3-D artificial lesion inserted.
Figure 16.1 An example of a digital mammogram with inserted artificial calcifications. (A) Original mammogram; (B) with artifacts; (C) magnification of some artifacts; (D) subtracted between (A) and (B). The artifacts are highlighted with overexposure during display.
With the artificial lesion camouflaged by pulmonary vessels, some effort is required to detect it. For these reasons, image/data integrity becomes a critical issue in a public network environment. Three major organizations related to medical image/data security have issued guidelines, mandates, and standards for image/data security. The ACR Standard for Teleradiology, adopted in 1994, defines guidelines for "qualifications of both physician and nonphysician personnel, equipment specifications, quality improvement, licensure, staff credentialing, and liability." The Health Insurance Portability and Accountability Act (HIPAA) of 1996, Public Law 104-191, which amends the Internal Revenue Service Code of 1986 (HIPAA, 2000), requires certain patient privacy and data security measures. Part 15 of the DICOM Standard (PS 3.15-2000) specifies security profiles and technical means for application entities involved in exchanging information to implement security policies (DICOM, 1996). In addition, SCAR (Society for Computer Applications in Radiology) issued a primer on "Security issues in digital medical enterprise" during the 86th RSNA meeting, 2000, to emphasize the urgency and importance of this critical matter. Despite these initiatives, there have not been active, systematic research and development efforts in the medical imaging community to tackle this issue seriously. Many techniques can be used for data protection, including network fire walls, data encryption, and data embedding, and different techniques are used under different situations. In telemedicine and teleradiology, because data cannot be limited to a private local area network protected by a fire wall, data encryption and data embedding are the most useful approaches. Modern cryptography can use either private key or public key methods. Private key cryptography (symmetric cryptography) uses the same key for data encryption and decryption. It requires that both the sender and the receiver agree on a key a priori before they can exchange a message securely. Although the computational speed of private key cryptography is acceptable, key management is difficult. Public key cryptography (asymmetric cryptography) uses two different keys (a public key and a private key) for encryption and decryption. The keys in a key pair are mathematically related, but it is computationally infeasible to deduce the private key from the public key. Therefore, in public key cryptography, the public key can be made public. Anyone can use the public key to encrypt a message, but only the owner of the corresponding private key can decrypt it. Public key methods are more convenient to use because they do not share the key management problem inherent in private key methods; however, they require longer times for encryption and decryption. In real-world implementations, public key encryption is rarely used to encrypt actual messages. Instead, it is used to distribute the symmetric keys that are used to encrypt and decrypt the actual messages. The digital signature (DS) is a major application of public key cryptography. To generate a signature on an image, the owner of the private key first computes a condensed representation of the image known as an image hash value (or image digest), which is then encrypted by using mathematical techniques specified in public key cryptography to produce a DS.
Any party with access to the owner's public key, image, and signature can verify the signature by the following procedure: First compute the image hash value with the same algorithm for the received image, decrypt the signature with the owner's public key to obtain the hash value computed by the owner, and compare the two image hash values. The mechanism of obtaining the hash is designed in such a way that even a slight change in the input string would cause the hash value to change drastically. If the two hash values are the same, the receiver (or any other party) has the confidence that the image has been signed off by the owner of the private key and that the image has not been altered after it was signed off. Thus it ensures image integrity.
In this chapter, an evolving image security system based on the digital envelope (DE) concept is developed to ensure data integrity, authenticity, and privacy during image/data transmission through public networks. The DE includes the DS of the image as well as selected patient information from the DICOM image header. Evolving means that new and better encryption and security algorithms can replace those discussed in this chapter without affecting its principle. In telemedicine and teleradiology, data cannot be limited within a private local area network protected by a firewall. Therefore, the DE offers the most useful security assurance. This method also provides additional image/data assurance to conventional network security protections.
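As a concrete illustration of the sign-and-verify procedure just described, the following Python sketch hashes an image, signs the digest with the owner's private key, and lets any holder of the public key verify it. It is a minimal sketch only: the cryptography package is assumed, SHA-256 stands in for the hash function, and a zero-filled byte string stands in for real image data.

# Illustrative sketch of the signing/verification flow described above.
# Assumptions: Python "cryptography" package; SHA-256 in place of MD5;
# key distribution, DICOM parsing, and the envelope itself are omitted.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Owner (examination site) generates a key pair; the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

image_bytes = bytes(512 * 512)     # stand-in for segmented image pixel data

# Sign: hash the image and encrypt the digest with the private key.
signature = private_key.sign(image_bytes, padding.PKCS1v15(), hashes.SHA256())

def is_authentic(received_image: bytes, sig: bytes) -> bool:
    """Any holder of the public key recomputes the digest and checks it
    against the digest recovered from the signature."""
    try:
        public_key.verify(sig, received_image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:       # a single altered bit fails verification
        return False

print(is_authentic(image_bytes, signature))                      # True
print(is_authentic(b"\x01" + image_bytes[1:], signature))        # False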
16.2 IMAGE/DATA SECURITY METHOD

16.2.1 Four Steps in Digital Signature and Digital Envelope
The image/data security method presented in this chapter uses the DE concept. The DE method involves both the sender side and the receiver side. The sender side is data encryption and embedding, involving four steps:

(1) Image preprocessing: To segment the image from its background and extract relevant patient information from the DICOM image header.
(2) Image hashing: To compute an image hash value of the segmented image using some existing hash algorithm.
(3) Data encryption: To produce a DE containing the encrypted image hash value (DS of the image) and the relevant patient information.
(4) Data embedding: To embed the DE into the image or the background of the image. The embedding should not affect the quality of the image. The embedding method is different between digital radiography, which may not have a background in the image, and sectional image, in which the background is very prominent.

The receiver side is data extraction and decryption, which reverses the process of the four steps.

16.2.2 General Methodology
The general methodology and description of the algorithm of the DS and DE are shown in Figure 16.2. In this description, sending site (SS) and receiving site (RS) are synonymous with examination site and expert center, respectively. Table 16.1 lists the acronyms used in this section.
Figure 16.2 The block diagram of the encryption and embedding algorithm. Left: data encryption and embedding. Pr,exam, Pu,cen are the private key of the examination site (sending site, SS) and the public key of the expert center (receiving site, RS), respectively; Right: data extraction and decryption. Pr,cen, Pu,exam are the private key of the RS (expert center) and the public key of the SS (examination site), respectively.
16.3 DIGITAL ENVELOPE
The digital envelope method involves data encryption and embedding on the sender side and data extraction and decryption on the receiver side.

16.3.1 Image Signature, Envelope, Encryption, and Embedding
16.3.1.1 Image Preprocessing  There are two steps. The first step is segmentation with background removal or cropping the image by finding the minimum rectangle covering the object. In the case of digital radiography like the mammogram, the segmented image would be the breast region without the compression plate (Fig. 16.3A and B). In a sectional image like the MR head, it would be the image inside the smallest rectangle enclosing the head. The first two columns in Figure 16.3C show the original and segmented sagittal, transverse, and coronal images, respectively. By cropping large amounts of background pixels from the image, the time necessary for performing image hashing can be significantly reduced. Extracting the boundary of the image region can guarantee that data embedding is performed outside of the image. If the minimum rectangle is the boundary of the complete image, then embedding will be performed in the entire image. The second step is to extract patient information from the DICOM image header.

TABLE 16.1  Acronyms Used in the Digital Envelope Method
LSB       Least significant bit
ID        Image digest
IM        Segmented image
DS        Digital signature
DE        Digital envelope
SS        Sending site
RS        Receiving site
RSAE      RSA public key encryption algorithm
RSAD      RSA public key decryption algorithm
DES       Data encryption standard
DESE      DES encryption algorithm
DESD      DES decryption algorithm
Pr,exam   Private key of the examination site
Pu,exam   Public key of the examination site
Pr,cen    Private key of the expert center
Pu,cen    Public key of the expert center

Figure 16.3  An example of background removal in a mammogram. (A) The original digital mammogram with the compression plate. (B) The mammogram with background removal. (C) Rows 1, 2, and 3: sagittal, transverse, and coronal MRI, respectively. Columns 1, 2, 3, 4, and 5: the original, segmented, embedded, encrypted DS outside of the image (dots)-digital envelope, and the encrypted digital envelope.
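The cropping step can be sketched in a few lines. The following is a minimal illustration, assuming a numpy array of pixel values and a simple intensity threshold to separate object from background; segmentation of real radiographs is more involved.

# A minimal sketch of the first preprocessing step: find the minimum
# rectangle covering the object and crop away the background. The
# threshold and the synthetic image are assumptions for illustration.
import numpy as np

def crop_to_object(image: np.ndarray, background_threshold: int = 0) -> np.ndarray:
    """Return the smallest rectangular sub-image containing all pixels
    above the background threshold."""
    mask = image > background_threshold
    if not mask.any():                 # nothing above background:
        return image                   # embed in the entire image instead
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Example: a synthetic 512 x 512 "MR slice" with a bright 200 x 200 object.
slice_ = np.zeros((512, 512), dtype=np.uint16)
slice_[150:350, 160:360] = 1000
segmented = crop_to_object(slice_)
print(segmented.shape)                 # (200, 200)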
16.3.1.2 Hashing (Image Digest)  Compute the hash value for all pixels in the image:

ID = H(IM)    (16.1)
where ID is the hash value of the image, H is the MD5 hashing algorithm, and IM represents the segmented image. MD5 has the characteristics of a one-way hash function: it is easy to generate a hash given an image [H(IM) -> ID] but virtually impossible to generate the image given a hash value (ID -> IM). Also, MD5 is "collision resistant": it is computationally difficult to find two images that have the same hash value. In other words, the chance of two images having the same hash value is small and depends on the hash algorithm used. Currently, there are better hash functions than MD5; some of these are shown in Table 16.2.
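A minimal sketch of Eq. (16.1) in Python, using the standard hashlib module and a numpy array as a stand-in for the segmented image:

# Eq. (16.1): ID = H(IM). MD5 is used here because the chapter's example
# uses MD5; the raw bytes of the pixel array are fed to the hash function.
import hashlib
import numpy as np

def image_digest(segmented_image: np.ndarray) -> str:
    return hashlib.md5(segmented_image.tobytes()).hexdigest()

im = np.zeros((256, 256), dtype=np.uint16)
print(image_digest(im))        # 128-bit digest shown as 32 hex characters
im[0, 0] += 1                  # change one pixel by one gray level ...
print(image_digest(im))        # ... and the digest changes completely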
16.3.1.3 Digital Signature  Produce a digital signature based on the above image hash value:

DS = RSAE(Pr,exam, ID)    (16.2)

where DS is the digital signature of the segmented image, RSAE represents the RSA public key encryption algorithm, and Pr,exam is the private key of the examination site (sending site, SS). Figure 16.4A shows the MD5 hash value of the background-removed digital mammogram shown in Figure 16.3B, and Figure 16.4B is the corresponding DS.

TABLE 16.2  Some Core Cryptography Tools for Generating Digital Envelope and Managing Digital Certificate
Hash algorithm:          MD5, SHA1, MDC, RIPEMD
Private key encryption:  DES, TripleDES, AES
Public key encryption:   RSA, ElGamal
Digital signature:       RSA, DSS
DES, data encryption standard; AES, advanced encryption standard; DSS, digital signature standard.

Figure 16.4  Hash value and the corresponding digital signature. (A) The MD5 hash value of the background-removed mammogram shown in Figure 16.3B. (B) The corresponding digital signature.

16.3.1.4 Digital Envelope  Concatenate the DS and the patient data together as a data stream and encrypt them with the data encryption standard (DES) algorithm (Schneier, 1995):

dataencrypted = DESE(keyDES, dataconcat)    (16.3)
where dataencrypted represents data encrypted by DES, DESE is the DES encryption algorithm, keyDES is a session key produced randomly by the cryptography library, and dataconcat is the concatenated data stream of the image DS and patient information. The DES session key is further encrypted:

keyencrypted = RSAE(Pu,cen, keyDES)    (16.4)

where keyencrypted is the encrypted session key and Pu,cen is the public key of the expert center (receiving site, RS). Finally, the DE is produced by concatenating the encrypted data stream and encrypted session key together:

DE = (dataencrypted) conc. (keyencrypted)    (16.5)
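The following Python sketch walks through Eqs. (16.2) to (16.5) end to end. It is illustrative only: the cryptography package is assumed, an AES-based Fernet cipher stands in for DES as the session cipher, SHA-256 stands in for MD5 inside the signing call, and the length-prefix framing of the concatenated streams is an arbitrary choice made for this sketch, not part of the published method.

# Sketch of Eqs. (16.2)-(16.5): sign the image, concatenate the signature
# with patient data, encrypt with a random session key, encrypt the session
# key with the receiver's public key, and concatenate the two parts.
import struct
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pr_exam = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # SS key pair
pr_cen = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # RS key pair
pu_cen = pr_cen.public_key()

segmented_image = bytes(1024)                     # stand-in for IM
patient_info = b"PatientID=12345|Name=DOE^JANE"   # hypothetical header fields

# Eq. (16.2): DS = RSA_E(Pr,exam, ID)  (sign() hashes the image internally)
ds = pr_exam.sign(segmented_image, padding.PKCS1v15(), hashes.SHA256())

# Eq. (16.3): encrypt DS + patient data with a random session key
data_concat = struct.pack(">I", len(ds)) + ds + patient_info
key_session = Fernet.generate_key()
data_encrypted = Fernet(key_session).encrypt(data_concat)

# Eq. (16.4): encrypt the session key with the expert center's public key
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
key_encrypted = pu_cen.encrypt(key_session, oaep)

# Eq. (16.5): DE = data_encrypted concatenated with key_encrypted
digital_envelope = struct.pack(">I", len(key_encrypted)) + key_encrypted + data_encrypted
print(len(digital_envelope), "bytes to embed")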
Figure 16.5A is the merged data file of DS and the patient information. Figure 16.5B is the corresponding DE to be embedded into the image.

16.3.1.5 Data Embedding  Replace the least significant bit of a random pixel in the segmented image, outside of the image boundary (see Fig. 16.3C, third column) or within the image itself (see Fig. 16.6C), depending on whether a minimum rectangle is found, by one bit of the digital envelope bit stream, and repeat for all bits in the bit stream.
Figure 16.5  (A) The combined data file of the digital signature and patient information. The first two and a half lines of data (up to the word "patient . . .") are the digital signature in ASCII format, identical to the signature displayed in hexadecimal format in Figure 16.4B. (B) Its encrypted format using the public key of the expert center.
Figure 16.6  From left to right, the original mammogram, the mammogram with the digital envelope embedded, and the subtracted image between the original and the embedded mammogram. Small white pixels in (C) are where the DE is embedded in the LSB.
First, a set of pseudorandom numbers Xn is generated by using the standard random generator shown in Eq. (16.6):

Xn+1 = (aXn + C) mod m    (16.6)
where a is a multiplier, C is an additive constant, and m is the modulus of the mod (modulo) operation. Equation (16.6) represents the standard linear congruential generator; the three parameters are determined by the size of the image. In the example shown in Figure 16.6B, a, C, and m were set to 2416, 37,444, and 1,771,875, respectively, based on the characteristics of the digital mammograms. To start, both the SS and the RS decide on a random number X0, called the seed. The seed is the single number from which the set of random numbers is generated with Eq. (16.6). Unlike other computer network security problems in key management, the number of SSs and RSs is limited in a teleradiology or telemedicine application, so the seed management issue can be easily handled by a mutual agreement between the SS and the RS. Second, a random walk sequence in the whole segmented image is obtained:

WalkAddressn = M (Xn/m)    (16.7)
where WalkAddressn is the location of the nth randomly selected pixel in the segmented image, M is the total number of pixels in the segmented mammogram, and m is the modulus defined in Eq. (16.6). Finally, the bit stream of the envelope described in Eq. (16.5) is embedded into the least significant bit (LSB) of each of these randomly selected pixels along the walk sequence. Figure 16.6A is the digital mammogram, Figure 16.6B is the mammogram with the DE embedded, and Figure 16.6C is the subtracted image between Figures 16.6A and 16.6B. Each dot in Figure 16.6C shows the location of a pixel in which DE data have been embedded in the LSB. Note that this embedding does not affect the quality of the image because the LSB is noise in a digital radiograph (a mammogram in this case). The four steps of image security (image segmentation, DE generation, data embedding, and encryption) using the sagittal, transverse, and coronal MR images are shown in Figure 16.3C. We use the sagittal MRI in Figure 16.7 (upper left in Figure 16.3C) as an example to demonstrate the complete process of data encryption and embedding at the sender side.
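A sketch of the embedding step in Python follows. It reuses the linear congruential constants quoted above for the mammogram example; the synthetic image, the short envelope string, and the fact that revisits (collisions) in the random walk are ignored are simplifications for illustration, not part of the published method.

# Random-walk LSB embedding per Eqs. (16.6) and (16.7), for 16-bit pixels.
import numpy as np

def embed_envelope(image: np.ndarray, envelope: bytes, seed: int,
                   a: int = 2416, c: int = 37444, m: int = 1771875) -> np.ndarray:
    flat = image.ravel().copy()
    M = flat.size
    bits = np.unpackbits(np.frombuffer(envelope, dtype=np.uint8))
    x = seed
    for bit in bits:
        x = (a * x + c) % m                  # Eq. (16.6)
        walk_address = (M * x) // m          # Eq. (16.7): index in [0, M)
        # clear the LSB of the visited pixel, then set it to the envelope bit
        flat[walk_address] = (flat[walk_address] & 0xFFFE) | int(bit)
    return flat.reshape(image.shape)

mammogram = np.random.randint(0, 4096, size=(2048, 2048), dtype=np.uint16)
stego = embed_envelope(mammogram, b"digital envelope bits", seed=12345)
print(int((stego != mammogram).sum()), "pixels had their LSB actually changed")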
16.3.2 Data Extraction and Decryption
In the RS, the image and the DE are received. The image authenticity and integrity can be verified by using the DS in the envelope with a series of reverse procedures shown in the bottom of Figure 16.7. First, the same walk sequence in the image is generated by using the same seed known to the SS, so that the embedded digital envelope can be extracted correctly from the LSBs of these randomly selected pixels. Then the encrypted session key in the digital envelope is restored:

keyDES = RSAD(Pr,cen, keyencrypted)    (16.8)
where RSAD is the RSA public key decryption algorithm and Pr,cen is the private key of the expert center (RS). After that, the digital envelope can be opened with the recovered session key, and the digital signature and the patient data can be recovered:

datamerged = DESD(keyDES, dataencrypted)    (16.9)
where DESD is the DES decryption algorithm. Finally, the image digest (ID, see Eq. 16.1) is recovered by decrypting the digital signature:

ID = RSAD(Pu,exam, DS)    (16.10)
where Pu,exam is the public key of the examination site (SS). At the same time, a second image hash value is calculated from the received image with the same hash algorithm, shown in Eq. (16.1), used by the sending site. If the image hash value recovered from Eq. (16.10) and the newly computed hash value match, then the receiving site can be assured that this image really came from the examination site and that none of the pixels in the image has been modified. Therefore, the requirements of image authentication and integrity have been satisfied. The RSAREF toolkit (RSAref 2.0, released by RSA Data Security, Inc.) can be used to implement the data encryption part of this method.
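Mirroring the sender-side sketch given after Eq. (16.5), the receiver-side steps of Eqs. (16.8) to (16.10) can be sketched as a single function. The same illustrative assumptions apply: the cryptography package, Fernet in place of DES, SHA-256 in place of MD5, and the ad hoc length-prefix framing.

# Receiver-side sketch of Eqs. (16.8)-(16.10): recover the session key with
# the expert center's private key, open the envelope, then check the
# signature against the received image.
import struct
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def open_and_verify(digital_envelope: bytes, received_image: bytes,
                    pr_cen, pu_exam) -> bool:
    # Split the DE back into key_encrypted and data_encrypted
    klen = struct.unpack(">I", digital_envelope[:4])[0]
    key_encrypted = digital_envelope[4:4 + klen]
    data_encrypted = digital_envelope[4 + klen:]

    # Eq. (16.8): recover the session key with the RS private key
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    key_session = pr_cen.decrypt(key_encrypted, oaep)

    # Eq. (16.9): open the envelope with the recovered session key
    data_concat = Fernet(key_session).decrypt(data_encrypted)
    slen = struct.unpack(">I", data_concat[:4])[0]
    ds, patient_info = data_concat[4:4 + slen], data_concat[4 + slen:]

    # Eq. (16.10): compare digests by verifying the DS against the hash of
    # the received image (verify() recomputes the hash internally)
    try:
        pu_exam.verify(ds, received_image, padding.PKCS1v15(), hashes.SHA256())
        return True            # authentic and unaltered
    except InvalidSignature:
        return False           # discard, alert the SS, request retransmission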
16.3.3 Some Performance Measurements
Three parameters can be used to evaluate the performance of the digital envelope method described in Section 16.3.2 (Zhou et al., 2001).

16.3.3.1 The Robustness of the Hash Function  The robustness of the hash function can be evaluated by changing one random pixel in the image and observing the resulting change in the hash value and the digital signature.
Figure 16.7  Principle of image integrity and steps of forming the digital envelope. Top: the sender. Bottom: the receiver.
For example, as shown in Figure 16.6B, if one pixel in the embedded mammogram is changed from a value of 14 to 12, the hash value becomes completely different, as seen by comparing Figure 16.4A with Figure 16.8.

Figure 16.8  The MD5 hash value of the data-embedded mammogram shown in Figure 16.6B after the value of one randomly selected pixel in the image was changed from 14 to 12. It is totally different from the hash shown in Figure 16.4A.

16.3.3.2 The Percentage of Pixels Changed in Data Embedding  In the example shown in Figure 16.6B and C, the total number of pixels in which the LSBs were changed during data embedding is 3404, which accounts for 0.12% of the total pixels in the segmented mammogram. There are 840 characters, or 840 x 8 bits = 6720 bits. During embedding, only 3404 bits need to be changed, because 3316 bits in the bit stream are already identical to the LSBs of the randomly selected pixels. The percentage of pixels changed is a parameter for evaluating the performance of the algorithm and can be computed by:

% pixels changed = [(no. of characters in envelope x 8 bits - unaffected LSBs)/M] x 100    (16.11)

where M is the total number of pixels.

16.3.3.3 Time Required to Run the Complete Image/Data Security Assurance  The time required to run the security assurance depends on the size of the segmented image, the algorithms used, and the efficiency of the software programming. Table 16.3 shows the time required with the AIDM to process the mammogram shown in Figure 16.3B. Although the results are based on one image, they nevertheless provide a glimpse into the time required for such an assurance method. The procedures of embedding data and extracting data refer to embedding data into the mammogram and extracting data from the embedded mammogram, respectively. For signing the signature and verifying the signature, most of the computational time is taken by computing the image hash value. The purpose of this table is to show the relative time required for the encryption and decryption processes. No special effort was made to optimize the algorithm or the programming.

TABLE 16.3  Time Required for Processing the Mammogram Shown in Figure 16.3B by Using the Method Discussed in Section 16.3
Procedures in Examination Site    Time (s)    Procedures in Expert Center    Time (s)
Signing the signature                20       Extracting data                   19
Sealing the envelope                 <1       Opening the envelope              <1
Embedding data                       17       Verifying the signature           17
Total                               <38       Total                            <37
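As a quick arithmetic check of Eq. (16.11) against the numbers quoted above (the value of M below is an assumption back-computed from the reported 0.12%, not a figure given in the text):

envelope_chars = 840
unaffected_lsbs = 3316
M = 2_836_000                  # assumed total pixels in the segmented mammogram

changed_bits = envelope_chars * 8 - unaffected_lsbs     # 6720 - 3316 = 3404
percent_changed = changed_bits / M * 100
print(changed_bits, f"{percent_changed:.2f}%")          # 3404, ~0.12%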
16.3.4 Limitation of the Method
The method we describe can only detect whether any pixel or any bit in the data stream has been altered; it cannot identify exactly which pixel(s) or bit(s) has been compromised. It would be very expensive, in terms of computation, to determine exactly where the change has occurred. Current data assurance practice is that once the RS determines that the image/data has been altered, it will discard the image, notify and alert the SS, and request the information to be retransmitted.
16.4 DICOM SECURITY

16.4.1 Current DICOM Security Profiles
DICOM Standard Part 15 (PS 3.15-2001) provides a standardized method for secure communication and digital signatures (DICOM, 2001). It specifies technical means (selection of security standards, algorithms, and parameters) for application entities involved in exchanging information to implement security policies. In this part, four security profiles have been added to the DICOM Standard: secure use profiles, secure transport connection profiles, digital signature profiles, and media storage security profiles. These address issues like the use of attributes, security on associations, authentication of objects, and security on files.

(1) Secure Use Profiles. The profiles outline how to use attributes and other security profiles in a specific fashion. The profiles include secure use of online electronic storage, basic, and bit-preserving digital signatures.

(2) Secure Transport Connection Profiles. The profiles published in 2000 specify the technological means to allow DICOM applications to negotiate and establish secure data exchange over a network. The secure transport connection is similar to the secure socket layer (SSL) commonly used in secure web on-line processing [SSL web site] and the VPN (virtual private network) encryption often used to extend an internal enterprise network to remote branches. It is an application of public key cryptography: the message scrambled by the sender can only be read by the receiver, and no one else in the middle will be able to decode it. Currently, the profiles specify two possible mechanisms for implementing secure transport connections over a network, TLS (transport layer security 1.0) and ISCL (integrated secure communication layer V1.00). This endows DICOM with a limited set of features that are required for implementation.

(3) Digital Signature Profiles. Although the secure transport connection protects the data during transit, it does not provide any lifetime integrity checks for DICOM SOP (service-object pair) instances. The digital signature profiles published in 2001 provide mechanisms for lifetime integrity checks by using DS. A DS allows authentication of the identity of the entity that created, authorized, or modified a DICOM data set. This authentication is in addition to any authentication done when exchanging messages over a secure transport connection. Except for a few attributes, the profiles do not specify any particular data set to sign. The creator of a DS should first identify the DICOM data subset, calculate its MAC (message authentication code) and hash value, and then sign the MAC into a DS. As with any DS, the receiver can verify the integrity of this DICOM data subset by recalculating the MAC and then comparing it with the one recorded in the DS. Typically, the creator of the DS would only include data elements that had been verified in the MAC calculation for the DS. The image digital envelope described in Section 16.3 has a very large data set that includes the segmented image and relevant patient information in the DICOM header. The profiles currently specify three possible ways of implementing DS, depending on what is to be included in the DICOM data set to be signed: base (methodology), creator (for modality and image creator), and authorization (approval by technician or physician) DS profiles.

(4) Media Security Profiles. The DICOM media security profile, also published in 2001, provides a secure mechanism to protect against unauthorized access to this information on the media with encryption. It defines a framework for the protection of DICOM files for media interchange by means of an encapsulation with a cryptographic "envelope." This concept can be called a protected DICOM file. As an application of public key cryptography, it follows steps similar to those of the DE method described in Section 16.3. The DICOM file to be protected is first digested, signed with a DS (optional in the profiles), and then sealed (encrypted) in a cryptographic envelope, ready for media interchange.
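For a feel of what a secure transport connection involves at the socket level, the following Python sketch wraps an ordinary TCP connection in TLS. It is a generic illustration only, not a DICOM implementation: the host name and CA bundle file are hypothetical, and a real deployment would run the DICOM association on top of the profiled TLS or ISCL mechanism.

# Generic TLS client sketch with Python's standard ssl module.
import socket
import ssl

def send_over_tls(payload: bytes, host: str = "pacs.expert-center.example",
                  port: int = 2762) -> None:
    # Port 2762 is commonly used for DICOM over TLS; the CA file is a
    # hypothetical bundle of trusted certificates for this sketch.
    context = ssl.create_default_context(cafile="trusted_ca.pem")
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls:
            # The server is authenticated and all bytes are encrypted in transit.
            tls.sendall(payload)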
16.4.2 Some New DICOM Security on the Horizon
The security needs in DICOM are under rapid development. Specifying a mechanism to secure parts of a DICOM image header by attribute-level encryption is probably a next step toward satisfying the patient privacy requirements of HIPAA. The principle is that any DICOM data elements that contain patient-identifying information should be replaced in the DICOM object with dummy values. Dummy values, rather than simple removal of patient information such as the patient ID and name, are required so that images can still be communicated and processed with existing DICOM implementations, security aware or not. The original values can be encrypted in an envelope and stored (embedded) as a new data element in the DICOM header. Using public key cryptography, the attribute-level encrypted envelope can be designed to allow only selected recipients to open it, or different subsets can be held for different recipients. In this way, the implementation secures the confidential patient information and controls the recipients' access to the part of the patient data they are allowed to see. This selective protection of individual attributes within DICOM can be an effective tool to support HIPAA's emphasis that patient information is only provided to people who have a professional need.
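The idea can be sketched with pydicom: replace a few patient-identifying elements with dummy values so the object remains valid everywhere, and keep the encrypted originals for authorized recipients. The element selection, the dummy string, and the encrypt callable are illustrative assumptions, not the attribute list or mechanism of any published profile.

# Attribute-level blinding sketch. pydicom is assumed; encrypt() is a
# placeholder for the envelope sealing shown earlier in this chapter.
import json
import pydicom

def blind_patient_attributes(ds: pydicom.Dataset, encrypt) -> bytes:
    originals = {}
    for keyword in ("PatientName", "PatientID"):       # illustrative subset
        if keyword in ds:
            originals[keyword] = str(ds.data_element(keyword).value)
            setattr(ds, keyword, "DUMMY")               # object still parses everywhere
    # The encrypted originals could be stored in a private data element so
    # that only selected recipients can restore them.
    return encrypt(json.dumps(originals).encode())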
16.5 HIPAA AND ITS IMPACTS ON PACS SECURITY
HIPAA was put in place by Congress in 1996 and became a formal compliance document on April 14, 2003. It provides a conceptual framework for health care data security and integrity and sets out strict and significant federal penalties for noncompliance. However, the guidelines as they have been released (including the most recent technical assistance materials, July 6, 2001, modifying Parts 160 and 164) do not mandate specific technical solutions; rather, there is a repeated emphasis on the need for scalable compliance solutions appropriate to the variety of clinical scenarios covered by HIPAA language.
The term "HIPAA compliant" can only refer to a company, institution, or hospital. Policies on patient privacy must be implemented institution-wide. Software or hardware implementation for image data security by itself is not sufficient. Communication of DICOM images in a PACS environment is only a part of the information system in a hospital. One cannot just implement image security using DICOM or the image-embedded DE method described in Section 16.3 and assume that the PACS is HIPAA compliant. All other security measures, such as user authorization with passwords, user training, physical access constraints, and auditing, are as important as secure communication (Dwyer, 2000). However, image security as described in Section 16.3, which provides a means for protecting the image and corresponding patient information when exchanging this information among devices and health care providers, is definitely a critical and essential part of the provisions that can be used to support institution-wide compliance with HIPAA privacy and security regulations. The Department of Health and Human Services (DHHS) publishes the HIPAA requirements in the so-called Notice of Proposed Rule Makings (NPRM). There are currently four key areas:

• Electronic transactions and code sets (compliance date: October 16, 2002)
• Privacy (compliance date: April 14, 2003)
• Unique identifiers
• Security

Transactions relate to such items as claims, enrollment, eligibility, payment, and referrals, whereas code sets relate to items such as diseases, procedures, equipment, drugs, transportation, and ethnicity. HIPAA mandates the use of unique identifiers for providers, health plans, employers, and individuals receiving health care services. The transactions, code sets, and unique identifiers are mainly a concern for users and manufacturers of hospital information systems (HIS) and, to a much lesser extent, for radiology information system (RIS) users and manufacturers, whereas they have little or no consequence for users and manufacturers of PACS. Privacy and security regulations will have an impact on all HIS, RIS, and PACS users and manufacturers. Although HIPAA compliance is an institution-wide implementation, those who develop and use PACS and its applications should have a great interest in making them HIPAA supportive. The image security discussed in this chapter and the PACS continuous availability and disaster recovery methods discussed in Chapter 15 support the HIPAA security regulations. In addition to those, the basic requirement for a PACS that will help a hospital comply with the HIPAA requirement is the ability to generate, on demand, a list of information related to the access of clinical information for a specific patient. From an application point of view, there should be a log mechanism to keep track of access information such as:

• Identification of the person who accessed the data
• Date and time when the data were accessed
• Type of access (create, read, modify, delete)
• Status of access (success or failure)
• Identification of the data
Although each PACS component computer (especially a UNIX machine) has its own system functions to collect all the user and access information listed above, as well as auditing information and event reporting if enabled, these records are scattered around the system and not in a readily available form. Also, because data access is typically done from many workstations, tracking and managing each of them is a difficult task. With this in mind, a PACS should be designed in such a way that a single data security server can generate the HIPAA information without the need for "interrogating" other servers or workstations. An automatic PACS monitoring system (AMS) jointly developed in SITP (Shanghai Institute of Technical Physics) and our laboratory can be revamped as the PACS reporting hub for HIPAA-relevant user access information. The PACS AMS consists of two parts: a small monitoring agent running in each of the PACS component computers and a centralized monitor server that monitors the entire PACS operation in real time and keeps track of patient and image data flow continuously from image acquisition to the final display workstation. The PACS AMS is an ideal system to collect PACS security information and support HIPAA implementation. The PACS alone cannot be claimed as HIPAA compliant. Secure communication of images using the DE and DICOM security standards, together with continuous PACS monitoring, provides HIPAA support functionality that is indispensable for hospital-wide HIPAA compliance.
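As a minimal sketch of what such centralized access logging could look like, the following records the five fields listed earlier for each access event. The field names, the JSON-lines file, and the example identifiers are illustrative assumptions, not a prescribed HIPAA format.

# Append one audit record per access event to a central log file.
import json
from datetime import datetime, timezone

def log_access(user_id: str, access_type: str, data_id: str,
               success: bool, logfile: str = "pacs_access_log.jsonl") -> None:
    record = {
        "user": user_id,                                      # who accessed the data
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date and time
        "access_type": access_type,                           # create/read/modify/delete
        "status": "success" if success else "failure",        # outcome
        "data_id": data_id,                                   # which study/image
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_access("dr_smith", "read", "1.2.840.99999.1.2.3", True)   # hypothetical IDs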
16.6 PACS SECURITY SERVER AND AUTHORITY FOR ASSURING IMAGE AUTHENTICITY AND INTEGRITY

16.6.1 Comparison of Image-Embedded DE Method and DICOM Security

The image-embedded DE method described in Section 16.3 provides a strong assurance of image authenticity and integrity. The method has the advantage that the relevant patient information in the DICOM header is embedded in the image. It ensures image security for any individual image to be transmitted through public networks without using the DICOM image header; the relevant patient information can still be retrieved from the DE after the image is received. Meanwhile, because the actual data transfer occurs only after the DE has been successfully created and embedded, the most CPU-intensive cryptography does not have to be performed on the fly as in the SSL protocol used in web transactions, or the TLS (transport layer security) and ISCL (integrated secure communication layer) protocols specified in the DICOM security standards. The sender and receiver do not both have to be on-line at the same time to negotiate an on-line session. Therefore, the image-embedded DE method is particularly well suited for store-and-forward types of systems such as media interchange. The DE method described in Section 16.3 also has certain limitations of which we need to be aware.
(1) Lack of standards. It should be noted that this DE method does not cater for the automatic identification of the various algorithms and attributes used (e.g., hashing, encryption, and embedding algorithms, the DE data set to be sealed, communication protocols) while verifying the DE. Unlike the DICOM security profiles, which specify the means for the sender and recipient to negotiate this information, in the DE method they either must agree in advance or the sender must somehow transmit this information to the user by some out-of-band method.

(2) Needs further evaluation. The DE method is considered good for secure communication of images only when it satisfies certain criteria, like robustness, the percentage of pixels changed in data embedding, and the time required to run the complete image security assurance. The DE method with data embedded in the image is time consuming to perform because of image processing and encryption algorithms that require heavy computation. It is necessary to optimize the DE method and fine-tune its performance for real-time applications. It is better to provide the user a choice of selecting less computationally intensive algorithms (Table 16.2) or bypassing the integrity check altogether, because the user's machine may not be powerful enough to handle the heavy DE processing.

(3) Limited capability in image distribution. The DE method is mostly suited for a controlled enterprise-level environment and teleradiology applications with a small number of nodes. It is not designed for large-scale network security. Because the DE is encrypted with the receiver's public key and then embedded in the image, this CPU-intensive process must be performed all over again for a different user or site because of the different public key.

The three shortcomings listed above confine the image-embedded DE method to a well-controlled environment. Since the DICOM security profiles, described in Section 16.4, have been released and become the standard, the best image security strategy is to combine the image-embedded DE method and the DICOM security profiles. One method is to use DICOM-compliant communication to address the secure transmission of images between the sender (last step) and receiver (first step) as shown in Figure 16.7. Because the DICOM standard does not maintain the confidentiality and integrity of image data before or after the transmission, and because a DS and DE placed in the DICOM header, separated from the image data, can easily be deleted and recreated by a hacker once the image is available, the image-embedded DE can be used to ensure image security. In this way, it can provide a permanent assurance of confidentiality and integrity of the image no matter when and how the image has been manipulated. In fact, even if the signer were to lose the key or, worse yet, were no longer living, the authentication and signature embedded in the image persist, just as a written signature on paper does.
16.6.2 An Image Security System in a PACS Environment
As seen from the above discussion, one of the best designs for an image security system in a PACS environment is shown in Figure 16.9. The system is based on the combination of the image-embedded DE method for image and relevant patient information in the DICOM header and the DICOM security profile for communication.

Figure 16.9  Image security system in a PACS environment. Shaded boxes are where data embedding and assurance of image authenticity and integrity are implemented.
The DE method includes the modality gateway workstation for image embedding and a dedicated server to handle all PACS image- and patient-related security issues. The PACS security server has the following three major functions:

1. A DICOM secure gateway to the outside connections (Fig. 16.9, leftmost and rightmost). The gateway is in compliance with DICOM security profiles ensuring integrity, authenticity, and confidentiality of medical images in transit. The gateway plays a role in securing communication of DICOM images over public networks. It is important to build a separate DICOM secure gateway for handling CPU-intensive cryptography so as not to impact the performance of PACS in fulfillment of daily radiological services. The current DICOM security standards are still evolving. It will take time for the standards to mature and to be fully implemented in PACS, so it is also important that the gateway can provide interoperability and evolve with the existing "non-security-aware" PACS.

2. An image authority for image origin authentication and integrity (Fig. 16.9, right). The authority server is designed to take away the limitations of the DE method discussed above and to integrate the method in a PACS environment. First, as a steganographic message, the DE embedded in the image should be permanent, associated only with the image itself and not intended to change every time with a new user. To solve the issue, a dedicated image authority server whose public key will be used to seal the DE of all images acquired locally can be implemented. In this system, medical images from modalities will be first digitally signed at the modality gateway machine (Fig. 16.9, left) by
the image creator and/or through the physician's authorization or the PACS manufacturer. The signature plus relevant patient information can be sealed in a DE with the authority server's public key instead of the individual user's key. Whenever needed, a remote user can query the image authority to verify the origin authenticity and integrity of an image under review. The image authority is the only one who owns the private key that can be used to extract and decrypt the DE embedded permanently in the image. The image authority serves as the authority for checking the image originality and integrity, in the same way as a certificate authority (CA) (PKI Web) does for certifying the DS. The heavy computation jobs described in Figure 16.7 of ensuring image authentication and integrity at the receiver side are now taken over by the authority server, thereby reducing the workload at the client side. Meanwhile, it is also relatively easy to keep the DE/embedding algorithms and attributes prearranged between the image authority and the DE creators/senders in a local PACS environment without requiring an open standard to define them.

3. A monitoring system for PACS operations. This component can, at a minimum, keep a user access log and monitor security events, providing support for hospital-wide HIPAA compliance. The major monitoring functions and features include (1) real-time capture of all warnings and error messages in the PACS; (2) periodic checks of the PACS components' running status; (3) tracking of patient/image data flow in PACS components and analysis of image usage; (4) monitoring of user logon/logoff on remote display workstations to guarantee that images are securely read and used; (5) dynamic display of the image data flow; and (6) warning an administrator of serious errors via pager.

In addition, the PACS security server should also be intelligent enough to deliver only the relevant information to a user. As the volume of clinical data, images, and reports has significantly increased with digital imaging technology, and as government regulations continue to emphasize information privacy, an implementation challenge has emerged: where and how each individual user can access the specific data he/she needs, from a specific location, in a timely fashion. Intelligent security management must find a secure way to match relevant information with a particular user. The attribute-level DICOM security described in Section 16.4 and the image authority with an ability to check image attributes will definitely be a major step toward developing a smart and secure delivery system for medical images. Figure 16.10 shows a testbed image security system for an off-site backup archive between St. John's Health Center and IPI at USC.

Figure 16.10  Interhospital connection between a clinical PACS and an off-line archive server. Shaded boxes are where image/data assurance software is implemented. Compare with Figure 16.9.
16.7 SIGNIFICANCE OF IMAGE SECURITY
The role and relative importance of information integrity in health care delivery is undergoing a fundamental shift, the repercussions of which are yet not fully recognizable. The pressures generating this change are both statutory [for example, the federally mandated Health Insurance Portability and Accountability Act of 1996, Public Law 104-191, which amends the Internal Revenue Service Code of 1986
(HIPAA, 2000)] and social (a redefinition of patient-sensitive information). These pressures have the power to alter medical theory and practice at its core. When one superimposes the rapidly shifting technical and telecommunications tools at the disposal of health care providers and patients on this landscape, the need for research and development of robust, flexible and easy-to-use methods of ensuring health care information security is strongly established. On October 1, 2000 the Electronic Signatures in Global and National Commerce Act went into effect. It was an extension of the standards of the Government Paperwork Elimination Act of 1997 that began to establish the definition of electronic signatures and their use in the federal government (21 CFR Part 11, Federal Register 62, 1997). The Electronic Signatures Act establishes the framework for using electronic signatures to sign contracts, agreements, or records involved in commerce. Although the bill does not specify any specific government technology standards, it allows parties to establish reasonable requirements regarding the use and types of electronic records and signatures. In addition to the HIPAA and the Electronic Signatures Act as the government regulation of electronic health care information, the burgeoning of standards, including DICOM, within the medical community further demonstrates the significance of such research. Other medical federations will almost certainly follow this lead as practitioners and care delivery systems seek guidance in this arena. It is critical that objectively validated data are available to these bodies as they set long-term policy in the coming years. The image and textual standards currently used in the medical community are DICOM and HL-7, respectively. Although encryption technology has no similar standard at the present time, it is important to design an open architecture for image security algorithms and systems using a modular approach, in full concordance with all relevant HIPAA regulations and in compliance with DICOM
security profiles. Technological development has traditionally been incremental in nature in any branch of science. Incremental advances in the scientific corpus often sum to qualitative changes in the level of understanding of a particular subject. It is a central feature to develop image integrity and security in a modular fashion so that each component in the system can be replaced by newer technology when it becomes available. The system should be flexible and portable enough to respond to the multiplicity of demands that will unfold as standards, policy, and technology change.
CHAPTER 17
PACS Clinical Implementation, Acceptance, Data Migration, and Evaluation
17.1 PLANNING TO INSTALL A PACS
In this chapter we present methodology and a road map of PACS implementation and system evaluation. Our philosophy of PACS design and implementation is that, regardless of the scale of the PACS being planned, we should always leave room for future expansion, including integration with the enterprise PACS. Thus, if the planning is to have a large-scale PACS now, the PACS architecture should allow its future growth to an enterprise PACS. On the other hand, if only a PACS module is being planned, the connectivity and compatibility of this module with future modules or a larger-scale PACS is important. The terms we discussed in previous chapters, including open architecture, connectivity, standardization, portability, modularity, and IHE work flow profiles, should be considered.

17.1.1 Cost Analysis
A common question always asked is, When is a good time for implementing a PACS in terms of business necessity or investment? Several radiology consulting firms nationwide provide models tailored to individual services to analyze operation and cost benefits. In these models, the participating radiology department or health organization inputs its work flow and operating environment, including resources and expenses; the model then predicts the cost comparison between a film-based and a digital-based operation. In this section, we present a spreadsheet model developed by the Department of Diagnostic Radiology, Mayo Clinic and Foundation, Rochester, MN. The assumptions of this model are that the institution being modeled considers only a totally film- or digital-based operation and that only differential costs, including equipment and staff, are included. The model tracks from patient admission to resulting events across departmental boundaries. Through this work flow, staff and equipment requirements are determined, leading to the total cost of both systems. With this model, spreadsheets for three artificial institutions with 25,000, 50,000, and 105,000 procedures per year were simulated. Cost comparisons were drawn between the film-based and the digital-based operations. Results are shown in Figure 17.1A, B, and C, respectively. Figure 17.1A depicts the annual operating expenses, indicating that the film-based operation costs more as the number of procedures increases. On the other hand, Figure 17.1B demonstrates that capital investment in a digital-based operation costs more when the number of procedures is lower and that the gap between the two operations narrows as the number of procedures increases. The total costs for the film-based and digital-based operations, including capital investment and the annual operating budget, given in Figure 17.1C, cross each other when the institution performs over 50,000 procedures per year, in favor of the digital-based operation. This model allows a first approximation of the cost issue comparing the film-based and digital-based operations. Results from simulation with a mathematical model depend on many assumptions in the model as well as the method of data collection. Care should be exercised in interpreting the results. The spreadsheet file, according to the authors, can be found at the anonymous ftp site ri-exp.beaumont.edu/pub/diag/Dos/pac-cost.xls or ri-exp.beaumont.edu/pub/diag/Mac/pac-cost.xls. It must be noted that the model was developed in the mid-1990s; many parameters used may not fit with today's equipment, operation situation, and costs. For example, the cost of PC-based workstations and the web-based image distribution method have changed the PACS operation environment drastically. In Chapters 1 and 23, we give an example of South Korean models that were obtained based on actual field data.
Figure 17.1  Cost comparison between a film- and a digital-based operation based on a mathematical simulation of three institutions with 25,000, 50,000, and 105,000 procedures per year (Langer, 1996). (A) Annual operating budget. (B) Annual capital budget. (C) Capital plus annual operating budget. The crossover is at 50,000 procedures.
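The shape of the comparison in Figure 17.1 can be illustrated with a toy calculation: treat each operation as a fixed capital cost plus a per-procedure operating cost, and look for the volume at which the two totals cross. All numbers below are made up for illustration only; they are not values from the Mayo model or from Figure 17.1.

def total_cost(capital: float, operating_per_procedure: float, n_procedures: int) -> float:
    return capital + operating_per_procedure * n_procedures

film = dict(capital=0.7e6, operating_per_procedure=30.0)       # hypothetical
digital = dict(capital=1.6e6, operating_per_procedure=12.0)    # hypothetical

for n in (25_000, 50_000, 105_000):
    f = total_cost(film["capital"], film["operating_per_procedure"], n)
    d = total_cost(digital["capital"], digital["operating_per_procedure"], n)
    print(n, "procedures/year:", "digital cheaper" if d < f else "film cheaper or equal")

# Volume at which the two totals are equal:
crossover = (digital["capital"] - film["capital"]) / (
    film["operating_per_procedure"] - digital["operating_per_procedure"])
print(f"crossover at about {crossover:,.0f} procedures per year")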
17.1.2 Film-Based Operation
The first step in planning a PACS is to understand the existing film-based operation. In most radiology departments, the film-based operation procedure is as follows.
Conventional diagnostic images obtained from X-ray or other energy sources are recorded on films, which are viewed on alternators (light boxes) and archived in the film library. Images obtained from digital acquisition systems (for example, nuclear medicine, ultrasound, transmission and emission computed tomography, computed and digital radiography, and magnetic resonance imaging) are displayed on the acquisition device's monitor for immediate viewing and then recorded on magnetic tapes or disks for digital archiving. In addition, they are recorded on films for viewing as well as for archival purposes. Because films are convenient to use in a clinical setting, clinicians prefer to view digital images on film even though this method results in reduced image quality. As a result, many departments still use film as a means for diagnosis and as a storage medium regardless of the image origin. In general, films obtained within 6 months are stored in the departmental film library and older films are stored remotely from the hospital. To retrieve older films requires from 0.5 to 2 hours. Most radiology departments arrange their operations on an organ basis, with exceptions in nuclear medicine, ultrasound, and sometimes MRI and CT. Some hospitals group MRI and CT into neuro- and body imaging sections. It is advantageous during planning to understand the cost of the film-based operation. The following sample tables are useful for collecting statistics in the planning stage. Table 17.1, an example of the tabulation of the number of procedures, film use, and film cost,
TABLE 17.1  Record of Number of Procedures, Film Used, and Film Cost
Columns: Section (Specialty); Number of Procedures (Year 1 . . . Year N); Film Used (sheets) (Year 1 . . . Year N); Film Cost (dollars)(1) (Year 1 . . . Year N)
Sections: Nuclear medicine; Ultrasound; CT/MRI; Pediatrics; Genitourinary gastrointestinal (abdominal); Neuroradiology; Cardiovascular; Interventional radiology; General outpatient(2); General inpatient(2); Mammography; Emergency radiology; ICUs(3); TOTAL
(1) Film cost is for X-ray film purchase only; film-related cost is not included. (2) Includes chest and musculoskeletal examinations. (3) Includes all portable examinations.
provides an overall view of the number of procedures and film usage from each specialty. This information, readily available at the radiology administrative office, can be used to design the PACS controller routing mechanisms, to determine the number of display workstations required, and to arrive at the local storage capacity needed for each display workstation. The film cost can be used to estimate the cost of the film-based operation compared with the cost of the digital-based or PACS operation. Table 17.1 also can be used to generate a film-based operation cost estimate as shown in Table 17.2. Direct and indirect film library expenses included under item 1 in Table 17.2 are X-ray film jackets, mailing envelopes, insert envelopes, negative preservers, rental on film storage both inside and outside the department, fleet services for film delivery, direct (equipment) and indirect (operational) film processor expenses (item 2), developing solutions, replacement parts, facilities repairs and installations, and other miscellaneous supplies. In a typical film-based operation in a large teaching hospital, 70% of the film-based operation budget is allocated to the film library (item 3). The film cost (item 5) should be derived from the number of procedures performed and films used per year, given in Table 17.1. The film operation (not including film viewing) requires a large amount of premium space within the department, which should be translated to overhead cost in the same estimate. Table 17.2 is necessary in the estimation of the cost of the film-based operation in a radiology department. Use of this table and the PACS checklist (Section 17.1.3.2) will allow a comparison of the PACS installation and operation costs with those of the film-based operation. Tables 17.1 and 17.2 give an overview of the film-based operation and its cost, and Table 17.3 provides comparative statistics. Table 17.3 is an estimated breakdown of the percentage distribution of procedures performed and efforts required in conventional projection X-ray examinations according to body region in a large urban hospital in the northeast United States. A similar table can be generated for digital sectional images including CT, MR, and US. The effort required is a measure of time and labor required to perform
TABLE 17.2  Multiple-Year Estimation of Film and Film-Related Costs
                                           Year 1    . . .    Year N
1. Film library                            $                  $
   Indirect expenses                       $                  $
2. Film processor                          $                  $
   Indirect expenses                       $                  $
3. Personnel: Darkroom                     $ (FTE)*           $ (FTE)*
   Film library                            $ (FTE)*           $ (FTE)*
4. Film-related costs Total (1 + 2 + 3)    $                  $
5. Film cost (from Table 17.1)             $                  $
Total costs                                $                  $
* FTE, full-time equivalent.
TABLE 17.3  Percentage Distribution of Conventional Projection X-Ray Procedures Performed and Effort Required According to Body Region
Procedure           Percentage    Effort Required*, %
Chest                   40              18
Musculoskeletal         39              25
Gastrointestinal         9              22
Genitourinary            1               8
Neuroradiology           1               9
Other                    7              15
* Effort required means that it takes 18% effort to perform all chest X-ray examinations, which amounts to 40% of all departmental examinations, and so forth.
the procedure. The hospital or the department contemplating installation of a PACS should generate these tables to facilitate proper planning. Several two-phase cost analysis models are available in the market to help a hospital analyze the cost impact of PACS. The first phase is to assess the present film-based operation costs. In this phase, the user completes data forms similar to Tables 17.1 and 17.2, and the model presents a detailed costing of the film-based operation. In the second phase, the user evaluates how these costs might differ on implementation of the PACS. A model like this will allow the hospital to have an overview of financial differences between its current film-based operation and a PACS operation.

17.1.3 Digital-Based Operation

17.1.3.1 Planning a Digital-Based Operation
Interfacing to Imaging Modalities and Utilization of Display Workstation In a digital-based operation, two components (see Fig. 1.3) are not under the control of the PACS developer/engineer, namely, interfacing to imaging modalities and the use of display workstations. In the former, the PACS installation team must coordinate with imaging modality manufacturers to work out the interface details. When a new piece of equipment is purchased, it is important to negotiate with the manufacturer on the method of interfacing the imaging modality to the PACS controller through a DICOM gateway. It is necessary to include the DICOM standard compliance statement in the equipment specifications for image communication purposes. For the display workstation, radiologists and clinicians are the ultimate users to approve the system. For these reasons, human interface and image work flow play an important role in planning the installation. It is necessary to consider IHE work flow profiles in the planning and designing of the display workstation and its environment. User input is mandatory, and several revisions are necessary to ensure user satisfaction.
Cabling  We describe the cabling and the hub room concept in Section 9.2. This section discusses the overall cable plan. Proper cabling for the transmission of images and patient text information within the department and within the hospital is crucial for the success of a digital-based operation. The traditional method of cabling for computer terminals is not recommended because, as the magnitude of the digital-based operation grows, the web of connecting wires very quickly becomes unmanageable. Cabling is much simpler if the entire department is on the same floor. Greater complications ensue when the communication cable must traverse the whole hospital, which may occupy many floors or even many building complexes. Laying out the cable plan to support the digital-based operation should be thought through carefully. ATM or fast Ethernet switches and routers should be strategically situated, and a conventional Ethernet backup system should be considered.

Air-Conditioning  Most digital equipment in a digital-based operation requires additional air-conditioning. If a new radiology department is planned, adequate air-conditioning should be supplied to the central computing, server, and image processing facilities. Normally, display workstations do not require special air-conditioning, but certain room temperature requirements are still necessary. Because hospitals are always limited in space, it may be difficult to find additional central air-conditioning for the large area housing the PACS central facility. Sometimes individual, smaller air-conditioning units must be installed. The individual air conditioner can be a stand-alone floor unit or a ceiling-mounted unit. Each time another air-conditioning unit is installed, additional cool water from the hospital is required. Thus the hospital's capacity for cooling water should be considered. Also, as noted earlier, additional air-conditioning units will create a lot of noise in the room. To ensure that the room housing the display workstations is suitable for viewing images, adequate soundproofing should be used to insulate these areas from the extra noise created by the air-conditioning and cooling system.

Staffing  The hospital or department should allocate special technical staff to support the digital-based operation. In particular, system programming, digital engineering, and quality assurance personnel are necessary to make a digital-based operation run smoothly and efficiently. These new full-time equivalents (FTEs) can be allocated by switching some FTEs from the film-based operation. Even if the manufacturer installs a digital-based operation for the department and provides support and maintenance, the department retains the responsibility for supplying the personnel to oversee the operation. In general, the staff requirements consist of one hardware engineer, one system/software engineer, and a quality assurance technologist, all under the supervision of a PACS manager/coordinator.

Training  The department or hospital should provide four categories of training to the staff.

Continuing Education for Radiologists/Clinicians. Adequate training by the department/manufacturer should be provided to staff radiologists/clinicians on the concept of a digital-based operation as well as the use of the display workstation.
This training should be periodic, and updates should be offered as new equipment or upgraded display workstations are implemented.

Resident Training.  The training for radiology and other specialty residents should be more basic: the residency program should include training in a digital-based operation. A one-month elective in PACS would be advantageous to radiology residents, for example. During this period, one or two residents would rotate through the PACS unit and obtain basic training in computer hardware architecture, software overview, architecture of the image processor, communication protocols, and the concept of the display workstations. Residents also can learn all the basic image processing functions that are useful for image manipulation as described in Section 11.5. The training should be scheduled in the first or second year of residency to allow physicians to understand the concept and the complete procedure involved in a digital-based operation early in their training. This will facilitate future digital-based operations for the department and the health center. In addition, a quick refresher course every year in July, when new staff start in radiology and other specialties, will minimize the downtime of the display workstations.

Training of Technologists.  Some retraining of technologists in the department must precede the changeover to a digital-based operation. Digitization of X-ray films with the laser scanner and the use of computed radiography with imaging plates and a digital radiography system are quite different from the screen-film procedure familiar to technologists. Also, the use of image workstations and RIS workstations for patient information in the RIS, and the image transmission from acquisition devices to PACS gateway computers, are new concepts that require additional training. The department should provide regular training classes, emphasizing hands-on experience with existing equipment.

Training of Clerical Staff.  Past experience in training secretarial staff to use word processors rather than typewriters has proven that the switch to digital operations at the clerical level is not difficult. The training should emphasize efficiency and accuracy. Ultimately, the move to a digital-based operation will reduce the number of FTEs in the department.

17.1.3.2 Checklist for Implementation of a PACS

This section provides a checklist for implementation of a hospital-integrated PACS. The implementation cost and operational cost can be estimated by multiplying each component price by the number of units and adding the subtotals. The component prices can be obtained from various vendors.

I.   Acquisition gateway computers (one computer for every two acquisition devices)  No. ____
     Local disk  No. 2
     Fast Ethernet connection  No. ____
     Interface connections  No. ____
II.  PACS controller  No. ____
     Database machine
     File server
     Permanent storage library
     Temporary storage
     Backup archive  No. ____
     ATM and/or fast Ethernet connection  No. ____
III. Communication networks
     Cabling (contracting)
     ATM or Gigabit Ethernet switches  No. ____
     Ethernet switches  No. ____
     Router  No. ____
     Bridge  No. ____
     Video monitoring system  No. ____
IV.  Display workstation
     Four-monitor 2K station  No. ____
     Two-monitor 2K station  No. ____
     Four-monitor 1K station  No. ____
     Two-monitor 1K station  No. ____
     One-monitor 1K station  No. ____
     Web-based workstation  No. ____
V.   Teleradiology component  Yes ____  No ____
VI.  Interface with other databases
     HIS  Yes ____  No ____
     RIS  Yes ____  No ____
     Other database(s)  Yes ____  No ____
VII. Software development: the software development cost is normally about 1–2 times the hardware cost
VIII. Equipment maintenance: 10% of hardware + software cost
IX.  Consumable: optical disks, tapes/year  No. ____
X.   Supporting personnel: FTE (full-time equivalent)  No. ____
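As a rough illustration of the cost arithmetic described above, the following Python sketch multiplies hypothetical unit prices by hypothetical quantities, sums the subtotals, and applies the rule-of-thumb factors for software development and maintenance quoted in items VII and VIII. All prices and counts below are invented placeholders, not vendor figures.

    # Cost-estimation sketch for the implementation checklist.
    # All unit prices and quantities are hypothetical placeholders.
    components = {
        # component: (unit_price_usd, quantity)
        "Acquisition gateway computer": (15_000, 4),
        "PACS controller (server + permanent archive)": (250_000, 1),
        "Gigabit Ethernet switch": (8_000, 3),
        "Two-monitor 2K display workstation": (40_000, 6),
        "One-monitor 1K display workstation": (12_000, 10),
        "Web-based workstation": (3_000, 20),
    }

    hardware_cost = sum(price * qty for price, qty in components.values())

    # Items VII and VIII: software development is roughly 1-2 times the
    # hardware cost; equipment maintenance is about 10% of hardware + software.
    software_cost = 1.5 * hardware_cost            # midpoint of the 1-2x range
    maintenance_cost = 0.10 * (hardware_cost + software_cost)

    print(f"Hardware subtotal : ${hardware_cost:,.0f}")
    print(f"Software estimate : ${software_cost:,.0f}")
    print(f"Maintenance       : ${maintenance_cost:,.0f}")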
17.2 MANUFACTURER'S IMPLEMENTATION STRATEGY

17.2.1 System Architecture
There are a number of PACS manufacturers that can install PACS of various scales. For larger manufacturers, the strategy is to install a full PACS for the institution either at one time or incrementally. Smaller vendors try to carve out a niche in the market (e.g., teleradiology) that the larger manufacturers do not consider financially beneficial. Regardless of the approach, with the advances in and low cost of PC-based workstations, high-speed networks, and RAID, and with practically free Internet technology, the manufacturer's system architecture design becomes generic, allowing either a full or an incremental implementation.

17.2.2 Implementation Strategy
A generic manufacturer’s PACS business unit consists of R&D, service, training, design, implementation, consulting, marketing, and sale components. The market
and sale components handle the presale with contributions from design and consulting. Information is fed back to R&D for product development. The postsale is the responsibility of design and implementation. Normally, the manufacturer assigns a PACS manager to work with the hospital team in overseeing the implementation. The operation after implementation is the responsibility of service and training. Table 17.4 shows the difference between the manufacturer's point of view in selling a PACS and in selling an imaging device: in PACS, implementation accounts for 60% of the effort, versus only 20% for an imaging device. Table 17.5 shows the manufacturer's perception of the manpower allocation in a PACS negotiation and implementation.

17.2.3 Integration of an Existing In-House PACS with a New Manufactured PACS

Larger manufacturers also have strategies for upgrading or integrating an in-house PACS through partnership. Figure 17.2 shows an example of using two DICOM gateways, one at the in-house PACS and the second at the manufacturer's open architecture PACS. In Figure 17.2, the existing open architecture in-house PACS (left) is in daily operation, and a much larger-scale PACS (right) supplied by the manufacturer is being designed. The goal is to continue utilizing the in-house PACS for daily operation until the full implementation of the manufacturer's PACS is ready. Meanwhile, the new design takes full advantage of the open architecture of the in-house PACS during the transitional phase. Let us use an example to illustrate the steps involved. Consider the intensive care unit (ICU) operation in which CRs are used to acquire images and workstations (WS) are used for display.
TABLE 17.4  Comparison of Manufacturer's Efforts in Imaging Device and PACS Sales

                    Imaging Device (% Effort)    PACS (% Effort)
Market                        40                       20
Sale                          40                       20
Implementation                20                       60
TABLE 17.5  Manufacturer's Estimation* of Allocation of Manpower† Required for PACS Implementation (33- to 500-Bed Hospital)

               Presale   Design (4–6 wks)   Implementation (6 wks)   Application (4 wks)   Postimplementation
Institution       1      2 RT               1 manager                2 RT                  1 manager, 2 RT + institution staff
Manufacturer      2      2                  2                        2                     1

* The estimation is based on the institution installing the first PACS module with an expandable permanent archive.
† Manpower, persons assigned to the task but not necessarily full time.
[Figure 17.2 diagram: the in-house open architecture PACS (O), with its CR input, acquisition gateway, archive server, and workstations, is linked through its DICOM gateway (O) to the DICOM gateway (M) of the manufacturer's PACS (M), which has its own acquisition gateway, archive server, workstations, and HIS/RIS connection. Path legend: O, original path; 1–5, transitional path; 1–2–6, new path; A–B–C, retrieve old images.]
Figure 17.2 Integration of an existing in-house PACS through a partnership with the manufacturer. Two DICOM gateways, one in the in-house PACS and the second in the new PACS, are used for the integration. Numerals and letters indicate paths of the integration. See text for details.
Original steps (O): CR—acquisition gateway (O)—archive server (O)—WS (O).

During the transitional phase:

Step I. (1–5 and 6) CR sends a second copy of images to the new acquisition gateway (M)—archive server (M)—DICOM gateway (M)—DICOM gateway (O)—WS (O). Images are also sent to WS (M) from archive server (M) through path "6." When this path is completed, and the ICU directory is identical in both archive servers (this can be achieved in about 2 weeks, because the average stay of ICU patients is less than 1 week), the original path (O) will be disconnected. All CR images are then routed to the archive server (M) only, and WS (M) replaces WS (O).

Step II. To retrieve historical images from the original archive server (O) at the new WS (M), use route (A—B—C): WS (M)—DICOM gateway (M)—DICOM gateway (O)—archive server (O). The disadvantage of this method is that the older images of the same patient stored in the original archive server (O) would appear in a separate file. A better way is WS (M)—archive server (M)—DICOM gateway (M)—DICOM gateway (O)—archive server (O). The latter will require the archive server (M) to integrate
the older images to the same patient folder once it is retrieved before sending them to the WS (M). The integration of old and new images in the archive server (M) is not a trivial task.
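The transitional routing just described is, at bottom, a change in the destination list kept at the CR acquisition side. The sketch below is only an illustration of that idea, with hypothetical destination names; it is not the configuration interface of any actual gateway product.

    # Hypothetical routing tables for the ICU CR example (illustrative only).
    # Each phase maps CR studies to the archive destinations that should
    # receive a copy, following the paths labeled in Figure 17.2.
    ROUTING = {
        "original":     ["archive_server_O"],                      # path O
        "transitional": ["archive_server_O", "archive_server_M"],  # paths O and 1-5/6
        "new":          ["archive_server_M"],                      # path 1-2-6
    }

    def destinations(phase):
        """Return the archive destinations for CR studies in the given phase."""
        return ROUTING[phase]

    for phase in ("original", "transitional", "new"):
        print(f"{phase:>12}: send CR studies to {', '.join(destinations(phase))}")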
17.3 TEMPLATE FOR PACS RFP
There are several templates for PACS RFPs (requests for proposals) in the public domain. One, A PACS RFP Toolkit (http://www.xray.hmc.psu.edu/dicom/rfp/RFP.html), is by John Perry, who was with the Siemens Gammasonic Unit and developed the Medical Diagnostic Imaging Support (MDIS) system. Two others were developed by the military: one was for the original MDIS, and the second was the DIN/PACS II (www.tatrc.org) by the retired Commander Jerry Thomas of the U.S. Navy. Another RFP was developed by D. G. Spigos of the American Society of Emergency Radiology's Committee on New Techniques and Teleradiology. These RFPs contain much useful information for those who plan to start the PACS acquisition process.
17.4 PACS IMPLEMENTATION STRATEGY
When implementing a PACS within a clinical environment, it is very important to recognize some key fundamental concepts that will serve as cornerstones for a successful implementation. First, PACS is an enterprise-wide system or product. It is no longer just for the radiology or imaging department; therefore, careful consideration of all decisions/strategies going forward should include the entire health care continuum, from referring physicians to the radiology department clinical and technical staff to the health care institution's information technology (IT) department. It is crucial for a successful implementation that some of the key areas within the health care institution have buy-in of the PACS process, including administration, radiology department, IT department, and high-profile customers of radiology (e.g., orthopedics, surgery). Furthermore, a champion or champions should be identified for the PACS process. Usually this is the medical director of radiology, but it can include other physicians as well as IT administrators.

Second, PACS is a system with multiple complex components interacting with each other. Each of these components can be an accumulation of multiple hardware components. A general clinical PACS usually comprises the archive, archive server/controller, DICOM gateway, web server, workstations, and a RIS/PACS interface. Whether considering implementation or acceptance, all components of the system must be assessed. The following subsections describe some of the steps involved in implementing a PACS within a health care institution.

17.4.1 Risk Assessment Analysis
It is important to perform a risk assessment analysis before implementation so that problem areas and challenges can be mapped out accordingly and timeline schedules made to accommodate the potential roadblocks. Some areas to focus on are the network infrastructure that will be supporting the PACS, the integration of
acquisition modality scanners with PACS (e.g., legacy systems, modality worklist, quality control workstations), physical space for the PACS equipment, and resource availability. Resource availability is especially crucial because a successful PACS implementation hinges on the support provided by the in-house radiology department. In making risk assessments, it is also helpful to determine areas in which there is low risk and a high return. These areas are usually departments with a high image (film) volume and a low rate of return of film back to the radiology department (e.g., critical care areas, orthopedics, surgery). These low-risk/high-return areas can help drive the implementation phase timeline and provide a good first push in the implementation process.
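One way to make the low-risk/high-return screening concrete is to score each candidate area by its film volume and by how little of that film comes back to the radiology department. The sketch below uses invented numbers purely for illustration; an actual assessment would substitute the institution's own film-tracking data.

    # Toy ranking of candidate areas for the first PACS push (invented data).
    # High film volume combined with a low film-return rate suggests
    # low risk and high return for converting that area first.
    areas = [
        # (area, films_per_month, fraction_of_film_returned)
        ("Critical care (ICU)", 4000, 0.20),
        ("Orthopedics",         2500, 0.35),
        ("Surgery",             1800, 0.30),
        ("Outpatient clinic",    900, 0.80),
    ]

    def score(films, returned):
        # Higher volume and a lower return rate give a higher score.
        return films * (1.0 - returned)

    for area, films, returned in sorted(areas, key=lambda a: -score(a[1], a[2])):
        print(f"{area:<20} score = {score(films, returned):7.0f}")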
17.4.2 Implementation Phase Development
Implementation of PACS should be performed in distinct phases, tailored to the risk assessment analysis performed at the health care institution. Usually, in the first phase the main components are implemented, such as the archive, archive server/controller, network infrastructure, HIS-RIS-PACS interfaces, workstations, and one or two modality types. The next phases are targeted toward implementing all modality types and a web server for enterprise-wide and off-site distribution of PACS exams. The phased approach allows for a gradual introduction of PACS into the clinical environment, with the ultimate goal being the transformation into a filmless department/hospital.

17.4.3 Development of Workgroups
Because PACS covers such a broad area within the health care institution, it is important to develop workgroups to handle some of the larger tasks and responsibilities. In addition, a PACS implementation team should be in place to oversee the timely progress of the implementation process. The following are some simultaneous key workgroups and their responsibilities:

(1) RIS-PACS interface and testing: responsible for integration/testing of RIS-PACS interfaces, including the modality worklist on the acquisition scanners.
(2) PACS modalities and system integration: responsible for the technical integration of modalities with PACS and installation of all PACS devices.
(3) PACS acquisition work flow and training: responsible for developing work flow and training for clerical and technical staff and for any construction needed in the clinical areas.
(4) PACS diagnostic work flow and training: responsible for developing work flow and training for radiologists and clinicians and for any construction needed in the clinical diagnostic areas (e.g., reading room designs).
(5) PACS network infrastructure: responsible for all design and implementation of the network infrastructure to support PACS.

In addition to the above-listed workgroups, a PACS implementation team should be formed to oversee the implementation process. Members should include at least one point person from each workgroup, and additional members should include the PACS implementation manager, the medical director of imaging, the administrative director of imaging, an IT representative, and an engineering/facilities representative.
TABLE 17.6  Sample Heading for an Implementation Checklist

Done      When        Who       WG      Event
XXXX      5/14/02     John      2       Review Modality Integration List

The first column indicates whether the task is complete; the second column indicates the date the task was assigned; the third column indicates to whom the task responsibility is assigned; the fourth column (WG) is the workgroup that the task belongs to; and the last column shows the task description.
This team should meet at least every two weeks, and more frequently as the date of live implementation nears. The goals of this team are to update any status items and to highlight any potential stumbling blocks to the implementation process. In addition, this team meeting allows a forum for higher-level administrators to observe the progress of the implementation.

It is crucial to identify particular in-house resources for the implementation process. These include a technical supervisor for each modality, a clerical supervisor, a film librarian or film clerk, an RIS support person, and an IT network support person. These resources are an excellent source of information for issues related to PACS such as technologist work flow, clerical work flow, film distribution work flow, designing and performing RIS interface testing with PACS, and the overall hospital IT infrastructure.

17.4.4 Implementation Management
Developing a schedule and implementation checklist can assist management of the implementation process. This template includes topics such as the task description, the date scheduled for the task, the owner of the task, and a checkmark box to indicate completion of the task. Table 17.6 shows a sample heading of an implementation checklist and a sample task. This template allows for finer granularity of the implementation process to protect against overlooked implementation tasks. Input for the checklist can come from the PACS implementation team meetings. Furthermore, the checklist can be broken down into smaller subtask checklists for tracking of issues within each of the workgroups.
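A checklist with the columns of Table 17.6 can live in a spreadsheet or in a few lines of code. The sketch below is a minimal, hypothetical tracker whose fields simply mirror the sample heading; it is not part of any PACS product.

    # Minimal implementation-checklist tracker mirroring Table 17.6 (illustrative).
    from dataclasses import dataclass

    @dataclass
    class Task:
        when: str            # date the task was assigned
        who: str             # person responsible for the task
        wg: int              # workgroup number (1-5)
        event: str           # task description
        done: bool = False

    checklist = [
        Task("5/14/02", "John", 2, "Review modality integration list"),
        Task("5/20/02", "Mary", 1, "Test RIS-PACS modality worklist"),
    ]

    def open_tasks(tasks, workgroup=None):
        """Return unfinished tasks, optionally filtered by workgroup."""
        return [t for t in tasks
                if not t.done and (workgroup is None or t.wg == workgroup)]

    checklist[0].done = True
    for t in open_tasks(checklist):
        print(f"WG{t.wg} | {t.when} | {t.who} | {t.event}")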
17.5 IMPLEMENTATION

17.5.1 Site Preparation
One of the first agendas for a PACS implementation is to plan and execute construction projects. The challenge with many health care institutions is to utilize existing space and build around it. This involves coordination and communication because there may be many areas that need construction to accommodate the special hardware equipment that will be installed. Most important is the construction of an ergonomically conforming reading room for the radiologists (Ratib, 2003). Careful attention should be paid to designing a workspace environment so that a radiologist can read PACS exams comfortably and for long periods of time. Figure 17.3 shows four different transition stages to convert a clinical area into an ergonomic reading room for the radiologist that utilizes space efficiently.
Figure 17.3 Different stages of the conversion of a clinical space into a reading room for radiologists.
In addition, during the transition from a film to a filmless environment, some consideration in the planning should include the ability to view hard copy films and soft copy images in the same workspace.

Along with construction, an equally important task is planning and installing the physical network infrastructure that will provide PACS connectivity. One of the first decisions is to determine whether to utilize the existing hospital network or implement a new network specifically dedicated to PACS. It is more costly to implement a separate network for PACS; however, sometimes it is necessary if the health care institution does not have the bandwidth to support a PACS. The network infrastructure should be in place before implementation of any PACS components. Baseline performance tests can be performed before the system goes live and then afterwards to determine whether the network performance is clinically acceptable.
17.5.2 Defining Equipment
Another step in the implementation process is to define the PACS equipment needed for the hospital. This task is most challenging because the determining factors include the financial budget and work flow-related issues. The end results will undoubtedly vary from institution to institution. Some of the areas to consider are the number of diagnostic workstations for the reading radiologists, the number of review workstations for the critical care areas, and the number of clients for the web server. In addition, because cross-sectional acquisitions are complex, especially for CT and MR, a
number of quality control (QC) workstations may need to be considered. These QC workstations help the technologist prepare the large number of images in a particular order and presentation for the radiologists as well as remove any unnecessary images before archiving.

Finally, the storage of all new image data is crucial for a clinical PACS. Storage capacities should be estimated to determine the optimal storage for both the local workstations and the archive. These estimates are based on the number of exams acquired daily that would be sent to the local workstation, the exam type, the average number of images per exam, and the image data size, as well as the length of time desired for the exams to remain on local storage. This is sometimes known as a procedural volume analysis, and it captures the data storage needs for all exams that would be acquired digitally. One crucial decision is to determine what kind of long-term storage should be utilized, because there are currently many different hardware options (e.g., digital tape, DVD, MOD, large-scale RAID, SAN (storage area network), ASP). It is interesting to note that during the initial planning time, some of the existing advances in technology may not yet be present. In this case, an upgrade to the archive storage capacity as well as the server hardware may be necessary. This is discussed below in Section 17.7 on image/data migration.
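The procedural volume analysis reduces to straightforward arithmetic; the sketch below illustrates it with an invented exam mix and invented image sizes, not figures from any particular institution.

    # Procedural volume analysis sketch (all figures are hypothetical).
    # Daily storage = exams/day x images/exam x Mbytes/image, summed over exam types.
    exam_profile = {
        # type: (exams_per_day, images_per_exam, mbytes_per_image)
        "CR": (120,   2, 8.0),
        "CT": ( 40, 300, 0.5),
        "MR": ( 25, 200, 0.13),
        "US": ( 30,  60, 0.3),
    }

    daily_gb = sum(n * imgs * mb for n, imgs, mb in exam_profile.values()) / 1024

    days_on_local_storage = 30      # how long exams should stay on the local RAID
    annual_growth = 1.15            # assume roughly 15% growth per year

    print(f"Daily acquisition volume : {daily_gb:6.1f} GB/day")
    print(f"30-day short-term storage: {daily_gb * days_on_local_storage:6.0f} GB")
    print(f"First-year archive       : {daily_gb * 365 * annual_growth / 1024:6.1f} TB")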
17.5.3 Work Flow Development
The next two steps of the PACS implementation process are the most important tasks necessary for the success of any PACS implementation. They are the development and implementation of the new work flow and of the training program. These two tasks are vital because they involve the users of PACS, who ultimately become the yardstick to measure the true success of PACS. In the past, the focus was more on whether the technology could support the needs of the users. However, as PACS technology has matured, it is now apparent that the focus has shifted to how well the technology interfaces with the users.

To develop and design a new work flow, a work flow workgroup should be formed. The group members should include the clerical supervisor, the technologist supervisor, a radiologist, the PACS system administrator, and a training and modalities representative. The workgroup initially defines the current baseline work flow. Even defining the current work flow is helpful in identifying unnecessary functions that were routinely performed. The new work flow for PACS is developed in an iterative process during biweekly meetings.

17.5.4 Training Development
Development of the training program for PACS is another key step toward a successful implementation. A PACS training team should be formed before implementation, consisting of the PACS system administrator and two key point persons for training. The two key point persons are identified from each of the two areas: clerical staff and technical staff. These key point persons attend the class on applications training provided by the vendor. Then, as they become experts, they train others. This is typically known as the "train the trainer" methodology. The training team serves other purposes in addition to training PACS users. The team serves as a feedback mechanism for the PACS users, and any user input is recorded and documented. One of the duties of the training team is to develop training documents for the PACS users. These include easy instructions for more complex workstation
functions. These instructions can be posted in high-visibility areas such as at each of the workstation locations. An in-house user manual should also be developed, complete with a troubleshooting guide and contact numbers for support. For the development of the training program, PACS users and user groups must be identified because each user group performs different PACS functions. Some examples include radiologists, technologists, clerks, physicians, nursing staff, and system administrators. Once groups are identified, the PACS components that each user group will interact with are identified, as well as the specific workstation functions, to properly target the scope of training for the PACS users. It is important that careful attention be focused on training all users not just on the specific PACS devices but on any other functions or devices within their work flow, because PACS integrates throughout the clinical work flow. Eventually, the training team may evolve into the PACS management and support team.

17.5.5 On-Line: Live
Before going on-line with PACS in a clinical environment, there are some necessary steps that can ease the live period. First, a "countdown to live" checklist and script should be developed to ensure against any overlooked tasks on the "live" date. Second, the live date should be chosen so that it does not fall at the beginning or end of the work week (e.g., avoid Monday or Friday). The training team should identify PACS local user support (PLUS) representatives who will provide the initial frontline support for the specific ward areas that utilize the PACS review workstations. These PLUS representatives can also notify the PACS management team of any outstanding issues, which can be discussed in weekly meetings during the initial "live" phase of the implementation.

The training program schedule should begin 3 months before the live date and should focus on training the users and PLUS users on demo workstations. About 2 weeks before live, a refresher workstation training session can be offered. During the "live" week there will be many areas where on-the-spot training is necessary. For example, radiologists usually cannot appreciate the full extent of application training until they must read the PACS exams as part of their live clinical work flow. An advanced workstation training session can be offered as a follow-up for those users interested in learning more about advanced workstation functions.

During the "live" phase, at each review workstation in the ward areas, radiology staff and PLUS representatives can be stationed to provide immediate support and publicity. Analog films are phased out as patients are discharged from the wards, and those needed for comparison studies are immediately digitized for viewing on PACS workstations. Finally, exams acquired from the modalities are not printed to film, to promote rapid PACS utilization. At the start of the "live" week, an issues list should be developed. This issues list helps track any outstanding issues that evolve from the "live" week and should be managed and updated on a regular basis.

17.5.6 Burn-In Period and System Management
The PACS management and support team is a key component in the successful implementation and management of the system during the burn-in or stabilization period. The team members are responsible for any issues related to the daily oper-
ation and maintenance of the PACS system. The members include the PACS system administrator, a PACS technical assistant, and a PACS training and modalities representative. The PACS system administrator should have a medical imaging background. This member is responsible for integration and implementation of PACS devices, for maintenance and support of PACS, and for coordination of any issues related to PACS (e.g., upgrades, service, troubleshooting). The PACS technical assistant can be an in-house clerical FTE with a technical background in computer science. This member is responsible for troubleshooting PACS and work flow issues, for assisting in training on PACS devices, and for performing backup maintenance and support for PACS. The PACS training and modalities support representative can be an in-house technologist FTE with training and support skills. This member is responsible for coordinating training on PACS devices and modalities, troubleshooting technologist PACS-related work flow issues, and maintaining and supporting the modalities.

During the burn-in period, some of the issues carried over from the "live" phase need to be addressed and resolved as soon as possible. It is also key during this time period to tune the work flow accordingly, because the initial work flow developed by the workgroup may not be exactly as predicted. This is where work flow development plays an important role in the full utilization of the PACS in a filmless environment.
17.6 SYSTEM ACCEPTANCE
One of the key milestones to system turnover is the completion of the acceptance testing (AT) of PACS. There are a few reasons why AT is important to PACS. First, AT provides vendor accountability for delivering the final product that was initially scoped and promised. It also assures the in-house administration that there is documentation that the system was tested and accepted. AT also provides an early indication of PACS uptime characteristics and whether the system will function as promised. Finally, AT provides proof of both PACS performance and functionality as originally promised by the vendor. Most vendors provide their own AT plan; however, usually it is not thorough enough and is a template that is not customized to the specific health care institution's needs. The following sections describe some of the steps in designing and developing a robust AT that can be used for final turnover of PACS in the clinical environment.

17.6.1 Resources and Tools
One of the first steps is to identify resources that will be involved in both AT design and implementation. Usually the PACS vendor will supply one resource person to be involved in the AT, typically the vendor's PACS project manager. However, it is necessary for some in-house personnel to be involved in the AT as well. For example, the PACS system administrator can gain knowledge, or even provide knowledge, about the new PACS during AT development and implementation. A technologist representative can assist in the acquisition portion of end-to-end testing of PACS. Finally, an RIS support person can assist in the RIS portion of the end-to-end testing of PACS. As for AT tools, the vendor usually provides a system acceptance document/checklist. However, it is usually a boilerplate document and not
customized to the client's needs. Therefore, acceptance criteria and test development may have to be client-driven. The checklist should include:

(1) Test description
(2) Pass/fail criteria and comments
(3) Who performed the test
(4) Any performance times
In addition, a consistent data set of PACS exams containing various modality types and a large image file size should be used for all tests.

17.6.2 Acceptance Criteria
Acceptance test criteria are divided into two categories. The first category is quality assurance, which includes PACS image quality, functionality, and performance. The second category is technical testing, which focuses on the concept of "no single point of failure" throughout the PACS and includes simulation of downtime scenarios (see Chapter 15). Acceptance criteria should include identifying which PACS components are to be tested. Figure 17.4 is an example of a PACS implemented at a health care institution. The components of this PACS that should be included in the AT are:

(1) RIS-PACS interface
(2) PACS broker
(3) Modality scanner(s)
(4) Archive server/storage
(5) Diagnostic workstation
(6) Review workstation
(7) Network devices
[Figure 17.4 diagram: the RIS (1) and PACS broker (2) are connected over a 10/100 Mbit/s network (7) to the modality scanners (3), the archive server with RAID and DLT/MOD jukebox (4), and the diagnostic (5) and review (6) workstations.]
Figure 17.4 An example of a PACS for acceptance testing (AT). Numerals described in text are the components needed to be included in the AT.
If the PACS also includes a web server, then it should also be included within the acceptance testing criteria. The following subsections describe acceptance test parameters; Section 17.8 presents the system evaluation methodology.

17.6.2.1 Quality Assurance  The quality assurance portion of the acceptance test plan can be grouped into three categories: (1) PACS image quality, (2) PACS functionality, and (3) PACS performance.

PACS image quality is the evaluation of the display monitor and whether a particular display monitor is suitable for the radiologists. Because the radiologists will be reading directly from the display monitors, it is important that they are satisfied with the quality of the image display. Currently there are multiple display options, some of which are 5-megapixel (MP) CRTs, 5-MP flat panels, and 3-MP flat panels (see Chapter 11). Radiologists from each specialty should be involved in the evaluation because each specialty may have specific needs. The five characteristics to be evaluated are:

(1) Sharpness: identification of structures within the PACS image (e.g., interstitial lung markings, renal filling defects, bony trabecular markings)
(2) Brightness: ability to display high brightness while maintaining the best gray scale dynamic range
(3) Flicker: perception of raster lines in the CRT while viewing an image
(4) Angle of view: ability to view images at extreme side angles, especially if an LCD is used
(5) Glare or reflection: amount of glare emanating from an ambient light source on the display monitor

In general, the 3-MP flat panel workstation is acceptable for general radiology reading. The 5-MP flat panel display is a newer technology that can potentially allow higher spatial resolution.

The second category of quality assurance is PACS functionality. Usually, vendors will focus mostly on workstation functionality such as the display worklist, image display, and image tool sets. Some workstations may feature 3D postprocessing tools as well. The acceptance of workstation functionality occurs during applications training for the radiologists. However, back-end functionality (e.g., the RIS-PACS interface, archiving and distributing functions, and QC workstations) tends to be overlooked even though it is a vital part of the entire work flow. End-to-end testing is one of the few methods that can truly track the functionality of PACS as a whole. This includes ordering a radiology exam in the RIS, acquiring the PACS exam on a modality scanner, and archiving, distributing, and displaying it on all corresponding PACS components. In addition, verification that all necessary RIS and PACS data (e.g., patient demographics, image characteristics) are captured and stored in PACS within the DICOM header should be included.

The final category is PACS performance. It is extremely important to separate as much as possible the performance of the network from the performance of the
PACS. If possible, a network baseline performance test should be performed before the PACS is on-line. This can be accomplished by querying for and retrieving a test set of PACS exams. When the PACS is on-line, a second network performance test should be performed to measure the network bandwidth capability of a live network. In addition to the network performance tests, the PACS vendor will suggest performance numbers that are expected from the PACS application. These performance numbers should be agreed on by both client and vendor to ensure acceptance. For PACS performance numbers, the time measured should always be from the time the first image of the PACS exam arrives at the workstation until the last image arrives and the entire exam can be displayed on the workstation. These are realistic performance measurements that reflect real clinical scenarios. Most vendors like to present the performance time of the first arriving image, which is not a realistic measurement. Table 17.7 shows measurements made during an acceptance test of a clinical PACS. These measurements were performed several years back, and general PACS performance numbers have improved considerably since then. Nevertheless, the method used in generating this table can be used as a general guideline for making the performance test.

17.6.2.2 Technical Testing  The technical testing component of AT focuses on identifying single points of failure (Chapter 15) within the PACS and performing downtime scenarios to ensure that the PACS can continue to operate clinically, if not at full performance level. For example, referring to Figure 17.4, the archive component (4) includes a RAID and a MOD jukebox. Each one of these components, including the server, should be shut down to simulate a downtime scenario and brought back up to verify normal operations. It is important to include both the downtime scenario and bringing the archive back up to verify normal operations, as both are vital to acceptance. The PACS broker (2) should be shut down to observe the contingency work flow and verify whether the PACS can continue to operate, even if in a limited fashion. Another component that is often overlooked is the network infrastructure (7). For example, if a hot spare switch is identified as a redundant switch, it is necessary to perform the scenario of replacing the live switch with the hot spare as part of the acceptance test. In summary, each of the components within the PACS that has a potential for hardware failure should undergo a downtime scenario and be brought back up to ensure that the clinical PACS operation is not halted. This technical testing provides a few advantages (listed after Table 17.7):
TABLE 17.7  Performance Numbers Gathered from a PACS Acceptance Test

                            CR Exam        MR Exam         CT Exam
                            (2 Images)     (80 Images)     (90 Images)
RAID (short-term storage)       12             45              73
MOD (long-term storage)         45            107             140

Times, measured in seconds, are until the entire exam is loaded onto the local workstation; the test is performed on a loaded clinical network.
(1) It simulates real-world potential downtime clinical scenarios.
(2) Observations can be made of the effect of downtime on the clinical PACS when certain components are shut down.
(3) It provides a level of comfort for the PACS support personnel, as they will have encountered tangible downtime experience.
(4) It can confirm any redundancy features as functional (e.g., redundant network switch, redundant servers).

17.6.3 Acceptance Test Implementation
Each of the implementation phases of the PACS process should have an acceptance test performed. Acceptance at each of the phases is also crucial for the vendor, because it is only after acceptance that it can collect the remainder of the fee balance, which is negotiated beforehand. The implementation of the AT is a two-phased approach. The first phase should be performed approximately 1 week before the live date. The content of phase one includes the technical component testing focusing on single points of failure, end-to-end testing, contingency solutions for downtime scenarios, and any baseline performance measurements. The second phase should be performed approximately 2 weeks after the live date, so that the PACS has stabilized somewhat in the clinical environment. The contents of phase two include PACS functional and performance testing as well as any additional network testing, all on a loaded clinical network.
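For the baseline and loaded-network performance measurements mentioned above, the timing convention of Section 17.6.2.1 applies: the clock runs from the arrival of the first image of an exam at the workstation until the last image has arrived and the whole exam can be displayed. The sketch below illustrates that convention; retrieve_exam is a hypothetical stand-in for whatever query/retrieve call the workstation actually makes, not a real PACS API.

    # Exam-delivery timing sketch; retrieve_exam is a hypothetical stand-in.
    import time

    def retrieve_exam(exam_id, image_count):
        """Placeholder for a workstation query/retrieve; yields one image at a time."""
        for i in range(image_count):
            time.sleep(0.01)            # simulated per-image network and disk time
            yield f"{exam_id}-img{i}"

    def time_exam_delivery(exam_id, image_count):
        """Seconds from first-image arrival until the entire exam is available."""
        images = retrieve_exam(exam_id, image_count)
        first = next(images)            # clock starts when the first image arrives
        start = time.monotonic()
        received = [first] + list(images)
        assert len(received) == image_count
        return time.monotonic() - start

    for exam, n in [("CR-001", 2), ("MR-002", 80), ("CT-003", 90)]:
        print(f"{exam}: {time_exam_delivery(exam, n):.2f} s for {n} images")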
17.7 IMAGE/DATA MIGRATION
Two scenarios can trigger image/data migration: converting to a new storage technology and increasing data volumes. It is possible for a health care institution to experience a dramatic increase in PACS data volume once it transforms into a filmless institution. This is due in part to the continuous accumulation of images as well as the integration of new modalities that generate mass volumes of PACS data, all of which must be archived. For example, the multislice detector CT scanner is capable of generating up to 1000 images, amounting to almost 500 Mbytes of data per exam. Figure 17.5 shows an example of a health care institution's exam volumes for three specific modality types: CT, MR, and US. It is very likely that a hospital may need to expand its archive storage capacity. Furthermore, most PACS installed in previous years do not have a secondary backup copy of all the archived PACS image data for disaster recovery purposes; it has only been a recent trend for PACS to offer disaster recovery solutions. Therefore, should a hospital decide to upgrade the archive server performance and expand with a higher-capacity data media storage system, a few major challenges face a successful upgrade. One challenge is how to upgrade to a new PACS archive server in a live clinical setting. Another is how to migrate the previous PACS data to a new data media storage system in a live clinical setting.

17.7.1 Migration Plan Development
A key requirement of any migration plan is that the data migration must not hamper the live clinical work flow in any way or reduce system performance.
[Figure 17.5 is a bar chart of total procedures (0–18,000) for CT, MR, and US at SJHC in 1999, 2000, and 2001 (projected).]
Figure 17.5 An example of the increase in PACS exam volumes for three modalities at a sample health care institution. SJHC, St. John's Health Center.
With any migration, it is important that verification be performed to prevent any data loss. Once the data have been successfully migrated to the new data media, the original data media storage system should be removed, which may incur additional downtime of the archive server. Development of a migration plan is key to addressing these issues and ensuring a data migration that will have the least impact on the live clinical PACS.

17.7.1.1 Data Migration Schedule  Because data migration occurs in a live clinical setting, it is important to determine the times at which the data migration will not impact normal clinical work flow. This may include scheduling a heavier data migration rate during off-hours (e.g., nights and weekends) and a lighter rate during operating hours and hours of heavy clinical PACS use. Expert knowledge of the clinical work flow is valuable input toward developing a good data migration schedule. Downtime may be involved both initially and at the end of the data migration process and should be scheduled accordingly, with contingency procedures.

17.7.1.2 Data Migration Tools  It is advantageous to have the data migration performed by a second archive server so that the live PACS server is impacted less. However, many times this solution is not realistic, and therefore the hardware is shared between live operations and data migration. In addition to the data migration hardware, software tools to verify successful migration must be included. For example, a database log that records all transactions of PACS exams that were successfully migrated can be used to track any exams that encountered some kind of migration error. The initial migration is automatic, whereas subsequent iterations are performed manually to investigate the cause of each migration error and resolve it accordingly.

17.7.1.3 Data Migration Tuning  It may be necessary to fine-tune the data migration rate because estimates for the migration rate may not be accurate initially.
Fine-tuning is crucial because an aggressive migration rate can adversely affect the performance of the entire clinical PACS. Careful attention to the archive and system performance is especially important during the onset of the data migration, when the data migration rate may need to be scaled back. This may be an iterative cycle until an optimal migration rate is achieved that does not adversely affect the clinical PACS.
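The scheduling and tuning described above can be expressed as a simple rate table keyed to the hour of day, scaled back whenever archive response degrades. The rates and thresholds in the sketch below are invented for illustration only.

    # Data-migration rate schedule and tuning sketch (all values hypothetical).
    def scheduled_rate(hour, weekend=False):
        """Exams to migrate per hour: heavier off-hours, lighter during clinical use."""
        if weekend or hour < 6 or hour >= 20:
            return 400                  # nights and weekends
        if 8 <= hour < 18:
            return 50                   # peak clinical hours
        return 150                      # shoulder hours

    def tuned_rate(rate, archive_response_s, limit_s=5.0):
        """Scale the rate back if archive response time exceeds the agreed limit."""
        return rate if archive_response_s <= limit_s else max(rate // 2, 10)

    for hour, response in [(3, 2.1), (10, 7.5), (14, 3.0), (22, 4.0)]:
        base = scheduled_rate(hour)
        print(f"{hour:02d}:00  base {base:3d}/h  tuned {tuned_rate(base, response):3d}/h")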
17.7.2 An Example—St. John's Health Center
A successful example of image/data migration at St. John's Health Center, which has a filmless PACS that acquires approximately 130,000 radiological exams annually, is given here. St. John's began the PACS process in April 1999 by implementing PACS with CR for all critical care areas. This was considered Phase I of the PACS process. Phase II comprised the integration of MR, CT, US, digital fluorography, and digital angiography within the PACS. Phase II was completed in April 2000, and since then St. John's PACS volumes have increased steadily. The maximum storage capacity of the PACS archive implemented in April 1999 was 3.0 TB, which meant that older PACS exams would have to be taken off-line before a year was over. The archive storage device was a magneto-optical disk (MOD) jukebox with only a single copy of the PACS data (Fig. 17.6, upper left, MagicStore Advance). Therefore, should St. John's encounter a disaster, there was potential for all the PACS data to be lost, because there was no backup of the data (see Section 10.6). With these considerations, it was determined that St. John's need for an archive server upgrade, data storage expansion, and data backup was becoming increasingly urgent.
[Figure 17.6 diagram: the MagicStore Advance archive (Siemens UltraSparc 2, image database only, 90 GB RAID) handles query requests for all exams acquired before the upgrade; the MagicStore Professional archive (Siemens Enterprise 450, patient and image databases, 270 GB RAID) handles all current exam transactions (query requests and archival of PACS exams); a StorageTek tape library (Enterprise 450 server, 43 GB RAID, 7.9 TB storage capacity) stores all PACS exams.]
Figure 17.6 Final archive configuration and processes for Saint John's Health Center PACS on completion of data migration. MagicStore Advance Archive (upper): the initial archive server. MagicStore Professional Archive (bottom): upgraded archive server after data migration. Tape library (middle right): handles all archiving. See also Figure 18.7.
The decision was made in late 2000 to upgrade the archive server. With the archive upgrade, all new PACS exams were archived through a Sun Enterprise 450 platform server with a 270 GB RAID (Fig. 17.6, lower left, MagicStore Professional). The exams were then archived to a network-attached digital tape storage system comprising an additional Sun Enterprise 450 with a 43 GB RAID and a 7.9 TB storage capacity digital tape library (Fig. 17.6, right). The storage capacity of the tape library technology was forecast to double in the next few years as the tape density doubles, eventually making it a 16 TB tape library. All older PACS exams were still distributed via the original archive server, but the exams have been successfully migrated to the tape library as well. Figure 17.6 shows the final configuration after the completion of the data migration.
17.8 PACS SYSTEM EVALUATION
In Section 17.6 we presented the PACS acceptance test performed before the system can be accepted for daily clinical operation. After the PACS has been in operation for several months, we should evaluate the impact of the PACS on the clinical environment. We discuss several methods of PACS system evaluation in this section. The first method evaluates subsystem throughputs and does not involve a comparison of film-based and digital-based operations. The second method directly compares the performance of a film-based operation with that of a digital-based operation. The third method is a standard ROC (receiver operating characteristic) analysis comparing the image quality of hard copy versus soft copy display. Sections 17.8.1–17.8.3 give examples of each method.

17.8.1 Subsystem Throughput Analysis
The overall throughput rate of the PACS is the sum of the throughput rates of individual PACS subsystems, including acquisition, archive, display, and communication network. Table 17.8 defines acquisition, archival, retrieval, distribution, display, and network residence times. The throughput of a PACS subsystem can be measured in terms of the average residence time of individual images in that subsystem. The residence time of an image in a PACS subsystem is defined as the total time required to process the image in order to accomplish a particular task within that subsystem. The overall throughput of a PACS can then be measured by the total residence time of an image in the various subsystems.

Each of the PACS subsystems may perform several tasks, and each task may be accomplished by several processes. An archive subsystem, for example, performs three major tasks: image archiving, image retrieval, and image routing. To perform the image retrieval task, a server process accepts retrieve requests from the workstation, a retrieve process retrieves an image file from the permanent archive, and a process sends the image file to the destination display workstation. These three processes communicate with each other through a queuing mechanism and run cooperatively to accomplish the same task. The retrieval residence time of an image file in the archive subsystem can be measured by the elapsed time from the moment an archive server receives the retrieve request, and retrieves the image file from the
TABLE 17.8  Definitions: Acquisition, Archival, Retrieval, Distribution, Display, and Network Residence Times

Image Residence Time   Measurement Performed at   Definition
Acquisition            Acquisition                Total time of receiving an image file from a radiological imaging device, reformatting the images, and sending the image file to the PACS controller
Archival               Archive                    Total time of receiving an image file from an acquisition computer, updating the PACS database, and archiving the image file to permanent storage
Retrieval              Archive                    Total time of retrieving an image file from the archive library and sending the image file to a display workstation
Distribution           Archive                    Total time of receiving an image file from an acquisition computer, updating the PACS database, and sending the image file to a display workstation
Display (2K)           Display                    Total time of receiving an image file from the PACS controller, transferring the image file to the disk or RAID, and displaying it on a 2K monitor
Display (1K)           Display                    Total time of receiving an image file from a PACS controller and displaying it on a 1K monitor
Network                Network manager            Total traveling time of an image from one PACS component to another via a network
permanent archive, to the time the server sends the image file to the destination display workstation. Once these residence times are broken down into computer processes, a time stamp program can be implemented within each process to automatically log the start and the completion of each process. The sum of all processes defined in each task is the measured residence time.
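The time stamp program mentioned here can be as simple as a wrapper that logs the start and completion of each process and accumulates the elapsed times per task. The sketch below is a minimal illustration of that idea, not the instrumentation of any actual PACS software; the task and process names are invented.

    # Minimal residence-time logging sketch (illustrative only).
    import time
    from collections import defaultdict

    elapsed = defaultdict(float)        # task -> accumulated seconds over its processes

    def timed(task, process):
        """Decorator that logs start/completion and adds the elapsed time to a task."""
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                print(f"START {task}/{process}")
                try:
                    return fn(*args, **kwargs)
                finally:
                    dt = time.monotonic() - t0
                    elapsed[task] += dt
                    print(f"DONE  {task}/{process}  {dt:.3f} s")
            return inner
        return wrap

    @timed("retrieval", "read_from_permanent_archive")
    def read_image(exam_id):
        time.sleep(0.02)                # stand-in for a jukebox or tape read

    @timed("retrieval", "send_to_workstation")
    def send_image(exam_id):
        time.sleep(0.01)                # stand-in for the network transfer

    read_image("CT-001")
    send_image("CT-001")
    print(f"Retrieval residence time: {elapsed['retrieval']:.3f} s")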
17.8.1.1 Residence Time

Image Acquisition Residence Time  An acquisition subsystem performs three major tasks: (1) acquires image data and patient demographic information from radiological imaging devices, (2) converts image data and patient demographic information to the DICOM format, and (3) sends reformatted image files to the PACS controller.

Archive Residence Time  Images in the PACS controllers are archived to permanent storage. However, the archival residence time of an image was found to be significantly affected by the job prioritizing mechanism (Section 17.8.1.2) utilized in the PACS controller. Because of its low priority compared with the retrieve and distribute processes running on the archive subsystem, an archive process is always
compromised—that is, it must wait—if a retrieve or distribute job is executing or pending.

Image Retrieval Residence Time  Images are retrieved from the permanent storage to the PACS controllers. Among the three major processes carried out by the PACS controller, retrieve requests always have the highest priority. Thus images intended for study comparison are always retrieved and sent to the requesting display workstation immediately, before archive and distribute processes are initiated.

Distribution Residence Time  All arriving images in the PACS controller are distributed immediately to their destination display workstations before being archived in the permanent storage. Images can also be sent directly from the acquisition gateway to the workstation.

Display Residence Time  A display workstation configured with local disk storage or RAID may need to first receive images from the PACS controller's RAID, which delays the image display time on the monitor.

Communication Subsystem: Network Residence Time  The residence time of an image in the multiple communication networks can be measured as an overlapped residence time of the image in the acquisition, archive, and display subsystems (see preceding subsections). The ATM or fast Ethernet throughput is limited by the magnetic disk input/output rate.

PACS Overall Throughput: Total Image Residence Time  The overall throughput of the PACS can be determined by the total residence time of an image from its original source (a radiological imaging device) to its ultimate destination (a display workstation or the permanent archive).

17.8.1.2 Prioritizing  The use of job prioritizing control allows urgent requests to be processed immediately. For example, a request from a display workstation to retrieve an image from the permanent archive has priority over any other processes running on the archive subsystem and is processed immediately. On completion of the retrieval, the image is queued for transmission with a priority higher than that assigned to the rest of the images that have just arrived from the acquisition nodes and are waiting for transmission. During the retrieval, the archive process must be compromised until the retrieval is complete. Suppose, for example, that the retrieval of a 20-Mbyte image file from the permanent archive normally takes 54 s; if this image file is retrieved while another large image file is being transmitted to the same archive, increasing the retrieval time for the former image to 96 s, then the time delay of the retrieval without job prioritizing is 42 s.
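The job prioritizing just described can be modeled with an ordinary priority queue in which retrieve requests are served first, then distribute jobs, then archive jobs. The sketch below is a simplified model with invented job names, not the scheduler of any actual PACS controller.

    # Simplified job-prioritizing sketch for an archive server (illustrative only).
    import heapq
    import itertools

    PRIORITY = {"retrieve": 0, "distribute": 1, "archive": 2}   # lower value = served first
    queue, counter = [], itertools.count()

    def submit(job_type, exam_id):
        # The counter preserves first-in, first-out order within the same priority.
        heapq.heappush(queue, (PRIORITY[job_type], next(counter), job_type, exam_id))

    def next_job():
        _, _, job_type, exam_id = heapq.heappop(queue)
        return job_type, exam_id

    # Archive and distribute jobs are already waiting when an urgent retrieve arrives.
    submit("archive", "CR-100")
    submit("distribute", "CT-101")
    submit("retrieve", "MR-102")        # a workstation requests a comparison study

    while queue:
        print("serving %s %s" % next_job())
    # Served in the order: retrieve MR-102, distribute CT-101, archive CR-100.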
17.8.2 System Efficiency Analysis
We can use image delivery performance, system availability, and user acceptance as a means of measuring system efficiency.
17.8.2.1 Image Delivery Performance  One method of evaluating PACS system efficiency is to compare image delivery performance from the film management system and from the PACS. For example, consider the neuroradiology PACS component—in particular, the CT and MR images. We can decompose both the film-based and the digital-based operation into four comparable stages. In the case of the film management system, the four stages are:

1. At each CT and MR scanner, technologists create films by windowing images, printing images, and developing films.
2. Film librarians deliver the developed films to the neuroradiology administration office.
3. Neuroradiology clerks retrieve the patient's historical films and combine them with the current examination films.
4. Radiology residents or film library personnel pick up the prepared films and deliver them to a read-out area (or neuroradiology reading room).

Similarly, the four stages in the PACS are:

1. The period during which the acquisition gateway computer receives images from the imaging device and formats the images into the DICOM standard image file.
2. The elapsed time of transferring image files from the acquisition computer to the PACS controller.
3. The processing time for managing and retrieving image files at the PACS server.
4. The time needed to distribute image files from the server to the display workstation.

The time spent in each film management stage can be recorded and estimated by technologists, film clerks, and personnel who have many years of professional experience. Circumstances that should be excluded from the calculation, because their performance variances are too large to be valid, include retrieval of a patient's historical films from a remote film library and the lag time between pickup and delivery of films to and from the various stages in the film management system described above. These exclusions apparently make the film-based operation more competitive with the PACS. The performance of the PACS can be automatically recorded in database files and log files by software modules. The data included in these files are (1) the date, time, and duration of each modularized process and (2) the size of the image file acquired.

17.8.2.2 System Availability  PACS system availability can be examined in terms of the probability that each component will be functional during the period of evaluation. In the neuroradiology example, the components considered to affect the availability of the neuroradiology PACS include (1) the image acquisition subsystem, including all CT and MR scanners and the interface devices between the scanners and the acquisition computers, (2) the PACS controller and the archive
(3) the display subsystem with its display workstation computer and display monitors, and (4) the communication network. Calculations of the probability that each component will be functional can be based on the 24-hour daily operating time. The probability P that the total system will be functional is the product of the uptime probabilities of all components in the subsystem:

P = \prod_{i=1}^{n} P_i     (17.1)

where P_i is the uptime probability of component i and n is the total number of components in the subsystem.

17.8.2.3 User Acceptance  The acceptability of the display workstation can be evaluated by surveying users' responses and analyzing data from a subjective image quality questionnaire. Table 17.9 shows a sample item in a user acceptance survey, and Table 17.10 is a typical subjective image quality survey form.
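Returning briefly to the availability model of Section 17.8.2.2, a minimal worked sketch of Eq. (17.1) in Python is given below; the four uptime probabilities are hypothetical placeholders, not measured values.

import math

# Hypothetical 24-hour uptime probabilities for the four neuroradiology PACS components:
# acquisition subsystem, PACS controller and archive library, display subsystem, network.
component_uptime = [0.98, 0.995, 0.99, 0.999]

# Eq. (17.1): the probability that the total system is functional is the product
# of the individual component uptime probabilities.
system_uptime = math.prod(component_uptime)
print(f"P = {system_uptime:.3f}")  # approximately 0.964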
17.8.3 Image Quality Evaluation—ROC Method
A major criterion in determining the acceptance of the PACS by users is image quality in soft copy displays compared with the quality available from a different display system or hard copy.
TABLE 17.9  Display Workstation User Survey Form

Attribute                                Poor (1)   Fair (2)   Good (3)   Excellent (4)   Avg. Score
Image quality
Speed of image display
Convenience of image layout
Performance of manipulation functions
Sufficiency of patient information
Sufficiency of image parameters
Ease of use
Overall impression
Overall average
TABLE 17.10  Subjective Image Quality Survey Form for Comparing the Film and Light Box Versus the Display Workstation

Display System             Ranking Scale (Perception of Confidence): Least 1   2   3   4   5   6 Most
Film and light box
Display workstation
In Section 17.8.2, we briefly noted a subjective image quality survey method. In this section, we discuss a more rigorous analysis based on the receiver operating characteristic (ROC; see Section 5.5.4). Although it is tedious, time consuming, and expensive to perform, ROC analysis has been accepted in the radiology community as the de facto method for objective image quality evaluation. An ROC analysis consists of the following steps: image collection, truth determination, observer testing, and statistical evaluation. Consider a sample comparison of observer performance in detecting various pediatric chest abnormalities—say, pneumothorax, linear atelectasis, air bronchogram, and interstitial disease—on soft copy (a 2K monitor) versus digital laser-printed film from computed radiography. Sections 17.8.3.1–17.8.3.5 describe the basic steps of carrying out an ROC analysis.

17.8.3.1 Image Collection  All routine clinical pediatric CR images are sent to the primary 2K workstation and the film printer for initial screening by an independent coordinator, who selects images with acceptable diagnostic quality, subtle findings, suitable disease categories, and matched normal images. To ensure an unbiased test, half of the selected images should be chosen from the soft copy and the other half from the hard copy. A reasonably large-scale study should consist of about 350 images to achieve good statistical power. The selected images should be screened one more time by a truth committee of at least two experts who have access to all information related to a specific patient, including the clinical history and images obtained by means of other radiological techniques. During this second screening, some images will be eliminated for various reasons (e.g., poor image technique, signal too obvious, overlying monitoring or vascular lines clouding the image, overabundance of a particular disease type). The remaining images are then entered into the ROC analysis database in both hard copy and soft copy forms.

17.8.3.2 Truth Determination  Truth determination is always the most difficult step in any ROC study. The truth committee usually determines the truth of an image by using the clinical history of each case, the hard copy digital film image, the soft copy image, all available image processing tools, and biopsy results if they are available.

17.8.3.3 Observer Testing and Viewing Environment  The display workstation should be set up in an environment similar to a viewing room, with ambient room light dimmed and no extraneous disruption. Film images should be viewed in a standard clinical viewing environment. Observers are selected for their expertise in interpreting pediatric chest X rays. Each observer is given a set of sample training images with which to be trained in four steps: (1) learning how to interpret an image on the soft copy display, (2) completing an ROC form (see the example in Fig. 17.7), (3) viewing the corresponding hard copy film on a light box, and (4) filling out a second ROC form for the hard copy film.

17.8.3.4 Observer Viewing Sequences  An experimental design is needed to cancel the effects due to the order in which images are interpreted. The image sample from the ROC database can be randomized and divided into four subsets (A, B, C, and D) containing approximately equal numbers of images.
Figure 17.7 Structured receiver operating characteristic form used in a typical ROC study of chest images. For each disease category, a level of confidence response is required. Chest quadrants assessed were right upper (R.U.), right lower (R.L.), left upper (L.U.), and left lower (L.L.).
Identical subsets are present for both the hard copy and the soft copy viewing. Each observer participates in two rounds of interpretation. A round consists of several sessions (depending on the total number of images); to minimize fatigue, each observer interprets about 30 images during a session. During the first round, the observers interpret all images, half from hard copy and the other half from soft copy. During the second round, which should be 3–5 months later to minimize the learning effect, the observers again interpret all images, but for each image they use the viewing technique not used in round 1. The viewing sequence is shown in Table 17.11.

17.8.3.5 Statistical Analysis  The two ROC forms for each image (one per display modality) filled in by each observer are entered into the database.
TABLE 17.11  Order of Interpreting Image Sets Used in the ROC Study

Observer No.     Round 1              Round 2
1                A1, B1, C2, D2       A2, B2, C1, D1
2                A2, B2, C1, D1       A1, B1, C2, D2
3                C1, D1, A2, B2       C2, D2, A1, B1
4                C2, D2, A1, B1       C1, D1, A2, B2
5                C1, D1, A2, B2       C2, D2, A1, B1

* A, B, C, and D are the four image sets, each with an equal number of images. The numbers 1 and 2 refer to the viewing technique (soft or hard copy). For example, observer 1 views image sets A1, B1, C2, and D2 during round 1 and later (>3 months) in round 2 reviews image sets A2, B2, C1, and D1.
The results are used to perform the statistical analysis. A standard ROC analysis program—for example, CORROC2, developed by Charles Metz of the University of Chicago—can be used to calculate the area under the ROC curve (see Fig. 5.10), along with its standard deviation, for a given observer's results for the hard copy and soft copy viewing methods. The ROC areas can then be compared by disease category with a paired t-test. The results provide a statistical comparison of the effectiveness of using soft copy and hard copy on these sets of images in diagnosing the four disease categories. This statistical analysis forms the basis of an objective evaluation of the image quality of the hard copy and soft copy displays derived from this image set.
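As a brief illustration of the final statistical step, the sketch below applies a paired t-test to per-observer ROC areas. It assumes the open-source SciPy library (not part of the study described above), and the ROC area values are hypothetical placeholders, not results from any actual evaluation; in practice the areas would come from a program such as CORROC2.

import numpy as np
from scipy import stats

# Hypothetical area-under-the-ROC-curve values for five observers, one value per
# viewing method, for a single disease category.
auc_softcopy = np.array([0.91, 0.88, 0.93, 0.90, 0.89])
auc_hardcopy = np.array([0.90, 0.89, 0.92, 0.91, 0.88])

# Paired t-test across observers: is the mean difference between the two
# viewing methods significantly different from zero?
t_stat, p_value = stats.ttest_rel(auc_softcopy, auc_hardcopy)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")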
CHAPTER 18
PACS Clinical Experience, Pitfalls, and Bottlenecks
In this chapter we first present the clinical experience of two medical centers: the Baltimore VA Medical Center and St. John's Health Center, Santa Monica, CA. The former case documents a success story in the clinical operation of filmless radiology at a VA medical center after 9 years of operation. The St. John's Health Center case describes two PACS operation issues that will confront many existing PACS installations in the near future: archive server upgrade and off-site backup archive. The next three topics in this chapter are PACS pitfalls, PACS bottlenecks, and pitfalls in DICOM conformance.
18.1 CLINICAL EXPERIENCE AT BALTIMORE VA MEDICAL CENTER
This section is based on information from Drs. Eliot Siegel and Bruce Reiner in "Filmless Radiology at the Baltimore VA Medical Center: A Nine Year Retrospective" (Siegel and Reiner, 2003). The Baltimore VA Medical Center (VAMC) started its PACS implementation in the late 1980s and early 1990s. The VAMC purchased a PACS in late 1991 for approximately $7.8 million, which included $7.0 million for the PACS and $800,000 for CR. The manufacturers involved were Siemens Medical Systems (Erlangen, Germany) and Loral Western Developed Labs (San Jose, CA); the product later changed hands to Loral/Lockheed Martin and then to General Electric Medical Systems. The goals of the project were to integrate the PACS with the VA's home-grown Computerized Patient Record System (CPRS) and the then to-be-developed VistA imaging system (see Section 12.5). The project has been under the leadership of Dr. Eliot Siegel, Chairman of the Radiology Department. The system went into operation in mid-1993 in the new Baltimore VAMC. It has since evolved and has been integrated with other VA hospitals in Maryland into a single imaging network, the VA Maryland Health Care System. The following subsections summarize the benefits, costs, savings, and cost-benefit analysis.
18.1.1 Benefits
Four major benefits at the Baltimore VAMC are the change to filmless operation, a reduction in unread cases, a reduction in retake rates, and a drastically improved clinical work flow. Figure 18.1 shows the drop from an 8% unread imaging study rate before PACS to approximately 0.3% in 1996. The remaining unread studies were identified in a weekly audit against the HIS/RIS system and subsequently made available for interpretation or reinterpretation with the PACS. Filmless operation is the direct benefit of using the CR system. CR also contributes to reducing the rate of retakes due to unsatisfactory examinations. The retake rate was reduced by 84%, from 5% when film was used to approximately 0.8% after the transition to CR and filmless operation, as shown in Figure 18.2. The transition to filmless operation also eliminated a number of steps in the process by which imaging studies are made available for interpretation by radiologists. Figure 18.3 shows the 59 steps required in the film-based operation in 1989 versus the 9 steps required after the transition to PACS operation (Figure 18.4). The interval from the time a study is obtained until it is available for interpretation has been reduced from several hours (often the following day) to less than 30 minutes during the normal workday. The result is rapid reporting (from when a study is performed until it is dictated), the time for which has been reduced from 24 hours to 2 hours, as shown in Figure 18.5. This "real-time" reporting has had a positive impact on the quality of patient care and on the perception of radiology services by referring clinicians. These benefits were associated with a strong preference for PACS (92%) versus film (3%, with 5% undecided) among the clinicians at the Baltimore VAMC when an initial survey was conducted within 2 years of the implementation of the PACS. According to the survey, the clinicians' perception of the biggest benefit of PACS to their practice is that it saves them time and makes them more productive. In response to a formal survey in the mid-1990s, 98% of the respondents indicated that the PACS resulted in more effective utilization of their time.
Figure 18.1 There was a 96% reduction in the percentage of lost or unread examinations, but not a complete elimination of these, within 3 years of the transition to filmless operation.
Figure 18.2 The wide dynamic range of computed radiography and the ability to manipulate images at the workstation resulted in a reduction of 84% in the image retake rates for the general radiography technologists. There was also a shift in the most common reason for retakes, from errors in technique (images too light or too dark) to problems with patient positioning.
Referring Clinician: 1. Get chart from clerk; 2. Write orders in chart; 3. Give chart to clerk; 4. Fill out study request; 58. Ask clerk to pull chart; 59. Review report in chart.
Ward Clerk: 5. Flag order in chart; 6. Place chart in "pending orders" bin; 10. Contact radiology with patient info; 12. Inform nurse of scheduled study; 13. Contact transportation personnel; 56. Sort reports; 57. File reports in chart.
Nurse: 7. Take chart from bin; 8. Document order in chart; 9. Ask clerk to schedule study.
Radiology Clerk: 11. Schedule patient; 15. Look up index card; 16. Review card for old exams; 17. Give card to film room; 21. Place request in pending bin; 31. Call transportation; 33. Re-file index card.
Transportation Aide: 14. Transport patient to dept.; 32. Transport patient back.
Film Room Clerk: 18. Check recently pulled films; 19. Search for films in library; 20. Write new study on jacket; 35. Combine with old studies; 36. Bring films to reading room; 49. File report in film jacket.
Technologist: 22. Retrieve request and patient; 23. Obtain images; 24. Take cassettes to dark room tech; 28. Check films for quality; 29. Update patient index card; 30. Return study card to clerk; 34. Bring films to film room.
Dark Room Tech: 25. Bring films to processor; 26. Process films; 27. Return films to tech.
Radiologist: 37. Take films from "stack"; 38. Remove films and requests; 39. Hang films; 40. Review images and reports; 41. Dictate case; 42. Take down films; 43. Return films to jacket; 44. Return jackets to "stacks"; 52. Review and sign report.
Transcriptionist: 45. Retrieve tapes; 46. Transport tapes for dictation; 47. Transcribe and print reports; 48. Bring report to film room; 50. Bring report to front desk; 51. Give report to radiologists; 53. Take report to Medical Admin.
Medical Clerk: 54. Sort radiology reports; 55. Bring reports to wards.
Figure 18.3 In 1989, a work flow study conducted at the Baltimore VA Medical Center revealed that 59 steps were required in a film-based environment to request, obtain, and report an inpatient chest radiograph. (Courtesy of Dr. E. Siegel.)
Referring Clinician: 1. Physician order entry on HIS; 9. Report available on HIS.
Transportation Aide: 2. Transport patient to dept.; 6. Transport patient back.
Technologist: 3. Choose patient from modality worklist; 4. Obtain images; 5. Check images for quality.
Radiologist: 7. Review images and reports; 8. Dictate study with voice recognition system.
Figure 18.4 After the introduction of PACS and integration of the system and imaging modalities with the hospital and radiology information systems, the number of steps required for an inpatient chest radiograph decreased substantially to 9. (Courtesy of Dr. E. Siegel.)
Figure 18.5 The combination of a reduction in work flow steps and rapid availability of images after they are obtained has reduced the time from when an image is acquired until it is transcribed from approximately 24 to 2 hours.
18.1.2 Costs
The two major contributors to the cost of the system are the depreciation and the service contract. The VA depreciates its medical equipment over a period of 8.8 years, whereas computer equipment is typically depreciated over a 5-year time period. The other significant contributor to the cost of the PACS is the service contract, which includes all of the personnel required to operate and maintain the system. It also includes software upgrades and replacement of all hardware components that fail or demonstrate suboptimal performance. This includes replacement of any monitors that do not pass the quality control tests. No additional personnel are required other than those provided by the vendor through the service contract. In the Baltimore VAMC, the radiology administrator, chief technologist, and chief of radiology share the responsibilities of a PACS departmental system administrator.
18.1.3 Savings
18.1.3.1 Film Operation Costs  Films are still used in two circumstances. Mammography examinations still use film, but the films are digitized and integrated into the PACS. Films are also printed for patients who need them for hospital or outpatient visits outside the VA Healthcare network. Despite these two uses, film costs have been cut by 95% compared with the figure that would have been required in a conventional film-based department. Additional savings include reductions in film-related supplies such as film folders, film chemistry, and processors.

18.1.3.2 Space Costs  The ability to recover space in the radiology department because of PACS contributes substantial savings in indirect space costs.

18.1.3.3 Personnel Costs  The personnel cost savings include radiologists, technologists, and film library clerks. It was estimated that at least two more radiologists would have been needed to handle the current workload at the VAMC had the PACS not been installed. The efficiency of technologists has improved by about 60% in sectional imaging exams, which translates to three to four additional technologists had the PACS not been used. Only one clerk is required to maintain the film library and to transport film throughout the medical center.
18.1.4 Cost-Benefit Analysis
In Section 17.1, we describe a cost analysis model that compares film-based operation and PACS operation. There is a crossover point at which PACS becomes more cost beneficial to use than the conventional film-based operation (see Fig. 17.1C). The Baltimore VAMC PACS underwent a similar study by a group of investigators from Johns Hopkins University; the results are shown in Figure 18.6. The study found that in a conventional film-based environment the cost per unit examination was relatively flat as the volume of studies performed in the radiology department increased from 20,000 to 100,000 studies per year. As the number of studies increases, additional space, personnel, and supplies are necessary. With a PACS, there was a rapid decrease in unit cost per study because equipment costs for the system are fixed and do not increase substantially with added volume. At a volume of 90,000 studies per year, there was an approximately 25% savings in cost per unit study with a PACS compared with film. The “break even” point between film-based and filmless operation occurs at approximately 39,000 studies per year. At volumes greater than that, the study found filmless operation to be more cost effective than a conventional department.
18.2 CLINICAL EXPERIENCE AT ST. JOHN'S HEALTH CENTER
18.2.1 St. John's Health Center's PACS
Saint John’s Health Center, Santa Monica, CA, has a filmless PACS that acquires approximately 130,000 radiological exams annually. As the first phase, St. John’s implemented the PACS with CR for all critical care areas in April 1999. Phase II, completed in April 2000, comprised the integration of MR, CT, US, digital
Figure 18.6 The cost per unit examination is relatively flat as the volume of studies performed in the radiology department increases from 20,000 to 100,000 studies per year; however, there is a rapid decrease in unit cost per study with PACS. At 90,000 studies per year there is an approximately 25% decrease in cost per unit study with PACS in comparison to film. The “break even” point occurs at approximately 39,000 studies per year. (Courtesy of Dr. E. Siegel.)
Since then, St. John's PACS volumes have increased steadily. The original storage capacity of the PACS archive was a 3.0-TB MOD jukebox, which meant that older PACS exams would have to be taken off-line in less than a year. Also, the archive held only a single copy of the PACS data; should St. John's encounter a disaster, it might lose all the PACS data because there was no backup (see Section 15.6). With these considerations, St. John's decided to overhaul its PACS archive system with the following goals:
• Upgrade the archive server to a much larger capacity
• Develop an off-site image/data backup system
• Conduct an image/data migration during the archive system upgrade
These goals were accomplished in late 2001 based on the concepts discussed in Chapters 15 (fault-tolerant server) and 17 (acceptance testing). With the archive upgrade, all new PACS exams were archived through a Sun Enterprise 450 platform server with a 270-GB RAID. The exams were then archived to a network-attached digital tape storage system comprising an additional Sun Enterprise 450 with a 43-GB RAID and a digital tape library with a 7.9-TB storage capacity. The storage capacity of the tape library was forecast to double in the next few years as tape density doubled, eventually making it a 16-TB tape library.
Figure 18.7 Archive configuration and process for the St. John's Health Center PACS before, during, and after the data migration. Before the upgrade/migration, incoming and outgoing PACS exams were handled by the original archive server, with the MOD jukebox providing long-term storage. During the upgrade/migration, the upgraded archive server and SAN tape library handled requests and storage for PACS exams acquired after the server upgrade, while the original archive server continued to handle requests for exams stored before the upgrade and migrated those exams to the new archive. After the upgrade/migration was complete, the upgraded archive server and the SAN tape library handled requests and long-term storage for all PACS exams, with any remaining transfer performed by background migration. See also Figure 17.6.
Figure 18.7 shows the final configuration after the completion of the data migration. The next several subsections cover the methodology and clinical experience in upgrading the archive system, developing the off-site backup archive, and the disaster recovery procedures.
18.2.2 Backup Archive
18.2.2.1 The Second Copy  Current general disaster recovery solutions for image/data vary in their approach to creating redundant copies of PACS data. This secondary copy can be stored in a variety of ways. First, it can be held within the PACS. This is usually not recommended, because the likelihood of its being destroyed is the same as that for the primary copy. A second option is to create the secondary copy and store it on-site in a fireproof safe or storage compartment. Although this decreases the possibility of data loss, should the disaster be widespread throughout the local area, the PACS data are still susceptible to destruction. A third option is to store the data media off-site in a storage vault, a strategy adopted by most data centers. This is considered the best of these options for disaster recovery.

18.2.2.2 The Third Copy  PACS archives with large-capacity storage media such as digital tape usually have media elements that can each hold up to a few weeks' worth of PACS data. In this case, a secondary copy may not be enough to protect against a large catastrophic incident, because should a disaster occur just when
the most recent tape is being filled to capacity and has not yet been sent off-site, the hospital could lose up to a few weeks' worth of PACS exams. It is therefore necessary to create a third copy to cover the turnover period between secondary-copy tapes. The third copy can be stored on-site or off-site in a data storage vault. Figure 18.8 shows an example of the disaster recovery procedure developed at St. John's Health Center, which includes vaulting all tape media to an off-site vault service (Brent, 2003). It was estimated that one digital tape medium currently held 5 days' worth of PACS data, with the figure doubling when the tape density doubled in the near future. Therefore, if only a second copy of the PACS data were made, the hospital could still lose up to 10 days of data in the worst-case scenario. A disaster recovery procedure that includes a daily third copy of the PACS data is necessary to cover the turnover period. The following steps, together with Figure 18.8, describe the procedure; an operator must perform these steps manually. There are three tape pools: Copy1, Copy2, and Copy3.

(1) A PACS exam arrives, and three copies are written to three separate tapes. Copy1 is the primary copy and always resides in the tape jukebox.
(2) At 10 a.m. every day, the Copy2 tape is checked:
    a. If the Copy2 tape is 95% full, it is ejected (approximately every 5 days) and sent off-site.
    b. A new Copy2 tape is loaded into the jukebox from the Copy2 tape pool.
Figure 18.8 Interim disaster recovery procedure with the third-copy backup archive for St. John's Health Center PACS data. Tapes circulate among the on-site jukebox, the Copy2 and Copy3 tape pools, and the off-site data vault service; the labeled arrows in the figure correspond to the numbered procedure steps (1)–(6).
(3) The Copy3 tape(s) is replaced daily, providing coverage for the period before the Copy2 tape reaches 95% full (multiple tapes are necessary if the institution acquires more than one tape's capacity of data per day):
    a. The Copy3 tape is ejected at 10 a.m. every day and sent off-site. If the Copy2 tape is 95% full, the Copy3 tape for that specific day is instead returned to the Copy3 tape pool for reuse.
    b. New Copy3 tape(s) are loaded into the jukebox from the Copy3 tape pool. Two tapes are loaded for large institutions in case the first tape fills to capacity during the day and overflows onto the second tape.
(4) The data vault storage service picks up tapes daily and transfers them to a secure location. Copy2 tapes are stored permanently for disaster recovery; Copy3 tapes are stored temporarily.
(5) When a Copy2 tape arrives at the data vault storage facility, all stored Copy3 tapes are returned to the hospital and placed back in the Copy3 tape pool.
(6) Copy3 tape contents are erased and ready for reuse.

Among the disadvantages of this approach are that there must be recyclable data media for each day of the coverage period and that the procedures are tedious and, because they are labor intensive, prone to human error. Even though this solution provides proper redundancy of all acquired PACS data, after a disaster the hospital must still wait for new hardware to replace the damaged components before the data media can be imported. This could take another week to a few weeks, depending on the availability of the hardware. In the meantime, radiologists depend on previous exams for accurate diagnoses, and these will not be available until the replacement hardware arrives and is installed.

18.2.2.3 Off-Site Third Copy Archive Server  Another option is to store the third-copy data in a backup archive server off-site. With this option, a copy can be stored automatically, and there is no need for an operator to perform the daily backup cycle. In addition, the backup storage capacity is easily configurable based on the needs of the hospital. Some of the downsides of this option include the logistics of purchasing, maintaining, and supporting a backup archive server off-site by the hospital. During a disaster, there is a high likelihood that the network connection may be down, and therefore access to the backup copies of the PACS exams would be very difficult. With these issues as a backdrop, new approaches have been developed that not only create a fault-tolerant main archive server but also provide data redundancy and recovery during scheduled downtime events or unscheduled disasters, such as the off-site ASP fault-tolerant backup archive described in Section 17.7.2.
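The daily Copy2/Copy3 rotation of Section 18.2.2.2 is essentially a small rule set that the operator follows each morning. The Python sketch below expresses that logic for illustration only; the tape records, pools, and threshold are assumptions, and real tape-library control is vendor specific.

FULL_THRESHOLD = 0.95  # rotate the Copy2 tape off-site once it is 95% full

def daily_check(copy2, todays_copy3, copy2_pool, copy3_pool, vault):
    """One 10 a.m. cycle of the Copy2/Copy3 rotation (sketch only)."""
    if copy2["fill"] >= FULL_THRESHOLD:
        vault.append(copy2)              # Copy2 goes off-site permanently (steps 2a, 4)
        copy3_pool.extend(todays_copy3)  # today's Copy3 tape(s) are reused instead (step 3a)
        copy2 = copy2_pool.pop()         # load a fresh Copy2 tape (step 2b)
    else:
        vault.extend(todays_copy3)       # Copy3 covers the turnover period (steps 3a, 4)
    new_copy3 = [copy3_pool.pop()]       # load new Copy3 tape(s) for the coming day (step 3b)
    return copy2, new_copy3

# Hypothetical state: tapes are dicts; the pools and the vault are plain lists.
copy2 = {"label": "C2-07", "fill": 0.96}
copy2_pool = [{"label": "C2-08", "fill": 0.0}]
copy3_pool = [{"label": "C3-03", "fill": 0.0}, {"label": "C3-04", "fill": 0.0}]
vault = []
copy2, todays_copy3 = daily_check(copy2, [{"label": "C3-02", "fill": 0.4}],
                                  copy2_pool, copy3_pool, vault)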
18.2.3 FT Backup Archive and Disaster Recovery
18.2.3.1 ASP Backup Archive  With these possible options, St. John's chose the ASP fault-tolerant off-site backup archive method in preparing for its archive upgrade as well as for its disaster recovery scheme, for the following reasons:
(1) A copy of each PACS exam is created and stored automatically.
(2) The backup archive server is continuously available (CA) and fault tolerant, with 99.999% availability.
(3) There is no need for operator intervention.
(4) The backup storage capacity is easily configured and expanded based on requirements and needs.
(5) Data are recovered and imported back into the hospital PACS within 1 day with a portable data migrator.
(6) The system is DICOM compliant and based on the ASP model.
(7) The system does not impact the normal clinical work flow.
(8) Radiologists can continue to read with previous exams until the hospital PACS archive is recovered.

In terms of Figure 17.6, the left-hand-side hospital site would be St. John's, and the right-hand-side off-site ASP would be IPI, USC. The fault-tolerant archive equipment at the ASP is described in Section 15.8.4. Section 18.2.3.2 presents the disaster recovery procedure.

18.2.3.2 Disaster Recovery Procedure  Figure 18.9 shows the recovery procedure for the PACS data should St. John's encounter a disaster scenario. If the connectivity between the two sites is still live, the stored PACS exams can be migrated over to the hospital with DICOM-compliant network protocols. At this point, the radiologists have the previous and current PACS exams to continue reading until the damaged hardware is replaced and the hospital archive is brought back on-line. However, in most disaster scenarios there is the likelihood that, in addition to the damage to the hospital PACS archive, the connectivity between the two sites is not functional (large crosses in Fig. 18.9). In this case, the PACS exams are imported into the hospital PACS from the fault-tolerant backup archive server with a portable data migrator. The portable data migrator exports the PACS exams from the fault-tolerant archive server.
Figure 18.9 Final disaster recovery procedure at St. John's Health Center and system evaluation procedure. The hospital site (the St. John's clinical PACS server and PACS gateway) is connected through T1 routers and a T1 line to the off-site fault-tolerant backup archive at IPI/USC (DICOM gateway and off-site PACS storage). When both the hospital PACS archive and the T1 connection fail (the large crosses in the figure), a portable data migrator carries the PACS exams from the backup archive back to the hospital.
It is then physically brought to the hospital, where the PACS exams can be imported directly into a workstation or server so that the radiologists have previous cases to read. Because the portable data migrator is DICOM compliant, the PACS exams can be imported directly into the hospital PACS without any additional software or translation. This recovery procedure serves as an interim solution until the hospital PACS archive has been brought back on-line. The data migrator contains up-to-date PACS data because it is always synchronized with the clinical PACS work flow.
18.3 PACS PITFALLS
PACS pitfalls are caused mostly by human error, whereas bottlenecks are due to imperfect design of either the PACS or the image acquisition devices. These drawbacks usually become apparent only through accumulated clinical experience. We discuss pitfalls in Section 18.3 and bottlenecks in Section 18.4. Pitfalls due to human error are often initiated at image acquisition devices and at workstations. Three major errors at the acquisition devices are entering wrong input parameters, stopping an image transmission process improperly, and incorrect patient positioning. At the workstations, errors occur most often when users have to enter many keystrokes or click the mouse frequently before the workstation can respond. Other pitfalls at the workstation unrelated to human error are missing location markers in a CT or MR scout view, images displayed with unsuitable lookup tables, and white borders in CR images due to X-ray collimation. Pitfalls created by human intervention can be minimized by a better quality assurance program, periodic in-service training, and interfacing image acquisition devices directly to the HIS/RIS through a DICOM broker.
18.3.1 During Image Acquisition
18.3.1.1 Human Errors at Image Acquisition Devices  The two most common errors at computed radiography (CR) acquisition are using the wrong imaging plate ID card at the reader (Fig. 18.10, left) and entering the wrong patient ID, name, accession number, or birth date, or invalid characters, at the scanner's operator console (Fig. 18.10, right). These errors can result in lost images, images assigned to the wrong patient, a patient image folder containing another patient's images, orphaned images, and crashes of the acquisition process because of illegal characters. Routine quality assurance (QA) procedures that check the CR operator log book or the RIS examination records against the PACS patient folder can normally discover these errors (a minimal cross-check sketch is given after the list below). If a discrepancy is detected early enough, before images are sent to the workstation and the long-term archive, the PACS manager can perform damage control by manually editing the PACS database to:

1. Correct the patient's name, ID, and other typographical errors
2. Delete images not belonging to the patient
3. Append orphaned images to the proper patient image folder
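A minimal sketch of such a QA cross-check is shown below, assuming the open-source pydicom library (which postdates this book); the RIS record, field names, and exact-match rule are simplified illustrative assumptions.

import pydicom

def check_against_ris(dicom_path, ris_record):
    """Compare key DICOM header fields with the RIS examination record (sketch)."""
    ds = pydicom.dcmread(dicom_path)
    header = {
        "PatientID": str(ds.get("PatientID", "")),
        "PatientName": str(ds.get("PatientName", "")),
        "AccessionNumber": str(ds.get("AccessionNumber", "")),
        "PatientBirthDate": str(ds.get("PatientBirthDate", "")),
    }
    # Report any field where the header and the RIS record disagree.
    return {field: (value, ris_record.get(field, ""))
            for field, value in header.items()
            if value != ris_record.get(field, "")}

# Hypothetical RIS record; an empty result means the study passes this QA check.
ris_record = {"PatientID": "123456", "PatientName": "Doe^John",
              "AccessionNumber": "A0001", "PatientBirthDate": "19560101"}
# print(check_against_ris("incoming/cr_image_001.dcm", ris_record))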
Figure 18.10 Left: The ID card of the previous patient was used at the card reader (lower left). Right: At the CR terminal, the wrong patient name was entered ("Doo" instead of "Doe"), along with a wrong ID character ("/" instead of a numeral) and a wrong birth date ("1999," although the exam date is 1996).
Lost images from a patient’s folder can usually be found in the orphaned image directory. If images have already been sent to the workstation before the PACS manager has a chance to do the damage control described above, the PACS coordinator should alert the users immediately. If, however, these images have already been archived to the long-term storage before the damage control is performed, recovery of the errors would become more complicated. In this case, the PACS controller should have a mechanism allowing the PACS manager to correct this error at the PACS database. 18.3.1.2 Procedure Errors at Imaging Acquisition Devices Procedure errors can be categorized separately to CT, MR, and CR during image acquisition. In CT and MR, the three most common errors are: 1. Terminating a scan while images are being transmitted 2. Manually interrupting an image transmission process 3. Realigning the patient during scanning. These errors can result in CT and MR image files missing images, images being out of order in the sequence, and the scanner crashing because of the interruption of the image transmission. Another pitfall during CT acquisition that is not from human error is in the coronal head scan protocol. In this protocol, the image appearing on the CT display monitor will have the left and right directions reversed. If film output is used, the technologist manually inputs the orientation as an annotation on the screen, which is then printed on the film with the image. Such an interactive step in the PACS is not possible because the user graphics interface on the CT display console is not included in the DICOM image header. A method for circumventing this shortcoming is to detect the scanning protocol from the DICOM image header and
Figure 18.11 Left: Coronal view of a CT scan without a “left” or “right” indicator. Right: Label “16/2 Prone” can be generated with the DICOM header information.
In CR, the most common error occurs when the technologist places the imaging plate under the patient in the wrong direction during a portable examination. The result is a wrong image orientation during display. Errors due to manual interruption of the transmission procedure can be alleviated by using the DICOM communication protocol for automatic image verification and recovery during transmission. Images archived out of order in the sequence can be edited manually during the QA procedure. An incorrect CR orientation during display can be detected and corrected by an automatic rotation algorithm (Fig. 18.12) or rotated manually during the QA procedure. Scheduling regular in-service training and continuing education for technologists can minimize pitfalls created by human error. However, the most effective method is to use a direct HIS/RIS DICOM broker interface between the image acquisition device and the RIS, which minimizes the manual entry of patient-related data. Commercial products for interfacing CR, CT, and MR to the RIS through a DICOM broker are currently available for direct patient data input without typing.
18.3.2 At the Workstation
18.3.2.1 Human Error  Two human errors occur frequently at the workstation. First, the user enters too many keystrokes or clicks the mouse too often before the workstation can respond to the last request. As a result, the workstation does not respond properly or the display program crashes. When the workstation response is slow, an impatient user enters the next few commands while the workstation is still executing the last request; the display program can then either crash unexpectedly or hang.
Figure 18.12 CR image. Left: white borders and wrong orientation. Right: Background removed and automatic rotation to correct for the orientation.
Another possible result is that the workstation continues executing the next few commands entered by the user after the current one is completed. Because the user may have forgotten what commands he or she had entered, an unexpected display may appear on the screen, which can cause confusion. The second error occurs when the user forgets to close the window of a previous operation. In this case, the workstation may not respond to the next command. If the user panics and enters other commands, errors similar to those described in the first scenario may occur. Two remedies are possible. The workstation can provide a large, visible timer on the screen so that the user knows the last command or request is being processed; this will minimize errors due to the user's impatience. A better solution is an improved workstation design that tolerates this type of human error.

18.3.2.2 System Deficiency  Four system deficiencies at the workstation are the omission of localization markers on a CT/MR scout view (Fig. 18.13), an incorrect lookup table for CT/MR display (Fig. 18.14), the wrong orientation of the CT head image in the coronal scan protocol discussed earlier (Fig. 18.11), and white borders in CR due to X-ray collimation (Fig. 18.12). Omission of localization markers on a CT/MR scout view can be remedied by creating the localization lines from the information in the DICOM image header (see Fig. 18.13, bottom two images). The use of an incorrect lookup table for a CT/MR display is case dependent. The histogram technique can generate correct lookup tables for CT. There are still some difficulties in obtaining a uniformly correct lookup table for all images in an MR sequence because of possible nonuniformity of the body or surface coils used; sometimes it is necessary to process every image individually with the histogram method (see Fig. 18.14). The correction for the CT head coronal scan was discussed above (Fig. 18.11).
Figure 18.13 Scan lines (bottom) can be generated from the DICOM header information to correlate scans between planes (sagittal, top; transverse, bottom).
White borders in CR due to X-ray collimation can be corrected by an automatic background removal technique (Section 3.3.4, Fig. 18.12).
18.4 PACS BOTTLENECKS
Bottlenecks affecting the PACS operation include network contention; CR, CT, and MR images stacked up at acquisition devices; slow responses from workstations; and long delays for image retrieval from the long-term archive. Improving the system architecture, reconfiguring the networks, and streamlining operational procedures through a gradual understanding of the PACS clinical environment can alleviate bottlenecks. Utilization of the IHE workflow profiles discussed in Section 7.5 would also help to circumvent some of the bottleneck problems.
18.4.1 Network Contention
Network contention causes many bottlenecks. First, CR/CT/MR images can stack up at the image acquisition gateway computers; as a result, the gateway disks may overflow and some images may eventually be lost. Second, it takes longer for the PACS controller to collect a complete sequence of images from one MR or CT examination, which can delay the transmission of the complete sequence file to the workstation for review. Network contention can also cause a long delay in image retrieval from the long-term archive, especially when the retrieved image file is large.
Figure 18.14 Body CT (top, A and B) and MR head scan (bottom, C and D). Left (A, C): wrong LUT. Right (B, D): correct LUT.
The methods of correction can be divided into general and specific categories. The general category includes redesigning the network architecture, using a faster network, and modifying the communication protocols. An example of redesigning the network architecture is to separate the network into segments and subnets and to redistribute the heavy traffic routes to different subnets; for example, CT/MR acquisition and CR acquisition can be divided into two subnets. Assigning priorities to different subnets based on use and time of day is one way to address users' immediate needs. Using a faster network can speed up the image transfer rate. If the current network is conventional Ethernet, consider changing it to Gigabit Ethernet switches with fast Ethernet workstation connections.
Conventional network protocols often use default parameters that are suitable only for small file transfers. Changing some parameters in the protocol may speed up the transfer rates, for example, enlarging the TCP window size and the image buffer size. Methods in the specific category depend on the operational environment and may involve changing the operational procedure in the radiology department. Consider two examples: CR images stacked up at the CR readers and CT/MR images stacked up at the scanners. Most CR applications are portable examinations and are mostly performed in the early morning; an obvious method of correction is to rearrange the portable examination schedule at the wards. CT/MR images stacked up at the scanners may be caused by a design fault in the communication protocol at the scanners. There are two methods of correction. First, use the DICOM auto-transfer mode to "push" images out from the scanner to the acquisition gateway or the workstation as soon as they are ready (Section 7.4.5, Fig. 18.15). Second, from the acquisition gateway or the workstation, use DICOM to "pull" images from the scanner periodically (Section 7.4.5, Fig. 18.16).
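A hedged sketch of the "pull" approach of Figure 18.16 is given below, written with the open-source pynetdicom library (which postdates this book; API details follow recent pynetdicom releases as I understand them). The scanner address, AE titles, and study UID are placeholders, not values from any real installation.

from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelMove

ae = AE(ae_title="ACQ_GATEWAY")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelMove)

# Identifier for the study to pull from the scanner (placeholder UID).
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.StudyInstanceUID = "1.2.840.99999.1.2.3"  # hypothetical

assoc = ae.associate("scanner.hospital.local", 104)  # hypothetical host and DICOM port
if assoc.is_established:
    # C-MOVE asks the scanner to send the whole study to the gateway's storage SCP.
    for status, _identifier in assoc.send_c_move(
            query, "ACQ_GATEWAY", StudyRootQueryRetrieveInformationModelMove):
        if status:
            print(f"C-MOVE status: 0x{status.Status:04X}")
    assoc.release()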
18.4.2 Slow Response at Workstation
A slow response at the workstation is mostly due to poor local database design and insufficient image memory in the workstation. An example of poor workstation database design is a database that uses one large file on the local disk for image storage. During the initial configuration of the workstation software, this storage space is contiguous; as the workstation accumulates and deletes images during use, the space becomes fragmented. Fragmented space causes an inefficient I/O transfer rate, producing a slow response in bringing images from the disk to the display, so periodic cleanup is necessary to retain contiguous space on the disk. Insufficient image memory in the workstation requires continuous memory-to-disk swapping, slowing down the image display; this is especially problematic when the image file is large. Several methods for correcting a slow response at the workstation are possible. First, one can increase the image memory size to accommodate a complete image file, which minimizes the need for memory-to-disk swapping.
Figure 18.15 A push operation at the scanner. The CT/MR scanner acts as the DICOM image client (e.g., auto-transfer based on the C-MOVE service class) and uses the DICOM upper layer protocol over TCP/IP to transmit each image across the network to the DICOM image server at the display workstation as soon as it is generated; the workstation's local database is updated as each image is received.

Figure 18.16 A pull operation at the acquisition gateway or the workstation. The PACS acquisition gateway computer acts as the DICOM image client (supporting the query/retrieve service class) and sends query/retrieve requests over the network to the DICOM image server at the CT/MR scanner to pull a complete study as soon as its images are generated.

Figure 18.17 Preloading images from RAID into image memory to speed up display. An image is partitioned across an array of disks, so each disk may contain only a partial image; during display the image is recombined in the image memory (at 10–20 Mbyte/s) from the partial images stored on many disks, hence the high display speed.
The use of RAID technology is another way to speed up the disk I/O for faster image display. A better local database design and image search algorithm, along with RAID at the workstation, can speed up image seeking and transfer times (Fig. 18.17).
18.4.3 Slow Response from Archive Server
A slow response from the long-term archive can be caused by slow optical disk or digital linear tape library search and read/write operations, and also by patient images being scattered over different storage media while many simultaneous requests are taking place. For example, the optical disk read/write data rate is normally about 400–500 Kbyte/s, but when many large image files have their images scattered over many platters, the seek time and disk I/O will be slow.
Figure 18.18 Concept of the platter/tape manager for saving the same patient's images in contiguous space. Patient examinations are temporarily scattered across many optical disks or digital linear tapes in chronological order (e.g., exam 3, images 1 to 10; exam 1; exam 2; exam 3, images 11 to 20); the platter/tape manager rewrites the same patient's images to contiguous disk/tape space, after which the scattered copies are erased. Read/write media are required.
This causes a slow retrieval response for these files from the long-term archive. There are three possible methods of improvement. First, image platter or tape manager software can be used to rewrite scattered images to contiguous optical disk platters or tapes (Fig. 18.18); this mechanism shortens future retrieval times for these images. Another method is to use the image prefetch mechanism (Fig. 18.19), which anticipates which images will be needed by the clinician for a particular patient. A combination of both mechanisms will speed up the response time from the long-term archive. In addition to these two mechanisms, a permanent storage upgrade, for example, to a library with multiple read/write drives, can also improve the overall throughput of the retrieval operations.

Pitfalls and bottlenecks are two major obstacles hindering a smooth PACS operation after installation. Sections 18.3 and 18.4 identify most of these problems based on clinical experience and suggest methods to circumvent the pitfalls and to minimize the occurrence of bottlenecks.
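The prefetch mechanism of Figure 18.19 is driven by HIS/RIS knowledge of scheduled examinations. The Python sketch below shows the core selection logic for illustration only; the record formats and the "same body part or modality" rule are assumptions, not the book's algorithm.

from datetime import date

def select_prefetch(scheduled_exam, archive_index, max_studies=4):
    """Pick a patient's prior studies to stage on the workstation's local storage (sketch)."""
    priors = [
        study for study in archive_index
        if study["patient_id"] == scheduled_exam["patient_id"]
        and study["date"] < scheduled_exam["date"]
        and (study["body_part"] == scheduled_exam["body_part"]
             or study["modality"] == scheduled_exam["modality"])
    ]
    # Most recent relevant priors first, limited to what the local disk can hold.
    priors.sort(key=lambda s: s["date"], reverse=True)
    return priors[:max_studies]

# Hypothetical archive index and a scheduled examination from the HIS/RIS.
archive_index = [
    {"patient_id": "JONES01", "date": date(2003, 1, 5), "modality": "CT", "body_part": "CHEST"},
    {"patient_id": "JONES01", "date": date(2003, 6, 2), "modality": "CR", "body_part": "CHEST"},
    {"patient_id": "SMITH02", "date": date(2003, 6, 2), "modality": "MR", "body_part": "HEAD"},
]
scheduled = {"patient_id": "JONES01", "date": date(2004, 2, 12), "modality": "CT", "body_part": "CHEST"}
print(select_prefetch(scheduled, archive_index))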
18.5 PITFALLS IN DICOM CONFORMANCE
18.5.1 Incompatibility in DICOM Conformance Statement
During the integration of multivendor PACS components, even though each vendor's component may come with a DICOM conformance statement, the components still may not be compatible.
Figure 18.19 Concept of image prefetch for faster image display at the workstation. Using knowledge from the HIS/RIS, a patient's prior examinations (e.g., Jones, dates 1 to 4) are prefetched from the archive to the display workstation's local storage before they are needed for display.
We have identified some pitfalls caused by such incompatibility.

1. Missing image(s) from a sequence during an acquisition process: When a CT or MR scanner (storage SCU) transmits individual images from a sequence to an image acquisition gateway (storage SCP), the scanner initiates the transfer with a push operation. The reliability of the transmission depends on the transfer mechanism implemented by the scanner. An abnormal manual abortion of the scanning process can sometimes terminate the transmission, resulting in the loss of the image being transferred.
2. Incorrect data encoding in the image header: Examples include missing Type 1 data elements, which are mandatory attributes in the DICOM standard; incorrect value representation (VR) of data elements; and data encoded in elements exceeding their maximum length.
3. SOP Service Class not fully supported by the SCP vendor: This happens when an SCU vendor and an SCP vendor implement an SOP Service Class with different operation support. For example, a C-GET request initiated from a display workstation is rejected by an archive server because the latter accepts only C-MOVE requests, even though both C-GET and C-MOVE are DICOM standard DIMSE-C operations that support the Q/R Service Class.
4. DICOM conformance mismatch between individual vendors: When an SCU vendor and an SCP vendor implement the Q/R Service Class with different information models or different support levels, a C-FIND request at the Patient level initiated from a display workstation is rejected by a CT scanner because the latter supports only the Q/R Study Root model, which does not accept query requests at the Patient level.
5. Shadow group conflict between individual vendors: When vendor A and vendor B store their proprietary data in the same shadow group, data
Ch18.qxd 2/12/04 5:17 PM Page 483
PITFALLS IN DICOM CONFORMANCE
483
previously stored in the shadow group by vendor A are overwritten by vendor B's data.
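A minimal sketch of a gateway-side check related to pitfall 2 is given below, assuming the open-source pydicom library; the list of required elements is a small illustrative subset, not the full Type 1 requirements of any particular DICOM object definition.

import pydicom

# Illustrative subset of mandatory (Type 1) attributes for an image object.
REQUIRED_TYPE1 = ["SOPClassUID", "SOPInstanceUID", "StudyInstanceUID",
                  "SeriesInstanceUID", "Modality"]

def validate_type1(dicom_path):
    """Return the Type 1 elements that are missing or empty in an image header (sketch)."""
    ds = pydicom.dcmread(dicom_path)
    problems = []
    for keyword in REQUIRED_TYPE1:
        value = ds.get(keyword, None)
        if value is None or str(value) == "":
            problems.append(keyword)  # Type 1 elements must be present and non-empty
    return problems

# Example (hypothetical file name):
# print(validate_type1("incoming/ct_slice_001.dcm"))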
18.5.2 Methods of Remedy
These pitfalls can be minimized through the implementation of two DICOM-based mechanisms, one in the image acquisition gateway and the second in the PACS controller, to provide better connectivity solutions for multivendor imaging equipment in a large-scale PACS environment, the details of which are discussed in Section 7.4.
PART IV
PACS-BASED IMAGING INFORMATICS
CHAPTER 19
PACS-Based Medical Imaging Informatics
Figure 19.0 PACS infrastructure and data flow (generic PACS components and data flow): the HIS database and reports connect through the database gateway; the imaging modalities feed the acquisition gateway, the PACS controller and archive server, the application servers, the workstations, and the Web server. The unshaded components are those also involved in the medical imaging informatics infrastructure (MIII).
Chapters 1 to 18 discuss the basics of PACS: its infrastructure, technology, and clinical utilization. After many years of PACS operation in the clinical environment, we have gradually uncovered the richness of the information residing in the PACS database. The four chapters in Part IV discuss some systematic methods for taking advantage of this wealth of data for better health care delivery, research, and education. This chapter, which presents the concept of PACS-based medical imaging informatics, forms the foundation of and serves as an introduction to Chapters 20 to 22.
19.1 MEDICAL IMAGING INFORMATICS INFRASTRUCTURE (MIII)
19.1.1 Concept of MIII
The medical imaging informatics infrastructure (MIII) is a server designed to take advantage of existing PACS resources and their image and related data for large-scale horizontal and longitudinal clinical service, research, and education applications that could not be performed before because of insufficient data. The MIII is the vehicle that facilitates this utilization of the PACS in addition to its daily clinical service.
19.1.2 MIII Architecture and Components
The MIII comprises the following components: medical images and associated data (including the PACS database), image processing tools, visualization, the graphic user interface, communication networking, data/knowledge base management, and application-oriented software. These components are logically related as shown in Figure 19.1, and their functions are summarized as follows. Figure 19.2 depicts the connection of the MIII with the PACS and other imaging-related resources.
19.1.3 PACS and Related Data
The PACS database and other related health information system databases containing patient demographic data, case histories, medical images and corresponding diagnostic reports, and laboratory test results constitute the data source in the MIII.
Figure 19.1 MIII components and their logical relationship. At the top layer are the customized software and the research, clinical service, and education application software; these rest on the MIII database and knowledge base management layer, which is supported by the image processing tools, visualization tools, and graphic user interface; security (shared with the PACS security) and the communication network underlie these components, with the PACS image and related database at the base.
Figure 19.2 (A) Connectivity between the PACS infrastructure (HIS database, database gateway, imaging modalities, acquisition gateway, PACS controller and archive server, application servers, workstations, and Web server) and the MIII server and its resources: distributed and Grid computing, image content indexing, a DICOM gateway, a Web application server, a 3-D rendering and visualization engine, and a CAD server, all serving MIII users. The DICOM gateway is for inputting images that do not belong to the in-house PACS. (B) The DICOM gateway architecture: a DICOM PC (processing DICOM) with CD-ROM, DVD-ROM, DAT tape reader, and MO drive accepts images that did not originate from the local PACS.
These data are organized and archived with standard data formats and protocols, such as DICOM for images and HL7 for text. In addition, a controlled health vocabulary can be used as the standard for medical identifiers, codes, and messages, as proposed by the American Medical Informatics Association.
19.1.4 Image Processing
Image processing software allows the image content indexing and retrieval mechanism to be set up. Its functions include segmentation, region of interest determination, texture analysis, content analysis, morphological operations, image registration, and image matching. The output from image processing can be a new image or features describing some characteristics of the image. Image processing functions can be performed automatically or interactively on image data obtained from the PACS database through an input gateway or server, and the data extracted by the image processing functions can be appended to the image data file.
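As a concrete illustration of this flow from segmentation to appended features, the sketch below thresholds a 2-D slice, labels the connected regions, and writes a few region descriptors to a side-car file next to the image. The threshold value, the feature set, and the JSON side-car convention are illustrative assumptions, not part of the MIII specification.

```python
"""Minimal sketch of an MIII image-processing step: segment regions of
interest in a 2-D slice and append the extracted features to the image
record.  The threshold and the JSON side-car are illustrative only."""

import json
import numpy as np
from scipy import ndimage

def extract_features(slice_2d, threshold):
    """Label connected regions above `threshold` and describe each one."""
    mask = slice_2d > threshold                   # simple segmentation
    labels, n_regions = ndimage.label(mask)       # connected components
    features = []
    for region_id in range(1, n_regions + 1):
        region = labels == region_id
        area = int(region.sum())                  # pixel count
        cy, cx = ndimage.center_of_mass(region)   # centroid (row, col)
        mean_val = float(slice_2d[region].mean()) # mean gray level
        features.append({"region": region_id, "area_px": area,
                         "centroid": [round(cy, 1), round(cx, 1)],
                         "mean_gray": round(mean_val, 1)})
    return features

if __name__ == "__main__":
    # Synthetic 256 x 256 slice with one bright "lesion" for demonstration.
    img = np.zeros((256, 256))
    img[100:120, 80:110] = 200.0
    feats = extract_features(img, threshold=100.0)
    # Append the derived data to the image record as a side-car file.
    with open("slice_0001_features.json", "w") as fp:
        json.dump(feats, fp, indent=2)
    print(feats)
```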
19.1.5 Database and Knowledge Base Management
The database and knowledge base management component software has several functions. First, it integrates and organizes PACS images and related data, image features and keywords extracted by image processing, and derived medical heuristic rules and guidelines into a coherent multimedia data model. Second, it supports on-line database management, content-based indexing and retrieval, formatting, and distribution for visualization and manipulation; this component can be developed on top of a commercial database engine with add-on application software. Third, data mining tools, guided by the knowledge base, can be used to mine relevant information from the databases.
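A minimal sketch of the indexing side follows, with a SQLite back end standing in for the commercial database engine mentioned above; the table layout, the example UID values, and the query are illustrative only.

```python
"""Sketch of a feature index built on top of a relational engine, as the
text suggests.  The table layout and SQLite back end are assumptions."""

import sqlite3

conn = sqlite3.connect("miii_index.db")
conn.execute("""CREATE TABLE IF NOT EXISTS image_feature (
                  study_uid TEXT, series_uid TEXT, image_uid TEXT,
                  region_area_px INTEGER, mean_gray REAL,
                  keyword TEXT)""")
# Populate with output from the image-processing step (one row per region).
conn.execute("INSERT INTO image_feature VALUES (?,?,?,?,?,?)",
             ("1.2.840.1", "1.2.840.1.1", "1.2.840.1.1.1",
              600, 187.5, "nodule"))
conn.commit()

# Content-based indexing query: find images whose extracted features fall
# inside a range of interest, combined with a report keyword.
rows = conn.execute("""SELECT image_uid, region_area_px, mean_gray
                       FROM image_feature
                       WHERE keyword = ? AND region_area_px BETWEEN ? AND ?""",
                    ("nodule", 400, 800)).fetchall()
print(rows)
```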
19.1.6 Visualization and Graphic User Interface
Visualization and graphic user interface are output components, and both are related to workstation design. Visualization includes 3-D rendering, image data fusion, and static and dynamic image display. It uses data extracted by image processing (i.e., segmentation, enhancement, and shading) for output rendering and can be performed on a standard workstation (WS) or with high-performance graphics engines: for a low-performance WS, the final visualization can be precomputed and packaged for the WS, whereas with high-performance graphics engines the rendering can be done in real time at the WS. The graphic user interface (GUI) addresses the optimization of workstation design for information retrieval and data visualization with minimal effort from the user. A well-designed GUI is essential for effective real-time visualization and image content retrieval, and it can also be used to extract additional parameters for nonstandard interactive image analysis. In Section 19.2.2.2, we describe 3-D rendering and visualization as a resource in more detail.
19.1.7 Communication Networks
Communication networks include the network hardware and communication protocols required to connect the MIII components together as well as with the PACS.
Ch19.qxd 2/12/04 5:15 PM Page 491
PACS-BASED MEDICAL IMAGING INFORMATICS
491
The MIII communication network can have one of two architectures: it can have a network of its own with a connection to the PACS networks, or it can share the communication networks with the PACS. In the former, the connection between the MIII and the PACS networks should be transparent to the users and provide the high-speed throughput needed for the MIII to request PACS images and related data and to distribute results to users' workstations. In the latter, the MIII should have a logical segment isolated from the PACS networks so that it does not interfere with daily PACS clinical functions. High-speed networks with fast Ethernet switches or connections to Internet 2 routers facilitate the data transfer rates needed when a large volume of images is required for a study (see Chapter 9).
19.1.8 Security
Security includes data authenticity, access, and integrity. Data authenticity verifies the originality of the data; access considers who can access what type of data and when; and integrity means that the data have not been altered during transmission. After PACS images have been processed, validation mechanisms are needed to ensure their integrity, accuracy, and completeness (see Chapter 16).

Once the aforementioned components are implemented in the MIII, application-oriented software can be designed and developed to integrate the necessary components for a specific clinical, research, or education application. This approach provides rapid prototyping and reduces the cost of developing each application. It is in this application-layer component that the user encounters the advantages and the power of the MIII.
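One of the validation mechanisms alluded to above, an integrity check that detects alteration of image data during transmission, can be sketched as follows; a clinical system would use the image security methods of Chapter 16 (e.g., digital signatures), and a plain SHA-256 digest is shown here only to illustrate the idea.

```python
"""Sketch of a data-integrity check: a digest of the pixel data computed
before transmission and re-checked on receipt.  Illustrative only; a
production system would sign the digest as discussed in Chapter 16."""

import hashlib

def pixel_digest(pixel_bytes: bytes) -> str:
    """Return a hex digest that travels with the image as metadata."""
    return hashlib.sha256(pixel_bytes).hexdigest()

def verify(pixel_bytes: bytes, expected_digest: str) -> bool:
    """True if the data received match the digest computed at the source."""
    return pixel_digest(pixel_bytes) == expected_digest

if __name__ == "__main__":
    original = b"\x00\x01" * 512 * 512          # stand-in for pixel data
    tag = pixel_digest(original)
    assert verify(original, tag)                # unaltered: passes
    assert not verify(original + b"\x00", tag)  # altered: fails
    print("integrity check ok")
```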
19.1.9 System Integration
System integration includes system interface software and shared data and workspace software. System interface software utilizes existing communication networks and protocols to connect all infrastructure components into an integrated imaging information system. Shared data and workspace software allocates and distributes resources, including data, storage space, and workstations, to the online users. Section 19.2 describes some resources used by the MIII.
19.2 PACS-BASED MEDICAL IMAGING INFORMATICS

19.2.1 Background
PACS originated as an image management system for improving the efficiency of radiology practice. It has evolved into a hospital-integrated system dealing with information media in many forms, including voice, text, medical records, waveforms, images, and video recordings. Integrating these various types of information requires multimedia technology: hardware platforms, information systems and databases, communication protocols, display technology, and system interfacing and integration. We have discussed most of these topics in earlier chapters. As a PACS grows in size and functionality, so does the content of its database. The richness of information within the PACS provides an opportunity for a completely new
approach in medical research and practice via the discipline of medical informatics. The following subsections present some resources in the MIII as an illustration of their connectivity and utilization. Some of these resources can be a component in the MIII, and others can be stand-alone resources with connection to the MIII.
19.2.2 Resources in the PACS-Based Medical Imaging Informatics Infrastructure

19.2.2.1 Content-Based Image Indexing  The PACS database is designed to retrieve information by artificial keys, such as patient name and hospital ID. This mode of operation is sufficient for traditional radiology operations but not adequate for large-scale research and clinical applications built on the image data. Therefore, an enhanced database is needed in the MIII that contains keywords from diagnostic reports, patient history, and imaging sequences, as well as certain features of the images, for content-based indexing of the underlying MR/CT/US/CR images in the PACS database.

Artificial indexing by the patient's name, ID, age group, disease category, and so on through a one-dimensional keyword search is a fairly simple procedure. Indexing through image content, on the other hand, is complicated, because the query first has to understand the image content, which can include abstract terms (e.g., objects of interest), derived quantitative data (e.g., the area and volume of the object of interest), and texture information (e.g., interstitial disease).

There are two basic methods that use image contents as a means for image retrieval: concept based and content based (Rasmussen, 1997). In the concept-based method, the image is associated with metadata, that is, data about the image data. The metadata can include the image type (e.g., CT, MRI, X ray), the brain and its substructures, their functions and blood supply, and so forth. Metadata describing the actual content and relationships can be free text or keywords chosen from a structured vocabulary. Indexing and retrieval methods similar to those in text-based retrieval can then be used to organize and access the images. Because, in the study of any organ, almost all images depict some sort of anatomical entity, an important component of a structured vocabulary should be anatomical terminology that is also arranged to show relationships among the anatomical structures and their functions (Brinkley et al., 1999). The concept-based method is therefore more anatomically oriented.

In content-based methods, image processing and mathematical techniques are used to analyze the actual content of the images, for example, the gray level distribution or the different shapes and relationships, and then to match these features against a query image (Tagare et al., 1995; Huang, 1999). Ideally, one would like to be able to ask for images that "look like this one." Content-based image retrieval requires extensive image processing and sophisticated mathematical techniques (Tagare et al., 1997; Pietka et al., 2003). We describe the content-based method in this section because its application is more relevant to disease diagnosis. Figure 19.3 shows the concept of image content indexing developed for brain myelination disorder research. Indexing by image content is a frontier research topic in image processing, and the rich PACS database will allow the validation of new theories and algorithms.
Figure 19.3 Image query by content and features in brain myelination disorders. R (L) FRO MTR: right (left) front magnetization transfer ratio. An easy-to-use graphic user interface allows image query by image content; the sliding bars on the left control the retrieved images shown on the right. (Courtesy of Drs. K. Soohoo and S. Wong.)
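Behind a query-by-content interface such as that of Figure 19.3, the retrieval step ultimately reduces to ranking archived cases by the distance between feature vectors. The sketch below assumes a small in-memory feature index and hypothetical feature names; it is not the algorithm used in the myelination study.

```python
"""Sketch of content-based retrieval: rank database cases by distance
between feature vectors.  Feature names and values are illustrative."""

import numpy as np

# Feature vectors previously extracted and indexed for each archived case,
# e.g., (right frontal MTR, left frontal MTR, lesion area in pixels).
database = {
    "case_001": np.array([0.42, 0.44, 350.0]),
    "case_002": np.array([0.31, 0.30, 920.0]),
    "case_003": np.array([0.40, 0.41, 400.0]),
}

def rank_by_similarity(query, k=2):
    """Return the k cases whose features are closest to the query."""
    # Normalize each feature so that large-valued features do not dominate.
    stacked = np.vstack(list(database.values()) + [query])
    scale = stacked.max(axis=0) - stacked.min(axis=0) + 1e-9
    q = query / scale
    scored = [(name, float(np.linalg.norm(vec / scale - q)))
              for name, vec in database.items()]
    return sorted(scored, key=lambda item: item[1])[:k]

print(rank_by_similarity(np.array([0.41, 0.43, 380.0])))
```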
19.2.2.2 Three-Dimensional Rendering and Visualization
3-D Rendering  PACS is designed as a data management system; it lacks the computational power for image content analysis at the image workstation or at the PACS controller. For this reason, it is necessary to allocate computation and 3-D rendering resources in the MIII infrastructure for high-performance computation should such a task be requested by MIII workstations. On completion of a given 3-D computation, the results can be distributed, with visualization capability, to other image workstations through the high-speed networks. A computation and 3-D rendering with visualization resource, as shown in Figure 19.4, is necessary in the PACS environment: the PACS database is at the left, and the computation, 3-D rendering, and visualization resource is at the top, connected to the PACS or MIII networks at 100 Mbits/s or higher.
Figure 19.4 Connectivity of a PACS with the MIII server containing a computation, 3-D rendering, and visualization resource (a graphical/imaging engine with its own database and cache) and a web server. The clinical PACS server/controller, the PACS database with its central image archive (TB) and mirrored text database, PACS workstations, and web clients connect through the PACS/MIII networks; both PACS workstations and web clients can request 3-D rendering and visualization through the MIII. DB, database; TB, terabytes.
The web server in the MIII provides a connection to the PACS. Both web clients and PACS workstations can access 3-D rendered images from the web server. To view 3-D images after rendering, we need the 3-D visualization tool.

Steps in 3-D Visualization  Volumetric visualization within the 3-D rendering and visualization component is an MIII resource. A user at a satellite site who wishes to view fused volumetric images from different imaging modalities (e.g., MRI and CT) in the PACS database from a workstation can use the visualization tool. Figure 19.5 shows the steps needed to accomplish this task through the visualization engine, which consists of a high-end graphics computer: (1) from the image workstation, the request is sent to either the PACS database or the MIII server to retrieve the volumetric image set; (2) the image set is sent to the visualization engine; (3) the visualization engine performs the necessary 3-D computation and rendering functions; and (4) after the task is completed, the results are sent back to the workstation for viewing. In step (4), the user can also communicate with the rendering and visualization resource directly for further instructions or manipulation. Let us consider two scenarios.

Scenario 1: Volumetric Visualization of Clinical Images.  Suppose the user wishes to view fused volumetric images from different imaging modalities
Figure 19.5 High-speed 3-D visualization of volumetric images using the client-server setup with a rendering and visualization engine: (1) the workstation retrieves the volumetric MR or CT image data set from the PACS controller and archive server; (2) the retrieved 3-D images are shipped to the engine; (3) the engine performs image fusion or 3-D rendering; (4) the display and graphics commands are returned to the workstation, which views the 3-D rendering results. The engine is a major resource of the MIII.
(e.g., CT with PET or MR with CT) in the PACS database, either from the PACS workstation or from a web client through the web server, and suppose that the existing PACS and workstations at the site do not support such a capability. Figures 19.4 and 19.5 illustrate the steps needed to accomplish this task through the computation resource. In this scenario, we assume that the workstation has the hardware and software capability to view 2-D and 3-D images with graphics. Figure 19.6 shows an example of CT and PET fusion images.

Scenario 2: Video/Image Conferencing with Image Database Query Support.  Now suppose that the referring physician at an image workstation requests a video conference on a patient image case with a radiologist located elsewhere. Figure 19.7 shows the data flow and demonstrates how to utilize the medical image database server and the computation resource to accomplish the task. The data flow starts with (1) establishment of a video conference between two image workstations; (2) the referring physician requests the case from the PACS database and sends the necessary queries to the MIII server; (3) the PACS database transmits the data, and the MIII server sends query instructions to the computation resource; (4) the computation resource performs the necessary 3-D rendering and sends the results back almost in real time to the PACS workstations at both sites; and (5) a real-time video conference takes place with the 3-D high-resolution image set at both workstations. Note that to accomplish this scenario, three components are necessary in addition to the PACS database: the MIII server, the computation node, and the high-speed network. However, video/image conferencing alone does not allow the manipulation of images by either site; synchronization and instantaneous manipulation of images require the teleconsultation resources described in Section 14.6.
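The final display step of Scenario 1, overlaying a pseudocolor PET slice on the corresponding gray-scale CT slice as in Figure 19.6C, can be sketched as follows, assuming the two slices have already been registered by the visualization engine; the array sizes, window settings, and alpha value are illustrative.

```python
"""Sketch of pseudocolor PET overlaid on a gray-scale CT slice.  The
stand-in arrays and display settings are illustrative assumptions; the
heavy rendering itself would run on the MIII visualization engine."""

import numpy as np
import matplotlib.pyplot as plt

ct = np.random.normal(0, 200, (128, 128))             # stand-in CT slice (HU)
pet = np.zeros((128, 128)); pet[40:60, 50:70] = 8.0   # stand-in PET uptake

fig, ax = plt.subplots()
ax.imshow(ct, cmap="gray", vmin=-400, vmax=400)        # anatomy
ax.imshow(np.ma.masked_less(pet, 2.0),                 # physiology
          cmap="hot", alpha=0.5)                       # pseudocolor overlay
ax.set_axis_off()
plt.savefig("ct_pet_fusion.png", dpi=150)
```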
19.2.2.3 Distributed Computing
Concept of Distributed Computing  The basic idea of distributed computing is that if several computers are networked together, the workload can be divided into smaller pieces for each computer to work on. In principle, when n computers are networked together, the total processing time can be reduced to 1/n of the single-computer processing time. It should be noted that this theoretical limit is
Figure 19.6 Viewing CT and PET fusion images through the visualization resource by the MIII user at a web client. Images are from a dual-gantry CT/PET scanner. (Courtesy of Dr. R. Shrestha.) (A) Left: CT image; right: PET image. (B) The PET image being fused with the CT image. (C) Pseudocolor PET image (physiology) overlaid on the black-and-white CT image (anatomy). (See color insert.)
unlikely to be achieved because of unavoidable overheads, most likely due to data communication latency. Two important factors affect the design of a distributed computing algorithm. First, processor speeds vary among computers, so it is important to implement a mechanism that balances the workload by giving faster computers more work to do; otherwise, the speed of the processing is essentially dictated by the capabilities of the slowest computers in the network. Second, data communication speed must be considered. If workstations are connected by conventional Ethernet, with a maximum data transfer rate of
Figure 19.6 (Continued)
Figure 19.7 Application in Scenario 2: teleconference with a high-resolution 3-D image set between the referring physician's image workstation and the radiologist's image workstation, via the MIII server, the computation and 3-D rendering node (which provides real-time image fusion and 3-D rendering), the PACS database (which provides patient image and text files), and the PACS/MIII high-speed networks. Refer to the text for the numerals (1)-(5).
10 Mbits/s, the slow data transfer rate will limit the application of distributed computing. Increased implementation of asynchronous transfer mode (ATM) and gigabit Ethernet technologies will widen the parameter regime in which distributed computing is applicable.

The minimum requirement for distributed computing is a networked computer system with software that can coordinate the computers in the system to work coherently on a problem. Several software implementations are available for distributed computing; an example is the Parallel Virtual Machine (PVM) system developed jointly by Oak Ridge National Laboratory, the University of Tennessee, and Emory University. It supports a variety of computer systems, including workstations by Sun Microsystems, Silicon Graphics, Hewlett-Packard, and DEC/MicroVAX, as well as IBM-compatible personal computers running the Linux operating system. After the PVM system is installed on all the computers, one can start a PVM task from any of them. Other computers can be added to or deleted from the PVM task interactively or by a software call to reconfigure the virtual machine, and any computer under the same PVM task can start new PVM processes in the others. Intercomputer communication is realized by passing messages back and forth, allowing the exchange of data among the computers in the virtual machine.

The parameter regimes in which distributed computing is applicable are both problem and computer dependent. For distributed computing to be profitable, t1, the time required to send a given amount of data between two computers across the network, should be much shorter than t2, the time needed to process the same data in the host computer. In other words, the network data communication rate (proportional to 1/t1) should be much higher than the data processing rate (proportional to 1/t2). The smaller the ratio of t1 to t2, or the higher the ratio of the two rates, the greater the advantage of distributed computing. If the ratio of the two rates is equal to or less than 1, there is no reason to use distributed computing, because too much time would be spent waiting for results to be sent across the network back to the host computer; thus, for t1 ≥ t2, it is faster to use a single computer for the calculation. Although the data communication rate can be estimated from the network type and the communication protocol, the data processing rate depends both on the computer and on the nature of the problem: for a given workstation, more complex calculations lower the data processing rate. Because the computer system and the network are usually fixed within a given environment, the data processing rate depends more on the nature of the problem. Therefore, whenever a problem is given, one can estimate the ratio of the two rates and determine whether distributed computing is worthwhile.

Figure 19.8A depicts the concept of distributed computing based on the data communication rate and the processing rate; the data were obtained with Sun SPARC LX computers and the Ethernet (10 Mbits/s) communication protocol. With the advances of ATM and gigabit Ethernet technologies in desktop computers, communication rates will make distributed computing attractive. Distributed computing is applicable only when the communication rate is above the computation rate. Immediate applications of distributed computing include image compression, unsharp masking, image enhancement, and computer-aided diagnosis, discussed later in this chapter.
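The profitability criterion above (t1 much shorter than t2, i.e., communication rate well above processing rate) can be turned into a back-of-the-envelope check; the rates used below are illustrative, not the measurements of Figure 19.8A.

```python
"""Back-of-the-envelope check of the t1/t2 criterion in the text.
The example numbers are illustrative, not measurements."""

def worthwhile(data_kbytes, comm_rate_kbs, proc_rate_kbs):
    t1 = data_kbytes / comm_rate_kbs   # time to move the data to a helper
    t2 = data_kbytes / proc_rate_kbs   # time to process it locally
    return t1 < t2                     # i.e., communication rate > processing rate

# A 2,048-KByte image block on a 10-Mbit/s Ethernet (~1,000 KB/s effective)
# versus a 2-D FFT that the local workstation runs at 150 KB/s.
print(worthwhile(2048, comm_rate_kbs=1000, proc_rate_kbs=150))   # True
# The same block on a slow 100-KB/s link is not worth distributing.
print(worthwhile(2048, comm_rate_kbs=100, proc_rate_kbs=150))    # False
```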
Figure 19.8 (A) Distributed computing in a PACS environment. The solid curve represents the computer-to-computer transmission rate (T-rate) under PVM and a TCP/IP Ethernet (10 Mbits/s) connection, as a function of data size. The dotted curve represents the data processing rate (C-rate) required to perform a 2-D FFT in a Sun SPARC-LX computer. The squares and diamonds represent the measured data points. Distributed computing is applicable only when the solid curve exceeds the dotted curve. (Courtesy of Dr. X. Zhu.) (B) Procedure in distributed computing: the master workstation requests 3-D volume image data from the PACS database and assigns 3-D blocks to each slave workstation (WS) for the computation task. Each slave WS returns its results to the master WS, which compiles all results for the task. (Courtesy of Dr. J. Wang.)
Distributed Computing in a PACS Environment  Each image workstation in a PACS, when it is not in active use, consumes only a minimum of its capacity running background processes. As the number of image workstations grows, this excess computational power can be exploited to perform value-added image processing functions on PACS images. Image processing is used extensively in the preprocessing stage, as in unsharp masking for CR, but it has not been popular in postprocessing. One reason is that preprocessing can be done quickly in the manufacturer's imaging modality hardware, which is application specific and fast; postprocessing, on the other hand, depends on the image workstation, which in general does not provide hardware image processing functions beyond simple ones such as the lookup table, zoom, and scroll. For this reason, the user at the image workstation very seldom uses time-consuming image processing functions, even though some, like unsharp masking, are effective.

The multi-workstation PACS environment and the MIII infrastructure, with many servers and workstations connected, allow the investigation of distributed computing for image processing by taking advantage of the excess computational power available from the workstations and servers. Conceptually, distributed computing will accelerate time-consuming image processing functions, with the result that users will demand image postprocessing tools that can improve medical service by providing near-real-time performance at the image workstation. In distributed computing, several networked image workstations can be used for computationally intensive image processing functions by distributing the workload among them; the image processing time can thus be reduced at a rate inversely proportional to the number of workstations used. Distributed computing requires several workstations linked together by a high-speed network, conditions that are within the realm of a PACS and MIII, given their number of workstations and the ATM and gigabit Ethernet technologies. Figure 19.8B depicts the concept of distributed computing in a PACS and MIII network environment, using a 3-D data set as an example. In Figure 19.8B, if the computation is a digital signature of the 3-D image volume for data security purposes, and if some slave workstations do not have the encryption software, the grid computing concept can be used by sending the encryption software (middleware) along with the individual 3-D block data sets to the slave workstations.
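A minimal sketch of the master/slave procedure of Figure 19.8B follows, using Python's multiprocessing pool on a single host as a stand-in for a PVM-style virtual machine; the volume size, the block split, and the per-block operation are illustrative assumptions, and slab boundary effects are ignored.

```python
"""Sketch of the master/slave block distribution of Figure 19.8B, with a
local process pool standing in for networked slave workstations."""

import numpy as np
from multiprocessing import Pool
from scipy import ndimage

def process_block(block):
    """Work done by one 'slave': here, a 3-D median filter on its block."""
    return ndimage.median_filter(block, size=3)

def distribute(volume, n_workers=4):
    """Master: split the volume into slabs, farm them out, reassemble.
    (Overlap at slab boundaries is ignored in this sketch.)"""
    slabs = np.array_split(volume, n_workers, axis=0)
    with Pool(n_workers) as pool:
        results = pool.map(process_block, slabs)
    return np.concatenate(results, axis=0)

if __name__ == "__main__":
    volume = np.random.rand(64, 128, 128)      # stand-in CT/MR volume
    out = distribute(volume)
    print(out.shape)                           # (64, 128, 128)
```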
19.2.2.4 Grid Computing
The Concept of Grid Computing  As computing technologies evolve from the familiar realms of parallel, peer-to-peer, and client-server models, grid computing represents the latest and most exciting incarnation. Grid computing inherits many basic concepts from distributed computing and networked computing. In distributed computing, a given task is performed by distributing it to several or more networked computers (see Figure 19.8B). Grid computing has one more major ingredient: the middleware that travels with the data during its distribution. Middleware can be computational resources, a software package, a security check, display functions, or even data that assist the designated computer, which may not have the necessary resources for the task. For example, if an image content
Figure 19.9 (A) The five layers of grid computing technology: applications; collective (user-level middleware); resource and connectivity (core middleware); and fabric. (B) A Data Grid configuration consisting of three sites, each with its own PACS. SJHC and HCC2 have SAN storage technology in which one partition (P1) is dedicated to the site's own PACS and a second partition (P2) is a Data Grid resource. The Data Grid with DICOM middleware configures the arrangement of the backup archive for each site: SJHC P2, HCC2 P2, and the FT backup archive at IPI share the backup and recovery responsibility for the other sites in the grid. For example, HCC2 P2 is the backup for SJHC, the IPI FT archive is the backup for HCC2, and SJHC P2 is the backup for IPI. Workstations outside the Data Grid can access the grid through DICOM Q/R for image viewing and recovery services should their own PACS archive go down (double arrows). SJHC, Saint John's Health Care; HCC2, USC Healthcare Consultation Center 2; IPI, Image Processing and Informatics Lab, USC.
indexing task is requested that requires high-powered computational algorithms, the middleware that goes with the image data can contain those algorithms. Grid computing also has more organization than distributed computing, and each grid can possess different kinds of specific resources and even data. When needed, the federation administration of grid computing can pool resources from different grids through the federation organization for a specific task.
Grid computing in medical imaging informatics applications is still in its infancy, but the MIII should consider it as a resource in planning its infrastructure.

Current Grid Technology  Grid computing is the integrated use of geographically distributed computers, networks, and storage systems to create a virtual computing system for solving large-scale, data-intensive problems in science, engineering, and commerce [Grid]. A grid is a high-performance hardware and software infrastructure providing scalable, dependable, and secure access to the distributed resources. Unlike distributed computing and cluster computing, the individual resources in grid computing maintain administrative autonomy and are allowed system heterogeneity; this aspect of grid computing guarantees scalability and vigor. The grid's resources must therefore adhere to agreed-upon standards to remain open and scalable. A formal taxonomy composed of five layers (shown in Figure 19.9A) has been created to assure this standardization:
a. Fabric layer: the lowest layer includes the physical devices or resources, e.g., computers, storage systems, networks, sensors, and instruments.
b. Connectivity layer: the layer above the fabric layer includes the communication and authentication protocols required for grid network transactions, e.g., the exchange of data between resources and the verification of the identity of users and resources.
c. Resource layer: this layer contains connectivity protocols to enable the secure initiation, resource monitoring, and control of resource-sharing operations.
d. Collective layer: the layer above the resource layer contains protocols, services, and APIs (application programming interfaces) to implement transactions among resources, e.g., resource discovery and job scheduling.
e. User application layer: this highest layer calls on all other layers for applications.

At its core, grid computing is based on an open set of standards and protocols, e.g., the Open Grid Services Architecture (OGSA) [Physiology: Anatomy]. The grid provides the user with the following types of service [Grid]:
a. Computational services support specific applications on distributed computational resources, such as supercomputers. A grid for this purpose is often called a computational grid.
b. Data services allow the sharing and management of distributed data sets. A grid for this purpose is often called a Data Grid. (In this section, we describe a Data Grid architecture for the recovery of PACS image data.)
c. Application services allow access to remote software and digital libraries and provide overall management of all running applications.
d. Knowledge services provide for the acquisition, retrieval, publication, and overall management of digital knowledge tools.

Currently, several large-scale grid projects are under way worldwide, for example, Ninf from the Tokyo Institute of Technology; Globus from ANL (Argonne National Laboratory) and the Information Sciences Institute (ISI), USC;
Gridbus from the University of Melbourne; the European DataGrid; and many others [Bermanin]. However, there has been only limited investigation of the impact of this emerging technology on biomedical imaging, with the exception of a project called "e-Diamond: a Grid-enabled federated database of annotated mammograms" [Brady]. This section describes a Data Grid concept as a first large-scale application of grid computing to medical images.

A Data Grid Architecture for Medical Imaging Archive and Recovery

Storage Area Network (SAN).  The Data Grid described in this section requires SAN technology. A current data storage trend in large-scale archiving is storage area network (SAN) technology, and PACS is no exception to this trend [11]. With this configuration, the PACS server still has a short-term storage solution in local disks containing unread patient studies; for long-term storage, however, the PACS data are stored in a SAN. The SAN is a stand-alone data storage repository with a single Internet protocol (IP) address. File management and data backup can be achieved with a combination of digital media (e.g., RAID, digital linear tape (DLT), etc.) smoothly and with total transparency to the user. In addition, the SAN can be partitioned into several different repositories, each storing different data file types. The storage manager within the SAN is configured to recognize and distribute the different clients' data files and store them in distinct and separate parts of the SAN.

Data Grid Architecture.  In this section, a design and implementation of a Data Grid for clinical image recovery is discussed. In this design, the Globus 3.0 toolkit, co-developed by ANL and ISI, USC, is used as the guide for implementing the Data Grid architecture [Globus]. The integration of the Globus toolkit with DICOM technology forms the middleware of the Data Grid, which uses SAN technology to take care of the data management as well as the transfer of clinical images in the PACS data model. This Data Grid architecture ensures the protection and recovery of medical image data, in particular PACS clinical images. PACS technology, although significantly improved over the past decade of development, still struggles with an Achilles' heel: image data backup. Available solutions remain expensive, unreliable, and time consuming to maintain. It is, unfortunately, a routine experience for large-scale PACS archive systems to go down for hours, crippling clinical operations (see Section 18.2.3).

As an example of the Data Grid architecture, consider a grid with three sites with the configuration shown in Figure 19.9B. The first site is the IPI (Image Processing and Informatics) Lab at USC, where the major resources are the PACS simulator and the DICOM fault-tolerant backup archive (see Sections 15.8 and 22.1.4); both are resources in the Data Grid. The second and third sites are Saint John's Health Care (SJHC) and the Healthcare Consultation Center II (HCC II) at USC. Both sites have a clinical PACS with a SAN archive system. A partition of each SAN, which does not handle the clinical PACS image data, is used as a backup archive resource in the Data Grid. It is important to note that the SAN partitions belonging to each of the two sites are completely independent, and the data stored in these partitions are orthogonal and separate from the clinical data partitions that are integrated with each respective clinical PACS.
A clinical workstation outside the Data Grid is able to access the grid for image viewing services. This testbed architecture is in the prototype stage of evaluating a reasonably priced yet reliable, easy-to-maintain, portable, and scalable archive system for image backup and recovery. Its success would resolve the long-standing problem of PACS archive recovery discussed in several previous chapters.
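The circular backup assignment of Figure 19.9B can be expressed as a simple lookup that a failover service might consult when a site's own archive is down; the dictionary layout and function are illustrative sketches, not part of the Globus/DICOM middleware.

```python
"""Sketch of the circular back-up assignment for the three-site Data Grid
(Figure 19.9B).  Site names follow the text; the structure is illustrative."""

# site -> grid resource holding that site's back-up copies
backup_of = {
    "SJHC": "HCC2 SAN partition P2",
    "HCC2": "IPI fault-tolerant backup archive",
    "IPI":  "SJHC SAN partition P2",
}

def recovery_source(site, local_archive_up):
    """Where a DICOM Q/R for this site's studies should be directed."""
    return f"{site} local PACS archive" if local_archive_up else backup_of[site]

print(recovery_source("SJHC", local_archive_up=False))
# -> HCC2 SAN partition P2
```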
19.3 CAD IN PACS ENVIRONMENT

19.3.1 Computer-Aided Detection and Diagnosis
The computer-aided detection or diagnosis (CAD, CADx) engine can stand alone as a PACS-based application server, or it can be organized as an MIII resource. The purpose of CAD is to use image postprocessing to derive parameters that aid the physician in making better diagnoses. Traditional CAD is done off-line, in the sense that an image set of a patient is acquired from a given modality, either through a peripheral device or a network transmission, and image processing is then performed to extract relevant parameters. These parameters are used to provide additional information pinpointing the sites of potential pathology in the image set to alert the physician; the derived parameters are not appended to the images for later retrieval. All these procedures are performed without taking advantage of the PACS resource.

An example of film-based CAD is that used in mammography. In film-based CAD, screen/film mammograms are first digitized, subsampled, and fed to a processor that contains the CAD algorithms for detection of microcalcifications, masses, and other abnormalities. Results from the CAD are superimposed on the subsampled mammogram and displayed on a workstation with regular monitor(s). A standard mammography viewer is used to display the mammographic film so that a visual comparison between the subsampled image with detected lesions and the film can be performed. This off-line CAD method is a two-step process requiring special hardware and components to accomplish the detection process; such a CAD system consists of a film digitizer, a workstation, a CAD processor, and a film mammography viewer. CAD can instead be integrated in a PACS or MIII environment by taking advantage of the resources in its storage, retrieval, communication, and display components. Figure 19.10 depicts the data flow in film-based CAD mammography.
19.3.2 Methods of Integrating CAD in PACS and MIII Environments
19.3.2.1 CAD Without PACS  CAD without PACS can use either direct digital input or film with a digitizer. In either case, CAD is a totally off-line, isolated system. Table 19.1 lists the procedure for performing CAD in such a system. The film-based CAD system for mammography shown in Figure 19.10 is an example.

19.3.2.2 CAD with DICOM PACS  Integration of CAD with DICOM PACS or MIII can take four approaches. In the first three described below, the CAD is connected directly to the PACS. The fourth approach is a CAD server that can be connected to either the PACS or the MIII, or both.
Figure 19.10 Data flow in film-based CAD mammography.

TABLE 19.1 CAD Without PACS, With or Without Digital Input
• Collect films or digital images based on the patient's record
• Digitize films or develop interface programs to read digital images
• Input images to the CAD workstation (WS)
• Run the CAD algorithm
• Return results to the CAD WS
TABLE 19.2 CAD with DICOM PACS: PACS WS Q/R, CAD WS Detect

At the PACS server
• Connect the CAD WS to the PACS
• Register the CAD WS (IP address, port number, application entity (AE) title) to receive images

At the PACS WS
• Use DICOM query/retrieve to select patient/studies/images
• Use DICOM C-GET to select images from the PACS server, which pushes the images to the CAD WS

At the CAD WS
• Develop a DICOM storage class provider (SCP) to accept images
• Perform CAD
• Develop a database to archive results
PACS WS Q/R, CAD WS Detect  In this approach, the PACS workstation (WS) queries and retrieves (Q/R) images from the PACS database, and the CAD WS performs the detection. Table 19.2 and Figure 19.11 illustrate the steps. This method involves the PACS server, the PACS WS, and the CAD WS. A DICOM C-Store function must be installed in the CAD WS.

CAD WS Q/R and Detect  In this approach, the CAD WS performs both the query/retrieve and the detection.
TABLE 19.3 CAD with DICOM PACS: CAD WS Q/R and Detect

At the PACS server
• Connect the CAD WS to the PACS
• Register the CAD WS (IP address, port number, AE title) at the PACS

From the CAD WS
• Develop a DICOM Q/R client and storage class to select/accept patient/study/images
• Perform CAD
• Develop a database to archive results
Figure 19.11 CAD with PACS: PACS WS query/retrieve, CAD WS detect.
Figure 19.12 CAD with PACS: CAD WS query/retrieve and detect.
This method involves only the PACS server and the CAD WS. The function of the PACS server is identical to that of the previous method; the difference is that the previous method uses the PACS WS to Q/R the images, whereas in this method the CAD WS performs the Q/R. For this reason, DICOM Q/R must be installed in the CAD WS. Table 19.3 and Figure 19.12 describe the steps.

PACS WS with CAD Software  The next approach is to install the CAD software in the PACS WS. This eliminates all components of the stand-alone CAD system and its connection to the PACS. Table 19.4 shows the steps involved.

Integration of CAD Server with PACS/MIII  In this method, a CAD server is developed that is connected to the PACS server.
TABLE 19.4 CAD with DICOM PACS: PACS WS with CAD Software
• Install the CAD software at the PACS WS
• PACS WS performs Q/R of patient/study/images
• Establish linkage for the CAD software to access DICOM images
• Develop a DICOM format decoder to the CAD format
• Perform CAD at the PACS WS
• Develop a CAD database to archive results
TABLE 19.5 Integration of CAD Server with PACS/MIII
• Connect the CAD server to the PACS controller and the MIII
• The CAD server performs Q/R of patient/study/images from the PACS archive
• Develop a DICOM format decoder
• Distribute images to the CAD WS
Figure 19.13 Integration of CAD server with PACS/MIII.
This server can also be attached to the MIII as a resource. The CAD server serves all CAD and MIII workstations. This concept is similar to that of the distributed and web servers described in Sections 13.3 and 13.4. Table 19.5 and Figure 19.13 describe the steps involved.
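The "develop a DICOM storage class provider (SCP) to accept images" step that appears in Tables 19.2 and 19.3 can be sketched with the open-source pynetdicom library, assuming that library is available and that the PACS has registered the CAD workstation's AE title, address, and port as in the tables; the run_cad() entry point is hypothetical.

```python
"""Sketch of a Storage SCP on the CAD WS, using pynetdicom.  The AE title,
port, and the run_cad() hook are illustrative assumptions."""

from pynetdicom import AE, evt, AllStoragePresentationContexts

def handle_store(event):
    """Called once per received image: hand the dataset to the CAD step."""
    ds = event.dataset              # pydicom Dataset with pixel data
    ds.file_meta = event.file_meta  # keep transfer-syntax information
    print("received", ds.SOPInstanceUID)
    # run_cad(ds)                   # hypothetical CAD entry point
    return 0x0000                   # DICOM success status

handlers = [(evt.EVT_C_STORE, handle_store)]
ae = AE(ae_title="CAD_WS")                        # must match PACS registration
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True, evt_handlers=handlers)
```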
19.4 SUMMARY OF MIII
MIII is an organized method of performing large-scale longitudinal and horizontal studies to advance research, education, and clinical service. It takes advantage of the rich data in the PACS and related databases and sometimes shares some of
the PACS resources. However, there are certain resources for handling large-scale databases that PACS does not have, because the original mission of PACS is to facilitate clinical service. Therefore, the MIII must develop the resources necessary for large-scale studies, including the image content indexing, 3-D rendering and visualization engine, distributed and grid computing, and CAD methodologies discussed in this chapter. Other resources may prove necessary as we advance our understanding from the current level of medical imaging informatics to the next. In Chapter 20, we discuss some large-scale medical imaging applications being carried out in the medical imaging community as illustrations of the MIII concept.
CHAPTER 20
PACS as a Decision Support Tool
In Chapter 19, we presented the PACS-based medical imaging informatics infrastructure (MIII) and its potential use for large-scale horizontal and longitudinal research, education, and clinical services. In the discussion of the MIII, we described its organization, tools, and resources, as well as emphasizing the importance of taking advantage of available PACS resources. In this chapter we discuss three applications that use the concept of the PACS-based MIII: outcome analysis of radiation therapy planning based on temporal CT images, image matching of pediatric neurological diseases with data mining from large MRI databases, and bone age assessment of children with a digital hand atlas. In the discussion, we highlight tools and resources used in the PACS-based MIII.
20.1 OUTCOME ANALYSIS OF LUNG NODULE WITH TEMPORAL CT IMAGE DATABASE

20.1.1 Background
Although the progression of lung nodules over time can be monitored qualitatively by using spiral and multislice CT scans, a qualitative result alone is not sufficient for assessing the effectiveness of ongoing therapeutic treatment. Outcome analysis using CT requires an application-specific temporal image database management system (a component in the MIII) connected to the PACS server, together with the necessary tools, including:
(1) interactive image processing to extract tumor volume from longitudinal CT scans;
(2) visualization and a graphic user interface (GUI) to qualitatively assess the number and size of the tumors; and
(3) statistical methods for outcome analysis.
Without these tools, quantitative description of the effectiveness of a treatment plan to control the disease is a formidable task. However, performing a longitudinal quantitative analysis requires access to temporal CT images with related patient records
through time with the PACS database, relevant patient information from other health care databases, and the derivation of quantitative measures from the lesions. These requirements fall into the domain of the MIII.
20.1.2 System Architecture
The design of the temporal medical image database is based on a three-tiered client/server architecture comprising the MIII database server, the PACS central archive, and the MIII client workstation, as shown in Figure 20.1. The image database server in the MIII is a centrally located core of application programs for accessing, processing, and managing chest CT images and associated textual reports from the RIS and HIS. The server requires three tools: a chest imaging database to store CT images and processed multimedia chest data, a relational database engine for data query and retrieval, and an image processing engine (Zhu, 1996). Medical images and text are retrieved from the PACS archive server through the DICOM interface. The image processing engine consists of the necessary segmentation and user interface tools for interactive editing of the outline of the segmented tumor; it also has functions for various quantitative measurements related to tumor size and shape. The CT images, extracted tumors, and their measurements are stored in the chest imaging database. The relational database engine is used for tabulation of data from the chest imaging database for outcome analysis. Because the image processing functions are standard tools available in the public domain, they are not elaborated on here; instead, we discuss the graphic user interface at the workstation.
Figure 20.1 The three-tiered client-server architecture of the temporal chest imaging database system under the realm of the PACS-based MIII. The PACS archive providing images and related data is at the bottom, the temporal chest imaging database within the MIII is in the middle, and the MIII client workstation is at the top.
20.1.3 Graphic User Interface
Figure 20.2A shows the layout of the graphic user interface. When a particular patient is selected, a predetermined spiral or multislice CT image is displayed in the left window. When a nodule is identified visually, the user initiates the image processing tools by moving the mouse pointer anywhere within the nodule and then clicking the mouse button to activate a series of image processing steps that segment out the nodule and calculate its 3-D information automatically. After the calculation is completed, a pop-up window appears, showing the nodule and allowing the user to inspect the result of the automatic segmentation, as shown in the lower right window in Figure 20.2A. If the result is satisfactory, the 3-D nodule information is stored in the chest imaging database. Otherwise, the user can interactively correct any segmentation errors as follows: a second pop-up window magnifies the image, with all pixels segmented as the nodule marked with a plus ("+") sign, and the user can include or exclude a pixel by clicking the mouse at the pixel position. The corrected nodule is then stored.

The top row in the upper window in Figure 20.2A shows the segmented nodules from different CT slices. To ensure that a nodule is not calculated twice, processed nodules turn red on the display (Fig. 20.2B). This feature is helpful for the metastasis application, because a patient with lung metastases usually has many nodules and it is difficult to keep track of which ones have been processed. The temporal image management system database can also display the characteristics of each lesion, as shown in Figure 20.2C.
20.1.4 An Example
20.1.4.1 Case History  We illustrate a case of lung metastasis to demonstrate the usefulness of the temporal database system. The patient is a woman with a family history of breast cancer. In 1989, she complained of a small lump in her breast, which was found to be benign after aspiration. Two years later (1991), she complained of two small masses, one in each breast. Excisional biopsy and pathology showed that the right mass was a breast carcinoma. She was treated with radiation therapy and chemotherapy. In 1993, multiple small lung nodules were detected in chest radiographs; biopsy of the nodules showed that they were adenocarcinoma. The patient was given various chemotherapeutic agents. Between September 1994 and March 1995, the patient was given several experimental agents. In March 1995, marked progression of the lung metastatic nodules was found on a chest CT examination, and diffuse bone metastases were also noted. After March 1995, the patient received Taxol treatment (chemotherapy) and, in June, radiation therapy (4500 cGy) to her thoracic spine. After radiation therapy, adriamycin (chemotherapy) was administered. The problem posed is how to assess the effectiveness of the treatment plan quantitatively.

20.1.4.2 Temporal Assessment  From the PACS archive server, a total of five spiral CT studies of the patient were retrieved into the temporal image database management system. The first was in August 1994, and the most recent was in November 1995. The first two studies were scanned at 5-mm collimation, and the
Figure 20.2 The MIII visualization engine in the outcome analysis of lung nodule treatment. (A) The three windows of the graphic user interface. Left: a chest CT image. Lower right: an enlarged segmented nodule; +, lesion pixel. Upper right: segmented nodules from CT scans stored in the chest image database. (B) Processed nodules in a CT image are turned red to show that a nodule has already been processed and is not processed twice. Two consecutive sections are shown. (See color insert.)
Figure 20.2 (C) Graphic user interface (GUI) showing that the user can review statistics of each processed nodule. (See color insert.)
images were reconstructed with 3-mm slice thickness. The last three studies were scanned at 7-mm collimation and were reconstructed with 7-mm slice spacing. The change of protocol between the first two and the last three studies is a result of a change of the radiologists involved in the case. Nodule segmentation and volume estimates were performed by a radiologist aided by the software tools, and the total volume and the mass center of each nodule were saved into the database. After all nodules in the five studies had been segmented, the progression of the lung disease could be tabulated by using the relational database engine and displayed graphically as in Figure 20.3. The horizontal axis plots time; the vertical axis plots both the number of nodules (scaled at left) and the total volume of the nodules in each study (scaled at right), measured in cubic millimeters. Below the horizontal axis, two lines denote the intervals during which the patient was under different treatment plans. In the first interval, the patient was taking an experimental medication; unfortunately, both nodule volume and nodule number grew with time, and the total nodule volume tripled during this experimental period. After March 1995, the patient was under a combination of chemotherapy and radiation therapy, which was effective in controlling tumor volume: the total volume of the cancer was reduced by a factor of 5 from its peak value in March 1995.
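The tabulation performed by the relational database engine, nodule count and total nodule volume per study date (the two quantities plotted in Fig. 20.3), can be sketched with a small SQL aggregation; the table name and the sample rows below are synthetic, not the patient data of this case.

```python
"""Sketch of the relational-database tabulation behind Figure 20.3.
The table layout and sample rows are illustrative, not patient data."""

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE nodule (study_date TEXT, nodule_id INTEGER,
                                     volume_mm3 REAL)""")
conn.executemany("INSERT INTO nodule VALUES (?,?,?)",
                 [("1994-08-15", 1, 310.0), ("1994-08-15", 2, 95.0),
                  ("1995-03-10", 1, 880.0), ("1995-03-10", 2, 240.0),
                  ("1995-11-02", 1, 150.0)])

# Nodule count and total volume per study date, for plotting over time.
rows = conn.execute("""SELECT study_date,
                              COUNT(*)        AS nodule_count,
                              SUM(volume_mm3) AS total_volume_mm3
                       FROM nodule
                       GROUP BY study_date
                       ORDER BY study_date""").fetchall()
for study_date, count, total in rows:
    print(study_date, count, round(total, 1))
```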
20.1.5 Temporal Image Database and MIII
The temporal image database relies on extensive searching of the PACS archive server for relevant CT examinations. This search is simple, quick, and reliable; without it, this type of outcome analysis using longitudinal images is extremely difficult to perform because of the logistics of data collection.
Figure 20.3 Temporal history of the patient nodule number and total nodule volume. March 1995 marks the change of treatment plan, showing drastic improvement of the patient in terms of nodule number as well as volume.
The image processing engine contains standard image processing functions for segmentation and interactive editing. The chest image database is designed such that the original and segmented nodules, as well as the processed data, are organized for the relational database to analyze the treatment outcome. The three major components of the temporal image database are subcomponents of the MIII infrastructure shown in Figure 19.1 and described in Section 19.1. Once the MIII is established, the majority of the building blocks of the temporal image database are readily available.
20.2 IMAGE MATCHING WITH DATA MINING
20.2.1 The Concept of Image Matching (Zhang et al., 2002; Nelson et al., 2002; Leventon et al., 2002)

Image Match is a branch of CAD. It is a system that permits a user with access to (a) a large database of a certain category of medical images with extracted parameters and (b) high-speed broadband networks, such as an Internet 2 connection, to submit an image as a query and receive in return a set of already diagnosed images from the database that are most similar and relevant. The degrees of similarity and relevance are defined in terms of the extracted parameters. An Image Matching system utilizes sophisticated image processing algorithms, combined with robust database technology and state-of-the-art web-enabled hardware, and packages these capabilities into a tightly integrated workflow environment in the framework
of the medical imaging informatics infrastructure (Huang, 1997). Later in this section we give an example of an Image Matching system based on a large MR image database of the brain.
20.2.2 Methodology
20.2.2.1 Data Collection of the Image Matching Database  Let us use a brain MR image database as an example to illustrate the method of data collection. For an image set in the database to be meaningful, several types of data in addition to the image set are required. These include:
• Relevant patient information
• The image data set in PACS DICOM format
• Corresponding radiological reports
• Histopathology reports, if available
• Any other relevant reports
When a new case is collected into the database, various preprocessing steps are performed. The images are registered to a common global coordinate system as defined by all other images in the database; this alignment step ensures the continued consistency of the database as additional images are input. The pathology in each database image is then localized and highlighted with a region of interest (ROI) in a segmentation step, to focus the matching algorithm on the visual cues corresponding to the given pathology. After the registration, alignment, and segmentation steps, a select set of features is extracted from the imagery with image processing. The features are chosen to be robust to slight misalignments and invariant to various imaging artifacts while capturing the salient information contained within the imagery and highlighting the indications of pathology. Features can include location, size, shape, texture, and denseness, among others. These features are appended to the image set, input into the database, and used as keys by the Image Match algorithm(s). Figure 20.4 shows the procedure.

20.2.2.2 Image Registration and Segmentation  The concept of image matching is measurement of the similarity of a submitted image to images already in the database. Similarity is determined by parameters extracted from each image set in the database; these parameters are characteristic of the particular application and imaging modality used, and extracting them requires advanced image processing algorithms. Measuring the similarity between the submitted image and the images in the database requires access to this large image database, rapid measurement of the predetermined parameters, and return of similar images as queried by the user. Two key organizational factors for the images in the database are registration and segmentation.
• Registration is the key aspect of the organization of the database; it ensures that only corresponding regions of images are compared during a match, saving computation time and reducing the possibility of false matches. A registered database of medical images provides tremendous information about the statistical properties of regions of the anatomy in combination with various pathologies.
Figure 20.4 Populating the database with already diagnosed images. Each new image is first registered to a global coordinate system. The seed pixel is then selected, and the region of interest (ROI) is generated. Finally, image processing is used to extract several select image parameters, which will be used to perform the image matching. The database is populated with these parameters, along with the original (unprocessed) images.
• Segmentation, or labeling of the images in the database, provides a means of focusing the attention of the algorithms on a particular region of interest. This attention mechanism eliminates outside distracters that can contribute to a false match.
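The population work flow just described (register, select an ROI, extract features, store) can be summarized in a short sketch. The code below is only a minimal illustration under stated assumptions: the function bodies (register_to_atlas, grow_roi, extract_features), the square ROI, and the in-memory list standing in for the database are placeholders of our own and not the implementation used in the system described here.

import numpy as np

def register_to_atlas(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would apply a rigid or affine registration
    # to the common global (atlas) coordinate system here.
    return image

def grow_roi(image: np.ndarray, seed: tuple) -> np.ndarray:
    # Placeholder ROI generation: a crude square region around the seed pixel.
    mask = np.zeros(image.shape, dtype=bool)
    r, c = seed
    mask[max(r - 10, 0):r + 10, max(c - 10, 0):c + 10] = True
    return mask

def extract_features(image: np.ndarray, roi: np.ndarray) -> np.ndarray:
    # A small feature vector: ROI location, size, and simple intensity statistics
    # standing in for the shape/texture features discussed in the text.
    rows, cols = np.nonzero(roi)
    return np.array([rows.mean(), cols.mean(), roi.sum(),
                     image[roi].mean(), image[roi].std()])

database = []  # each entry: feature vector plus the diagnosis and an image reference

def add_case(image, seed, diagnosis, image_id):
    registered = register_to_atlas(image)
    roi = grow_roi(registered, seed)
    database.append({"id": image_id,
                     "features": extract_features(registered, roi),
                     "diagnosis": diagnosis})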
When measuring the similarity between two images, it is of utmost importance that the images themselves be at least somewhat aligned, to ensure that anatomical structures from one image are compared with the corresponding structures in the other. However, any two medical images, in general, will not be in alignment, because they may be acquired at different times, on different subjects, or in different scanners. Accurate comparison of grossly misaligned medical images is almost impossible. Thus sophisticated registration algorithms are incorporated to roughly align all images into a common global coordinate system, in combination with comparison measures that are robust to the slight, residual misalignment that remains after the registration is performed. When a query is submitted, the query image must also be aligned to the database global coordinate system before matching is performed. Without such alignment, subtle pathologies apparent in the query image may not be matched to the correct region of a database image showing signs of a similar pathology. The same set of features previously extracted from each of the reference images in the database is extracted from the query image in a similar manner.
Figure 20.5 Matching an image (“query image”) with an unknown lesion to images stored in the database. The query image is first registered to the same global coordinate system as that used to populate the database. Seed pixel selection can be done either manually by the user or automatically. The same set of image parameters computed for each of the images in the database is then computed for the query image. These parameters are used to compute the similarity between the query image and each of the images in the database.
The features are then matched to every relevant image in the database; the match scores are compared and sorted; and the best matches are reported to the user, along with additional information, such as the diagnoses and findings associated with each matching case (Leventon et al., 2002). Figure 20.5 depicts how to submit a query image for comparison with images in the database.
20.2.2.3 An Example of MRI Brain Image Matching with Images in the Database
A well-organized medical image database can offer much useful related medical information, such as the high-probability distribution of a particular disease related to anatomical location, age, gender, country, and so on. Figures 20.6 and 20.7 show the similarity in locations of two brain diseases, subdural hematoma and meningioma, in an existing MRI brain database. For each disease, a low-probability threshold is first applied to the distribution map (a combination of the similarity parameters) of the disease in the database; the anatomical locations of the distribution map (in color) are then overlaid on the middle slice of an MR T1 image (in gray scale). In these figures, the color scale of the distribution map, from dark red to red to white-yellow, indicates an increasing probability of occurrence of the disease. In the case of subdural hematoma (Fig. 20.6), 107 cases were used. After the image matching, it is obvious that this pathology is distributed as two symmetric half-rings near the skull. For meningioma (Fig. 20.7), 380 cases were used. The tumor is mainly distributed in the middle of the frontal lobe.
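The distribution maps of Figures 20.6 and 20.7 can be thought of as per-pixel occurrence frequencies computed over the registered, segmented cases, with a low-probability threshold applied before display. The sketch below is an illustration only; the mask shapes, the 5% threshold, and the synthetic masks are assumptions of this sketch, not the values used to produce the figures.

import numpy as np

def occurrence_map(roi_masks, threshold=0.05):
    # Binary pathology masks, already registered to the atlas (one per case).
    stack = np.stack(roi_masks).astype(float)     # shape: (n_cases, height, width)
    prob = stack.mean(axis=0)                     # per-pixel occurrence frequency
    prob[prob < threshold] = 0.0                  # suppress low-probability pixels
    return prob

# Synthetic example standing in for 107 registered subdural hematoma cases;
# the resulting map would be overlaid in color on a mid-slice T1 image.
rng = np.random.default_rng(0)
masks = [rng.random((128, 128)) > 0.9 for _ in range(107)]
dist_map = occurrence_map(masks)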
Figure 20.6 Distribution of subdural hematoma in a database of 107 cases. Bright yellow, highest probability. (Courtesy of Drs. M. Leventon, M. Zhang, and L. Liu.) (See color insert.)
Figure 20.7 Distribution of meningioma in a database of 380 cases. Bright yellow, highest probability. (Courtesy of Drs. M. Leventon, M. Zhang, and L. Liu.) (See color insert.)
20.2.3 Image Matching as a Diagnostic Support Tool for Brain Diseases in Children
Let us consider another example of submitting a pediatric MRI with brain disease to match images in a pediatric MR image database.
20.2.3.1 Methods
In this example, we describe an image matching method developed in our laboratory as a diagnostic support tool for brain diseases in children. The first step
is to assemble the database of already diagnosed brain MR images. Figure 20.4 illustrates the steps for constructing the database. The Talairach brain atlas was used to define the global coordinate system (Talairach and Tournoux, 1988). As new, already diagnosed images are added to the database, the slice position of each image is determined by first segmenting the various anatomical structures of the brain, such as white matter, gray matter, and basal ganglia. An anterior-posterior, asymmetric, elliptical multiloop polar tile coordinate system is then fit to each image (Zhang et al., 2002). Finally, a mathematical model relating slice position to image tile features is used to roughly estimate the slice position of each image with respect to the global coordinate system. These registration steps ensure that a given pixel position in one image corresponds to the positions of the same anatomical feature in each of the other images in the database. In this way, a database of registered, already diagnosed images containing data for 2500 children with a variety of brain diseases was assembled. The database contains a complete MR study for each patient, along with the patient's radiology report summarizing the diagnosis. Once the images have been registered to the global coordinate system, a seed pixel within the pathology-bearing region (PBR) of each image is selected manually by an expert clinician. From the seed pixel, a region of interest (ROI) for each image is then generated automatically by the database software. Finally, feature extraction techniques are used to extract a select set of pixel-based parameters from each image in the database. These parameters include geometric and texture features, as well as Fourier coefficients. The ROI is given extra "weight" when computing this parameter set. This set of parameters is used by the image matching algorithm to compute the similarity between an image with an unknown pathology (the "query image") and each of the already diagnosed images in the database. Similarity is defined as the Euclidean distance between two sets of parameters. Note that the similarity measurements used are robust to the slight residual misalignment that remains even after the images have been registered.
20.2.3.2 Submitting an Image for Matching
Figure 20.5 illustrates the image matching process. This example is a comparison of an image with unknown pathology in a brain MR image (the figure shows an unknown lesion) with the already diagnosed images in the database. The initial steps of this process are similar to those taken when populating the database with a new image (see Fig. 20.4). The query image is first registered to the Talairach atlas. The user then marks a seed pixel in the image. From the seed pixel, the software then generates a geometric shape that represents the ROI. Next, the same set of image parameters previously extracted from each of the already diagnosed images in the database is computed. This parameter set is then used to compute the similarity between the query image and each of the images in the database. The best matches are then retrieved and presented to the user. Figure 20.8 shows part of the graphical user interface (GUI) used to perform image matching in this study. The user submits one of his/her own brain MR images as a query image. The user then manually selects a seed pixel, and the software generates an ROI (open circle).
The program then searches the already diagnosed image database for images that contain a pathology that best matches the ROI of the query image. The retrieved results are shown in the bottom panel in the figure.
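Because similarity is defined as the Euclidean distance between parameter sets, the retrieval step reduces to a nearest-neighbor search over the feature database. The sketch below assumes a feature database built as in the earlier sketch (Section 20.2.2.1) and returns the best k matches with their diagnoses; it is an illustration only, not the deployed server software.

import numpy as np

def match_query(query_features: np.ndarray, database: list, k: int = 12):
    # Rank every database case by Euclidean distance to the query's features.
    scored = []
    for case in database:
        distance = float(np.linalg.norm(query_features - case["features"]))
        scored.append((distance, case))
    scored.sort(key=lambda pair: pair[0])          # smallest distance = best match
    return [{"rank": i + 1,
             "distance": d,
             "diagnosis": c["diagnosis"],
             "id": c["id"]}
            for i, (d, c) in enumerate(scored[:k])]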
Figure 20.8 GUI for matching a query image (top left) with already diagnosed images in the database. The query image in the figure is a brain MR scan of a child with an ependymoma. For the query shown, the user selected a seed pixel, and the program generated the region of interest (ROI) indicated by the open circle. The bottom panel shows the 4 images from the database that best match the query image. A close-up of the image that best matches the query image (rank 1) is also shown (top right). (Courtesy of Dr. J. Nielsen.)
20.2.3.3 Evaluation
To evaluate the performance of this image matching method as a diagnostic support tool, an expert pediatric neuroradiologist selected brain MR studies from seven pediatric patients at Childrens Hospital Los Angeles (CHLA). These cases represent a variety of brain lesions, with radiographic appearances ranging from localized lesions to more distributed white- and gray-matter diseases. For each patient, the neuroradiologist first consulted the patient's pathology report describing the known diagnosis. The neuroradiologist then selected the image from the full set of images of the patient that best represented the pathology, as well as a seed pixel within the PBR. This image served as the "unknown" query image. Image matching was then performed as described in Figures 20.5 and 20.8. In this
evaluation study, the database server was located at a remote site (MD Online, Lexington, MA). The interaction with this server was done using a PC at CHLA with software for accessing the server over the Internet. The images in the server database that best matched the query image were retrieved from the already diagnosed database, and the diagnostic results corresponding to each image in the database were recorded. In addition, each retrieved case was given a rank according to degree of similarity with the query image. No information other than the MR images was used to perform the image matching, that is, no additional patient information (such as DICOM header information) was used. After the search results were received, the retrieved diagnoses corresponding to the best 12 matches were then compared with the known diagnosis from each patient’s pathology report. Note that this evaluation is a strict test of the diagnostic content of the retrieved results and does not involve a subjective measurement by the user of the apparent similarity between the query image and the retrieved images. The test procedure is summarized in Figure 20.9. The results of using image matching as a support tool for diagnosing pediatric brain diseases are shown in Table 20.1. In each case, the total search time was less than 2 s. We see from Table 20.1 that in five of the seven query cases the correct diagnosis was indeed retrieved from the database. Furthermore, among these five cases, between 10% and 60% of the search results either produced the known, true
Figure 20.9 Overview of the evaluation process. The image that best represents the disease is first selected from the MR image set being studied. This image becomes the input (query image) into the image-matching process (as described in Fig. 20.5). The retrieved diagnoses are compared with the known diagnosis from the patient’s pathology report.
TABLE 20.1 Summary of Test Results

Patient   MR Sequence Type   Disease                          # Matches Evaluated   % Correct
A         T2                 Ependymoma                       12                    25
B         T2                 Neurofibromatosis                10                    60
C         T2                 Diffuse cerebral malformation    12                    0
D         T2                 Arachnoid cyst                   12                    10
E         T1                 Heterotopic gray matter          12                    50
F         PD                 Arterial venous malformation     8                     0
G         T2                 Heterotopic gray matter          12                    50

The total search time in each case was less than 2 s. The table lists the results for each of the 7 cases tested (A–G). For each patient, the table lists the MR sequence used, the disease, the number of cases that were evaluated by the expert neuroradiologist, and the percentage (to nearest 5%) of the retrieved results that contained a relevant diagnosis for each disease. For each patient, the top 12 retrieved matches were evaluated. For patients B and F, only 10 and 8 matches were returned by the database software, respectively. For example, for patient A, 3 of the top 12 matches (or 25%) contained either (i) the correct (known) diagnosis or (ii) a diagnosis that was judged to be a possible alternate (differential) diagnosis.
diagnosis or diagnoses that were judged by the expert neuroradiologist who performed this evaluation to represent possible alternative (differential) diagnoses for each disease. In other words, up to 60% of the match results contain relevant and potentially useful diagnostic information. Furthermore, it is reassuring to note that the accuracy of the retrieved results for patients E and G, who both suffer from heterotopic gray matter, is comparable, even when tested on different MR sequences (T1 vs. T2). For patients C and F, who exhibited diffuse cerebral malformation and arterial venous malformation, respectively, the retrieved diagnoses were not considered to represent possible alternative diagnoses for the disease in question.
20.2.4 Summary of Image Matching
Image matching is a new kind of diagnostic support tool. The method allows the user to quickly search a large image database designed to provide diagnostic support for pediatric brain diseases. In light of this evaluation of the method for the diagnosis of pediatric brain diseases, it can be concluded that image matching can be effective in assisting the radiologist in surveying possible differential diagnoses for a range of diseases. Furthermore, these results suggest that the accuracy of the search results may vary for different diseases. It should be emphasized, however, that the poor performance for cases C and F in Table 20.1 does not necessarily mean that the retrieved images in these two cases appear different (as viewed by a human observer) from the query images, because the evaluation scored the retrieved results only on the basis of their diagnostic content, not on appearance. We believe the image matching performance for such malformations will improve as more cases are added to the database of already diagnosed images. Image matching as a diagnostic support tool is most useful for diseases that are difficult to diagnose and for which many differential diagnoses exist, including diseases in other regions such as the liver and lung. Also, this image matching method could be
a useful tool for general radiologists and for radiologists working in smaller hospitals with few colleagues to consult with. In addition to performing image matching, there is another interesting and useful way to use the large database technology. As described above, all already diagnosed images in the database are registered to the Talairach brain atlas. This means that it is possible for the user to select an ROI in a three-dimensional (3-D) model of the brain and then retrieve the cases from the database for which the PBR(s) overlaps with the user-selected ROI. This, in essence, is image content indexing as discussed in Section 19.2.2.1. Figure 20.10 shows a GUI that can be used for selecting a single point in a 3-D brain volume. Once the ROI is selected, the user can search the already diagnosed database and retrieve the cases that contain a pathology relating to the selected ROI (not shown). As such, this becomes a powerful tool for brain disease education for students and residents, as well as a research tool for exploring the statistical properties of brain diseases.
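The atlas-ROI query just described is essentially an overlap test between the user-selected region and each case's registered pathology-bearing region. The sketch below is a minimal illustration; the boolean 3-D masks and the dictionary fields ('pbr', 'diagnosis', 'id') are assumptions of this sketch rather than the actual database layout.

import numpy as np

def cases_overlapping_roi(user_roi: np.ndarray, cases: list, min_overlap: int = 1):
    # Return cases whose pathology-bearing region (PBR) mask, in atlas
    # coordinates, overlaps the user-selected ROI, largest overlap first.
    hits = []
    for case in cases:
        overlap = int(np.logical_and(user_roi, case["pbr"]).sum())
        if overlap >= min_overlap:
            hits.append((overlap, case["diagnosis"], case["id"]))
    hits.sort(reverse=True)
    return hits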
Figure 20.10 A GUI for selecting an ROI in a 3-D brain atlas. The crosshair in the axial view indicates the user-selected point. A 3-D view is shown in the top right panel, along with the 2-D axial, coronal, and sagittal planes containing the user-selected point. (Courtesy of Dr. J. Nielsen.) (See color insert.)
This image matching method is a general technique for information retrieval from medical image databases. Diagnostic support is only one of several possible applications of this technology. For example, an intelligent search method could enable "data mining" or "knowledge discovery" tools based on large medical image databases. This, in turn, holds the promise of tapping into an enormous reservoir of "hidden" knowledge in the large medical databases that already exist today.
20.3 BONE AGE ASSESSMENT WITH A DIGITAL HAND ATLAS
In this section we present a CAD method for assessing the bone age of children from a digital hand atlas. This is a large-scale horizontal and longitudinal study: horizontal in the sense that it spans different ethnic origins and both sexes, and longitudinal because it is also temporal. We present it in detail as it relates to imaging informatics methodology but leave the image processing techniques to interested readers, who may consult the current literature cited in the reference list. This ongoing project has been developed over the past 7 years in our laboratory.
20.3.1 Why Digital Hand Atlas?
Bone age assessment is a procedure frequently performed on pediatric patients to evaluate their growth. It is an important procedure in the diagnosis and management of endocrine disorders, the diagnostic evaluation of metabolic and growth abnormalities, and deceleration of maturation in a variety of syndromes, malformations, and bone dysplasias. It is also used in consultations for planning orthopedic procedures. The most common clinical method of bone age assessment is atlas matching, in which a left-hand wrist radiograph is compared against a small reference set of atlas patterns of normal standards developed by Greulich and Pyle in the 1950s from data collected in the first half of the twentieth century. (The Tanner–Whitehouse method is a second method that requires extensive measurement and comparison of 20 bones against 9 possible stages of development. It is tedious and unsuitable for a clinical environment and thus rarely used.) Unfortunately, the Greulich and Pyle reference radiographs do not reflect skeletal development in today's children and adolescents of European, African, Hispanic, or Asian descent. Increasing racial diversity in the United States and other countries, as well as the changing nutritional and behavioral habits of children, necessitates reevaluation of the use of these skeletal age standards. The digital atlas method collects a large standard set of normal hand and wrist digital images of children of four diverse ethnic groups. It comprises 1120 reference images with computer-extracted bone objects and quantitative features, and it removes the disadvantages of the out-of-date, ethnically constrained Greulich–Pyle atlas and the tediousness of the Tanner–Whitehouse scoring method. These images were collected from normal children, evenly distributed in age from 3 months to 18 years, male and female, of European, African, Hispanic, and Asian descent. A web-based CAD system for bone age assessment is implemented in which quantitative features similar to those in the atlas are extracted on a web server and then compared with patterns from the digital atlas database, along with computer-generated scores, to assess bone age.
The advantage of the digital hand atlas with its fully integrated web-based server, as compared with the classic method, is that it can provide quantitative, consistent, and reliable bone age assessment that can be performed anywhere, anytime. The user interface is simple to use and resembles current bone age assessment procedures. Two immediate benefits are improved real-time clinical bone age assessment and a research tool that allows ongoing evaluation of the traditional gold standard techniques for bone age assessment. These advantages, combined with higher accuracy and fast turnaround time in diagnosis, should gain acceptance from both pediatric radiologists and other clinicians as a replacement for the traditional method. This section describes the imaging informatics technical components of the digital atlas in detail.
20.3.2 PACS-Based and MIII-Driven CAD of Bone Age Assessment
PACS permits diagnostic images to be acquired, stored, and distributed over a computer network, and local and remote data access has become an everyday clinical routine. Workstations designed by various vendors are dedicated to certain imaging modalities. Large-scale PACS and extended image databases have created an opportunity to support new applications. A link to Hospital Information Systems (HIS), Radiological Information Systems (RIS), and/or PACS allows the diagnostic reports to be stored with the related images and the radiological findings to be extracted manually or automatically. The large volume of available data makes the development of new methodologies possible, and a new area of CAD in the clinical, research, and educational fields has emerged. Methodologies supporting image diagnosis have been developed. Computer-aided diagnosis may be performed at three levels. The first level enhances certain radiological findings, segments out predefined anatomical regions, and suppresses one region while enhancing another. The second delivers quantitative parameters or measures extracted automatically or semiautomatically. The third performs a decision-making process and is able to point out certain abnormalities. These levels are optional, and not all of them have to be included in every application. In this section the integration of a CAD with a PACS is discussed in the implementation of a CAD for bone age assessment (CAD-BAA).
20.3.3 Bone Age Assessment with Digital Hand Wrist Radiograph
Bone age assessment is based on the analysis of the ossification centers in the carpal bones and in the epiphyses of the tubular bones, including the distal, middle, and proximal phalanges, radius, and ulna (Fig. 20.11). Epiphyses normally ossify after birth. As development progresses, bony penetration advances from the initial center of ossification in all directions (Fig. 20.12a) and continues until the edges of the metaphyses are reached (Fig. 20.12b). The gap between the shaft and the ossification center diminishes progressively (Fig. 20.12c, d) until it disappears, when the epiphysis and metaphysis fuse into one adult bone (Fig. 20.12e). The carpal bones are another source of information. In the early stage they appear as dense pinpoints on a radiograph. As they develop, they increase in size until they finally reach their full size and characteristic shape. Medical studies
Figure 20.11 Epiphyseal regions superimposed over a hand radiograph. These 6 regions of interest are used to assess the bone age of the subject from the radiograph (see also Fig. 20.15).
Figure 20.12 Epiphyseal ROIs at various stages of development. These regions are used to extract parameters for bone age assessment (see also Fig. 20.13).
(Johnston and Jahina, 1965) indicate that, because of the nature of the maturation of the carpal bones, their analysis does not provide accurate and significant information for patients older than 9–12 years of age. In this stage of development the phalangeal analysis yields more reliable results, because of the principle "the more distal the better" (Kirks, 1984). The analysis of the carpal region is therefore not included in the current study. This study describes the three steps necessary for clinical implementation in the hospital-wide information system of a PACS-based and MIII-driven CAD. They
are image analysis (Section 20.3.4), database structure (Section 20.3.5), and the integration of the CAD with PACS (Section 20.3.6). The image analysis component is generally called the CAD server.
20.3.4 Image Analysis
Image analysis is performed on a left-hand wrist radiograph in a standard upright position. It includes three phases: image preprocessing, region of interest (ROI) extraction, and feature extraction. The analysis starts with background subtraction based on a histogram analysis. The hand object is then extracted and subjected to a procedure that locates the axes of the second, third, and fourth phalanges. A detailed description can be found in Pietka et al. (2001). The selection of regions of interest is based on medical knowledge of the changes in anatomical structures during the developmental process. An overview of the medically accepted diagnostic methods (Greulich and Pyle, 1971; Tanner and Whitehouse, 1975) indicates that the epi-metaphyseal regions of interest appear to be the most sensitive areas reflecting the developmental stage. Thus, along the epiphyseal axes, six regions of interest (Fig. 20.11) are extracted (Pietka et al., 2001). For the image analysis, the process of skeletal development is divided into two phases: an early and a later stage of development. At the early stage of skeletal development (Fig. 20.12a, b) the epiphyses are separated from the metaphyses. In the process of development they change from a disk shape with concave borders to full size, when the epiphyses cap the metaphyses. In this phase, features describing the size and shape of the epiphysis have high discrimination power. In the later stage, the gap between epiphysis and metaphysis starts to disappear (Fig. 20.12c) and fusion begins (Fig. 20.12d, e). It continues until fusion is completed and one adult bone is formed. In this phase the stage of fusion is estimated. Because of the very different appearance of the epiphyses in each phase, different image processing methods are employed. In the early stage of development, the regions of interest are subjected to an image segmentation procedure that separates the epiphyses from the metaphyses. A wavelet analysis is used when the epiphyseal fusion has started.
20.3.4.1 Segmentation Procedure
The goal of the segmentation procedure is to separate the epiphyses and metaphyses from the soft tissue. The segmentation procedure consists of two steps: (1) preliminary clustering using the c-means algorithm and (2) segmentation by means of Gibbs random fields and estimation of the intensity function. The results are 11 parameters, F1, . . . , F11, shown below. The symbols are defined in Figure 20.13. The details of the derivation and methods are given in Pietka et al. (2003).
F1 = dh_epi / d_meta
F2 = d_meta / dist_m_l
F3 = dist_m_e / dh_epi
F4 = (area1 + area6) / (dnv3 * d_meta)
F5 = (area3 + area4) / (dnv3 * d_meta)
F6 = (area1 + area6) / [(dnv1 + dnv5) * d_meta]
F7 = epi_area / (dh_epi * d_meta)
F8 = (dh_epi * dist_m_e) / (dist_m_l * d_meta)
F9 = (μbony − μsoft) / (μsoft + μbony)   (normalized contrast, for an epiphyseal subregion)
F10 = (dh_epi * dist_m_e) / (dist_m_l * d_meta)
F11 = (μbony − μsoft) / (μsoft + μbony)   (normalized contrast, for the whole region of interest)

where μbony, μsoft, and μbcg denote the mean gray levels of the bony tissue, the soft tissue, and the background, respectively.

Figure 20.13 Values defined by symbols are extracted from the epiphyseal region. The values are used to compute the parameters F1, . . . , F11 given in Section 20.3.4.1. Symbols: epiphyseal and metaphyseal diameters (dh_epi, d_meta), vertical diameters (dnv1 . . . dnv5), distance between a metaphysis and diaphysis (dist_m_l), distance between a metaphysis and epiphysis (dist_m_e), epiphyseal area (epi_area), areas of epiphyseal sectors (area1 . . . area6).
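Once the measurements of Figure 20.13 are available, the parameters reduce to a few lines of arithmetic. The sketch below is an illustration only: the measurement dictionary, the mean gray levels used for the contrast features, and the form of F8 and F10 (which read identically in this reproduction) are assumptions; Pietka et al. (2003) should be consulted for the exact definitions.

def epiphyseal_features(m):
    # m: measurements of Fig. 20.13 (dh_epi, d_meta, dist_m_l, dist_m_e,
    # dnv1..dnv5, epi_area, area1..area6) plus mean gray levels for the
    # contrast features (mu_bony_*, mu_soft_*).
    F = {}
    F["F1"] = m["dh_epi"] / m["d_meta"]
    F["F2"] = m["d_meta"] / m["dist_m_l"]
    F["F3"] = m["dist_m_e"] / m["dh_epi"]
    F["F4"] = (m["area1"] + m["area6"]) / (m["dnv3"] * m["d_meta"])
    F["F5"] = (m["area3"] + m["area4"]) / (m["dnv3"] * m["d_meta"])
    F["F6"] = (m["area1"] + m["area6"]) / ((m["dnv1"] + m["dnv5"]) * m["d_meta"])
    F["F7"] = m["epi_area"] / (m["dh_epi"] * m["d_meta"])
    F["F8"] = (m["dh_epi"] * m["dist_m_e"]) / (m["dist_m_l"] * m["d_meta"])
    F["F10"] = (m["dh_epi"] * m["dist_m_e"]) / (m["dist_m_l"] * m["d_meta"])  # as printed, same form as F8

    def contrast(mu_bony, mu_soft):
        return (mu_bony - mu_soft) / (mu_soft + mu_bony)   # normalized contrast

    F["F9"] = contrast(m["mu_bony_sub"], m["mu_soft_sub"])   # epiphyseal subregion
    F["F11"] = contrast(m["mu_bony_roi"], m["mu_soft_roi"])  # whole region of interest
    return F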
20.3.4.2 Wavelet Decomposition
At the age of 10 years in girls and 12 years in boys, the epiphyses and metaphyses start to fuse, until the gap between them is completely filled in. Features measuring the size of bones are no longer pertinent at this developmental stage. To assess the stage of fusion, a wavelet decomposition is used. The decomposition is achieved by convolving the image with low- and high-pass wavelet filters along the rows and columns (Mallat, 1992). It yields an approximation component and three detail components, vertical, horizontal, and diagonal, which can be used for edge detection as shown in Figure 20.14. It can be noted that the advancement of epiphyseal fusion suppresses the horizontal edges of bones. Thus the calculation of features for bone age assessment in older children is based on the horizontal component of the wavelet decomposition.
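The decomposition itself is available in standard software. The sketch below uses the open-source PyWavelets package purely for illustration; the wavelet family ('db2'), the two decomposition levels, and the horizontal-detail energy summary are assumptions of this sketch, not necessarily the implementation used in this study.

import numpy as np
import pywt  # PyWavelets, used here only for illustration

def horizontal_edge_features(roi: np.ndarray, wavelet: str = "db2", levels: int = 2):
    # Two-level 2-D wavelet decomposition of an epi-metaphyseal ROI.
    # coeffs = [approximation, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    coeffs = pywt.wavedec2(roi.astype(float), wavelet, level=levels)
    energies = []
    for cH, cV, cD in coeffs[1:]:
        # As fusion advances, horizontal edges, and hence this energy, decrease.
        energies.append(float(np.sum(cH ** 2)))
    return energies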
Figure 20.14 ROI subjected to wavelet decomposition. Edges are very well defined after the decomposition. /a/ region extracted from a hand image; /b/ subregion subjected to the wavelet decomposition; /c/ components of the first level of decomposition; /d/ components of the second level of decomposition; /e/ wavelet angles and extracted edges. Apr: after wavelet processing.
Figure 20.15 Scheme of fuzzy system for classification of bone age. Single line arrows represent crisp values; double line arrows represent fuzzy values. The age is evaluated for every ROI independently; then the results are aggregated into a final assessment. (See Fig. 20.16 for an example.)
20.3.4.3 Fuzzy Classifier
The fuzzy system consists of six subsystems (Fig. 20.15), one for each of the six ROIs. Each subsystem includes two classifiers: one, for younger children, uses the shape and size features (Section 20.3.4.1), and the other, for older children, applies the wavelet features (Section 20.3.4.2). When both stages of development overlap and all features are available, the outputs of both subsystems are averaged.
A Mamdani fuzzy reasoning system (Mamdani, 1974) is used as the classifier. The knowledge is represented as a set of if-then rules linking the features to the skeletal age. The conjunction of input variables and the implication are both interpreted with the minimum operator, and the aggregation of rules is performed with the maximum operator. Quantization of the input and output spaces is based on years of age, for the sake of a simple, natural linguistic interpretation. Because the discrimination power of a feature may differ between age groups, the domains of the input variables are divided into fuzzy sets describing feature values corresponding to one or several consecutive age groups. The fuzzy sets of the output variable (referred to as the bone age) represent classes corresponding to years of age. The bone age is assessed in two stages (Fig. 20.16). First, the age is assessed independently for each ROI; this yields fuzzy sets. The results are then combined into
Figure 20.16 An example of the bone age evaluation. Six functions show the intermediate results for each ROI individually, the bottom function presents the fuzzy assessment of bone age along with a “defuzzyfied” value. The chronological age of the healthy subject is 12.38 years, whereas the defuzzyfied age referred to as the assessed bone age is 12.45 years. Shape of fuzzy sets reveals that for some ROIs (IV and II middle) the evaluation of the age is more precise than for others. The very low activation level of the IV distal ROI is caused by features yielding inconsistent hints to age classification. The match of the chronological and the bone age is seldom so close.
the fuzzy verdict of the system. Finally, the fuzzy output is "defuzzyfied" with a center-of-gravity method in order to give an exact classification of the bone age. The system displays both the exact age and the underlying fuzzy set, so a user may compare the precision and fuzziness of the assessment. At the final aggregation step the classifier must deal with a changing number of inputs and must tolerate partially erroneous features. The former happens when an error in the processing of an ROI is automatically detected and the features are not extracted; the applied aggregation procedure is able to adapt to the number of actual inputs. The latter problem is due to minor errors in the processing of an ROI that have passed undetected; it can be solved by applying an aggregation method that extracts a common kernel of the age assessment for all ROIs. Figure 20.16 shows an example of using the fuzzy classifier to determine the bone age of a subject to be 12.45 years.
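A Mamdani system of this kind (minimum for the rule firing and implication, maximum for aggregation, center of gravity for defuzzification) can be written in a few lines. The sketch below is a toy illustration under stated assumptions: the triangular membership functions, the single example rule, the 10–18 year output universe, and the point-wise mean used for the final aggregation across ROIs are all placeholders, not the rule base or the kernel-extraction aggregation of the actual classifier.

import numpy as np

AGES = np.linspace(10.0, 18.0, 161)   # output universe in years (illustrative range)

def tri(x, a, b, c):
    # Triangular membership function with peak b and support [a, c].
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def mamdani_age(rules, feature_values):
    # Each rule: ({feature_name: (a, b, c)}, (age_a, age_b, age_c)).
    # Antecedents are combined with min, the implication clips the consequent
    # with min, and the rules are aggregated with max.
    aggregated = np.zeros_like(AGES)
    for antecedent, consequent in rules:
        firing = min(tri(feature_values[name], *abc) for name, abc in antecedent.items())
        aggregated = np.maximum(aggregated, np.minimum(firing, tri(AGES, *consequent)))
    return aggregated

def defuzzify(fuzzy_age):
    # Center-of-gravity defuzzification of the aggregated output set.
    return float(np.sum(AGES * fuzzy_age) / np.sum(fuzzy_age)) if fuzzy_age.any() else float("nan")

def combine_rois(roi_outputs):
    # Stand-in for the final aggregation across the six ROIs.
    return np.mean(np.stack(roi_outputs), axis=0)

# Toy example: one rule for one ROI, "if F1 is medium then the age is about 12 years".
rules_roi = [({"F1": (0.6, 0.8, 1.0)}, (11.0, 12.0, 13.0))]
bone_age = defuzzify(combine_rois([mamdani_age(rules_roi, {"F1": 0.75})]))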
20.3.5 Image Database
The analysis is performed on left-hand wrist radiographs selected from a normal population and organized into eight groups as defined above. On the basis of the preliminary result, for prepubertal children (0–9 years old) 5 images per age group are collected, whereas for children during puberty (10–18 years old) 10 images per age group are collected. This gives 135 images per group and a total of 540 images. These images have been acquired at the Childrens Hospital Los Angeles/USC. The radiographic procedure for taking a hand X-ray image is as follows:
• A certified radiology technologist is designated to obtain all hand/wrist x-rays.
• After removing all rings, bracelets, and watches, each subject is positioned with the left hand/wrist in the PA (posterior-anterior) position. The middle finger is aligned with the radius. The arm is even with the shoulder, with the elbow flexed at 90 degrees. The hand/wrist is aligned with the long axis of the cassette with the fingers slightly spread. The full length of each finger and the palm are flat and touching the radiographic table.
• A dedicated 10 × 12 Kodak Lanex cassette with Ektascan-B RA single-emulsion film (for conventional X-rays) or a 10 × 12 imaging plate (for CR) is utilized for each image.
• The technique used for all subjects is 48–57 kVp (kilovolt peak) and 1.2–1.6 mAs (milliampere-seconds).
All relevant patient information, including the chronological age, date of birth, date of examination, race, sex, height, weight, trunk, and Tanner maturity index, is stored in the database. Radiographs selected from the normal population are subjected to the image analysis. This results in a set of features (described in the previous sections) that are stored in an SQL database. Separate structural objects are created for the feature sets of the early and later stages of development and for the data that permit the region of interest to be viewed. SQL queries also retrieve images from the database. Database management provides the integration of the image data, patient information, and radiological findings. New patient records may be added, or existing
TABLE 20.2 Digital Hand Atlas: African American Female and Male Groups

Race  Sex  Group ID  Age Group  Images Required  Images Collected  Images Not Used  Images Missing
Blk   F    BLKF00    00         0                3                                  0
Blk   F    BLKF01    01         5                5                                  0
Blk   F    BLKF02    02         5                5                                  0
Blk   F    BLKF03    03         5                5                                  0
Blk   F    BLKF04    04         5                5                                  0
Blk   F    BLKF05    05         5                5                                  0
Blk   F    BLKF06    06         5                5                                  0
Blk   F    BLKF07    07         5                5                                  0
Blk   F    BLKF08    08         5                5                                  0
Blk   F    BLKF09    09         5                5                                  0
Blk   F    BLKF10    10         10               10                                 0
Blk   F    BLKF11    11         10               10                                 0
Blk   F    BLKF12    12         10               10                                 0
Blk   F    BLKF13    13         10               10                                 0
Blk   F    BLKF14    14         10               10                                 0
Blk   F    BLKF15    15         10               11                                 0
Blk   F    BLKF16    16         10               10                                 0
Blk   F    BLKF17    17         10               10                                 0
Blk   F    BLKF18    18         10               10                                 0
                     sum        135              139              0                 0
Blk   M    BLKM00    00         0                4                                  0
Blk   M    BLKM01    01         5                5                                  0
Blk   M    BLKM02    02         5                5                                  0
Blk   M    BLKM03    03         5                5                                  0
Blk   M    BLKM04    04         5                6                                  0
Blk   M    BLKM05    05         5                5                                  0
Blk   M    BLKM06    06         5                5                                  0
Blk   M    BLKM07    07         5                5                                  0
Blk   M    BLKM08    08         5                5                                  0
Blk   M    BLKM09    09         5                5                                  0
Blk   M    BLKM10    10         10               11                                 0
Blk   M    BLKM11    11         10               10                                 0
Blk   M    BLKM12    12         10               10                                 0
Blk   M    BLKM13    13         10               11                                 0
Blk   M    BLKM14    14         10               10                                 0
Blk   M    BLKM15    15         10               11                                 0
Blk   M    BLKM16    16         10               11                                 0
Blk   M    BLKM17    17         10               10                                 0
Blk   M    BLKM18    18         10               10                                 0
                     sum        135              144              0                 0
Figure 20.17 Data flow for web-based bone age assessment, starting from the left, where a clinical hand image is to be assessed.
records may be deleted or corrected. Relationships among the medical records permit verification of the bony structures. The database is a source of information for a graphic user interface that displays the classification results or views the data describing a particular stage of development. Changes in the database result in retraining of the classifier. Table 20.2 shows the African American female and male groups among the eight groups (Caucasian, African American, Hispanic, and Asian; male and female).
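The layout of such a database can be illustrated with a simplified relational schema. The sketch below uses SQLite and a two-table design that is our own simplification for illustration; the production system described in Section 20.3.6 uses an Oracle9i database, and the actual schema is not reproduced here.

import sqlite3

conn = sqlite3.connect("hand_atlas_demo.db")   # illustrative file name
conn.executescript("""
CREATE TABLE IF NOT EXISTS subject (
    subject_id        INTEGER PRIMARY KEY AUTOINCREMENT,
    group_id          TEXT,     -- e.g., BLKM12 (race, sex, age group)
    race              TEXT,     -- Caucasian, African American, Hispanic, Asian
    sex               TEXT,     -- F or M
    age_group         INTEGER,  -- 0..18 years
    chronological_age REAL,
    tanner_index      TEXT,
    image_path        TEXT
);
CREATE TABLE IF NOT EXISTS feature (
    subject_id INTEGER REFERENCES subject(subject_id),
    roi_name   TEXT,            -- e.g., 'distal III'
    stage      TEXT,            -- 'early' (size/shape) or 'late' (wavelet)
    name       TEXT,            -- F1..F11 or a wavelet feature
    value      REAL
);
""")

# Example query: all F1 values for the 12-year-old African American male group.
rows = conn.execute("""
    SELECT s.subject_id, f.value
    FROM subject s JOIN feature f ON f.subject_id = s.subject_id
    WHERE s.group_id = 'BLKM12' AND f.name = 'F1'
""").fetchall()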
20.3.6 Integration with Clinical PACS
20.3.6.1 Web-Enabled Hand Atlas Database
Figure 20.17 shows a web-enabled hand atlas database attached to the CAD server. The architecture is referred to as the hand atlas server. The server uses an Apache web server, an Oracle9i database for the digital hand images and their patient data, and Sun's Java server linking the database with the web interface. A user can access and view through the web all normal hand images, along with other patient-related parameters, in the database. An example is shown in Figure 20.18A, in which the group African American male, 12 years old, is presented. Figure 20.18B illustrates image content indexing. The question is to find the average image in the group African American male, 12 years old, average in terms of the parameter F1 (see Section 20.3.4.1). The graphic user interface allows the user to pick any parameter for the average. The result is returned as the image on the left in Figure 20.18B. The complete architecture of the bone age assessment system is depicted in Figure 20.19.
20.3.6.2 CAD Server and Its Integration with the PACS Workstation
A DICOM software package for query, retrieval, push, and storage of hand images is integrated into the bone age CAD server. The CAD server consists of the hand image processing software and the DICOM interface. It can accept clinical hand images from DICOM-compliant scanners or PACS workstations, extract the bony features, and assess the bone age by comparison with the digital atlas database. The work flow is shown in Figure 20.20.
20.3.6.3 Steps in Development of the Web-Based Digital Hand Atlas
Figure 20.21 summarizes the steps in the development of the web-based digital hand atlas.
Figure 20.18A Automatic selection of an average or the most representative image in an age group. The image number 5 (age 12.42 yr) is the closest one in chronological age to the average age, about 12.50 yr in the group. The image number 1 is the most representative one, close to the average skeleton maturity in the group. DOB, date of birth; DOE, date of exam; Tanner, Tanner maturity index.
Figure 20.18B Image content indexing. In this case, the question is to find the hand in this group that represents the average of the group based on a parameter among all 11 shown in Section 20.3.4.1. In the display, F1 was the parameter chosen. (Courtesy of A. Zhang.)
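The content-indexing query of Figure 20.18B (find the image whose chosen parameter is closest to the group average) reduces to an aggregation plus a nearest-value search. The sketch below assumes the simplified schema illustrated in Section 20.3.5 and is not the server's actual query.

def most_average_image(conn, group_id: str, parameter: str = "F1"):
    # Return the subject in the group whose value of the chosen parameter
    # lies closest to the group mean of that parameter.
    return conn.execute("""
        SELECT s.subject_id, s.image_path, f.value
        FROM subject s JOIN feature f ON f.subject_id = s.subject_id
        WHERE s.group_id = ? AND f.name = ?
        ORDER BY ABS(f.value - (SELECT AVG(f2.value)
                                FROM subject s2
                                JOIN feature f2 ON f2.subject_id = s2.subject_id
                                WHERE s2.group_id = ? AND f2.name = ?))
        LIMIT 1
    """, (group_id, parameter, group_id, parameter)).fetchone()

# e.g., most_average_image(conn, "BLKM12", "F1")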
Figure 20.19 System architecture of the CAD bone age assessment. PACS supplies the clinical images. CHLA, Childrens Hospital Los Angeles, for data collection; CAD, computer-aided diagnosis, performs the bone age assessment.
Figure 20.20 Integration of the CAD-BAA workstation with PACS. The clinical image (upper right) is to be assessed by using the atlas images (lower right).
The first step (see the numerals in Fig. 20.21) is to refine the bone feature extraction algorithms derived for age assessment and image matching. Parameters in the algorithms are obtained from the normal cases collected at the Childrens Hospital Los Angeles (CHLA). The accuracy of the automatically extracted features is evaluated by comparing them with interactively extracted measures. In Step 2, the over 1000 normal hand images already collected are validated as normal based on three existing standards [the Brush Foundation studies (skeletal age vs. bone age); NCHS 2000 BMI (body mass index); and the Tanner maturity index]. They are then organized into an atlas database with the extracted bony features. Results derived from the digital atlas are compared with those acquired via the conventional Greulich and Pyle (G-P) method. Clinical data previously collected at UCSF are
Figure 20.21 Steps in the development of the web-based digital hand atlas. See text for step numerals.
used in Step 3 for retrospective studies to evaluate and refine the digital atlas and the bone age assessment software. The refined digital atlas server is implemented and web-enabled in Step 4. A prospective study using both the manual and the digital atlas methods is performed in Step 5. The comparison of the two methods in Step 6 is used to further refine the CAD and image matching algorithms and to improve the system software in Step 7. Step 8, which is still in progress, will provide initial data and insight as to the continued appropriateness of the G-P method for contemporary populations, thus providing impetus for additional research in this area by clinical experts.
20.3.6.4 Operation Procedure
Figure 20.22 shows the operation procedure. When a patient needs to have his/her bone age assessed, the first step is to obtain a hand/wrist CR image from the patient. The physician retrieves this image from the PACS database and requests an automatic bone age assessment against the digital atlas from the workstation. The system returns the bone age, and the physician can request comparisons among patients with similar symptoms and of a similar age group. If the user is satisfied with the result, both the image and the extracted data will be auto-
Figure 20.22 Work flow of the bone age assessment with a digital atlas, starting with the subject taking a hand radiograph.
matically appended to the atlas to increase its statistical power. If the result is doubtful, different queries can be submitted to the atlas for other alternatives.
20.3.7 Summary of PACS-Based and MIII-Driven CAD for Bone Age Assessment
In this section, we have presented in great detail a decision support tool for assessing the bone age of children based on PACS and MIII technologies. The presentation highlights the rationale behind the digital atlas methodology, the collection of data from normal individuals, the image processing for parameter extraction, segmentation, and bone age determination. It also describes the method of connecting the CAD server to the PACS and MIII. This example, although very clinically specific, can nevertheless be used as a road map for other kinds of PACS-based decision support tools.
CHAPTER 21
ePR-Based PACS Application Server for Other Medical Specialties
Figure 21.0 PACS components and data flow and its connection to the radiation therapy (RT) server and the image-assisted surgery system server (IASS) discussed in this chapter. Application servers are normally not included in a commercial PACS. The PACS manufacturer or the user can develop these servers according to need.
21.1 PACS APPLICATION SERVER FOR OTHER MEDICAL SPECIALTIES
21.1.1 ePR-Based PACS Application Server
The essence of medical imaging informatics is to use informatics methods to extract and synthesize the information-rich PACS and other patient-related databases to further advance medical research, education, and clinical services. Along the way, we need to develop methodology, tools, infrastructure, and applications. In Chapter 19, we discussed the concept of medical imaging informatics infrastructure (MIII)
for large-scale horizontal and longitudinal studies and methods of connecting a computer-aided diagnosis (CAD) server to the PACS. In Chapter 20, we gave three examples of the use of PACS as a decision support tool: outcome analysis of lung nodules under treatment, image matching and data mining of neuroimages, and bone age assessment with a digital hand atlas. These applications obtain image data from PACS; use image processing, visualization, and graphic user interfaces (GUI) heavily; and require the concepts of image content indexing and data mining. In this chapter, we present two different application systems, the image-assisted surgery system (IASS) and the DICOM-based radiation therapy (RT) server. These applications not only rely on PACS as a decision support tool but also must combine PACS image data with other medical specialties' data, as in radiation therapy and in surgery, to form new medical informatics servers. Figure 21.0 shows the logical position of the IASS and RT servers in the PACS data flow. Two examples of these servers, used in surgery and in oncology, are discussed later in the chapter. These servers are for patient treatment and not solely for diagnostic purposes. The approach taken in developing these new servers is to use the concept of the electronic patient record (ePR). In both applications to be discussed, the patient is under treatment and, whether it is surgery or radiation therapy, the concern of the surgeon or the oncologist is that single patient's record. In this approach, the patient's image data will first be extracted from PACS and then be merged with other specialty data. The server so developed will be used in the respective medical specialty areas instead of radiology. Examples of data unique to surgery are patient vital signs, EKG, surgical video and dictation, 3-D rendering and surgical simulation images, and some on-line images during surgery. Examples of data unique to radiation therapy are radiation treatment plans, dose distributions, simulator images, and treatment records. Figure 21.1 shows the differences in concept between PACS as a decision support tool server and as an application server for different medical specialty applications. Although these two applications are still in the developmental stage, the reader should grasp the concept of these servers and apply the principles to other medical sectors to improve health care delivery. We review briefly the concept of the ePR in Section 21.1.2.
21.1.2 Review of Electronic Patient Record (ePR)
The electronic patient record is an emerging concept to replace or supplement the hospital- or clinic-based health care information system (Section 6.6). Its major functions are to:
• Accept direct digital input of patient data.
• Analyze across patients and providers.
• Provide clinical decision support and suggest courses of treatment.
• Perform outcome analysis and patient and physician profiling.
• Distribute information across different platforms and health information systems.
Both the IASS and the RT server use the ePR architecture, with the individual patient as the focus.
Figure 21.1 Two types of PACS application server. Left: as a radiology decision support tool (Chapter 20). Right: ePR-based for other medical specialties (Chapter 21). Middle: tools and resources required.
21.2 IMAGE-ASSISTED SURGERY SYSTEM
This section gives an example of how to develop an ePR-based imaging informatics system, called the image-assisted surgery system (IASS), to assist surgery, in particular minimally invasive surgery for lower back pain. The challenge of the IASS is that at least four types of images and data are collected live during surgery: digital C-arm radiography, endoscopy images, live surgical video, and surgical dictation. All of them have large file sizes and are acquired in real time. In addition, these multimedia input devices are connected to the IASS workstation just before surgery. Quality assurance of these inputs is of utmost importance during the surgical procedure.
21.2.1 Concept of the IASS
The IASS is an informatics system designed to assist surgery, based on the concept of the ePR and taking advantage of the existing knowledge of PACS and MIII. We use the minimally invasive spinal surgery technique as an example in developing this concept. In subsequent sections, we develop the concepts of the IASS server and the IASS workstations using existing and emerging PACS and medical informatics technologies.
21.2.1.1 System Components
Figure 21.2 shows the three components of an IASS: the minimally invasive spinal surgery server, the on-site surgical workstation, and the remote surgical workstations, with the server connected to the PACS controller.
21.2.1.2 PACS Workflow and the Spinal Surgery Server
Following Figure 21.2, the IASS server supports both the on-site and the remote IASS workstations. The ePR design concept allows the patient-centered images and data to be staged at the server and delivered to the workstations for review before, during, and after surgery. The remote workstations can be connected to the on-site workstation by a high-speed broadband network, over which second opinions and remote teaching can take place.
21.2.2 Minimally Invasive Spinal Surgery Work Flow
Figure 21.3 shows the minimally invasive spinal surgery work flow before surgery, at the surgical site, during surgery, and after surgery (some of this concept was contributed by Dr. John Chiu). The two rectangular boxes at the bottom right represent the workstations shown in Figure 21.2.
21.2.3 Definition and Functionality of the IASS
21.2.3.1 Definition
The IASS server is connected to the PACS server, using DICOM to obtain the patient images and related data. The IASS workstations are mobile and are connected to the stationary IASS server. The purpose of the stand-alone workstation at the surgical suite is to support the neurosurgical operation and its documentation; the remote workstations are for education, teaching, and second opinion.
Figure 21.2 PACS work flow and the image-assisted neurosurgery system (IASS). IASS has three components: the server, workstations (WSs), and communication networks. The IASS server is connected to the PACS controller; on-site WS is used in the surgical suite; and remote WSs are for second opinion, teleconferencing, and teaching.
They can be connected via either an intranet or the Internet to support teleconsultation, remote teaching, and second opinion during surgery.
21.2.3.2 Functionality
(1) CT/MR/US/CR and prereconstructed 3-D images, along with pertinent clinical data of the surgical patient, can be preloaded through PACS or from DICOM-compliant imaging modalities to the IASS server (see Fig. 21.3). These images and data can be pushed from the server to the workstations for viewing before, during, or after surgery (a code sketch of such a preload step follows this list).
(2) Digital C-arm radiography and endoscopy are normally used during the surgery; they can be connected to the workstation at the surgical suite before surgery. Images from these modalities can be transmitted in real time during surgery to the workstation with the DICOM standard through an intranet, and to the remote workstations if broadband networks are available. Because the amount of on-line image data is large, the transmission rate requirement is high. These images can be appended to the patient's worklist in the workstation for viewing during surgery and archived to the database in the IASS server after surgery.
(3) A CCD video camera with associated software and a digital voice dictation phone for surgical documentation and for teleconferencing at the surgical
Figure 21.3 Minimally invasive spinal surgery workflow in the IASS: before surgery, at the surgical suite, during surgery, and after surgery.
suite can be attached to the workstation. If this is used as a stand-alone surgical workstation, the surgeon can document the surgical procedure via the voice and video recorders.
(4) A dual-cursor teleconsultation package in the workstations can be used (Section 14.6) for expert opinion from a remote site during surgery, or it can be used for surgical training or continuing education. Teleconsultation can be performed by using a pair of workstations connected via either an intranet or the Internet.
(5) Before teleconsultation, relevant data and images described in Items 1 and 2 can be preloaded to the workstation at the surgical suite from the IASS server through an authoring session provided by the consultation software package. The authoring session is prepared by personnel from either the surgical or the expert site, provided that the site has access to the IASS server.
(6) During a teleconsultation session, images described in Items 1 and 2 can be synchronized and displayed in real time at both workstations. A dual-cursor mechanism can be used by the surgeon and the expert to discuss the images in real time. In addition, the teleconferencing software can be used for visual and voice communication.
(7) The database in the workstation automatically manages the surgical documents and teleconsultation transactions.
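As an illustration of the preload step in item (1), the sketch below reads DICOM objects that have been staged for one surgical patient and summarizes them as a worklist for the on-site workstation. It uses the open-source pydicom package purely for illustration; the directory layout and the worklist fields are assumptions of this sketch, not the IASS implementation.

from pathlib import Path
import pydicom  # used here only for illustration

def build_preload_worklist(study_dir: str):
    # Read the headers of preloaded DICOM objects and build a simple
    # patient-centered worklist for display on the on-site IASS workstation.
    worklist = []
    for path in sorted(Path(study_dir).glob("*.dcm")):
        ds = pydicom.dcmread(str(path), stop_before_pixels=True)
        worklist.append({
            "patient": str(ds.get("PatientName", "")),
            "patient_id": ds.get("PatientID", ""),
            "modality": ds.get("Modality", ""),      # CT, MR, US, CR, ...
            "study_date": ds.get("StudyDate", ""),
            "series": ds.get("SeriesDescription", ""),
            "file": str(path),
        })
    return worklist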
21.2.4 The IASS Architecture
The IASS architecture consists of four components: hardware, software, network, and industrial standards. Figure 21.4A shows the components of the IASS, and Figure 21.4B depicts the workstation prototype design sketch.

21.2.4.1 IASS Hardware Components
• PC NT workstation(s)
• Video camera and recorder
• Voice phone and recorder
• Power supply and UPS (uninterruptible power supply)
• Ethernet port and modem
• Mobile cart
• Flat panel displays (from 2 to 4)

21.2.4.2 IASS Software Modules
• Standard PACS workstation display functions (commercially available)
• Standard teleconferencing software (commercially available)
• Dual-cursor teleconsultation software package (available in our research laboratory; see Section 14.6)
• DICOM acquisition gateway software [commercially available (DICOM, 1996)]

21.2.4.3 Communication Network Connection (See Chapter 9)
• Local area network (LAN): Fast Ethernet or ATM (asynchronous transfer mode) technology
• Wide area network (WAN): modem via voice line or data line, ISDN, DSL, T1 to T3, ATM, Internet 2 (Yu, 2000)

21.2.4.4 Industrial Standards
• Images: DICOM (DICOM, 1996)
• Patient information: HL7 (HL-7, 1991)
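As a concrete illustration of the image transfer that the DICOM acquisition gateway software above performs, the sketch below pushes preloaded images to the IASS server with a DICOM C-STORE. It assumes the open-source pydicom/pynetdicom toolkit rather than the commercial gateway cited in the text, and the host name, port, AE titles, and file name are hypothetical.

# A hedged sketch of a DICOM C-STORE push from an acquisition gateway to the
# IASS server, using pydicom/pynetdicom (an assumption, not the product used here).
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

IASS_HOST = "iass-server.example.org"    # hypothetical IASS server address
IASS_PORT = 104                          # conventional DICOM port
IASS_AET = "IASS_SERVER"                 # hypothetical application entity title

def push_images(paths):
    """Send each DICOM file to the IASS server and report the returned status."""
    ae = AE(ae_title="ACQ_GW")           # hypothetical gateway AE title
    ae.requested_contexts = StoragePresentationContexts
    assoc = ae.associate(IASS_HOST, IASS_PORT, ae_title=IASS_AET)
    if not assoc.is_established:
        raise RuntimeError("Could not associate with the IASS server")
    try:
        for path in paths:
            status = assoc.send_c_store(dcmread(path))
            # 0x0000 is the DICOM "Success" status.
            print(f"{path}: status 0x{status.Status:04X}")
    finally:
        assoc.release()

push_images(["ct_axial_001.dcm"])        # hypothetical preloaded CT image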
21.2.5 Summary and Current Status
PACS and imaging informatics are two emerging technologies that can facilitate the practice of minimally invasive spinal surgery. PACS provides the collection and distribution of required medical images and related data, and imaging informatics allows this information to be used efficiently in spinal surgery. The concept of the IASS presented here is based on these two technologies as aids for minimally invasive spinal surgery.
Figure 21.4 (A) Components and setup of the IASS. Top: IASS server. Left: WS at the surgical suite. Right: remote WS. (B) IASS WS prototype design sketch. (Contributed by Dr. Brent Liu and Ms. Grace Liu.)
IASS workstations can be used as a stand-alone image-assisted tool during surgery or as a remote teaching resource during or after surgery. Many components of the IASS are commercially available technologies; the system integration technology is in the prototype stage in our laboratory. The PACS and informatics component technologies are readily available, but integrating them with images and data acquired during surgery is challenging. Among the challenges are:
(1) Several on-line imaging sources, including real-time images from digital C-arm radiography and from endoscopy, are acquired during surgery.
(2) Video teleconferencing is used during surgery.
(3) Surgical video and dictation are used during surgery.
(4) Items (1), (2), and (3) are all real-time image/data collections that are not in the patient worklist before surgery.
Meticulous and synchronized integration of image/data from these items during surgery is not simple because of the amount of data involved and the real-time requirement.
Postsurgery editing of the image/data acquired during surgery requires heavy manual effort.
21.3 RADIATION THERAPY SERVER
In this section we describe the concept of the radiation therapy (RT) server under development at the PACS Laboratory, the Hong Kong Polytechnic University. The RT server is another PACS application server that serves a different medical specialty, namely radiation oncology. We first explain why an RT server is needed and where it will be used. We review the conventional RT work flow and the definition of the DICOM RT objects. Following the PACS concept, we introduce the PACS and imaging informatics-based RT server and its data flow. We use the RT treatment of nasopharynx carcinoma (a common cancer in Southeast Asia) as an example to walk through the functions and work flow of the RT server. The relationship between the RT server and the electronic patient record (ePR) is described, with radiation treatment for nasopharyngeal carcinoma as an example. Finally, we discuss how to integrate RT-related information with the RT server, the RT data model in the RT server, and the methods of display.

21.3.1 Radiation Therapy, PACS, and Imaging Informatics
Radiation therapy (RT) is an image-based treatment. It requires images from projection X rays, computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET), and the linear accelerator for tumor localization, treatment planning, and verification of treatment plans. In the process, it requires patient information to plan a treatment; image registration to identify regions to be treated and markers to align images; image fusion to delineate pathological structures from various imaging modalities; anatomy to identify the shape, size, and location of the targets and radiosensitive vital organs; and dose computation to ensure delivery of a uniform high dose to the target while avoiding critical structures. In addition, careful treatment monitoring, optimization, and dose calculation are essential for a successful patient outcome (Hendee, 2000; Sternick, 2000). In these processes, PACS and imaging informatics technologies are used extensively. However, they have not been integrated with these RT processes as a complete radiation treatment system. PACS is an image management system. It combines both imaging modalities and related patient information systems. Although it has not been stated formally, PACS was developed for diagnostic purposes, and RT has not been an emphasis. Figure 21.5 shows the generic PACS architecture and its connection to the RT server. Imaging informatics is the computer software technology that extracts pertinent information from the PACS database to facilitate various medical applications. The infrastructure of medical imaging informatics (the PACS database, patient-related information, high-speed networks, image visualization and presentation, and a disease-related knowledge base) is described in Figure 19.1. The aforementioned RT processes can utilize PACS and imaging informatics technologies extensively and can be integrated as a total RT PACS and imaging informatics-based system.
Figure 21.5 A generic PACS with components and data flow. An application server can be developed as an RT server. In this case, the PACS controller sends images and data of those patients requiring radiation therapy to the RT server.
Among RT techniques, intensity-modulated radiation therapy (IMRT) and image-guided radiosurgery (using the CyberKnife) are even more image intensive. In the former, a networked, computerized work flow of the patient image and informatics information would provide more efficient planning and treatment procedures. The latter is based on a linear accelerator mounted on an image-guided robotic arm for delivering the optimal uniform dose to the targets. Both technologies rely on images from PACS and on data derived with imaging informatics technology.

21.3.2 Work Flow of RT Images
In this section, we review the conventional work flow of images used in RT. Based on this work flow and the DICOM RT objects described in Section 21.3.3, we can introduce the DICOM RT concept. Figure 21.6 shows the conceptual work flow model of conventional radiation therapy in a typical clinical department. The work flow can be described in the following routes.

Route 1
(i) CT scans of the region of the body for radiation treatment planning can be acquired from a diagnostic CT scanner or a CT simulator (Fig. 21.7A).
(ii) The CT images can be sent either to:
(a) the virtual simulator (VS), when only beam positioning and shielding are required; or
(b) the treatment planning system (TPS), when radiation distribution isodose plans also need to be generated in addition to what is done in (a).
Figure 21.6 Conceptual work flow of image-based conventional radiation therapy.
Route 2a
(i) At the VS, digital reconstructed radiographs (DRRs; Fig. 21.7B) are generated for field planning, that is, inputting the appropriate beam sizes, collimator angle, gantry angle, and beam blockings. The target volume and treatment volume can also be visualized on the DRR.
(ii) The plans in the VS can later be retrieved at the RT workstations for comparison with the verification portal images from the linear accelerator.

Route 2b (for plans requiring dose computation)
(i) Plans from the VS that require dose computation will also be sent to the TPS for dose computation.
(ii) DRRs can also be generated from the TPS.

Route 3
(i) The finished treatment plans (Fig. 21.7C and D) can be retrieved at remote stations by radiation oncologists for review and approval.
Figure 21.7 (A) A head image from diagnostic CT or the CT simulator. (B) Digital reconstructed radiograph (DRR). (C) Isodose distribution from a treatment plan. (D) Dose volume histogram (DVH) from a treatment plan. (E) Hand planning done on the hard copy film. (F) Double-exposure electronic portal image (EPI) from the linear accelerator. (G) Single-exposure EPI from the linear accelerator. (H) Left: hard copy portal film with the hand-drawn treatment region, acquired from the linear accelerator. The film was digitized for display.
(ii) If the plan is unsatisfactory, the planning radiographer/radiation therapist/dosimetrist will be informed to replan it.

Route 4
(i) This is for situations when only plain films were done with a treatment simulator. Hand planning is done on the hard copy film (Fig. 21.7E).

Route 5
(i) After hand planning, the hard copy film can be digitized by the digitizer or be kept as it is for later comparison with the treatment portal image, either manually or at a workstation.
Route 6
(i) The digitized simulator images can be kept as a record and be retrieved at remote workstations as reference images for verification of portal images.

Route 7
(i) For a simulator with a direct electronic image capture facility, images from the simulator can be sent to the appropriate RT workstations when verification with the treatment portal image is required.

Route 8a (for a linear accelerator with an electronic portal imager)
(i) The electronic portal image (EPI; Fig. 21.7F and G) of the treatment region is acquired on the linear accelerator.
(ii) The EPI is sent to the RT workstation for comparison with the DRR or digitized simulator images.

Route 8b (for the manual verification procedure)
(i) A hard copy portal image of the treatment region is acquired on the linear accelerator (Fig. 21.7H).
(ii) The hard copy portal film so acquired is compared with the hard copy simulator film acquired at the planning stage for treatment verification purposes.

21.3.3 DICOM Standard and RT DICOM
DICOM is the de facto standard for medical image format and communication and has been used by PACS users and vendors for many years (DICOM, 1996).
RT uses radiological images and images from the simulator and linear accelerator for planning, treatment, and verification. It is therefore advantageous to use DICOM (see Section 7.4) as its image and data communication standard (Borras, 2000). Between 1994 and 1997, DICOM Working Group 7 was formed under the auspices of NEMA (National Electrical Manufacturers' Association) to define the DICOM standard for RT. As of 1999, seven RT-specific DICOM objects had been ratified (Neumann, 1999):

RT Image (97): Includes all images taken with radiotherapy equipment such as conventional simulators, digitizers, or electronic portal imagers.
RT Dose (97): Contains dose data such as the dose distribution generated by the treatment planning system, the DVH (dose volume histogram), and dose points.
RT Structure Set (97): Contains information related to patient structures, markers, isocenter, target volumes, contours, and related data.
RT Plan (97): Refers to information contained in a treatment plan, such as beam angles, collimator openings, and beam modifiers.
RT Beams Treatment Record (99): Records for external beam treatment.
RT Brachy Treatment Record (99): Records for brachytherapy.
RT Treatment Summary Record (99): Summary of a patient's radiation treatment.

Most RT vendors are at various stages of implementing these objects, in particular the four objects ratified in 1997: RT Structure Set, Plan, Image, and Dose. The three record objects are still at a preliminary stage of implementation by vendors. Figure 21.8 categorizes these objects and their contents against those of diagnostic radiology.
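To make these object definitions concrete, the following sketch inspects three of them with the open-source pydicom library; the library and the file names are assumptions for illustration, and only a few representative attributes of each object are printed.

# A hedged sketch: inspecting DICOM RT objects with pydicom (hypothetical files).
from pydicom import dcmread

def summarize_rt_object(path):
    ds = dcmread(path)
    print(f"{path}: Modality={ds.Modality}, SOP Class={ds.SOPClassUID.name}")
    if ds.Modality == "RTPLAN":
        # RT Plan: beam geometry such as gantry and collimator angles.
        for beam in ds.BeamSequence:
            cp = beam.ControlPointSequence[0]
            print(f"  Beam '{beam.BeamName}': gantry {cp.GantryAngle} deg, "
                  f"collimator {cp.BeamLimitingDeviceAngle} deg")
    elif ds.Modality == "RTSTRUCT":
        # RT Structure Set: target volumes and critical organs.
        for roi in ds.StructureSetROISequence:
            print(f"  ROI {roi.ROINumber}: {roi.ROIName}")
    elif ds.Modality == "RTDOSE":
        # RT Dose: a 3-D dose grid (values scaled by DoseGridScaling).
        print(f"  Dose grid {ds.Rows} x {ds.Columns} x {ds.NumberOfFrames}, "
              f"units {ds.DoseUnits}")

for f in ["rtplan.dcm", "rtstruct.dcm", "rtdose.dcm"]:   # hypothetical file names
    summarize_rt_object(f)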
Figure 21.8 Data structure of diagnostic radiology and the seven radiation therapy (RT) objects. DRR, digital reconstructed radiograph; DVH, dose volume histogram.
The advantages of having these DICOM objects in RT are obvious. First, information and images within an object can be transferred across the boundaries of different RT vendors with minimum effort from the users. Second, they allow the total integration of RT components from various vendors. Third, the work flow of RT treatment can be closely monitored and analyzed, resulting in better health care delivery to the patient. Fourth, an individual patient's RT treatment can be integrated under the scheme of the electronic patient record (ePR), a current trend in patient-oriented health care delivery systems. The individual RT treatment ePR can be combined with other related information, including demographic, diagnostic, pharmacy, and clinical laboratory data, under the same format and standard. This will result in a portable ePR for the patient, a giant leap from the current hospital or health care information system, which is organization oriented.
21.3.4 PACS and Imaging Informatics-Based RT Server and Data Flow
The concept of a PACS and imaging informatics-based RT server is to integrate diagnostic imaging PACS and the seven DICOM RT objects as a total networked DICOM-based system with minimum human intervention during information transfer. The server coordinates the data flow in the complete treatment system with the following functions:
• Integrates all patient information and images pertinent to the treatment
• Records RT dose data
• Records RT plans
• Records the daily radiation treatment
• Generates a comprehensive patient record containing all this information
In addition, the server provides one-stop shopping for the treatment history of the patient. Figure 21.9 shows the functions of the PACS and imaging informatics-based RT server and the physical data flow of a conventional radiation treatment.
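A minimal sketch of how such a consolidated treatment record might be organized is given below. It uses SQLite purely for illustration; the actual RT server described in Section 21.3.6 uses an Oracle database and a DICOM-derived data model, so the table and column names here are assumptions.

# Illustrative only: a toy relational layout for a patient-centered RT record.
import sqlite3

schema = """
CREATE TABLE patient  (patient_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE course   (course_id INTEGER PRIMARY KEY, patient_id TEXT REFERENCES patient,
                       diagnosis TEXT, start_date TEXT);
CREATE TABLE phase    (phase_id INTEGER PRIMARY KEY, course_id INTEGER REFERENCES course,
                       prescription_dose_gy REAL, fractions INTEGER);
CREATE TABLE rt_object(sop_instance_uid TEXT PRIMARY KEY, phase_id INTEGER REFERENCES phase,
                       object_type TEXT,  -- RT Image, Dose, Structure Set, Plan, records
                       file_path TEXT);
"""

conn = sqlite3.connect("rt_epr_demo.db")
conn.executescript(schema)

# Register one hypothetical NPC course and one of its DICOM RT objects.
conn.execute("INSERT INTO patient VALUES ('HK0001', 'DEMO^PATIENT')")
conn.execute("INSERT INTO course VALUES (1, 'HK0001', 'Nasopharyngeal carcinoma', '2003-05-01')")
conn.execute("INSERT INTO phase VALUES (1, 1, 66.0, 33)")
conn.execute("INSERT INTO rt_object VALUES ('1.2.840.99999.1', 1, 'RT Plan', '/data/rtplan.dcm')")
conn.commit()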
Figure 21.9 Data flow of the PACS and imaging informatics-based RT server. Procedures used in RT treatment of nasopharynx carcinoma (NPC) are used as an example to illustrate the image and data flow. (Courtesy of YY Law.) DRR, digital reconstructed radiograph; EPI, electronic portal image.
Procedures used in the RT treatment of nasopharynx carcinoma (NPC), a common cancer in Southeast Asia, are used as an example.

Route 1
(i) Acquire images of the region of the body for diagnosis or treatment planning from modalities such as CT, MR, and PET before treatment.
(ii) Store the images in the PACS server.
(iii) Select relevant images and send them to the image-based RT server. Such images can be easily retrieved for review by oncologists at a later stage. They can also be kept in the oncology department as records as long as necessary without being tied up by the radiology practice.

Route 2a & b
(i) Retrieve the acquired CT images from the PACS server at the virtual simulation (VS) workstation (2a) or the treatment planning system (TPS) (2b) for treatment planning.

Route 3a & b
(i) Retrieve CT simulator images at the VS workstation (3a) or at the TPS (3b).

Route 4a & b
(i) Generate digital reconstructed radiographs (DRRs) at the VS workstation.
(ii) Perform field planning at the VS workstation; the plans are forwarded either to the RT server (4a), to be retrieved at the linear accelerator for treatment after approval, or to the TPS (4b) for dose computation if required.

Route 5
(i) CT images from either the PACS or the CT simulator retrieved at the TPS will be used for treatment planning. DRRs can also be generated from the CT images.
(ii) Isodose distributions can be computed, showing the radiation dose to the tumor and critical structures.
(iii) Dose volume histograms (DVHs) can be generated for evaluation of the treatment plans (see Figure 21.7D).
(iv) The resultant RT plans, together with the CT images and the DRRs, will be sent to the image-based RT server.
Remarks: Each treatment plan consists of the following files: CT image file (DICOM 3.0), structure set file (DICOM RT object), plan setup file (DICOM RT object), and dose file (from the TPS only; DICOM RT object).
Route 6
(i) For cases that do not require CT for planning, acquire a conventional simulator film image for treatment planning.
(ii) Treatment fields are drawn manually on the hard copy simulator film, which is then digitized.
(iii) Send the digitized image to the image-based RT server.

Route 7
(i) At the linear accelerator, retrieve the patient information, including the treatment plan, from the RT server.
(ii) Acquire the portal image at the linear accelerator either manually or by using the electronic portal imager.
(iii) Send the digitized image or electronic portal image to the image-based RT server.

Route 8
(i) At the RT workstations, retrieve the relevant images from the image-based RT server for evaluation of treatment plans or for verification purposes.
(ii) In the former case, the radiation oncologist can confirm a treatment plan.
(iii) In the latter case, the simulator image (reference image) of a patient can be compared with the portal image from the linear accelerator (LINAC), whereby verification is confirmed or modification is ordered.

Route 9
(i) The written confirmation or comments on the treatment plan or the verification of treatment by the radiation oncologists can be sent back to the image-based RT server.
(ii) An electronic patient record (ePR) is generated at the RT server.

Route 10
(i) If the treatment plan or the portal image is not satisfactory, the oncologist can request that the plan or the portal image be modified or repeated.

To summarize, the image-based RT server contains the diagnostic images of patients undergoing radiation therapy and all seven of the DICOM RT objects. It also contains organized treatment summaries.
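The files listed in the Remarks of Route 5 are tied together by standard DICOM reference sequences, which is what allows the server to group them into one treatment record. The sketch below follows those references with pydicom; it assumes well-formed files in which the reference sequences are present, and the file names are hypothetical.

# A hedged sketch: following the references that link one plan's DICOM RT files
# (RT Dose -> RT Plan -> RT Structure Set -> CT series) with pydicom.
from pydicom import dcmread

dose = dcmread("rtdose.dcm")        # hypothetical files belonging to one plan
plan = dcmread("rtplan.dcm")
struct = dcmread("rtstruct.dcm")

# The dose object points back to the plan it was computed for.
assert dose.ReferencedRTPlanSequence[0].ReferencedSOPInstanceUID == plan.SOPInstanceUID

# The plan points to the structure set defining the target and critical organs.
assert plan.ReferencedStructureSetSequence[0].ReferencedSOPInstanceUID == struct.SOPInstanceUID

# The structure set in turn lists the CT images the contours were drawn on.
ref_frame = struct.ReferencedFrameOfReferenceSequence[0]
ct_uids = [img.ReferencedSOPInstanceUID
           for study in ref_frame.RTReferencedStudySequence
           for series in study.RTReferencedSeriesSequence
           for img in series.ContourImageSequence]
print(f"This plan references {len(ct_uids)} CT images")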
21.3.5 An Example of an Electronic Patient Record (ePR) in the PACS and Image Informatics-Based RT Server

When the DICOM RT standard is complied with, the informatics in the RT server can be standardized. To illustrate the RT information stored in the image informatics-based RT server with the DICOM RT objects, the example of a conventional radiation treatment for nasopharyngeal carcinoma is used. The patient undergoes CT, MR, or PET diagnostic examinations (DICOM standard) of the head and neck, the images of which are archived in the PACS server. If fusion of different modalities needs to be performed, related images can be retrieved from the PACS server for fusion at a fusion application workstation. The TPS or the VS workstation can also retrieve the relevant CT images or the CT-MR fused images for treatment planning. The diagnostic images are ordinary DICOM images. In the TPS or virtual simulator, the tumor in the nasopharynx is outlined, as are critical organs such as the brainstem and spinal cord. These are the RT Structure Set objects. With such information, an RT plan is designed with all the treatment parameters, such as beam size, gantry angle, and collimator angle. The resultant treatment plan should deliver a uniform dose to the target volume in the nasopharynx while avoiding delivery of a dose to the outlined critical organs. Digital reconstruction can also be performed in either the TPS or the VS workstation when the radiation fields are inserted. The DRRs so produced are RT Image objects. Dose calculation is done in the TPS, and a plan with the isodose distribution is generated. With the DVH (dose volume histogram), the resultant plan can be evaluated. All such dose-related information is in the arena of RT Dose. Information on the treatment plan (RT Plan object) for the nasopharyngeal carcinoma can be exported from the TPS to a LINAC to initiate the treatment. A portal image of the treatment region, an RT Image object, is taken at the LINAC and compared with the DRR generated from the TPS/VS workstation or with a simulator image (when no virtual simulation facility is available), another RT Image object. The LINAC will generate the daily treatment record or summary record (both are treatment record objects) for each patient. When the diagnostic images and the RT objects are stored in the same RT server, patients' diagnostic and treatment information can be easily retrieved and distributed for evaluation and confirmation. This is the content of an ePR. This database is particularly advantageous when treatment progress must be assessed at times of follow-up. It also greatly reduces the effort spent searching for old clinical files and treatment files/films when retrospective studies are called for. The data stored in the image-based RT server can also be retrieved to enrich the store of teaching materials.
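Because the DVH is central to plan evaluation in this example, a minimal sketch of how a cumulative DVH is computed is given below; the dose grid and structure mask are assumed to be available as NumPy arrays (for instance, extracted from the RT Dose and RT Structure Set objects), and the numbers are illustrative only.

# A minimal cumulative DVH sketch: for each dose level, the fraction of a
# structure's volume receiving at least that dose.
import numpy as np

def cumulative_dvh(dose_gy: np.ndarray, mask: np.ndarray, bin_width: float = 0.5):
    """dose_gy: 3-D dose grid in Gy; mask: boolean array of the same shape."""
    doses_in_structure = dose_gy[mask]
    bins = np.arange(0.0, doses_in_structure.max() + bin_width, bin_width)
    # Fraction of the structure volume receiving >= each dose level.
    volume_fraction = np.array([(doses_in_structure >= d).mean() for d in bins])
    return bins, volume_fraction

# Toy example: a uniformly irradiated 66-Gy target with one small cold spot.
dose = np.full((10, 10, 10), 66.0)
dose[0, 0, 0] = 50.0
target = np.ones_like(dose, dtype=bool)
levels, vol = cumulative_dvh(dose, target)
print(f"V60Gy = {vol[levels >= 60.0][0] * 100:.1f}% of the target volume")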
21.3.6 RT Data Flow: From Input to the Server to the Display

Figure 21.9 depicts the physical organization of the imaging informatics-based RT server. In this section we present the RT data flow from input to the RT archive server, to the ePR web application server, and to the display, as shown in Figure 21.10. In the presentation, we also highlight the data flow and functions of each component. Because the RT server is in the R&D phase, an RT modality simulator is needed to assemble the test patient information used in the data flow; this component is also illustrated [Fig. 21.10(6), RT Mod SIM]. However, in the clinical RT server, this component will not be in the data flow. Table 21.1 describes the functions of the translators, RT DICOM gateway (GW), and RT archive server in more detail. Figure 21.10 shows the 10 components, their corresponding functions, and the data flow, which are also described below.
Figure 21.10 Integration of the RT server. See text and Table 21.1 for component descriptions. The RT server is for organization of all RT DICOM objects and DICOM images for any RT applications. The RT web application server is ePR based for special display windows related to the patient treatment records and progress, retrievable by web clients. (Courtesy of MYY Law and L. Chan.) Left: input components. Middle: the RT DICOM gateway. Right: the RT server, RT web application server, and web clients. DICOM represents both the DICOM data format and the communication protocol. SCU, Service Class User; SCP, Service Class Provider; IIS, Internet Information Service; HTTP, Hypertext Transfer Protocol; SS, Structure Set.
TABLE 21.1 Functions, Input, and Output of the Translators (2, 3, and 5 Middle), RT DICOM Gateway (7), and RT Archive Server (8) Described in Figure 21.10

Translators (2, 3, 5 Middle)
  Translator 1: Input, textual data; Output, RT Plan, RT Structure Set, RT Dose
  Translator 2: Input, pixel data; Output, DCM image, RT Image, RT Dose

RT DICOM Gateway (7)
  Buffer: Input, DCM RT objects and DCM images
  Object grouping: Output, patient folder (RT objects under the same patient and therapy course)

RT Archive Server (8)
  Oracle DB: Input, DCM RT objects and DCM images; Output, RT data model (modified from the DICOM data model) and ePR folder

DCM, DICOM.
(1) PACS acts as a source of DICOM images, such as CT and MR.
(2) The TPS exports DICOM and non-DICOM data; the latter can be converted by a translator to DICOM format. The CT simulator exports DICOM images, and the simulator or film digitizer exports DICOM RT Image objects.
(3) Information systems (Info. Sys.) provide non-DICOM RT Plan and RT Record data, which can be converted to DICOM RT Plan and Record objects.
(4) The linear accelerator (LINAC) is a source of textual data. Some of these data are saved in the information systems. If an SCU is available, DICOM transmission is possible after the data are converted to a DICOM RT Record.
(5) The portal imager generates a DICOM RT Image or a non-DICOM image, e.g., a bitmap. Nondigital images can be converted to DICOM images by a film digitizer. (A minimal sketch of this non-DICOM-to-DICOM translation appears after this list.)
(6) The RT modality simulator provides a means to trace the connectivity between components and to send DICOM objects produced by the translators. During the developmental stage, it is used to assemble most RT objects before they are routed to the RT DICOM gateway. This component will not be needed in the clinical RT server.
(7) The RT DICOM gateway (GW) acts as a buffer before DICOM objects enter the RT archive server and can group the RT objects of the same patient and the same therapy course.
(8) The RT archive server is for the storage of all RT objects and RT-related DICOM images. It supports all RT applications.
(9) The RT web application server (i) receives RT objects and DICOM images, (ii) decodes the RT objects into the corresponding positions in the Microsoft Access database used in the web server, and (iii) organizes the data into web viewing mode for display in the RT web clients (10).
(10) The RT web clients retrieve the RT-related information of the patient from the RT web application server. Figure 21.11A and B depicts a sample display window in the web client.
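As an illustration of the translator role named in components (2), (3), and (5), the following sketch wraps non-DICOM portal-imager pixel data as a DICOM RT Image with pydicom. It is deliberately simplified: only a small subset of the mandatory attributes of the RT Image object is filled in, and the patient ID and file names are hypothetical.

# A hedged sketch of a non-DICOM-to-DICOM translator for portal-imager pixel data.
import datetime
import numpy as np
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

RT_IMAGE_STORAGE = "1.2.840.10008.5.1.4.1.1.481.1"   # RT Image Storage SOP Class UID

def wrap_as_rt_image(pixels: np.ndarray, patient_id: str, out_path: str):
    """pixels: 2-D uint16 array exported by the portal imager (non-DICOM)."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = RT_IMAGE_STORAGE
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = FileDataset(out_path, {}, file_meta=meta, preamble=b"\0" * 128)
    ds.SOPClassUID = RT_IMAGE_STORAGE
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "RTIMAGE"
    ds.PatientID = patient_id
    ds.StudyInstanceUID = generate_uid()
    ds.SeriesInstanceUID = generate_uid()
    ds.ContentDate = datetime.date.today().strftime("%Y%m%d")

    # Image pixel module: monochrome, 16 bits per pixel.
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.Rows, ds.Columns = pixels.shape
    ds.BitsAllocated = 16
    ds.BitsStored = 16
    ds.HighBit = 15
    ds.PixelRepresentation = 0
    ds.PixelData = pixels.astype(np.uint16).tobytes()

    ds.save_as(out_path, write_like_original=False)

wrap_as_rt_image(np.zeros((512, 512), dtype=np.uint16), "HK0001", "portal_image.dcm")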
Figure 21.11 A typical display with cascade windows from the RT web client. (Courtesy of YY Law and L. Chan.) (A) A display window for setup details: the first frame presents the patient demographics and the total number of therapeutic courses. The display of the upper frame remains unchanged for any window of the same patient. The second frame allows the users to select the course and phase with pull-down menus, and the display of the third frame can be switched among summary, setup details, plan, images, and dose according to the corresponding links and the selection of course and phase. In the upper table of the third frame, the prescription information is shown with respect to regions of the same course (Course 1). The lower table of the third frame shows a list of field setup records of the same phase (Phase IV). (B) A display window for images: the first and second frames of this display window are the same as those for setup details shown in Fig. 21.11A. The third frame displays a series of CT or MR images in order, simulator images, and portal images of the same course and phase selected by users through the second frame.
21.3.7 Summary and Current Status
RT is an image-intensive treatment. Images are required not only from the diagnostic modalities but also from the treatment equipment. Currently, RT treatment still relies mostly on tedious manual image and data transfer methods because the community as a whole has not championed the concept of system integration. System integration of RT treatment has many benefits, including lower equipment and operation costs, streamlined treatment procedures, and better health care delivery to the patient in terms of quality.
We have discussed the concept of the RT server as an attempt to integrate diverse health care information systems, imaging modalities, and RT equipment into one seamless treatment system. The work flow and data flow from the various pieces of equipment to the server have been depicted, and the use of the DICOM standard and DICOM RT has been emphasized at each step. The ePR-based RT server is still in the R&D phase; much effort is needed before it can become a clinical tool. The reader should understand the different types of images, as well as the information, required to be integrated into the server and appreciate the power of using such a server in the practice of radiation oncology.
CHAPTER 22
New Directions in PACS Learning and PACS-Related Training
Figure 22.0 PACS components and work flow.
The picture archiving and communication system (PACS) is an image information system that is now widely installed. For its successful implementation, training has been found indispensable. A review of PACS training thus far shows that the major emphasis has been placed on the use of display workstations and on PACS users' experience. Although these topics of training are very valuable to clinical and other health care users entering the field, PACS, as an integrated system with many components that need connectivity, customization, and work flow analysis, is much broader than the display workstations and clinical experience. With the many potentials for further development, a more comprehensive education program on PACS is called for, and a PACS simulator, as a stand-alone training and research tool that does not interfere with the daily clinical operation, is necessary. This chapter discusses three topics: the new direction in PACS education, the concept of distance learning of PACS, and interactive self-learning of radiology subspecialties.
22.1 NEW DIRECTION IN PACS EDUCATION
22.1.1 PACS as a Medical Image Information Technology System
A PACS is an image information technology (IT) system for the transmission and storage of medical images, as shown in Figure 22.0. After 20 years of development, the PACS is widely adopted in clinical departments. The PACS is an IT system of components with a clinical focus; no one person can claim expertise in all of these components. A review of the history of implementation shows that the segregation of expertise between health professionals and IT professionals was a hurdle. Health professionals did not have enough IT knowledge to maintain the daily running of the PACS or to communicate with the IT specialists, and IT-based engineers lacked the knowledge of and experience in radiology departments and hospital environments needed to provide a seamless work flow. After these years, we have come to the awareness that the successful implementation and operation of a PACS requires a team effort consisting of radiologists, radiological technologists, hospital administrators, radiological clerks, nurses, and IT specialists, and hence more comprehensive training about the PACS. Not only do radiologists and radiological technologists (radiographers) need the training, but so do others such as IT personnel, system engineers, radiological administrators/clerks, and hospital administrators. Above all, the team requires a PACS manager or administrator who acts as the coordinator of the team, maintains the day-to-day operation of the PACS, and trains the different groups of staff to use and service the system. This section first reviews the training in PACS offered in the past in order to assess the additional requirements in PACS training and education. A new training model and tool will then be introduced.

22.1.2 Education and Training of PACS in the Past
The history of the training of PACS can be divided into two periods. The first period started from the time the clinical PAC systems were launched, about 10-odd years ago, and lasted until about 5 years ago; the second period is from 5 years ago to the present.

22.1.2.1 PACS Training Ten to Five Years Ago
Starting about 10 years ago, PACS was much discussed in scientific meetings. Refresher courses on PACS were offered at the annual meetings of the Radiological Society of North America (RSNA). Various manufacturers organized special private workshops run by application specialists, but the emphasis of such training was on how to retrieve images and reports and display them on the review workstations and how to use the tools in the workstations to enhance image interpretation (Protopapas et al., 1996). The audience for such training was radiologists and physicians. At about the same time, special modality "PAC systems" such as CT PACS, MR PACS, US PACS, and CR PACS were sold to radiology departments. The training packaged with such purchases consisted of special workshops for technologists and radiologists organized by the vendors. The training provided was on the use of the modality and archiving the images in the local database.
22.1.2.2 Five Years Ago to the Present
In the last 5 years, similar training has continued while changes have also evolved. Instead of didactic refresher courses, special hands-on workshops have been offered by major vendors under the auspices of the RSNA during the Annual Meetings on how to use their respective viewing workstations. The hands-on sessions usually last 1 hour each, with perhaps a second hour for "Advanced PACS," which covers customizing the display workstations to the users' preference. Interfacing of the PACS with the RIS is also demonstrated. The target audience is radiologists or physicians, but any participant of the Meetings can enroll in the workshops if interested. In the last 3 years, the Integrating the Healthcare Enterprise (IHE) [an initiative launched in 1998 with the goal of encouraging integration of information systems within the health care enterprise (http://www.rsna.org/IHE/participation/index.shtml 2001-2002)] has organized seminars and demonstrations during the RSNA Meeting to showcase the data integration capabilities between manufacturers with the aim of improving work flow and information sharing in support of better patient care. Over 20 manufacturers participated in the venture at the RSNA Meeting in 2001 (see Section 7.5).

22.1.3 New Trend of Education and Training Using the Concept of a PACS Simulator

22.1.3.1 Problems with the Past and Current Training Methods
Past training efforts demonstrated the benefits of the PACS and thus attracted professionals to consider implementing the PACS in their respective departments. However, the transition from film to digital has been a challenging task for all professionals who used to work with films (Hirchom et al., 2002; Carrino et al., 1998). It is true that the viewing workstations now encompass many of the procedures that, in the era of hard copy film, ran from film development to image interpretation. This has made the use of workstations a major emphasis in PACS training. However, although they are the most frequently used component, the viewing workstations are just one component of the PACS. Queries and retrievals from workstations are simple tasks if every other component runs smoothly. Nevertheless, a smooth work flow is never guaranteed, because there are too many factors that could cause problems for the system. A single point of failure in the network could hold the system in suspense and the users in agony. The number of such points should be minimized. Merging of past and current images could result in "loss" of images in some modalities, and it must be sorted out. Below are a few examples of queries frequently raised by clinical professionals (see Chapter 18):
• How can images be sent to another workstation or a remote site?
• Why can some images shown in the worklist not be read?
• Why can a new modality not be connected to the PACS even though it claims to be DICOM compliant?
• How can I know that there is no image loss during transmission of a whole exam?
• Where is the bottleneck when multiple workstations query/retrieve simultaneously?
• How can we troubleshoot when the workstations cannot query/retrieve images from the PACS archive server?
• How can we make a clinical PACS archive server continuously available?
• How can I access images for teaching purposes?
• What do I do with the PACS at "downtime"? What will be the impact?
These are hiccups that may or may not be related to the workstations. Of course, if there were on-site IT engineers, the solutions to these problems might come more easily. However, these are very practical clinical events that the IT engineer may or may not be able to help with. Moreover, few manufacturers will provide such a level of support except for the first few weeks of operation of a new modality. Although remote manufacturer support may be available, it cannot replace the services of a full-time PACS administrator or manager (Edward, 2001; Beird, 1999, 2000). It can be seen that the existing training on the viewing workstations can serve the needs of some professionals but is not adequate for the implementation and daily operation of the PACS. PACS training is usually conducted at the clinical site. However, scheduling the training sessions in a clinical department during working hours is not easy: either the radiologists or physicians are busy or the workstations are being used (Protopapas et al., 1996). Stability of the system as well as security and privacy of the patients' data are added concerns.

22.1.3.2 New Trend: Comprehensive Training to Include System Integration
Unlike an imaging modality, which is usually a stand-alone unit, the PACS involves integration of tens to hundreds of components, including all imaging modalities and other information systems such as the RIS or HIS. The difficulties of its installation have been described in the literature (Pilling, 1999). For its implementation, knowledge of the department/hospital requirements (in terms of workload and speed), its work flow, and the networking infrastructure, as well as the budget, are some prerequisites. Strategy development, market survey, specification of requirements (Pilling, 1999), involvement of users in planning, technical support (Crivianu-Gaita et al., 2000), and training people to utilize the PACS to its full potential (Watkins, 1999) are essential factors for success. Although viewing workstation workshops are still necessary for training clinicians, nurses, and clerks, past field experience indicates that comprehensive training that includes system integration in the radiology/hospital work flow is the new direction for equipping operators and administrators of the PACS so that they can handle the day-to-day running of the system. The trend of PACS installation shows that as more and more such PACS managers are required, the comprehensive training program becomes indispensable. Depending on their individual levels of responsibility, the trainees should train accordingly in the areas of clinical work flow, IT, and networking.

22.1.3.3 New Tools: The PACS Simulator
Such training can be conducted in a clinical radiology department where a PACS is implemented. However, as mentioned above, finding a time slot for the training is never easy for most busy departments because PACS is a 24/7 clinical system. Also, it is difficult to provide hands-on experience with PACS in a hospital environment because of network stability and security requirements. A PACS simulator in a laboratory environment noninvasively connected to the clinical PACS should provide a setting for a more basic understanding of the PACS and practical experience for trainees under a more controlled environment.
A generic PACS simulator consists of eight components, as shown in Figure 22.1.

Functionality of Each Component
1. The RIS simulator randomly generates a pseudopatient registration and examination order and sends the order to the acquisition modality simulator.
2. The acquisition modality simulator (AMS) simulates the clinical imaging modalities. It inputs thousands of images of various types (CT, MR, CR, US) randomly each day from the clinical PACS (the patient's name and ID are deleted first, and a pseudoname and ID are then assigned) to simulate the production of digital images in a clinical department. The AMS simulates the generation of DICOM exams based on the RIS simulator order and sends the exams to the DICOM gateway automatically or manually. (A minimal sketch of this step appears after the trainee list below.)
3. The DICOM gateway receives DICOM images of any modality and automatically sends them to the PACS controller. It verifies the image format and reformats non-DICOM images into the DICOM format. More important, it verifies the successful transmission of images; in other words, it guarantees no loss of images. If the PACS controller fails to receive images, the DICOM gateway will keep the images until the PACS controller resumes its function.
4. The PACS controller is the heart of the PACS (see Section 6.1.2). In addition to receiving and archiving DICOM images from modalities, it is responsible for the automatic or on-demand distribution of images to the PACS web server and viewing workstations.
Figure 22.1 A generic PACS simulator with eight components and a noninvasive connection to the clinical PACS.
5. The PACS web server is a PACS application server that converts DICOM images to web-based images (see Section 13.5) for distribution to and retrieval by web clients.
6. The viewing workstation is for the viewing of images. It allows physicians and radiological technologists to query and retrieve images for viewing or for quality assurance. The images can be sent automatically or on request to the workstation. In most workstations with a good graphical user interface, image processing tools are incorporated to facilitate radiologists' reading and reporting. A pseudodiagnostic report can be generated for the exam and sent back to the RIS simulator for storage.
7. The web display clients can retrieve images from the web server in web-based image format.
8. The PACS monitoring system is connected to all components in the PACS simulator. It monitors the work flow and data flow of each component. Normal operations or artificial interruptions of the simulator are recorded and can be reviewed at the system monitor for tracing the work flow or sources of error.

What Trainees Can Learn from the Simulator
1. Observe the clinical PACS operation, component by component. DICOM-compliant images triggered by the RIS simulator are generated at the acquisition modality simulator (AMS) and manually sent to the DICOM gateway. On receiving the images from the AMS, the DICOM gateway automatically sends them to the PACS controller. Once the PACS controller receives the images, it stores them in the database and archive device for long-term storage. It also distributes the images to the web server and viewing workstations, either automatically or by query/retrieve. The web server and viewing workstations put the images into the local database and hard disk and have them ready for clinical review.
2. Trace the flow through each component as described in 1. Trainees can observe the image flow from the logging messages on the screen of each component as well as from the PACS monitoring system.
3. Query and retrieve images from workstations or web clients and observe the image flow. Trainees can query and retrieve images from the viewing workstations and web clients and observe the image flow messages on the screens of the workstations, web clients, web server, PACS controller, and PACS monitoring system.
4. Review images at workstations. Trainees can review the images at the viewing workstations and web clients and use the image processing tools to enhance image interpretation.
5. Induce failure in a component to observe its impact on the PACS operation. Because the PACS simulator is stand-alone, trainees can manually induce failures in any component and observe their impact on the PACS operation. For example, trainees can disconnect the DICOM gateway from the network while the AMS is sending images to it. The images will then not be archived at the PACS controller and will not be available for review, either. The PACS monitoring system will record such a failure event.
6. Identify image flow bottlenecks. Trainees can launch query/retrieve simultaneously at multiple workstations and web clients and watch the image flow on the workstations. This is possible because the PACS controller and the web server can process multiple tasks concurrently. Usually the speed of image transmission will be slower than when only one workstation launches a query/retrieve.
7. Troubleshoot the common problems encountered in the daily running of a clinical PACS. There may be minor problems in the daily running of a clinical PACS, for example, a power cut to a DICOM gateway machine, disconnection of a viewing workstation from the network, or software bugs in some PACS components. These problems can be simulated in the PACS simulator for trainees to learn how to troubleshoot them.
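As a concrete illustration of component 2 above, the sketch below shows what the acquisition modality simulator does with a clinical image: it strips the real patient name and ID, assigns a pseudo-identity, and forwards the exam to the DICOM gateway with a C-STORE. The pydicom/pynetdicom toolkit, host name, port, and AE titles are assumptions of this sketch, not part of the simulator software described in the text.

# A minimal, hedged sketch of the AMS behavior: pseudonymize, then forward.
import uuid
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

GATEWAY_HOST, GATEWAY_PORT, GATEWAY_AET = "dicom-gw.example.edu", 11112, "DICOM_GW"

def pseudonymize(ds):
    """Replace identifying fields with a simulator-generated pseudo-identity."""
    ds.PatientName = "SIM^PATIENT"
    ds.PatientID = f"SIM{uuid.uuid4().hex[:8].upper()}"
    ds.PatientBirthDate = ""
    return ds

def send_exam(paths):
    ae = AE(ae_title="MOD_SIM")                     # hypothetical AMS AE title
    ae.requested_contexts = StoragePresentationContexts
    assoc = ae.associate(GATEWAY_HOST, GATEWAY_PORT, ae_title=GATEWAY_AET)
    if not assoc.is_established:
        raise RuntimeError("Gateway unreachable; keep the exam for a later retry")
    try:
        for path in paths:
            status = assoc.send_c_store(pseudonymize(dcmread(path)))
            print(f"{path}: status 0x{status.Status:04X}")   # 0x0000 means success
    finally:
        assoc.release()

send_exam(["clinical_ct_001.dcm"])    # hypothetical image taken from the clinical PACS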
22.1.4 Examples of PACS Simulator
The following paragraphs describe two examples of such a PACS simulator for training and research purposes.

22.1.4.1 PACS Simulator and Training Programs at the Hong Kong Polytechnic University
The first example is from the Hong Kong Polytechnic University (PolyU). It has a Radiography Division under the Department of Optometry and Radiography. The Division offers a B.Sc. (Hon) degree in radiography with a specialty in either medical imaging or radiation therapy for radiographers. It also offers M.Sc. and Ph.D. programs as well as continuing education courses. In addition to the academic programs, it operates a radiography clinic in a joint venture with the University Health Services. It provides general radiography and mammography services for university staff and students. Clients from the public and private sectors can also have examinations performed under special arrangement. The installation of the PACS simulator at the PolyU started in 2000 (Tang et al., 2001; Law et al., 2001; Law and Huang, 2003). It was built on a generic PACS simulator with clinical imaging modalities such as CR and a film scanner connected to it. Figure 22.2 shows the major components in the current PACS at the PolyU. As the university does not have CT or MR scanners, it relies on the modality simulator to generate those images for simulation. To streamline the work flow, there are three DICOM gateways for images from different modalities. There are altogether 12 viewing workstations (10 in an Image Management Laboratory and 2 in the PACS Laboratory) with the Cedara I-Report software and other manufacturers' workstation software installed for viewing images. The PACS server is also connected to a web server (Cedara, Toronto, Canada). The PolyU PACS is also connected to a workstation with I-Report software in the University Health Services, using the university network (Ethernet at 100 Mbps), and to a teleradiology site for an external consultant radiologist, using a broadband connection at 1.5 Mbps. The PolyU PACS simulator is a training tool for the undergraduate and postgraduate programs. For the undergraduate program, the basic principles of the PACS are incorporated into a course called Medical Informatics. Students are briefed on the concept of the PACS and its components. In addition, they practice image transmission, query, and retrieval using the workstations.
For the postgraduate and continuing education programs, the target audiences of the courses are the operators of the PACS, and thus the direction of the training is geared toward the whole system with a clinical and practical emphasis. There are two training goals. The first is to provide the trainees with an understanding of the PACS and its work flow as a system. The second is to enable trainees to do some basic troubleshooting of the PACS. The training is planned at two levels, basic and advanced. The basic course is a refresher course to update practitioners who have not been exposed to digital images or the PACS. The advanced level is for practitioners who will perform the daily operation of the PACS. Both courses have hands-on components. Tables 22.1 and 22.2 give an outline of the training content of both courses. The trainees enjoy the various tasks they are able to perform during the hands-on sessions. They also make full use of those occasions to relate their learning to their work situations and to discuss hypothetical problems with tutors. Trainees with different responsibilities and from different departments have different needs of their own. As expected, the courses so far have been found suitable for the operator level. During our discussions, those from the managerial level expressed more concern about the design and work flow planning, potential problems in PACS implementation, and budgeting. They were more interested in exchanging their experiences with different vendors. In view of this, cost analysis and implementation of PACS for managers could be another course offered in the education programs.
Figure 22.2 (A) PACS in Department of Optometry and Radiography, the Hong Kong Polytechnic University. Setting of the PACS Simulator Lab and Image Management Lab and other imaging modalities. (B) Work flow of the PolyU PACS.
TABLE 22.1 Basic PACS Training at PolyU: Course Contents

Lectures
1. From film to filmless
2. Concept of digital images
3. PACS components and functions
4. IT concepts in PACS: image size, network issues, and communication speed
5. Standards used: DICOM standards and TCP/IP, IHE concept
6. PACS work flow

Hands-On Workshop
1. (i) Acquiring and transmitting images from modalities: (a) CR, (b) US, (c) scanner, (d) modality simulator. (ii) Monitoring completion of transmission at the PACS monitor
2. Nonstandard input of patient data (error simulation exercise)
3. Use of display software to query and retrieve images transmitted in 1
4. Comparing viewing monitors
TABLE 22.2 Advanced PACS Training at PolyU: Course Contents

Lectures
1. PACS data flow design and monitoring
2. DICOM SOP class and scheduled work flow in IHE
3. Basics in connecting imaging and computer components to the PACS with DICOM networking concepts and DICOM communications
4. Adding new modalities to PACS
5. PACS archive server
6. Teleradiology
7. PACS value-added application

Hands-On
1. Adding components to PACS
2. Setup and configuration of PACS components: adding different viewing stations, adding a digital scanner, adding a US scanner
3. Hardware and software debugging: simulation of hardware failure (network failure); software debugging (wrong DICOM configuration, software components)
4. Work flow design exercise or estimation of archive need
[Figure 22.3 diagram: fault-tolerant PACS simulator in which a modality simulator sends images through a DICOM gateway to the fault-tolerant archive server (RAID and DLT) and on to two diagnostic workstations.]
Figure 22.3 Architecture of PACS simulator at IPI, University of Southern California, with connection to the clinical PACS for live images. The bottom figure was taken at the InfoRAD Exhibit, RSNA 2002, Chicago. From left to right: PACS monitoring system, RIS simulator, modality simulator, DICOM GW, FT PACS controller and server, WS, and WS.
22.1.4.2 PACS Simulator and Training in the Image Processing and Informatics Laboratory, University of Southern California The second example of a PACS simulator is found in the Image Processing and Informatics Laboratory (IPI), Department of Radiology, University of Southern California (USC). Its architecture is illustrated in Figure 22.3, and the hardware and software components and functions are given in Table 22.3. The USC PACS simulator is connected to the clinical PACS, which provides a supply of images to the AMS (acquisition modality simulator). Unlike the PACS Lab at the PolyU, which offers training for undergraduate radiographers/technologists, the IPI is both a training and a research laboratory.
TABLE 22.3 PACS Simulator Hardware and Software Components at IPI, USC

1. RIS simulator
Hardware: Dell Optiplex GX150, PIII 1.8 GHz, 128 MB memory, 20 GB hard disk, 10/100-Mbps Ethernet card
Software: Windows 2000/XP, ORACLE 8i Windows client*, IE browser; application: RIS simulator software package

2. Acquisition modality simulator
Hardware: Dell Optiplex GX150, PIII 1.8 GHz, 128 MB memory, 20 GB hard disk, 10/100-Mbps Ethernet card
Software: Windows 2000/XP, ORACLE 8i Windows client*, Microsoft Office 2000; application: modality simulator software package

3. DICOM gateway
Hardware: Dell Optiplex GX150, PIII 1.8 GHz, 256 MB memory, 40 GB hard disk, 10/100-Mbps Ethernet card
Software: Windows 2000/XP, Microsoft Office 2000; application: DICOM gateway software package

4. PACS controller
Hardware: SUN Ultra 2, UltraSPARC 300 MHz, 512 MB memory, 30 GB hard disk, 10/100-Mbps Ethernet card
Software: Solaris 2.6/2.7/2.8, ORACLE Enterprise 8i for Solaris; application: PACS controller software package

5, 8. PACS monitor
Hardware: Dell Optiplex GX150, PIII 1.8 GHz, 128 MB memory, 20 GB hard disk, 10/100-Mbps Ethernet card
Software: Windows 2000/XP, ORACLE 8i Windows client*, IE browser, Microsoft Office 2000; applications: PACS monitor software package, Apache web server

6, 7. Viewing or web client workstations
Hardware: Dell Precision 330, PIII 2.0 GHz, 512 MB memory, 40 GB hard disk, 10/100-Mbps Ethernet card
Software: Windows NT 4.0/2000; application: Cedara VR Read 4.0 or web client software

* Requires the ORACLE Enterprise 8i server on Solaris.
This PACS simulator was developed as a stand-alone training and research tool for radiologists, new PACS administrators, relevant health care providers, and engineers. With the in-house PACS simulator (noninvasively connected to the clinical PACS) and PACS expertise at various levels, clinical and engineering, the IPI has organized advanced and basic PACS training courses for various categories of personnel using the PACS. These include residents, fellows, radiologists, technologists, PACS administrators, and managers. Table 22.4 gives an example of a comprehensive training curriculum for PACS administrators. It is assumed that the trainees already have a certain depth of IT knowledge above the basic level. Three teaching approaches are used in the curriculum: lectures and tutorials, hands-on sessions, and clinical site visits. The training is more technically oriented, although clinical components are also offered. In addition to the topics covered by the PolyU training, the IPI PACS administrators' course also includes PACS
TABLE 22.4 Training Curriculum for PACS Administrators at IPI, USC

Lectures and tutorials
• Introduction to computer systems and networks: UNIX/PC/Mac systems; networking basics; server processes; storage devices; database; UNIX server and network administration basics; system and network troubleshooting
• Basics of PACS: basic concepts and standards; PACS components: HIS/RIS, PACS broker, modality, DICOM gateway, PACS controller, diagnostic workstations, and web server
• Clinical PACS work flow: review of work flow; examples of work flow pitfalls; clinical perspective on display workstations and PACS work flow
• PACS implementation: implementation methodology; identification of resources; implementation pitfalls
• Advanced technologies and teleradiology: Internet 2; telemedicine and teleradiology; fault-tolerant PACS server; disaster recovery solution; PACS archive upgrade; PACS security; ASP backup archive
• Future PACS applications: design of PACS application servers (Web, PC, UNIX); PACS-based CAD server

Hands-On
• PACS components: demonstration and hands-on
• Adding components: configuration and setup of components with the PACS simulator
• Troubleshooting: troubleshooting each component and the entire PACS simulator and work flow
• Fault-tolerant system: system faults and fail-over demonstration
• Archive: backup and upgrade

Clinical site visits
• Clinical PACS overview; follow through live clinical examinations; walk through daily PACS operations; troubleshooting clinical PACS: issues and examples
implementation, fault tolerance, backup and upgrade of the archive, data recovery solutions, and research applications of the PACS. In addition to serving the training functions described for the generic PACS simulator, this PACS simulator has also been used to test the fault tolerance module added to the PACS server (see Chapter 15). During the test, the clinical environment was simulated by continuously feeding images from the modality simulator to the fault-tolerant PACS server 24 hours/day, 7 days/week for 3 months to test its continuous availability (CA). This demonstrates that the PACS simulator can be a good stand-alone test bed for any new radiology imaging component, software or hardware. Because the PACS simulator supports most DICOM-compliant devices and software, users can test whether a new component can be connected to an existing PACS without having to attach the device to a clinical PACS. The PACS training classes continue to be offered as stand-alone 1- or 2-week workshops, but they have also branched out into two three-credit courses in the Department of Biomedical Engineering (BME), School of Engineering, for senior undergraduate and graduate students, as well as radiology residents and fellows. Together with other required courses, these lead to a Master of Science (M.Sc.) degree in Medical Imaging and Informatics under the BME Department.
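The continuous-availability test described above ran the modality simulator against the fault-tolerant server around the clock for three months. The loop below is a simplified sketch of how such a feed-and-log test might be scripted with pydicom/pynetdicom; the server address, AE titles, file names, and 30-second interval are assumptions for illustration, not the IPI test harness.

```python
import time
from datetime import datetime
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import ComputedRadiographyImageStorage

SERVER = ("ft-pacs.example.edu", 104, "FT_ARCHIVE")   # hypothetical fault-tolerant server

def send_once(path: str) -> bool:
    """Send one image; return True only if the C-STORE completed with success status."""
    ae = AE(ae_title="MODALITY_SIM")
    ae.add_requested_context(ComputedRadiographyImageStorage)
    assoc = ae.associate(SERVER[0], SERVER[1], ae_title=SERVER[2])
    if not assoc.is_established:
        return False
    status = assoc.send_c_store(dcmread(path))
    assoc.release()
    return bool(status) and status.Status == 0x0000

# Feed one image every 30 seconds, 24/7, and log any gap in availability.
while True:
    stamp = datetime.now().isoformat(timespec="seconds")
    if not send_once("sample_cr.dcm"):
        with open("availability_gaps.log", "a") as log:
            log.write(f"{stamp} send failed\n")        # a gap in continuous availability
    time.sleep(30)
```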
22.1.5 Future Perspective in PACS Training
So far the discussion of PACS training has focused on its operation in radiology. However, PACS training should not stop at the operation level, because the PACS is an evolving tool with immense prospects for development. The current PACS with its standard DICOM format is just an initial model of how medical images in radiology can be transmitted, displayed, and archived. The concept can be extended to other medical specialties where images are also used. Such extensions can be found in the development of a radiation therapy (RT) server and an image-assisted surgery system (IASS), discussed in Chapter 21. To cater for such future needs of PACS development and application, a comprehensive PACS education program should include an additional course that covers image informatics, work flow design, research, and applications of PACS. Figure 22.4 shows the hierarchy of a comprehensive PACS education. PACS Fundamentals and Advanced PACS are prerequisites to the three courses at the bottom of the hierarchy. The last three courses, PACS Administrators, PACS for Managers, and Image Informatics and PACS Applications, can be electives depending on the participant's role.
22.1.6 PACS Training Summary
The review of the experience with PAC systems and the PACS training programs reveals an urgent need to implement training programs that go beyond the viewing workstation, as more and more PAC systems are being installed and applications of the PACS to other specialties are in progress. Extending PACS education to other specialties would promote better utilization of the PACS and its concepts. Such applications can be found in specialties in which the use of images is intensive. To maximize the benefits of the PACS and to make it more popular in all clinical
[Figure 22.4 chart:
PACS Fundamentals: PACS components and functions; standards; image acquisition, transmission, query & retrieve; using display software.
Advanced PACS: PACS workflow & monitoring; networking basics; DICOM SOP class, IHE; display workstations; adding components; basic troubleshooting.
PACS Administrators: UNIX/PC/Mac systems; database management; networking, Internet 2; clinical PACS review; teleradiology; archive backup & upgrade; security; fault tolerance; disaster recovery solution.
PACS for Managers: workflow design; PACS implementation; cost analysis; identification of resources; budgeting; PACS management; training.
Image Informatics and PACS Applications: image informatics; image processing; DICOM structure; Web server; application server design; PACS-based CAD server; PACS simulator; search engine design; electronic patient record.]
Figure 22.4 The hierarchy of a comprehensive PACS education program with five modules: fundamentals, advanced, administrators, managers, and applications.
departments, a more comprehensive education program is called for, one that looks at the PACS as one whole system in the clinical work flow, not only within a department but extending to the whole hospital, with an emphasis on its applications. It should cover clinical work flow, IT and networking, and PACS-related topics such as troubleshooting, PACS implementation, and maintenance. For department managers or hospital administrators, cost analysis, training, and budgeting issues are topics of interest. The PACS simulator is the new tool for this comprehensive training.
22.2 CHANGING PACS LEARNING WITH NEW INTERACTIVE AND MEDIA-RICH LEARNING ENVIRONMENTS

In this section we introduce an innovative learning technology called simPHYSIO (simulation physiology), which uses an interactive and media-rich learning
environment. SimPHYSIO, together with the concept of distance learning and the PACS simulator discussed in Section 22.1, may provide a means to facilitate PACS learning in a fast-paced and always busy clinical environment.
22.2.1 SimPHYSIO and Distance Learning
22.2.1.1 SimPHYSIO PACS requires a diverse range of personnel to interface with an intricate and complex system. To implement PACS successfully, a comprehensive education program is needed so that personnel can interleave all of its components seamlessly. SimPHYSIO's pedagogical design and methodologies can be applied to build an educational environment for all levels of personnel involved in the integrated PACS. SimPHYSIO is a suite of interactive on-line teaching media designed to integrate and dynamically teach complex systems with media-rich animations, real-time simulations, and virtual environments in physiology, and it has proven successful for interactive learning of that subject. Its strengths also include Internet distribution, scalability, and content customization. It provides the convenience of anytime web access and a modular structure that allows personalization and customization of the learning material. Figure 22.5 shows the infrastructure of simPHYSIO and how a PACS learning module could be connected to it. Figure 22.6 depicts an example window from the Cranial Nerve Exam Simulator in the Neuro module of simPHYSIO. For details of the methods, readers are referred to C. Huang (2003).
[Figure 22.5 diagram: simPHYSIO home page (announcements, search, change password) with supporting tools and resources (users guide, login, site map, glossary, quizzes, chat, about us, contact us, credits, survey) and a row of modules: Neuro (vision, cranial nerves, sleep), Cardiovasc, Respiratory, Gastrointest, Renal, and the proposed PACS module.]
Figure 22.5 The infrastructure of the simPHYSIO. The left-hand boxes are supported tools and resources, and the bottom row contains existing modules. For example, the neuro module has 3 submodules: vision, cranial nerves, and sleep. The PACS learning can be a new module (rightmost: dotted lines) with 5 submodules: fundamentals, advanced, managers, administrators, and applications (shown in Fig. 22.4), all supported by the tools and resources.
Figure 22.6 An example showing a window in the Cranial Nerve Exam Simulator Neuro Module of the simPHYSIO. After the tutorial on learning the cranial nerves in the brain, the user can go to the patient simulator to apply the knowledge to a clinical setting. In this example, the student learns how to perform the diagnostic tests on a normal patient or a lesioned patient. (Courtesy of Drs. Huang and J. Devoss.)
22.2.1.2 Distance Learning Distance learning can be defined as a student gaining knowledge through a workstation that is physically or temporally distant from a live instructor. The ubiquity of computers and network technology targeting a wide audience can provide up-to-date information and, most importantly, promote student learning outside the classroom. A prevailing method of distance learning is the use of on-line tutorials of varying pedagogic complexity, ranging from a digital textbook to high-end interactive simulations. Although the web is sufficient for timely intranet transmission of digital, textbook-style material within an institution, complex didactic media (e.g., time-based animations, interactive virtual laboratories, or mathematically based simulations) require higher-bandwidth communications technology.
Once the PACS learning module based on the contents described in Section 22.1, along with the multimedia techniques described in Sections 22.2.1 and 22.2.2, has been developed, it can be incorporated into simPHYSIO by using its infrastructure tools and resources. It can then go live and be connected through broadband networks (e.g., Internet 2) to learners at other institutions.

22.2.2 Perspectives on Using SimPHYSIO Technology for PACS Learning

Current training for the whole PACS involves taking a course like those discussed in Section 22.1 or having an on-site trainer; both types of training require a devotion of time and coordination among many departments. The methodologies behind simPHYSIO can be applied to the development of an on-line training course for the integrated PACS based on the contents described in Section 22.1.4. We mentioned previously that the successful implementation of PACS requires a multidisciplinary team:
1. PACS manager: needs to understand the whole PACS, including hardware, software, and data flow.
2. IT personnel: need to know the components of the system, system integration, and how to address failures of the system.
3. Hospital administrators: need to understand the overall concept of how data flow through the hospital to maintain the efficiency of the work flow.
4. Technologists: need to know the work flow inside the PACS and how patient data move from one point to another.
5. Radiologists: need to learn how to use the workstation to acquire images.
6. Nurses and physicians: need to know how to access the images and find solutions to problems from IT or the administrators.
A centralized, interactive on-line training system for the integrated PACS based on the simPHYSIO technology would offer convenience, ease of distribution and access, customization/modularity, and efficient pedagogical training that would benefit its implementation and operation. A training system like simPHYSIO offers the convenience of learning the material when time is available to each of the team members. Housing a tutorial on-line provides anytime, anywhere access to the training material through a computer with an Internet browser. Because these tutorials are modular, they can be scaled and customized for each level of the team in terms of appropriateness and depth. An interactive training tutorial is also beneficial for delivering high-quality content in a dynamic way, for example, through simulations. This strategy engages the user in problem solving and in understanding the whole PACS. The resulting benefit is the coordination of the knowledge base across the different backgrounds for improved work flow.
22.3 INTERACTIVE DIGITAL BREAST IMAGING TEACHING FILE
This section discusses a self-contained interactive digital breast imaging teaching file. The goal is to supplement the standard radiology resident or fellow breast
imaging training with an interactive learning teaching file. The method can be extended to other radiology subspecialty teaching as well (Cao, 1997).
22.3.1 Background
Routine screening with periodic mammography has proven effective for the early detection of breast cancer and for lowering its death rate. Because resources and facilities that provide hands-on instruction in breast imaging interpretation are limited, there is a need to develop an alternative method to supplement or replace the current film-based teaching file method of training. A digital system can alleviate this shortage because a digital teaching file can be duplicated easily with preserved quality and disseminated widely, to be used simultaneously by many individuals at different sites for continuing medical education. We present here an interactive digital breast imaging teaching file as a training tool, built from some existing components of the medical imaging informatics infrastructure (MIII) discussed in Chapter 19.
22.3.2 Computer-Aided Instruction Model
The interactive teaching file is based on a computer-aided instruction (CAI) model. The CAI model specifies the sequence of questions, image display, instructions, and explanations of cases dynamically, based on user responses during an interactive teaching session. It permits image visualization, allows the user to detect imaging abnormalities by pointing and clicking on the 2K monitors, provides questions and displays both correct and incorrect answers, and leads the user through the analysis and management of each case in a clinically relevant sequence. Follow-up questions are also included in most cases, once the correct diagnosis and management have been determined by the user. The CAI is best explained by the general work flow of this model, shown in Figures 22.7 and 22.8. Figure 22.7 shows the work-up sequence of a question with a simple true/false or multiple choice answer, whereas Figure 22.8 involves follow-up questions based on the choice of the first answer. At the start of a case, the system first gives a brief description of the case history and then presents to the user a set of digital mammograms on a two-monitor 2K workstation (left-most box, "Exam Images," Figure 22.7; see Figure 22.10). After examining those images, the user is asked to use a mouse pointer to identify on the monitors any mammographic abnormalities, as shown in Figure 22.7. The user's selection is compared with prerecorded region of interest (ROI) data. If the user succeeds in identifying the ROI, he/she proceeds to a multiple choice or true/false question relevant to the findings; otherwise, the user must try again. If a user has not marked the abnormality after three tries, the system displays the ROI on the monitor, accompanied by an explanation; the user then proceeds to the next question. Based on the user's response to a question, the computer system presents different work-up sequences, defined in the CAI model as illustrated in Figure 22.7. The system provides instructions, explanations, and answers to the question, guiding the user toward the next question. During this process, additional images are displayed when appropriate. This work flow serves, in a structured way, to aid the user in navigating through the medical knowledge embedded in the teaching file cases.
[Figure 22.7 flowchart: Exam images → ROI select. A correct selection leads to the question (true/false or multiple choice A–E); an incorrect selection allows up to three tries, after which the ROI is shown with an explanation. An incorrect answer brings an explanation and another attempt; a correct answer brings an explanation and then the next question.]
Figure 22.7 A general algorithm for the CAI model and basic work-up sequences used for the breast imaging teaching file. ROI, True/false, and multiple choice are the three decision steps.
[Figure 22.8 flowchart: Question 1 offers choices (A)–(E); each choice leads to its own follow-up question (true/false, multiple choice (A)–(D), or mouse pointing), and incorrect answers are routed back until the correct answer is given, after which the user moves on to the next question.]
Figure 22.8 An example of nested building blocks and different work-up sequences, based on 5 possible choices, (A)–(E), for a hypothetical Question 1.
Note that a user cannot proceed beyond a multiple choice question until the question is answered correctly. The immediate repetition of an incorrectly answered question serves as a powerful educational device to reinforce correct answers. There is often more than one suitable approach to working up a specific breast imaging problem. The questions and instruction sequences built into the CAI
support this by accepting more than one choice as correct and by providing different pathways with tailored follow-up questions. Figure 22.8 shows, as an example, how the system can call up different work-up sequences and follow-up questions in responding to user choices. Marking the abnormality, multiple choice, and true/false questions are the three basic building blocks of the interactive teaching file. They are nested together to create an effective teaching tool.
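The work-up logic of Figures 22.7 and 22.8 (three attempts at ROI marking, then a question that must be answered correctly before the user may move on, followed by choice-specific follow-ups) is straightforward to express in code. The sketch below is a simplified rendering of that logic for illustration only; the objects and helper names (`case`, `ui`, and their methods) are hypothetical and do not come from the actual CAI software.

```python
def run_case(case, ui):
    """Walk one teaching-file case through the work-up sequence of Figs. 22.7-22.8.

    `case` holds the images, the ROI, and the question list; `ui` displays
    images, collects clicks and answers, and shows explanatory text.
    """
    ui.show_images(case.images)

    # ROI step: up to three attempts, after which the system reveals the ROI.
    for attempt in range(3):
        click = ui.get_roi_click()
        if case.roi.contains(click):
            break
    else:
        ui.show_roi(case.roi, explanation=case.roi_explanation)

    # Question step: an incorrectly answered question is repeated immediately.
    for question in case.questions:              # true/false or multiple choice
        while True:
            answer = ui.ask(question.text, question.choices)
            if answer in question.correct:       # more than one choice may be accepted
                ui.show_text(question.explanation)
                break
            ui.show_text(question.feedback_for(answer))
        # Choice-specific follow-up questions, as in Figure 22.8.
        for follow_up in question.follow_ups.get(answer, []):
            ui.ask(follow_up.text, follow_up.choices)
```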
22.3.3 The Teaching File Script and Data Collection
The teaching file consists of over 1000 pathologically proven cases, each of which begins with a mammography examination no more than 7 years old. The case material demonstrates the various problem-solving imaging approaches available for reaching an accurate imaging diagnosis. Mammograms, as well as breast sonograms, CT images, and MR images (if they are included in the case), are collected from screening mammography examinations and digitized with a high-resolution film scanner (2K × 2.5K × 12 bits) if they are not already in digital format. Related medical information and image-description text from the PACS and RIS are inserted into the header records of the image files. To prepare a case for the teaching file, a radiologist uses the CAI interface to mark the appropriate ROIs on the digital images and uses the teaching file script (TFS) language to write the accompanying interactive teaching text. The TFS language is similar to, but much simpler than, the hypertext markup language (HTML) used in web home pages. It is simply an ordinary text file together with tags that tell the computer how to identify each element in the teaching file, how to query and display images, and how to respond to the teaching file user's actions. After a case is prepared, a navigational browser in the CAI package can be used to read and navigate through the TFS file, producing an interactive, response-driven teaching session.
22.3.4 Graphic User Interface
An easy-to-use graphic user interface (GUI) is used for the interactive teaching file. It responds dynamically with detailed instructions to every action of the user. Combined with on-line help, the GUI makes it easy for the user to manipulate digital mammograms and navigate through the teaching session. Each user first types in his/her name on the login screen. The login process then captures the user name and starts the interactive teaching file program. User performance scores are time-stamped and recorded in a log file. Users also can click the “COMMENTS” button to get a pop-up text editor window in which they can write their comments and suggestions on how to improve the teaching file. Figure 22.9 shows the main control screen of the interactive teaching file. Following Figure 22.9, the teaching session begins when the user selects the “START HERE” button on the top left corner, which presents the choice of starting a new session, resuming the previous session, or starting at a specific case. At the left and the middle bottom of the screen, there is an array of image tools that allow the user to do window-level adjustments, image magnification, measurement, and image shifting (up/down and left/right). At the right bottom corner, there is a depiction of a mouse with three selections for manipulating individual images on
Figure 22.9 Main control screen of the interactive teaching file. The upper text window provides the question under consideration. The user clicks a middle-row icon for the chosen answer for the question. The left column and lower row are GUI tools.
the two 2K monitors. The on-line description of mouse button function is displayed dynamically during the session. The information window at the mid-right portion of the screen displays the case and question number, the number of correct and wrong answers, as well as the total elapsed time. The user navigation buttons are located at the middle of the screen. The user must click on these buttons to make choices, answer questions, and initiate the pointer for marking abnormalities on the 2K monitors. Teaching file text (questions and answers) and system navigation instructions are shown dynamically in the large upper-central text window, according to the user's actions.
22.3.5 Interactive Teaching File as a Training Tool
Since its development, radiology residents, fellows, staff radiologists, and visiting radiologists have used the digital teaching file for continuing medical education at UCSF. The digital teaching file maintains 50 active cases, with approximately 500 digital mammograms and approximately 800 questions. It takes about 10 hours for an average user to complete a 50-case set. Using the teaching file script (TFS) and the image management tool described in Section 22.3.3, the breast imaging section constantly adds new cases, thereby replenishing the teaching file with new material.
Figure 22.10 The two-monitor 2K workstation for the interactive breast imaging teaching file. The third screen on the right depicts the main control shown in Figure 22.9.
A comprehensive user log mechanism is used to record user activities. This system documents user progress and performance by time-stamping the user activity and recording the user's correct/incorrect answers. The user is allowed to log off at any time in the middle of a session. The system automatically saves an unfinished session so that the user can resume from where he/she logged off. The user log provides performance evaluation feedback for users and can be used to test the effectiveness of the digital teaching file compared with traditional film-based methods of teaching file instruction. A mammography display must be able to portray the entire breast with such fine detail that tiny structures are readily visible. This display must be done at near real-time speed. The quality of digital mammograms is highly dependent on the original digital image, the film digitizer, and the display system, whereas the speed of image display and on-screen image manipulation is determined by the system hardware architecture and the image processing software. All these components are included in the MIII. Using a two-monitor 2K display workstation and a high-resolution film digitizer, the image quality of digitized and displayed mammograms has been found to be adequate and acceptable for interactive clinical teaching.
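To see why display speed is demanding, consider the data volume implied by the digitization parameters given in Section 22.3.3 (2K × 2.5K × 12 bits per film, stored in 16-bit words). The short calculation below illustrates the orders of magnitude; the 2-second loading target is an assumption for illustration, not a measurement from the UCSF system.

```python
# One digitized mammogram: 2000 x 2500 pixels, 12-bit data stored in 16-bit (2-byte) words.
bytes_per_film = 2000 * 2500 * 2          # about 10,000,000 bytes, i.e. roughly 10 MB
four_view_case = 4 * bytes_per_film       # a standard screening case, roughly 40 MB

print(f"one film  : {bytes_per_film / 1e6:.0f} MB")
print(f"one case  : {four_view_case / 1e6:.0f} MB")

# Moving a 40-MB case to the workstation in about 2 s (an assumed near-real-time
# target) implies a sustained throughput of roughly:
print(f"throughput: {four_view_case * 8 / 2 / 1e6:.0f} Mbit/s")
```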
PART V
ENTERPRISE PACS
CHAPTER 23
Enterprise PACS
23.1 BACKGROUND
Around the world, because of the need to improve operational efficiency and provide more cost-effective health care, many large-scale health care enterprises have been formed. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise health care network. The management of these enterprises recognizes the importance of using PACS and image distribution as a key technology for better-quality and more cost-effective health care delivery at the enterprise level. As a result, many large-scale enterprise-level PACS/image distribution pilot studies, full designs, and implementations are under way. Examples of large-scale enterprise PACS being designed, developed, and implemented around the world are:
• USA: the United States Department of Veterans Affairs Medical Centers VistA Project (Section 12.5.2)
• Europe: the Saxony TeleMed Project, Germany (Lemke, 2003)
• Asia: the Total South Korea PACS Project (Inamura et al., 2003; Kim, 2002)
We can learn from their experience in planning, design, and implementation, as well as from their cost-benefit analyses. The characteristics of these systems are:
• Scale: large enterprise level, from 39 to 399 hospitals and medical centers
• Complexity: total health care IT (information technology) integration
• Ambition: complete system deployment to the enterprise
• Costs: extremely expensive
• Difficulty of implementation: culture, resources, timeline, and overcoming legacy technologies
We have briefly introduced the concept of enterprise PACS in Section 6.6. In this chapter we provide an overall view of the current status of enterprise PACS and image distribution. The concept of enterprise-level PACS/image distribution, its characteristics, and components are discussed. Business models for enterprise-level implementation available from the private medical imaging and system integration
industry are highlighted. Two case studies based on the Hong Kong health care environment are used as an illustration of possible models of implementation.
23.2 CONCEPT OF ENTERPRISE PACS AND EARLY MODELS
23.2.1 Concept of Enterprise-Level PACS
PACS is a work flow-integrated imaging system designed to streamline operations throughout the entire patient care delivery process. One of its major components, image distribution, delivers relevant electronic images and related patient information to health care providers for timely patient care, either within a hospital or in a health care enterprise. This chapter emphasizes the enterprise-level PACS.

Enterprise-level health care delivery emphasizes the sharing of enterprise-integrated resources and the streamlining of operations. In this respect, if an enterprise consists of several hospitals and clinics, it is not necessary for every hospital and clinic to offer similar specialist services. A particular clinical service like radiology can be shared among all entities in the enterprise, resulting in the concept of a radiology expert center relying on the operation of teleradiology. Under this setup, all patients registered in the same enterprise can be referred to the radiology expert center for examinations. In this scenario, the patient being cared for becomes the focus of the operation. A single unique index, such as the patient's name/ID, would be sufficient for any health care provider in the enterprise to retrieve the patient's comprehensive record. For this reason, the data management system would not be the conventional hospital information system (HIS), radiology information system (RIS), or other organizational information system. Rather, the electronic patient record (ePR) concept will prevail.

The traditional HIS is hospital-operation oriented; sometimes several separate information subsystems must be invoked simultaneously before the same patient information can be retrieved during a visit. The concept of the ePR is that "the patient's record goes with the patient" (Section 12.5). In this operation, the ePR server has the necessary linkages to all information systems involved. Using the patient's name/ID as the search index, health care providers at an ePR client can seamlessly retrieve all relevant information about the patient across the different systems in the enterprise. The majority of current ePR systems are still restricted to textual information, but to be effective for health care delivery, images must be incorporated into the ePR server. An example was shown in Section 12.5.2.

Enterprise PACS therefore should have a means of transmitting the patient's relevant images and related data to the ePR server, and the ePR server should have an infrastructure designed to distribute images and related data in a timely manner to the health care providers at the proper locations. Because each health care enterprise has its own culture and organizational structure, the methods of transmitting images/data to the ePR, as well as the methods of distributing them to the health care providers, will differ. Figure 6.6 (reproduced here as Fig. 23.0) shows the ePR-based enterprise PACS and image distribution concept. For the numerals in the figure, see Section 6.6.
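The key operational idea in the ePR-based design is that one index, the patient's name/ID, is enough to pull together records that live in different systems. The toy sketch below shows that aggregation step; the record stores and field names are invented for illustration and are not the HA or VA implementations.

```python
from typing import Dict, List

# Hypothetical per-system record stores, each keyed by the enterprise patient ID.
HIS_RECORDS: Dict[str, List[dict]] = {}     # admissions, discharge summaries
RIS_RECORDS: Dict[str, List[dict]] = {}     # radiology orders and reports
PACS_STUDIES: Dict[str, List[dict]] = {}    # image study references (not pixel data)

def epr_lookup(patient_id: str) -> dict:
    """Assemble one electronic patient record view from the separate systems."""
    return {
        "patient_id": patient_id,
        "his": HIS_RECORDS.get(patient_id, []),
        "ris": RIS_RECORDS.get(patient_id, []),
        "imaging": PACS_STUDIES.get(patient_id, []),   # links the ePR to PACS images
    }

# Any ePR web client in the enterprise issues the same call, whichever site the
# patient visits: the record "goes with the patient."
record = epr_lookup("HK1234567")
```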
[Figure 23.0 diagram: Site 1 through Site N, each with generic PACS components and data flow (HIS database and database gateway, imaging modalities, acquisition gateway, PACS controller and archive server, application servers, web server, and workstations), connected to the enterprise data center; the primary and secondary data centers (both CA, each an SPOF) hold the electronic patient record (ePR) and long-term storage and serve ePR web clients; numerals 1–7 mark the work flow steps.]
Figure 23.0 (Fig. 6.6) Enterprise PACS and ePR with images. The enterprise data center supports all sites in the enterprise. The primary data center has a secondary data center for backup. The enterprise ePR system allows ePR with images in the enterprise to be accessible from any ePR web clients. See text in Section 6.6 for explanation of work flow steps in numerals.
23.2.2 Early Models of Enterprise PACS
Early models of enterprise PACS were based on the concept of a super PACS manager which oversees the global operations of several PAC systems belonging to the enterprise (Fig. 23.1). One of the models is the super medical image broker and archive facility (SMIBAF). Figure 23.2 and Table 23.1 show the broker functions in the work flow procedures, and Figure 23.3 and Table 23.2 illustrate the broker in the work flow and archive functions of the SMIBAF. Although this model laid the foundation for today’s ePR system concept, operation-wise there are many drawbacks; for example, there is no single index for patient information retrieval and no backup archive, and the ePR concept was not well developed.
[Figure 23.1 diagram: a PACS super-manager connected through interfaces to PACS 1, PACS 2, . . . , PACS N.]
Figure 23.1 The organization of the super PACS manager overseeing several PAC systems within the same organization.
[Figure 23.2 diagram: broker functions of the super medical image broker and archive facility (SMIBAF). PACS at Sites A, B, and C communicate with the SMIBAF and its storage subsystem; numerals 1–8 mark the work flow steps listed in Table 23.1.]
Figure 23.2 Broker functions in the work flow procedures of the super medical image broker and archive facility (SMIBAF). (See Table 23.1 for numerals.)
[Figure 23.3 diagram: broker and archive functions of the SMIBAF. PACS at Site A (a small clinic), Site B, and a large Site C communicate with the SMIBAF and its storage subsystem; numerals 1–9 mark the work flow steps listed in Table 23.2.]
Figure 23.3 Broker and archive functions of the super medical image broker and archive facility (SMIBAF). (See Table 23.2 for numerals.)
TABLE 23.1 Broker Work-Flow Procedures
1: Patient at Site A. Query request sent to SMIBAF.
2: SMIBAF queries data from Site B, Site C, . . . , until the data are located at Site C.
3: Site C returns the patient work-flow list to SMIBAF, then to Site A.
4: Site A selects images from the work-flow list and submits an image retrieval request to SMIBAF.
5: SMIBAF submits the image retrieval request to Site C.
6: SMIBAF receives the images from Site C.
7: SMIBAF transmits the images to Site A.
8: Images can optionally be stored in SMIBAF for future retrieval or distribution.
TABLE 23.2 Broker Work-Flow and Archive Procedures
1: Patient at Site B is transferred to Site A, a small clinic.
2: Site A uses the broker described in Figure 23.2 to obtain the patient's images from Site B.
3: Examination is done on the patient at Site A.
4: Patient is transferred to Site C for further examination.
5: Site A submits a request to send images to SMIBAF.
6: SMIBAF receives and stores the images.
7: SMIBAF transmits the images to Site C (the destination requested by Site A).
8: Images produced at Sites A and B are available at Site C.
9: Images from Sites A, B, and C can be archived at SMIBAF as the ePR with image distribution for future retrieval and distribution.
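The broker procedures in Tables 23.1 and 23.2 amount to a query-until-found loop followed by a retrieve-and-forward step, with optional archiving. The sketch below restates that flow in Python; the site objects and their methods are hypothetical stand-ins for the DICOM query/retrieve services an actual SMIBAF would use.

```python
def broker_query(smibaf_sites, patient_id):
    """Steps 2-3 of Table 23.1: ask each site in turn until the patient is found."""
    for site in smibaf_sites:                     # e.g., Site B, Site C, ...
        worklist = site.query(patient_id)         # hypothetical query call
        if worklist:
            return site, worklist
    return None, []

def broker_retrieve(source_site, selected_images, destination_site, archive=None):
    """Steps 5-8: retrieve the selected images and forward them to the requester."""
    images = source_site.retrieve(selected_images)     # hypothetical retrieve call
    destination_site.receive(images)
    if archive is not None:
        archive.store(images)      # optional SMIBAF archiving for future distribution
    return images
```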
23.3 DESIGN OF ENTERPRISE-LEVEL PACS
23.3.1 Conceptual Design
Most large-scale health care enterprises are organized into regional clusters (see Section 12.5.2). Each cluster operates independently but follows the guidelines of the enterprise. Most likely, the enterprise provides an overall ITS (information technology service) supporting enterprise IT design, implementation, and services; its main resources are located in the enterprise data centers. Larger clusters may have their own ITS supporting cluster-related IT requirements, and larger hospitals may also have certain ITS support with a smaller data center. Figure 23.4 depicts a conceptual design of PACS/image distribution within a cluster and its connection to the data centers and other clusters (Huang, 2002). The three major components (first column of the figure) are clinical components, IT supports, and clinical specialties. The clinical components (first row) are the flagship, acute, and convalescent hospitals and the clinics; within a hospital, radiology, A&E (ambulatory and emergency), wards, ICU, and other clinical operations are the major units. Clinical specialties (third row) include neurosurgery, orthopedics, radiation therapy, and other subspecialties; they can be located within a hospital or within a cluster, supporting other entities in the cluster. The IT support (middle row) includes both the ITS primary and secondary data centers. Other enterprises or smaller health care providers can be connected to the enterprise data centers through an HII (health care information infrastructure) gateway (GW) (see the top right of the figure).
[Figure 23.4 diagram: clinical components (flagship and acute hospitals with ICU, A&E, radiology, wards/OPD, and local web/short-term storage; a convalescent hospital with desktop displays; private health care providers through an HII gateway; other clusters), IT supports (primary and secondary data centers, each with ePR and long-term storage), and clinical specialties needing postprocessing (neurosurgery, orthopedics, radiation therapy, others) with local storage; high-speed (gigabit/OC-12) and medium-speed (100 Mbit/OC-3) network links connect them.]
Figure 23.4 Conceptual design of PACS/image distribution within a cluster and its connection to the data centers and other clusters in a health care enterprise. CA, continuous availability; SPOF, single point of failure.
23.3.1.1 IT Supports
• Networking: High-speed networks (gigabit/s or OC-12) should be used for high-volume and high-demand applications. The minimum requirement for image distribution is 100 Mbit/s or OC-3.
• Standards: DICOM standards should be used for external communications and image formatting. For web-based image distribution under the umbrella of the ePR, TCP/IP should be used.
• Display: Radiology should use 2K (2500 × 2000 or 2000 × 1600) or 1K (1600 × 1200) displays. A&E, ICUs, wards, and other clinical applications use 1K; convalescent hospitals and physicians use desktop displays. In all cases, LCD displays should be used for better image quality.
• Storage requirement: The storage requirement should be overdesigned. It is more cost-effective to overdesign than to perform data migration or change the archive architecture in the future.
• SPOF: Single points of failure (SPOFs) are at both the cluster data center and the enterprise ITS data center. Careful design using CA (continuous availability) or HA (high availability) resilience architecture (see Chapter 15) is required. Image data backup is mandatory, with the second copy off-site from the primary archive.
• Data migration: A data migration schema should be developed for migrating current image data from existing hospital archives to the enterprise architecture.
• Integration: Integrate image distribution with the ePR, both in archive and in display.
23.3.1.2 Operation Options
• Centralized versus distributed, in terms of each hospital: The conceptual design utilizes both centralized and distributed architecture. In the short term (less than 3 months; this number is adjustable), image data are archived at and distributed from the hospital data center as a distributed operation. The hospital archive distributes images through intrahospital and interhospital networks. Clinical units in the hospital can also query/retrieve images from the databases of other hospitals.
• Centralized at the data center: Archived images in the hospital are sent continuously to the enterprise data center for the centralized long-term archive. In the data center, selected image data are converted to the web-based ePR format for ePR distribution. Image data older than 3 months can be queried/retrieved from the data center with either DICOM or the web-based ePR.
• DICOM versus web: Images acquired from imaging modalities that support DICOM should be archived as DICOM images. For modalities not supported by DICOM (e.g., endoscopy images), DICOM headers should be inserted and the images archived as DICOM images. The centralized PACS archive is therefore DICOM based. When images are selected from the DICOM database for ePR distribution, the image format is converted to the web-based format used by the ePR. The accumulated web-based images are archived in the web server, either within the ePR database or in an independent web image database cross-referenced with the ePR database.
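The "DICOM versus web" option hinges on converting selected DICOM images into a lightweight web format for the ePR. A minimal sketch of such a conversion with the open-source pydicom, NumPy, and Pillow packages is shown below; it uses a simple full-range window and is only an illustration of the idea, not the HA conversion software, and the file paths are hypothetical.

```python
import numpy as np
from pydicom import dcmread
from PIL import Image

def dicom_to_web_jpeg(dicom_path: str, jpeg_path: str) -> None:
    """Convert one DICOM image to an 8-bit JPEG suitable for ePR web distribution."""
    ds = dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)

    # Simple full-range window: scale stored pixel values to 0-255.
    lo, hi = pixels.min(), pixels.max()
    scaled = (pixels - lo) / max(hi - lo, 1) * 255.0

    Image.fromarray(scaled.astype(np.uint8)).save(jpeg_path, quality=90)

# Example: prepare one study image for the web-based ePR server (hypothetical paths).
dicom_to_web_jpeg("study/IM000001.dcm", "webcache/IM000001.jpg")
```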
c23.qxd 2/12/04 5:07 PM Page 598
598 •
•
•
•
• • •
•
ENTERPRISE PACS
The enterprise data center should provide an off-site second copy image. It can either be at the secondary data center or at the cluster data center. A huge storage requirement is expected. As an example, for 300,000 conventional digital radiography procedures with one view, it will generate 3 TB of data per year (no compression). A hierarchical archive scheme should be used. The capacity of each class of storage requires a pilot implementation to verify. Local storage for clinical specialties is managed by the specialty and supported by the cluster ITS. Communication and network efficiency is very critical for a satisfactory system performance. Pilot implementation is required to obtain this data before cluster-wise and enterprise-level implementations. System availability is 24/7, and disaster recovery time should be within minutes. Image distribution method can be through both DICOM or web. A single system integrator contracted by the enterprise has the full responsibility to resolve multivendor situation. Image data security is of utmost importance. Enterprise ITS has the responsibility of implementing an acceptable scheme balancing the ease of use, cost, and security parameters.
23.3.2
Financial and Business Models
Because the enterprise PACS/image distribution is a large-scale system integration, the cost of implementation is very high. This subsection lists some of the financial models available from medical imaging manufacturers and system integration companies. A. Attach the PACS/Image Distribution with Medical Imaging Equipment Purchase Medical imaging manufacturers provide add-on PACS installation during imaging equipment purchase. The disadvantages of this model are that the add-on PACS normally is limited in scope and may not be suitable for enterprise application. The add-on PACS equipment so purchased has three disadvantages. First, most likely the add-on PACS equipment will be obsolete in 2 years. Second, it will be difficult to integrate to the ePR because the add-on PACS architecture would not be flexible. Finally, without a unified DICOM data structure in the enterprise level in the beginning, data compatibility between add-on PACS modules will create confusion in the systemwide worklist. As a result, the enterprise PACS/image distribution roll-out will be difficult. B. Outright Purchase with Maintenance Contract of PACS/Image Distribution Solution No off-the-shelf enterprise level PACS/image distribution is offered by any manufacturer, and therefore outright purchase is not possible. C. Outsourcing Outsourcing to maintain the technical components of the PACS/image distribution would be acceptable after the PACS equipment has been implemented. This assumes that the cost is justifiable. But when planning the
c23.qxd 2/12/04 5:07 PM Page 599
DESIGN OF ENTERPRISE-LEVEL PACS
599
PACS/image distribution is de novo, no outsourcing manufacturers would be knowledgeable enough to understand the intricacy of the enterprise-level operation, let alone the cluster or the hospital operation, without a long learning process. The outsourcing company would also have to learn the design of the enterprise ePR and how to interface PACS to its architecture. The time required for such a process would be too long to be beneficial for the enterprise operation. D. ASP Model (Application Service Provider) Currently, ASP is attractive for smaller subsets of the PACS/image distribution. Supporting off-site archive, long-term image archive/retrieval or second copy archive, DICOM-web server development, and web-based image database are some workable models. But for a large comprehensive enterprise PACS/image distribution, the ASP model requires further investigation by the enterprise working with a suitable manufacturer. The advantages and disadvantages of the ASP model versus equipment procurement are shown in Table 23.3. E. Pay-per-Procedure Pay-per-procedure is attractive in a smaller scale operation. This model is popular for smaller clinics where IT support is scarce. This model becomes difficult in larger health care operations containing many facets. In this situation, the formula of pay-per-procedure becomes complex. Other disadvantages are similar to those of the outsourcing and ASP models. F. Software Purchase Only A new model is software purchase only. The enterprise first designs its own image distribution, architecture, hardware, and workstation distribution. The enterprise then decides what software can be implemented in-house and what must be purchased. Most likely, all PACS-related software would have to be purchased. The enterprise then negotiates with a manufacturer to procure the necessary software. The procurement would include licensing, installation, upgrade, training, and maintenance. The enterprise purchases its own hardware.
TABLE 23.3 Concerns
ASP Model Considerations: Some of the Primary ASP Benefits and
ASP “Upsides” 1. Minimizes the initial capital procurement investment 2. May accelerate the potential return on investment 3. Reduces the risk of technology obsolescence 4. Vendor assumes the equipment costs as image volume increases 5. Provides a flexible growth strategy 6. Eliminates concerns regarding required space in data center
ASP “Downsides” 1. Historically, the ASP model has been more expensive over a 2–4 year timeframe (vs. a capital purchase). 2. Customer relinquishes ownership and management of some/all of the PACS equipment. 3. Financial equity in solution is minimized. 4. Future customizations to meet unique customer needs may be limited.
c23.qxd 2/12/04 5:07 PM Page 600
600
ENTERPRISE PACS
The VAHE VisitA Imaging component described in Section 12.5.2 uses this model for those VA Hospitals that do not want to purchase a PACS from a manufacturer and would want the direct support from the VA IT Department. VA IT then purchases the software and designs the image distribution for the hospital. G. Loosely Coupled Partnership A loosely coupled partnership is defined as the enterprise and the clusters as a group forming a partnership with a manufacturer. The partners share some defined responsibility in the planning, design, implementation, and operation. The procurement is similar to the outright purchase but with a favorable discount because of certain contributions from the enterprise. This model is being used by many PACS installations now. However, most of these partnerships are for one-time installation between the hospital and the manufacturer. The advantage of this model is a lower price of the purchase, which to many hospitals is attractive. But for an enterprise with many hospitals, this model would not be attractive because it would be difficult to have full enterprise-level integration. H. Tightly Coupled Partnership A tightly coupled partnership between the enterprise and a system integrator/manufacturer is very attractive for enterprise-level PACS/image distribution implementation. Certain contributions from both partners are necessary: •
•
•
DESIGN OF ENTERPRISE-LEVEL PACS
601
23.3.3 The Hong Kong Hospital Authority Healthcare Enterprise— An Example In this section, we present the design of the enterprise PACS and image distribution plan of the Hong Kong Hospital Authority (HK HA) based on the concept developed in Sections 23.3.1 and 23.3.2. Hong Kong, including Kowloon and the New Territories, has a population of about 7 million, as shown in Figure 23.5. Two major health care organizations are the HK HA and the Department of Health. The former is in charge of the operation of all public hospitals, whereas the latter takes care of the health of the citizens. HA was founded in 1990 as the supermanager of all 44 public hospitals, about 93% of the hospital market in Hong Kong. It is organized into seven geographical clusters. Table 23.4 shows the workload of HA. Several years ago, HA developed an in-house clinical management system (CMS) supporting all hospitals with about 4000 workstations installed. It beta-tested its in-house ePR system (without images) in 2002 at two hospitals. Several hospitals have partial PACS, and one of them is ready for total filmless operation. In 2002, a consultancy study was commissioned to integrate PACS with image distribution in the ePR system (Huang, 2002). Table 23.5 weighs the suitability of various technical, financial, and business models described in Section 23.3.2 for the HK HA enterprise PACS development. The tightly coupled partnership is the preferred business model. Figure 23.6 depicts the recommended model taking the route of the specific mixed plus ePR model. A pilot project is planned to implement this
Figure 23.5 The Hong Kong Hospital Authority wide area network supporting 44 hospitals. (Courtesy of HK HA.)
c23.qxd 2/12/04 5:07 PM Page 602
602
ENTERPRISE PACS
TABLE 23.4 The Hong Kong Hospital Authority Estimated Annual Workload •
In 1999/2000 – 1,089,330 inpatient discharges – 8,216,700 specialist outpatient attendances – 2,361,600 accident and emergency attendances – About 93% of the Hong Kong hospital market
•
In 2000/2001 – Budget of HK$28,029 Million (US$3.6 Billion)
TABLE 23.5 Technical vs. Financial and Business Models Suitable for HK HA Enterprise PACS Implementation
Generic centralized model Generic distributed model Generic mixed model Specific mixed model with ePR
Bundle with Equipment Purchase
Outright Purchase with Maintenance Contract
Small scale
Small scale
Application Service Provider
Tightly Coupled Partnership
Lock-in
✓
Lock-in Lock-in
✓ ✓
model in one of the seven clusters including the flagship hospital, an acute hospital, two special clinics, and the two ambulatory and emergency wards and key specialties in both the flagship and the acute hospitals. Figure 23.4 shows the architecture of the plan. It is now in the planning of the implementation stage. Some of the materials in this section were contributed by the consultancy team as well as personnel in the HK HA.
23.4
AN EXAMPLE OF AN ENTERPRISE-LEVEL CHEST TB SCREENING
In Section 23.3, we gave the design of the HK HA enterprise PACS in general terms. This section gives a step-by-step view starting from the workload and work flow level to the system architectural design of a health care enterprise-level chest TB (tuberculosis) screening in Hong Kong under the jurisdiction of the Department of Health (DH). Some of the materials were contributed by personnel from the DH, Hong Kong (Huang, 2002). 23.4.1
Background
23.4.1.1 Chest Screening at the Department of Health, Hong Kong For TB patients, the DH in Hong Kong has adopted directly observed treatment (DOT) to ensure that TB patients take their medicine. To facilitate patients in the DOT, patients are encouraged to attend any chest clinics in HK for the DOT and the follow-ups. In general, the patient’s file and chest films are kept in the clinics they first attended. During these follow-up sessions, there is sometimes a need to examine
c23.qxd 2/12/04 5:07 PM Page 603
AN EXAMPLE OF AN ENTERPRISE-LEVEL CHEST TB SCREENING
603
Secondary Data Center PC at wards/ clinics
ePR
Long Term Storage
Work Flow Mgr
Primary Data Center
Image Postprocessing application
Diagnostic Workstation at strategic sites e.g. Radiology, ICU
• Short-term storage • Workflow
e.g. Navigation and planning
Gateway to external interfaces • Short-term storage • Workflow
e.g. ePR
• Short-term storage • Workflow
Figure 23.6 Technical option for the Hong Kong Hospital Authority enterprise PACS implementation. (Courtesy of W Chan, HA.) Upper left, two data centers; lower left, workstations for PACS; upper right, Web-based PC for wards and clinics; lower middle, special applications; lower right, ePR with images.
the patient's previous chest X-ray films. Getting the films is often a problem when the patient was previously seen at a different clinic. In addition, the TB screening and chest service of the DH has been collecting patients' chest X-ray films continuously over the years. Chest films of TB patients are kept for life; only films inactive for 6 years are destroyed. The service is facing storage and image distribution problems for these X-ray films and the corresponding patient files.
23.4.1.2 Radio-Diagnostic Services, Department of Health
Current Operation Conditions—Workload According to 2000 and 2001 statistics from the Radio-Diagnostic Services, the DH has 27 centers, plus 4 units in the Correctional Services Department, providing radiological services. These centers include chest X-ray centers, mobile units, survey centers, women's health centers, and student health service centers. Together they examine approximately 270,000 chest cases and produce 320,000 examinations per year. Nine of these centers produce more than 20,000 examinations per year; each of the rest performs close to or fewer than 10,000 examinations per year. Each of the two mobile units services three sites and produces about 10,000 examinations per year. There are currently approximately 5 million films in the DH film library. The roughly 300,000 cases per year comprise new cases and return visits. Of the return visits, two-thirds are short term; the rest are long term, over 3 months. One-third of the new cases have historical examinations; if these cannot be located, new X-rays must be taken. During a return visit, historical examinations are often required for comparison, and most returning patients go back to the same service center. Operating conditions in the mobile units are quite different from those in the service centers. Each week the two mobile units operate 18 two-hour sessions: 11 regular sessions and 7 prison sessions. Each session produces between 40 and 70 images.
23.4.1.3 Analysis of the Current Problems
Centralized TB Patients Database The TB patients database is managed by the DH at the primary care level at 18 chest clinics (12 full time and 6 part time) and, at the secondary care level, by 5 hospitals with dedicated TB units belonging to the Hong Kong Hospital Authority. These clinics and hospitals are dispersed throughout the Hong Kong territory (see Fig. 23.5). There is a need to centralize patient data to facilitate patient management and the conduct of DOT. Because the X-ray diagnosis is usually made by assessing the chest films on the same premises where the X-rays are taken, it is advantageous to have the images distributed back to those premises for review. These requirements fit the framework of enterprise-level PACS/image distribution, and the problem can be addressed with the enterprise-level chest TB screening PACS/image distribution system discussed in this section. Based on these data, the following is a plan for the infrastructure of a digital medical image archiving and communication system (MIACS) for the TB and Chest Service of the DH.
23.4.2 The Design of the MIACS
In view of the current clinic settings, the MIACS infrastructure for the TB and Chest Service (TB & CS) of the DH consists of seven major components (a sketch of the topology follows the list):
• A data center with the PACS server and archive and the RIS (radiology information system)
• An off-site backup server and archive
• An acquisition and display module (ADM) at each chest center or clinic
• A mobile site module (MSM) at each mobile site
• Mobile unit modules (MUM) to support the mobile units
• Communication networks connecting these components
• A communication protocol to the healthcare information infrastructure (HII) gateway, the future health care distribution unit for every citizen in Hong Kong
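To illustrate how these seven components hang together, the following is a minimal Python sketch of the MIACS topology as a plain data structure. The connection details are read off Figures 23.7 through 23.9, and the names (MIACS_TOPOLOGY, routes_to) are illustrative only, not part of any actual MIACS software.

```python
# Minimal sketch of the MIACS topology described above.
# Connections are assumptions based on Figures 23.7-23.9; all names are illustrative.

MIACS_TOPOLOGY = {
    "data_center": {
        "houses": ["PACS server & archive", "RIS"],
        "connects_to": ["off_site_backup", "ADM", "MSM", "MUM", "HII_gateway"],
    },
    "off_site_backup": {
        "houses": ["backup server & archive"],
        "connects_to": ["data_center"],
    },
    "ADM": {  # one per chest center or clinic
        "houses": ["RIS WS", "CR unit", "acquisition gateway", "PACS WS", "film digitizer"],
        "connects_to": ["data_center"],
    },
    "MSM": {  # one per mobile site
        "houses": ["RIS WS", "PACS WS", "film digitizer", "acquisition gateway"],
        "connects_to": ["data_center", "MUM"],
    },
    "MUM": {  # inside each mobile van; wireless or cable link
        "houses": ["CR unit", "acquisition gateway"],
        "connects_to": ["MSM", "data_center"],
    },
}

def routes_to(component: str) -> list[str]:
    """Return the components a given module communicates with."""
    return MIACS_TOPOLOGY[component]["connects_to"]

if __name__ == "__main__":
    for name, spec in MIACS_TOPOLOGY.items():
        print(f"{name}: {', '.join(spec['connects_to'])}")
```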
Figures 23.7, 23.8, and 23.9 depict the infrastructure design of the MIACS, the ADM, and the MSM and MUM, respectively.
Figure 23.7 Infrastructure design of the digital medical image archiving and communication system (MIACS) for the services of the Department of Health based on PACS/image distribution technologies. ADM, Acquisition and Display Module (see Fig. 23.8).
23.4.2.1 Descriptions of Components
The Data Center The data center houses two information systems, the PACS and the RIS.
Specifications of the PACS (MIACS, Fig. 23.7)
a. A server with the capacity to service the imaging archive and distribution requirements of 20 chest centers acquiring over 300,000 images per year.
b. A hierarchical archive system with a RAID (redundant array of inexpensive disks) for short-term (3-month) storage and a DLT (digital linear tape) library with SAN (storage area network) technology for long-term (6-year) storage. Assuming 300,000 new images per year, or about 30,000 new images per month, at 10 Mbytes per CR image, the capacity requirements (without compression) are:
DLT library: 3 terabytes/year, or about 20 terabytes for 6 years
RAID: 300 gigabytes/month, or about 1 terabyte for 3 months
This short-term sizing assumes that no historical images are required. If an average of three older images per patient were required, the effective coverage of the short-term archive would shrink from 3 months to 1 month. For resilient short-term storage, a dual-controller RAID should be used. The RAID would retain the most up-to-date images by using a first-in, first-out algorithm.
Specifications of the RIS The RIS should serve up to 30 users concurrently. It should be able to send the patient worklist to both the PACS server and the CR acquisition unit through the DICOM broker.
Off-Site Backup Server and Archive Specifications The off-site backup server and archive should have the same long-term archive capacity as the PACS archive in the data center. The PACS server automatically transmits one copy of each newly acquired image to the backup archive. It is advisable to keep an extra backup CR unit at DH headquarters for distribution to any chest center or mobile unit whose CR unit is out of service.
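The archive sizing above is straightforward arithmetic; the short Python sketch below reproduces it so the assumptions, 300,000 CR images per year (about 30,000 per month) at 10 Mbytes per uncompressed image, stay explicit. The variable and function names are illustrative only.

```python
# Back-of-the-envelope archive sizing for the MIACS data center,
# following the assumptions stated in the text.

IMAGES_PER_YEAR = 300_000
IMAGES_PER_MONTH = 30_000          # the text's rounded monthly figure
MB_PER_IMAGE = 10                  # uncompressed CR image

def storage_tb(n_images: int) -> float:
    """Storage in terabytes, using 1 TB = 1,000,000 Mbytes."""
    return n_images * MB_PER_IMAGE / 1_000_000

per_year_tb = storage_tb(IMAGES_PER_YEAR)                  # 3.0 TB/year
six_year_tb = storage_tb(IMAGES_PER_YEAR * 6)              # 18 TB, quoted as ~20 TB
per_month_gb = IMAGES_PER_MONTH * MB_PER_IMAGE / 1_000     # 300 GB/month
three_month_tb = storage_tb(IMAGES_PER_MONTH * 3)          # 0.9 TB, quoted as ~1 TB

print(f"DLT library: {per_year_tb:.1f} TB/year, {six_year_tb:.0f} TB over 6 years")
print(f"RAID: {per_month_gb:.0f} GB/month, {three_month_tb:.1f} TB over 3 months")
```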
Figure 23.8 An acquisition and display module (ADM) at a chest X-ray center connected to the data center’s PACS server and the backup server.
Acquisition and Display Module (ADM) Specifications for each site (Fig. 23.8)
• A RIS workstation (WS) connected to the RIS at the data center
• A CR unit connected to the RIS for automatic acquisition of the patient worklist
• An acquisition gateway connecting the CR unit to the PACS server at the data center
• A PACS WS with two LCD displays for primary diagnosis (2000 lines, 3 megapixels). The WS should have a minimum of 1 week of local storage (for a busy center, about 700 images/week), or a 7- to 10-gigabyte hard disk.
• A laser film digitizer with an acquisition gateway to digitize selected historical films for comparison viewing. Several chest centers may share a laser film digitizer and the associated components.
The MSM and the MUM
Figure 23.9 The mobile unit module (MUM) and the mobile site module (MSM) connected to the data center’s PACS server and the backup server.
Mobile Site Module (MSM) Specifications (Fig. 23.9) The MSM is located at the clinic where mobile unit (X-ray) services are provided. It consists of:
• A RIS WS connected to the RIS at the data center
• A PACS WS with two LCD displays (2000 lines). The WS should have a minimum of 1 week of local storage (about 700 images/week), or a 7- to 10-gigabyte hard disk.
• A laser film digitizer with an acquisition gateway to digitize selected historical films for comparison viewing. Several mobile sites may share a laser film digitizer and associated components.
Mobile Unit Module (MUM) Specifications (Fig. 23.9) The MUM is located inside the mobile van and requires a special network connection to the mobile site it services.
• No RIS WS is needed.
• A CR unit connected to the RIS for automatic acquisition of the patient worklist; the connection may be wireless or via communication cables.
• An acquisition gateway connecting the CR unit to the PACS at the data center; a wireless connection may be considered once the transmission speed becomes acceptable.
The MSM and the MUM complement each other where mobile services are provided.
Communication Networks Specifications At the data center, about 1000 images arrive per day, or 10,000 Mbytes (80,000 Mbits). With 86,400 seconds per day (24 h/day × 60 min/h × 60 s/min), a bandwidth of about 1 Mbit/s is needed to accommodate the input alone. Each image must also be transmitted two more times, to the respective WS at the chest center and to the backup archive, so a conservative estimate is a minimum of 4 Mbits/s, which includes an extra 1 Mbit/s for tolerance. If historical images must also travel with the patient image folder, the required bandwidth is higher still; a minimum of 10 Mbits/s would be needed to accommodate the communication requirements at the data center. The off-site backup server and archive requires a 2 Mbits/s network connection. Each chest center receives about 100 new images, or 1000 Mbytes, per day, corresponding to roughly 100 Kbits/s without counting historical images; for this reason a 1 Mbit/s connection to the data center is required. Each mobile site module likewise requires a 1 Mbit/s connection to the data center. At the mobile unit module, a wireless antenna connected to the mobile site module as well as to the data center is needed. Benchmarking: the network requirements specified here should be benchmarked during the system design phase.
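The bandwidth figures above follow from the same image volume and size assumptions; a minimal Python sketch of that arithmetic is given below, with the rounding used in the text noted in comments. The constants and function name are illustrative, not part of any deployed system.

```python
# Rough network sizing for the MIACS, following the text's assumptions:
# 1000 images/day arriving at the data center, 10 Mbytes (80 Mbits) per image.

SECONDS_PER_DAY = 24 * 60 * 60            # 86,400 s
MBITS_PER_IMAGE = 10 * 8                  # 10 Mbytes -> 80 Mbits

def avg_mbps(images_per_day: int, copies: int = 1) -> float:
    """Average bandwidth (Mbit/s) to move the daily volume, spread over 24 h."""
    return images_per_day * MBITS_PER_IMAGE * copies / SECONDS_PER_DAY

# Data center: inbound only (~1 Mbit/s), then inbound plus two outbound copies
# (chest-center WS and backup archive) plus ~1 Mbit/s of headroom.
inbound = avg_mbps(1000)                  # ~0.93 Mbit/s
with_copies = avg_mbps(1000, copies=3)    # ~2.8 Mbit/s
conservative = with_copies + 1            # ~4 Mbit/s minimum; 10 Mbit/s recommended

# Each chest center: ~100 new images/day -> ~100 Kbit/s, hence a 1 Mbit/s link.
chest_center = avg_mbps(100)

print(f"Data center inbound: {inbound:.2f} Mbit/s")
print(f"Data center, 3 copies + headroom: {conservative:.1f} Mbit/s")
print(f"Chest center: {chest_center * 1000:.0f} Kbit/s")
```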
23.4.2.2 Work Flow of the Digital MIACS Following Figures 23.7, 23.8, and 23.9, the work flow of the MIACS is as follows.
Current Examination (Numbered Steps)
(1) A patient registers at the center; the information is transmitted to the DH data center RIS and/or to the CR (computed radiography) unit.
(2) The RIS triggers the patient worklist to be sent to the CR unit, where the patient information is registered in the CR image header.
(3) The data center sends the patient image folder to the PACS WS.
(4) The patient's CR examination is performed, and the images are sent to the acquisition gateway (GW) for staging and reformatting.
(5) The GW sends a copy of the images to the data center and to the PACS display WS.
(6) The data center PACS registers the new images in the patient folder and sends them to be appended to the folder already at the WS (if they are not already there from Step 5).
(7) The PACS sends the new images to the off-site backup server and archive.
(8) Physicians review the images at the PACS WS. If a report is available, it is routed to the RIS at the data center for future retrieval.
Historical Examinations (Lettered Steps) If all the required historical images are already in the patient image folder, nothing needs to be done. If some historical images are required:
(A) The RIS WS alerts the film library clerk to locate the patient film folder.
(B) The film clerk, with advice from the physician or radiographer, selects the historical films to be digitized.
(C) The film clerk or radiographer digitizes the films using the laser film scanner with the proper patient ID and information. The digitized images, in DICOM format, are sent to the GW.
(D) The GW forwards the images to the PACS at the data center, where they are appended to the patient image folder and forwarded to the PACS WS (Route 6) and to the backup server (Route 7).
The work flow conditions are similar at the mobile site module and the mobile unit module (see Figs. 23.8 and 23.9).
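To make the image routing of the current-examination work flow concrete, the sketch below records steps (1) through (8) as a simple ordered table of sender, receiver, and payload. It is a descriptive model assumed from the list above, not an implementation of the MIACS, and the names used are illustrative only.

```python
# Descriptive model of the MIACS current-examination work flow (steps 1-8).
# Each entry: (step, sender, receiver, what is transferred).

CURRENT_EXAM_FLOW = [
    (1, "chest center", "data center RIS / CR unit", "patient registration"),
    (2, "RIS", "CR unit", "patient worklist (into CR image header)"),
    (3, "data center PACS", "PACS WS", "patient image folder"),
    (4, "CR unit", "acquisition gateway", "new CR images"),
    (5, "acquisition gateway", "data center + PACS WS", "copy of new images"),
    (6, "data center PACS", "PACS WS", "new images appended to folder"),
    (7, "data center PACS", "off-site backup archive", "new images"),
    (8, "physician at PACS WS", "data center RIS", "report (if available)"),
]

def trace(flow):
    """Print the routing table in reading order."""
    for step, sender, receiver, payload in flow:
        print(f"({step}) {sender} -> {receiver}: {payload}")

if __name__ == "__main__":
    trace(CURRENT_EXAM_FLOW)
```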
23.4.2.3 Current Status The digital MIACS has been designed as a solution to the chest screening problem described above. The approach can be extended to other types of screening involving many distributed clinics in a defined geographic region.
23.5 SUMMARY OF ENTERPRISE PACS AND IMAGE DISTRIBUTION
Enterprise PACS/image distribution is the remaining frontier of PACS implementation. The characteristics of enterprise PACS are:
• Very large system integration
• Takes several years to implement
• Expensive
• Requires contributions by both the enterprise and the imaging manufacturer/system integrator
• Requires both PACS and Web-based technology
• Image distribution to referring and clinical sites would be Web-based through the ePR
• Enterprise PACS with ePR-based image distribution would be cost-effective and efficient for large-scale health care if implemented properly
The several large-scale enterprise-level PACS/image distribution examples around the world mentioned in this chapter have shown that such implementations are inevitable, cost-effective, and beneficial to health care delivery. We have presented the design concept of enterprise-level PACS, emphasizing enterprise-level IT support, system architecture options, and the advantages and disadvantages of various business models. The tightly coupled partnership model between the enterprise and the manufacturer is recommended. Two examples of designing health care enterprise-level PACS and image distribution in Hong Kong have been given.
REFERENCES
Chapter 1 Brody, W.R., Ed. “Conference on Digital Radiography” Proc. SPIE—The International Society for Optical Engineering, Vol. 314, September 14–16, 1981. Capp, M.P., Nudelman, S., et al. Photoelectronic Radiology Department. Proc. SPIE—The International Society for Optical Engineering, Vol. 314, 2–8, 1981. Digital Imaging and Communications in Medicine (DICOM). National Electrical Manufacturers’ Association. Rosslyn, VA: NEMA, 1996; PS 3.1. Duerinckx, A., Ed. Picture Archiving and Communication Systems (PACS) for Medical Applications. First International Conference and Workshop, Proc. SPIE, Vol. 318, Part 1 and 2, 1982. Dwyer, S.J., III, et al. Cost of Managing Digital Diagnostic Images. Radiology, 144, p. 313, 1982. Hruby, W., and Maltsidis, A. A view to the past of the future—a decade of digital revolution at the Danube hospital. In: Hruby, W., ed. Digital (R)evolution in Radiology. Vienna: Springer Publishers; 2000. Huang, H.K. Elements of Digital Radiology: A Professional Handbook and Guide. Prentice-Hall, Inc., N.J., April, 1987. Huang, H.K., Ratib, O., et al., Ed. Picture Archiving and Communication System (PACS). NATO ASI F Series. Springer-Verlag, Germany, 1991. Huang, H.K. Picture Archiving and Communication Systems in Biomedical Imaging. VCH Publishers, NY, p. 489, 1996. Huang, H.K. Picture Archiving and Communication Systems: Principles and Applications. Wiley & Sons, NY, p. 521, 1999. Huang, H.K. Editorial: Some Historical Remarks on Picture and Communication Systems, Comp Med Imaging & Graphics V27, Issues 2–3, 93–99, June, 2003. Huang, H.K. Enterprise PACS and Image Distribution, Comp Med Imaging & Graphics V27, Issues 2–3, 241–253, 2003. Inamura, K. PACS Development in Asia. Comp Med Imaging & Graphics V27, Issues 2–3, 121–128, June, 2003. JAMIT. First International Symposium on PACS and PHD, Proc. Medical Imaging Technology, Vol. 1, 1983. Law, M., and Huang, H.K. Concept of a PACS and Imaging Informatics-Based Server for Radiation Therapy Comp Med Imaging & Graphics, V. 27, 1–9, 2003. Lemke, H.U. “A Network of Medical Work Stations for Integrated Word and Picture Communication in Clinical Medicine”, Technical Report, Technical University Berlin, 1979.
Lemke, H.U., Vannier, M.W., et al. Computer Assisted Radiology and Surgery (CARS). Proc. 16th International Congress and Exhibition. Paris. CARS 2002. Lemke, H.U. PACS Development in Europe. Comp Med Imaging & Graphics V27, Issues 2–3, 111–120, June, 2003. MITRE/ARMY. RFP B52-15645 for University Medical Center Installation Sites for Digtial Imaging Network and Picture Archiving and Communication System (DIN/PACS), October 18, 1986. Mogel, G.T. The Role of the Department of Defense in PACS and Telemedicine Research and Development. Comp Med Imaging & Graphics V27, Issues 2–3, 129–135, June, 2003. Mun, S.K., Ed. Image Management and Communication—The First International Conference. IEEE Computer Society Press. Washington, D.C. June 4–8, 1989. Niinimaki, J., Ilkko, E., and Reponen, J. Proceedings of the 20th EuroPACS Annual Meeting, Oulu, Finland, 5–7 September, 2002. Steckel, R.J. Daily X-ray Rounds in a Large Teaching Hospital Using High-resolution Closedcircuit Television. Radiology, 105: 319–321, Nov, 1972.
Chapter 2 Barrett, H.H., and Swindell, W. Radiological Imaging: The Theory of Image Formation, Detection, and Processing. Academic Press, 1981. Benedetto, A.R., Huang, H.K., and Ragan, D.P. Computers in Medical Physics. Ed., New York: American Institute of Physics, 1990. Beutel, J., Kundel, H., and Van Metter, R.L., Ed. Handbook of Medical Imaging Volume 1. Physics and Psychophysics, SPIE Press, Bellingham, Washington. P. 949, 2000. Bertram, S. “On the Derivation of the Fast Fourier Transform,” IEEE Trans. Audio and Electroacoustics, Vol. AU-18, March 1970, pp. 55–58. Bracewell, R. The Fourier Transform and its Applications. McGraw-Hill, 1965. Bracewell, R.N. “Strip Integration in Radio Astronomy,” Australian Journal of Physics, Vol. 9, 1956, 198–217. Brigham, E. O. The Fast Fourier Transform. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1974, pp. 148–183. Cochran, W.T., et al. “What is the Fast Fourier Transform?” IEEE Trans. Audio and Electroacoustics, Vol. AU-15, June 1967, pp. 45–55. Curry, T.S. III, Dowdey, J.E., and Murry, R.C. Jr. Introduction to the Physics of Diagnostic Radiology, 4th, Ed. Philadelphia: Lea & Febiger, 1990. Dainty, J.C., and Shaw, R. Image Science. Academic Press, 1974, Chap. 5. Gonzalez, R.G., and Wood, R.E. Digital Image Processing, Addison-Wesley Publishing Company, Inc., 2nd Ed. 2002. Hendee, W.R., and Wells, P.N.T. The Perception of Visual Information. Ed. 2 Ed., New York: Springer, 1997. Huang, H.K. “PACS—Picture Archiving and Communication Systems in Biomedical Imaging”, VCH/Wiley & Sons, New York, New York, 1996. Kim, Y., Horii, S.C., Ed. Handbook of Medical Imaging Volume 3 Display and PACS, SPIE Press, Bellingham, Washington. P. 512, 2000. Robb, R.A. Three-Dimensional Biomedical Imaging: 1st Ed. New York: VCH/Wiley & Sons, 1997. Rosenfeld, A., and Kak, A.C. Digital Picture Processing. 2nd Ed. Academic Press, 1997. Rossman, K. “Image Quality,” Radiologic Clinical of North America, Vol. VII, No. 3, 1969.
Sonka, M., Fitzpatrick, J.M., Ed. Handbook of Medical Imaging Volume 2. Medical Imaging Processing and Analysis, SPIE Press, Bellingham, Washington. P. 1218, 2000.
Chapter 3 Cao, X., and Huang, H.K. Current Status and Future Advances of Digital Radiography and PACS. IEEE Eng. Med. Biol. Vol. 19, No. 5, 2000, pp. 80–88.
Chapter 4 Feldkamp, L.A., Davis, L.C., and Kress, J.W. Practical cone-beam algorithm. J. Optical Soc. Am. A. V1, 1984, pp. 612–619. Stahl, J.N., Zhang, J., Chou, T.M., Zellner, C., Pomerantsev, E.V., and Huang, H.K. A New Approach to Tele-conferencing with Intravascular Ultrasound and Cardiac Angiography in a Low-Bandwidth Environment. RadioGraphics, Vol. 20, 2000, pp. 1495–1503. Taguchi, K., and Aradate, H. Algorithm for image reconstruction in multi slice helical, CT. Med. Phys. Vol. 25(4), 1998, 550–561.
Chapter 5 Albanesi, M.G., and Lotto, D.I. Image Compression by the Wavelet Decomposition, Signal Processing, Vol. 3, No. 3, 1992, pp. 265–274. Antonini, M., Barlaud, M., Mathieu, P., and Daubechies, I. Image coding using wavelet transform, IEEE Trans. on Image Processing, Vol. 1, 1992, pp. 205–220. Cohen, A., Daubechies, I., and Feauveau, J.C. Biorthogonal bases of compactly supported wavelets, Communication on Pure and Applied Mathematics, Vol. XLV, 1992, pp. 485– 560. Daubechies, I. Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math., Vol. 41, 1988, pp. 909–996. Lightstone, M., and Majani, E. Low bit-rate design considerations for wavelet-based image coding, Proceeding of the SPIE, Vol. 2308, 1994, pp. 501–512. Mallat, S.G. A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7, 1989, pp. 674– 693. Strang, G., and Nguyen, T. Wavelets and Filter Banks, Wellesley-Cambridge Press, 1995. Villasenor, J.D., Belzer, B., and Liao, J. Wavelet filter evaluation for image compression, IEEE Trans. on Image Processing, Vol. 4, No. 8, 1995, pp. 1053–1060. Wang, J., and Huang, H.K. Three-dimensional medical image compression using wavelet transformation, IEEE Trans. on Medical Imaging, Vol. 15, No 4, 1996, pp. 547–554. Wang, J. “Three-Dimensional Medical Imaging Compression Using Wavelet Transformation with Parallel Computing”, Ph.D. Dissertation, UCLA, 1997. Wang, J., and Huang, H.K. Three-Dimensional Image Compression with Wavelet Transforms, in, Ed. Bankman, I.N., et al. Handbook of Medical Imaging, Chapter 52, Academic Press, 2000, pp. 851–862.
Chapter 6 Huang, H.K. Enterprise PACS and Image Distribution, Comp. Med. Imaging & Graphics Vol. 27, Nos. 2–3, 2003, pp. 241–253.
Chapter 7 American Medical Association. Current procedural terminology 2001. Chicago, Ill: AMA, 2001. Carr, C., and Moore, S.M. IHE: A Model for Driving Adoption of Standards. Comp Med Imaging & Graphics Vol. 27, Issues 2–3, June, 2003. 137–146. Channin, D., Parisot, C., Wanchoo, V., Leontiew, A., and Siegel, E.L. Integrating the healthcare enterprise: a primer. III. What does IHE do for me? RadioGraphics 2001a;21:1351–1358. Channin, D.S., Siegel, E.L., Carr, C., and Sensmeier. Integrating the healthcare enterprise: a primer.V. The Future of IHE. RadioGraphics 2001b; 21:1605–1608. Channin, D.S. Integrating the healthcare enterprise: a primer. II. Seven brides for seven brothers: the IHE Integration Profiles. RadioGraphics 2001; 21:1343–1350. DICOM Standard 2003, http://medical.nema.org/ DICOM Standard 2003, http://www.dclunie.com/dicom-status/status.html#BaseStandard2001 Health Level Seven, http://www.hl7.org/ Henderson, M., Behel, F.M., Parisot, C., Siegel, E.L., and Channin, D.S. Integrating the healthcare enterprise: A primer. IV. The role of existing standards in IHE RadioGraphics 2001;21:1597–1603. HL7 Version 3.0: Preview for CIOs, Managers and Programmers, http://www.neotool.com/ company/press/199912_v3.htm#V3.0_preview Huff, S.M., Rocha, R.A., McDonald, C.J., et al. Development of the logical observation identifier names and codes (LOINC) vocabulary. J Am Med Inform Assoc 1998;5:276–292. International Classification of Diseases, 9th revision. Washington, DC: U.S. Department of Health and Human Services, 2001. Publication 91-1260. The Internet Engineering Task Force, http://www.ietf.org Java web service tutorial, http://java.sun.com/webservices/docs/1.1/tutorial/doc/ JavaWSTutorial.pdf Oracle8 SQL Reference Release 8.0, Oracle 8 Documentation CD. Radiological Society of North America Healthcare Information and Management Systems Society. IHE technical framework, year 3, version 4.6. Oak Brook, Ill: RSNA, March 2001. Recently Approved Supplements, http://medical.nema.org/ Siegel, E.L., and Channin, D.S. Integrating the Healthcare Enterprise: A Primer—Part 1. Introduction. RadioGraphics 2001; 21:1339–1341. SQL, http://searchdatabase.techtarget.com/sDefinition/0,,sid13_gci214230,00.html XML, http://searchwebservices.techtarget.com/sDefinition/0,,sid26_gci213404,00.html
Chapter 8 Herrewynen, J. Step-by-Step Integration, Decisions in Imaging Economics, December 2002. www.imagingeconomics.com/library/200212-13.asp www.commedica.com www.mitra.com
Chapter 9 Arbaugh, William, A., Shankar, and Wan. “Your 802.11 Wireless Network has No Clothes.” March 30, 2001. Paper available from http://www.cs.umd.edu/~waa/wireless.pdf.
Borisov, Nikita, Goldberg, and Wagner. “Intercepting Mobile Communications: The Insecurity of 802.11.” Published in the proceedings of the Seventh Annual International Conference on Mobile Computing And Networking, July 16–21, 2001. Paper is at http://www.isaac .cs.berkeley.edu/isaac/mobicom.pdf; conference page is http://www.research.ibm.com/ acm_sigmobile_conf_2001. Gast. “802.11 Wireless Networks: The Definitive Guide”. O’Reilly, 2002. http://www.freesoft.org/CIE/RFC/792/index.htm. http://www.interne2.edu http://www.novatelwireless.com/pcproducts/g100.html http://www.siemenscordless.com/mobile_phones/s46.html Huang, C. Changing learning with new interactive and media-rich learning environments: Virtual Labs Case Study Report, Comp Med Imaging & Graphics Vol. 27, Issues 2–3, 2003, pp. 157–164. Huang, H.K. Enterprise PACS and Image Distribution, Comp Med Imaging & Graphics V27, Issues 2–3, 2003, pp. 241–253. Making the Choice: 802.11a or 802.11g, http://www.80211-planet.com/tutorials/article/ 0,,10724_1009431,00.html Mogel, G.T., Cao, F., Huang, H.K., Zhou, M., Liu, B.J., and Huang, C. Internet 2 Performance for Medical Imaging Applications. In Internet 2 RSNA Demos. Radiology V225P, p. 752, 2002. Wi-Fi Alliance home page: http://www.weca.net/ Yu, F., Hwang, K., Gill, M., and Huang, H.K. Some Connectivity and Security Issues of NGI in Medical Imaging applications. J High Speed Networks. 9, 2000, pp. 3–13.
Chapter 10 Compaq/Tandem’s Non Stop Himalaya: http://www.tandem.com. HIPAA: http://www.rx2000.org/KnowledgeCenter/hipaa/hipfaq.htm. Huang, H.K. PACS: Basic Principles and Applications, Wiley & Sons, 1999. Huang, H.K., Cao, F., Liu, B.J., Zhang, J., Zhou, Z., Tsai, A., and Mogel, G. Fault-Tolerant PACS server design. SPIE Medical Imaging, Vol. 4323-14, pp. 83–92, February 2001. Huang, H.K., Cao, F., Liu, B.J., Zhang, J., Zhou, Z., Tsai, A., and Mogel, G. Fault-Tolerant PACS Server, SPIE Medical Imaging, Vol. 4685-44, pp. 316–325, Feb. 2002. Huang, H.K., Liu, B.J., Cao, F., Zhou, M.Z., Zhang, J., Mogel, G.T., Zhuang, J., and Zhang, X. PACS Simulator: A Standalone Educational Tool. SPIE Medical Imaging, Vol. 4685-21, February 2002. Liu, B.J., Huang, H.K., Cao, F., Documet, L., and Sarti, D.A. A Fault-Tolerant Back-Up Archive Using an ASP Model for Disaster Recovery, SPIE Medical Imaging, Vol. 4685-15, pp. 89–95, Feb. 2002b. Zhou, Z., Cao, F., Liu, B.J., Huang, H.K., Zhang, J., Zhang, X., and Mogel, G. A Complete Continuous-Availability PACS Archive Server Solution, SPIE 2003.
Chapter 11 Li, M., Wilson, D., Wong, M., and Xthona, A. The Evolution of Display Technologies in PACS Applications. Comp Med Imag Graphics 27, 2003, 175–184.
Chapter 12 Huang, H.K., Andriole, K., Bazzill, T., et al. “Design and implementation of a picture archiving and communication system: the second time”, Journal of Digital Imaging, Vol. 9, pp. 47–59, 1996.
Chapter 13 Bosak, J. In: XML, Java, and the Future of the Web, http://www.ibiblio.org/pub/sun-info/ standards/xml/why/xmlapps.htm Cao, X., Hoo, K.S., Zhang, H., et al. Web-based multimedia information retrieval for clinical application research, SPIE Proc. 2001;4323:350–358. Components and Web Application Architecture, http://www.microsoft.com/technet. CORBA® BASICS, http://www.omg.org/gettingstarted/corbafaq.htm Huang, H.K. In: Pacs Basic Principles and Applications: Chapter 12, “Display Workstation”, New York, Wiley-Liss Press, 1999. Java component architecture, http://java.sun.com/products/javabeans/ Kim, Y., and Horii, S.C. In: Hand Book of Medical Imaging: Volume 3, “Display and PACS”, Bellingham, Washington, SPIE Press, 2000. NCompass ActiveX Plugin Proc: http://www.ncompasslabs.com/ Sakusabe, T., Kimura, M., and Onogi, Y. On-demand server-side image processing for Web-based DICOM image display. SPIE Proc. 2000;3976:359–367. Zhang, J., Stahl, J.N., Huang, H.K., et al. Real-time teleconsultation with high resolution and large volume medical images for collaborative health care, IEEE Trans. Information Technologies in Biomedicine, 2000;4:178–186. Zhang, J., Zhou, Z., Zhuang, J., et al. Design and implementation of picture archiving and communication system in Huadong Hospital, SPIE Proc. 2001;4323:73–82. Zhang, J., Sun, J., and Stahl, J.N. PACS and Web-based image distribution and display. Comp Med Imaging & Graphics V27, Issues 2–3, June, 2003, 197–206.
Chapter 14 ACR/NEMA, Digital Imaging and Communications in Medicine (DICOM): Version 3.0. Washington, DC: ACR/NEMA Standards Publication, 1993. Barbakati, N. X Window System Programming, 2nd ed., Indianapolis, SAMA Publishing, Inc., 1994. Chatterjee, A., and Maltz, A. “Microsoft DirectShow: A new media architecture,” SMPTE Journal, Vol. 106, pp. 865–871, 1997. Erickson, B.J., Manduca, A., Palisson, P., Persons, K.R., Earnest, F., 4th, Savcenko, V., and Hangiandreou, N.J. “Wavelet compression of medical images,” Radiology, Vol. 206, pp. 599–607, 1998. Freier, A.O., Karlton, P., and Kocher, P.C. “The SSL Protocol Version 3.0,” Internet Draft, http://home.netscape.com/eng/ssl3/draft302.txt, 1996. Huang, H.K. “Towards the digital radiology department”, Euro. J. of Rad., Vol. 22, p. 165, 1996. Huang, H.K. “Teleradiology technologies and some service models”, Computerized Medical Imaging and Graphics, Vol. 20, pp. 59–68, 1996. Huang, H.K. Albert Wong, Andrew Lou, et al. “Clinical experience with a second-generation hospital-integrated picture archiving and communication system”, Journal of Digital Imaging, Vol. 9, pp. 151–166, 1996.
Huang, H.K., Lou, S.L., and Dillon, W.P. “Neuroradiology workstation reading room in an inter-hospital environment: a nineteen month study”, Comp. Med. Imaging & Graphics, Vol. 21, No. 5, 1997. Huang, H.K., and Lou, S.L. Telemammography: A Technical Overview, in Haus, A.G., Yaffe, M.J., Ed. RSNA Categorical Course 1999, Oak Brook, IL, 273–281, 1999. Huffmann, D.A. “A Method for the Construction of Minimum-redundancy Codes,” Proc. IRE, Vol. 40, pp. 1098–1101, 1952. Jeffrey Richter: “Chapter 11: Window messages and asynchronous input”, Advanced Windows, 3rd ed., Microsoft Press, 1997. Johannes, N., Stahl, J.N., Tellis, W., and Huang, H.K. Network Latency and Operator Performance in Teleradiology Applications. J Digital Imag. Vol. 13, No. 3, 119–123, 2000. Lempel, A., and Ziv, J. “Compression of Two-Dimensional Data,” IEEE Trans. Inform. Theory, Vol. 32, pp. 2–8, 1986. Lou, S.L., Huang, H.K., and Arenson, R. “Workstation design: image manipulation, image set handling, and display issues”, Radiological Clinics of North America, Vol. 34, No. 3, pp. 525–544, 1996. Mitchell, J.L., Pennebaker, W.B., Fogg, C.E., and LeGall, D.J. MPEG Video Compression Standard. New York: Chapman & Hall, 1997. Nagle, J. “Congestion Control in IP/TCP Internetworks,” RFC 896, 1984. J. Tobis, V. Aharonian, P. Mansukhani, S. Kasaoka, R. Jhandyala, R. Son, R. Browning, L. Youngblood, and M.Thompson,“Video networking of cardiac catheterisation laboratories,” American Heart Journal, Vol. 137, pp. 241–249, 1999. NEMA Standards Publication PS 3.3, “Digital Imaging and Communication in Medicine (DICOM)”, National Electrical Association, 1993. Nijim, Y.W., Stearns, S.D., and Mikhael, W.B. “Differentiation applied to lossless compression of medical images,” IEEE Trans. Medical Imaging, Vol. 15, pp. 555–559, 1996. Ricke, J., Maass, P., Lopez Hèanninen, E., Liebig, T., Amthauer, H., Stroszczynski, C., Schauer, W., Boskamp, T., and Wolf, M. “Wavelet versus JPEG (Joint Photographic Expert Group) and fractal compression. Impact on the detection of low-contrast details in computed radiographs,” Investigative Radiology, Vol. 33, pp. 456–63, 1998. Schroeder, W., Martin, H., and Lorensen, B. The Visualization Toolkit, 2nd ed., Prentice Hall PTR Press (USA), pp. 83–96, 1998. Sinha, A.K. Network Programming in Windows, N.T., Addison-Wesley Press (USA), 1996, pp. 199–299. Stahl, J., Zhang, J., Song, K.S., et al. “A new design for medical image display application in advanced networked environments”, SPIE, Vol. 3335–36, 1998. (in press) Stahl, J.N., Zhang, J., Zhou, X., Lou, A., Pomerantsev, E.V., Zellner, C., and Huang, H.K. “TeleConsultation for Diagnostic Video using Internet Technology,” Proceedings of IEEE Multimedia Technology & Applications Conference, pp. 210–213, 1998. Stahl, J.N., Zhang, J., Chou, T.M., Zellner, C., Pomerantsev, E.V., and Huang, H.K. A New Approach to Tele-conferencing with Intravascular Ultrasound and Cardiac Angiography in a Low-Bandwidth Environment. RadioGraphics. 20:1495–1503, 2000. Stahl, J.N., Zhang, J., Zeller, C., Pomerantsev, E.V., Lou, S.L., Chou, T.M., and Huang, H.K. Tele-conferencing with Dynamic Medical Images. IEEE Trans Inform Tech Biom, Vol. 4, No. 1, 88–96, 2000. Stevens, W.R. “Chapter 15: Advanced interprocess communication”, Advanced Programming in the UNIX Environment”, Addison-Wesley Press (USA), 1996. Wallace, G.K. “The JPEG Still Picture Compression Standard,” Communications of the ACM, Vol. 34, pp. 30–40, 1991.
Wendler, T., Monnich, K.J., and Schmidt, J. “Chapter 6:Digital Image Workstation”, Picture Archiving and Communication System in Medicine, H.K. Huang, O. Ratib, A.R. Babber, et al. ed., New York, Springer-Verlay Press, 1991. Wong, S.T.C., and Huang, H.K. “Networked multimedia for medial imagining,” IEEE Multimedia, Vol. 4, pp. 24–35, 1997. Zhang, J., Song, K., Huang, H.K., and Stahl, J.N. “Tele-consultation for thoracic imaging”, Radiology, Vol. 205(P), 742, 1997. Zhang, J., Stahl, J.N., Song, K.S., and Huang, H.K. “Real-Time Teleconsultation with High Resolution and Large Volume Medical Imaging,” PACS Design and Evaluation: Engineering and Clinical Issues, S.C. Horii and G.J. Blaine, Eds.: SPIE, Vol. 3339, pp. 185–190, 1998. Zhang, J., Stahl, J.N., Huang, H.K., Zhou, X., Lou, S.L., and Song, K.S. Real-time teleconsultation with high resolution and large volume medical images for collaborative health care. IEEE Trans Inform Tech Biom, Vol. 4, No. 1, 178–185, 2000. Ziv, J., and Lempel, A. “A Universal Algorithm for Sequential Data Compression,” IEEE Trans. Inform. Theory, Vol. 23, pp. 337–343, 1977.
Chapter 15 Avizienis, A. “Toward systematic design of fault-tolerant systems” Computer, Vol. 30, Issue: 4, pp. 51–58, April 1997. Bajpai, G., Chang, B.C., and Kwatny, H.G. “Design of fault-tolerant systems for actuator failures in nonlinear systems” Proceedings of the American Control Conference, Vol. 5, pp. 3618–3623, 2002. Cao, F., Huang, H.K., Liu, B.J., Zhou, M.Z., Zhang, J., and Mogel, G.T. “Fault-tolerant PACS Server”. Radiology, Vol. 221 (P), pp. 737, 2001. Cao, F., Liu, B.J., Huang, H.K., Zhou, Z., Zhang, J., Zhang, X., and Mogel, G. “Fault-Tolerant PACS Server”, SPIE Medical Imaging, Vol. 4685, pp. 316–325, 2002. Carrasco, J.A. “Computationally efficient and numerically stable reliability bounds for repairable fault-tolerant systems”, IEEE Transactions on Computers, Vol. 51 Issue: 3, pp. 254–268, March 2002. Compaq/Tandem’s “Non Stop Himalaya”: http://www.tandem.com del Gobbo, D., Cukic, B., Napolitano, R., and Easterbrook, S. “Fault detectability analysis for requirements validation of fault tolerant systems”, High-Assurance Systems Engineering, Proceedings of 4th IEEE International Symposium, pp. 231–238, 1999. Fabre, J.C., Perennou, T. “A meta object architecture for fault-tolerant distributed systems: the FRIENDS approach”, IEEE Transactions on Computers, Vol. 47 Issue: 1, pp. 78–95, Jan. 1998. Huang, H.K. PACS: Basic Principles and Applications, Wiley & Sons, p. 521, 1999. Huang, H.K., Cao, F., Zhang, J.G., Liu, B.J., and Tsai, M.L. Fault tolerant Picture Archiving and Communication System and Teleradiology Design. In Reiner, B., Siegel, E.L., Dwyer, S.J. Security Issues in the Digital Medical Enterprise, SCAR, Chapter 8, 57–64, 2000. Huang, H.K., Fei Cao, Jianguo Zhang. Fault tolerant Design and Implementation of the PACS Controller, 2000 RSNA InfoRAD Exhibit 9609. Huang, H.K., and Liu, B.J. coverage by Diagnostic Imaging at http://www.dimag.com/ webcast00/archives.shtml Monday, Nov 27, 2000, 9:20am PST, 2000. Huang, H.K., Cao, F., Zhang, J., Liu, B., and Tsai, M.L. “Fault tolerant picture archiving and communication system and teleradiology design”, in Security Issues in the Digital Medical
Enterprise, Chapter 8, Editor: Bruce, I. Reiner et al., Society for Computer Applications in Radiology, November 2000a. Huang, H.K., Cao, F., and Zhang, J.Z. “Fault-tolerant design and implementation of the PACS controller”. Radiology, Vol. pp. 217 (P), 519, & 709, 2000b. Huang, H.K., Cao, F., Liu, B.J., Zhou, M.Z., Zhang, J., and Mogel, G.T. “PACS Simulator: A standalone Educational Tool.” Radiology, Vol. 221 (P), pp. 688, 2001a. Huang, H.K., Cao, F., Liu, B.J., Zhang, J., Zhou, Z., Tsai, A., and Mogel, G. “Fault-Tolerant PACS server design”. SPIE Medical Imaging, Vol. 4323, pp. 83–92, 2001b. Huang, H.K., Cao, F., Liu, B.J., Zhou, M.Z., Zhang, J., and Mogel, G. “A Complete Continuous-Availability PACS Archive Server Solution”, Vol. 225 (P), pp. 692, 2002. IBM S/390 Parallel Sysplex, http://ibm.com Institute of Electrical and Electronics Engineers. “A Compilation of IEEE Standard Computer Glossaries” IEEE Standard Computer Dictionary, New York, NY: 1990. Kanoun, K., and Ortalo-Borrel, M. “Fault-tolerant system dependability-explicit modeling of hardware and software component-interactions” IEEE Transactions on Reliability, Vol. 49 Issue: 4, pp. 363–376, Dec. 2000. Latif-Shabgahi, G., Bass, J.M., and Bennett, S. “History-based weighted average voter: a novel software voting algorithm for fault-tolerant computer systems Parallel and Distributed Processing”, Proceedings of Ninth Euromicro Workshop, 2001. Liu, B.J., Huang, H.K., Cao, F., Documet, L., and Sarti, D.A. “A Fault-tolerant Back-up Archive Using an ASP Model for Disaster Recovery”. Radiology, Vol. 221 (P), p. 741, 2001. Liu, B.J., Huang, H.K., Cao, F., Documet, L., and Sarti, D.A. “A Fault-Tolerant Back-Up Archive Using an ASP Model for Disaster Recovery”, SPIE Medical Imaging, Vol. 4685, pp. 89–95, 2002. Liu, B.J., Huang, H.K., Cao, F., Documet, L., and Muldoon, J. “Clinical Experiences With an ASP Model Backup Archive for PACS Images”. Radiology, Vol. 225 (P), p. 313, 2002. Muller, G., Banatre, M., Peyrouze, N., and Rochat, B. “Lessons from FTM: an experiment in design and implementation of a low-cost fault tolerant system”, IEEE Transactions on Reliability, Vol. 45 Issue: 2, pp. 332–340, June 1996. Reiner, B.I., Siegel, E.L., and Dwyer, S.J., III. Security Issues in the Digital Medical Enterprise, Ed. Society for Computer Applications in Radiology, November 2000. Resilience Company, Ultra2 Solaris servers: http://www.resilience.com Resilience Corporation Technical Report, 2000. IMEX Research.com Smith, D.V., Smith, S., Bender, G.N., et al. Evaluation of the Medical Diagnostic Image support System based on 2 Years of Clinical Experience. J Digital Imaging, Vol. 8, No. 2, 1995, pp. 75–87. Some, R.R., Beahan, J., Khanoyan, G., Callum, L.N., and Agrawal, A. “Fault-tolerant systems design—estimating cache contents and usage” Proceedings of IEEE Aerospace Conference, Vol. 5, pp. 2149–2158, 2002. Sun Microsystem: http://www.sun.com Trivedi, K., Dugan, J.B., Geist, R., and Smotherman, M. Modeling Imperfect Coverage in Fault-Tolerant Systems. Fault-Tolerant Computing, “Highlights from Twenty-Five Years.” Twenty-Fifth International Symposium, on., pp. 176, 1995. Xu, J., Randell, B., Romanovsky, A., Stroud, R.J., Zorzo, A.F., Canver, E., and Von-Henke, F. “Rigorous development of an embedded fault-tolerant system based on coordinated atomic actions” IEEE Transactions on Computers, Vol. 51 Issue: 2, pp. 164–179 & pp. 402–409, Feb. 2002.
Chapter 16 A (Very) Brief Introduction to Cryptography, Dr lan G Graham, http://www.onthenet. com.au/~grahamis/int2010/week10/crypto.html Acken, J.M. How Watermarking Adds Value to Digital Content. Communications of the ACM, Vol. 41, No. 7, pp. 75–77, 1998. Berger, S.B., and Cepelewicz, B.B. Medical-Legal Issues in Teleradiology, Am J Roentgenolo., Vol. 166, 1996, pp. 505–510. Berlin, L. Malpractice Issue in Radiology-Teleradiology. Am J Roentgenolo., Vol. 170, 1998, pp. 1417–1422. Cao, F., Liu, B.J., Huang, H.K., Zhou, M.Z., Zhang, J., Zhang, X., and Mogel, G. FaultTolerant PACS Server. SPIE Medical Imaging, Vol. 4685–44, pp. 316–325, 2002. Complete e-Business Security for your Applications, http://www.rsa.com/products/ bsafe/brochures/BSF_BR_1100.pdf Craver, S., and Yeo, B.L. Technical Trials and Legal Tribution. Communications of the ACM, Vol. 41, No. 7, pp. 45–54, 1998. Digital Imaging and Communications in Medicine (DICOM). National Electrical Manufacturers’ Association. Rosslyn, VA: NEMA, 1996. Dwyer, S.J. Requirements for Security of Medical Data. In Reiner, B., Siegel, E.L., Dwyer, S.J. Security Issues in the Digital Medical Enterprise, SCAR, Ch 2, 9–14, 2000. Garfinkel, S., and Spafford, G. Practical Unix and Internet Security. O’Reilly & Associates, Inc., CA, 1996, pp. 139–190. Hellemans, A. Internet Security Code is Cracked. 1999. Science Vol. 285, pp. 1472–1473. HIPAA, http://aspe.os.dhhs.gov/admnsimp, U.S. Department of Health and Human Services HIPAA, http://www.rx2000.org/KnowledgeCenter/hipaa/hipfaq.htm Huang, H.K. Teleradiology Technologies and Some Service Models. J Comp Med Imag & Graphics. Vol. 20, No. 2, 59–68, 1996. Huang, H.K., Wong, A.W.K., Lou, S.L., Bazzill, T.M., et al. Clincial Experience with a Second Generation PACS. J Digital Imag. Vol. 9, No. 4, 151–166, 1996. Huang, H.K., Wong, A.W.K., and Zhu, X. Performance of Asynchronous Transfer Mode (ATM) Local Area andd Wide Area Networks for Medical Image Transmission in Clinical Environment. J Comp Med Imag & Graphics 21(3), 165–173, 1997. Huang, H.K. Picture Archiving and Communication Systems: Principles and Applications. Wiley & Sons, NY, p. 521, 1999. Huang, H.K., and Lou, S.L. “Telemammography: A Technical Overview”, RSNA Categorical Course in Breast Imaging, 1999, 273–281. Huang, H.K., and Zhang, J. Automatic Background Removal in Projection Digital Radiography Images. US Patent, No. 5,903,660. May 11, 1999. Huang, H.K., Cao, F., Zhang, J.G., Liu, B.J., and Tsai, M.L. Fault tolerant Picture Archiving and Communication System and Teleradiology Design. In Reiner, B., Siegel, E.L., Dwyer, S.J. Security Issues in the Digital Medical Enterprise, SCAR, Ch 8, 57–64, 2000. Huang, H.K., Cao, F., Liu, B.J., Zhou, M., Zhang, J., and Mogel, G.T. PACS Simulator: A Standalone Education Tool, #CE100620, Type: Computer, Category 1 CME: 12:00–1:00pm daily, Education Exhibits. 87th Radiological Society of North America, RSNA2001 Presentations, November 25–30, 2001, Chicago, Illinois. Introduction to SSL, http://developer.netscape.com/docs/manuals/security/sslin/index.htm James, A.E., Jr, James, E. III, Johnson, B., and James, J. Legal considerations of Medical of Medical Imaging. Leg Med. 1993, pp. 87–113.
Kaliski, B.S., Jr. An Overview of the PKCS Standards. An RSA Laboratories Technical Note, 1993. Kamp, G.H. Medical-Legal Issues in Teleradiology: A Commentary. Am J Roentgenolo., Vol. 166, 1996, pp. 511–512. Lou, S.L., Sickles, E.A., Huang, H.K., et al. “Full-Field Direct Digital Mammograms: Technical Components, Study Protocols, and Preliminary Results,” IEEE Transactions on Information Technology in Biomedicine, Vol. 1, No. 4, pp. 270–278, 1997. Liu, B.J., Huang, H.K., Cao, F., Documet, L., and Sarti, D.A.A Fault-Tolerant Back-up Archive Using an ASP Model for Disaster Recovery. Booth#: 9617-PACS, InfoRAD Exhibits. 87th Radiological Society of North America, RSNA2001 Presentations, November 25–30, 2001, Chicago, Illinois. Machin, D., Campbell, M., Fayers, P., and Pinol, A. Sample size tables for clinical Studies, 2nd Edition, Blackwell Science, Malden, MA, 1997. McHugh, R.B., and Lee, C.T. Confidence interval estimation and the size of a clinical trail, Controlled Clinical Trails, 1984, 5, 157–63. Memon, N., and Wong, P.W. Protecting Digital Media Content. Communications of the ACM, Vol. 41, No. 7, pp. 35–43, 1998. Mogel, G.T., Huang, H.K., Cao, F., Liu, B.J., Zhou, M., and Zhang, J. PACS Simulator: A Standalone Education Tool, #622, Tuesday 11/27, 11:06am, S404AB, Scientific Papers, 87th Radiological Society of North America, RSNA2001 Presentations, November 25–30, 2001, Chicago, Illinois. PGP SDK User’s Guide, ftp://ftp.pgpi.org/pub/pgp/sdk/PGPsdkUsersGuide.pdf Pietka, E. Image Standardization in PACS. In Handbook of Medical Imaging, Editor-in-chief, IN Bankman, Section Edtiors RM Rangayyan, RP Woods, RA Robb, HK Huang, Academic Press. 2000. Chapter 48, 783–801. PKCS #1 v2.1: RSA Cryptography Standard, RSA laboratories, http://www.rsasecurity. com/rsalabs Public Key Infrastructure (PKI) http://home.xcert.com/~marcnarc/PKI/thesis/characteristics. html Quade, D. Rank analysis of covariance. JASA, 1967, 62, 1187–1200. The RIPEMD-160 page, http://www.esat.kuleuven.ac.be/~bosselae/ripemd160.html Rivest, R. The MD5 Message-Digest Algorithm. Document of MIT Laboratory for computer Science and RSA Data Security, Inc., 1992. ftp://ftp.funet.fi/pub/crypt/hash/papers/md5.txt Rivest, R., Shamir, A., and Adleman, L. “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems”, Communications of the ACM,Vol. 21, No. 2, pp. 120–126, 1978. RSA Crypto FAQ, http://www.rsasecurity.com/rsalabs/faq Schneier, B. Applied Cryptography: Protocols, Algorithms, and Source Code, in., C. John Wiley & Sons, N.Y., 1995, pp. 250–259. SHA1 secure Hash Algorithm-Version 1.0, www.w3.org/PICS/DSig/SHA1_1_0.html Stahl, J.N., Zhang, J., Chou, T.M., Zellner, C., Pomerantsev, E.V., and Huang, H.K. A New Approach to Tele-conferencing with Intravascular Ultrasound and Cardiac Angiography in a Low-Bandwidth Environment. RadioGraphics. 20:1495–1503, 2000. Stahl, J.N., Zhang, J., Zeller, C., Pomerantsev, E.V., Lou, S.L., Chou, T.M., and Huang, H.K. Tele-conferencing with Dynamic Medical Images. IEEE Trans Inform Tech Biom, Vol. 4, No. 2, 88–96, 2000. Telemedicine and Advanced Technology Research Center (TATRC) Integrated Research Team Radiology Imaging. Program Review. TATRC, US Army Medical Research and Materiel Command, Fort Detrick, Md., Sept. 20, 2000.
Telemedicine and Telecommunications: Option for the New Century. HPCC Program Review and Summary. Program Book. National Library of Medicine, NIH, Bethesda, Md., March 13–14, 2001. Veracity Tutorial Manual, Appendix A: Glossary, http://www.veracity.com Verisign Onsite [http://www.verisign.com/products/onsite/onsite.pdf]. Walton, S. Image Authentication for a Slippery New Age. Dr. Dobb’s Journal, April 1995. Wong S.T.C., Abundo, M., and Huang, H.K. Authenticity Techniques for PACS Images and Records. SPIE Med Imaging, Vol. 2435, 1995, pp. 68–79. Yeung, M.M. Digital Watermarking. Communications of the ACM, Vol. 41, No. 7, pp. 31–33, 1998. Yu, F., Hwang, K., Gill, M., and Huang, H.K. Some Connectivity and Security Issues of NGI in Medical Imaging applications. J High Speed Networks. 9, 3–13, 2000. Zhang, J., and Huang, H.K. Automatic Background Recognition and Removal (ABRR) of Computed Radiography Images. IEEE Trans. Medical Imaging,Vol. 16, No. 6, 762–771, 1997. Zhang, J., Stahl, J.N., Huang, H.K., Zhou, X., Lou, S.L., and Song, K.S. Real-time teleconsultation with high resolution and large volume medical images for collaborative health care. IEEE Trans Inform Tech Biom, Vol. 4, No. 2, 178–185, 2000. Zhang, J., Han, R., Wu, D., Zhang, X., Zhuang, J., and Huang, H.K. Automatic Monitoring System for PACS Management and Operation. SPIE Medical Imaging, Vol. 4685–49, pp. 348–355, 2002. Zhao, J., Koch, E., and Luo, C. In Business Today and Tomorrow. Communications of the ACM, Vol. 41, No. 7, pp. 67–72, 1998. Zhou, X., Huang, H.K., and Lou, S.L. Authenticity and Integrity of Digital Mammogrpahy Image. IEEE Trans. Medical Imaging, Vol. 20, No. 8, 784–791, 2001. Zhou, X.Q., Lou, S.L., and Huang, H.K. Authenticity and Integrity of Digital Mammographic Images. Proc. SPIE Med Imaging, 1999. Vol. 3662, pp. 138–144. Zhou, X.Q., Huang, H.K., and Lou, S.L. A Study of Secure Method for Sectional Image Archiving and Transmission. SPIE Medical Imaging, Vol. 3980, pp. 390–399, 2000. Zimmerman, P. PGP User Guide, 4 Dec 1992.
Chapter 17 Liu, B.J., Documet, L., Sarti, D.A., Huang, H.K., and Donnelly, J. PACS Archive Upgrade and Data Migration: Clinical Experiences, SPIE Medical Imaging, Vol. 4685–14, pp. 83–88, Feb. 2002a. Osman, R., Swiernik, M., and McCoy, J.M. From PACS to Integrated EMR. Comp Med Imaging & Graphics V27, Issues 2–3, 207–215, 2003.
Chapter 18 Dayhoff, R., and Siegel, E.L. Digital Imaging Within and Among Medical Facilities. In R Kolodner (ed): Computerized Large Integrated Health Networks—The VA Success. New York: Springer Publishing, pp. 473–490. Liu, B.J., Cao, F., Zhou, M.Z., Mogel, G., and Docemet, L. Trends in PACS Image Stroage and Archive, Comp Med Imaging & Graphics V27, Issues 2–3, 165–174, 2003 Reiner, B.I., Siegel, E.L., Pomerantz, S.M., and Protopapas, Z. The impact of filmless radiology on the frequency of clinician consultations with radiologists. American Roentgen Ray Society Annual Meeting, San Diego, CA, May 5–10, 1996.
Reiner, B.I., Siegel, E.L., Hooper, F.J., and Glasser, D. Effect of film-based versus filmless operation on the productivity of CT technologists. Radiology 1998 May; 207 (2): 481–485. Reiner, B.I., Siegel, E.L., Flagle, C., Hooper, F.J., Cox, R.E., and Scanlon, M. Effect of filmless imaging on the utilization of Radiologic services. Radiology. 2000 Apr; 215 (1): 163–167. Siegel, E.L., Diaconis, J.N., Pomerantz, S., Allman, R.M., and Briscoe, B. Making Filmless Radiology Work. J Digital Imaging. 1995; 8: 151–155. Siegel, E.L., Protopapas, Z., Pickar, E., Reiner, B.I., Pomerantz, S.M., and Cameron, E. Analysis of retake rates using computed radiography in a filmless imaging department. Abstract presented at the Radiological Society of North America Annual Meeting, Chicago, IL, December 3, 1996. Siegel, E.L. We’re off to see the wizard: Consultations in the 21st century. Diagnostic Imaging. 2000 May (5): 31, 33, 79. Siegel, E., Reiner, B., Abiri, M., Chacko, A., Morin, R., Ro, D.W., Spicer, K., Strickland N., and Young, J. The filmless radiology reading room: A survey of established picture archiving and communication system sites. J Digital Imaging. 2000 May; 13 (2 Suppl 1): 22–23. Siegel, E.L., and Reiner, B. Work flow redesign: the key to success when using PACS. AJR Am J Roentgenol. 2002 Mar; 178(3): 563–566. Siegel, E.L., and Reiner, B.I. Filmless Radiology at the Baltimore VA Medical Center: A Nine Year Retrospective. Comp Med Imaging & Graphics 2003 V27, Issues 2–3, 101–109.
Chapter 19 Bernman F., Fox G., Hey T., Grid Computing. Ed., John Wiley & Sons, Hoboken, NJ, 2003. Brady M., Gavaghan D., Simpson A., Parada M.M., Highnam R. eDiamond: a Grid-enabled federated database of annotated mammograms. In Bernman F, et al, Ed. Grid Computing, 923–943. John Wiley & Sons, Hoboken, NJ, 2003. Brinkley, J.F., Wong, B.A., Hinshaw, K.P., and Rosse, C. “Design of an anatomy information system,” Computer Graphics and Applications, Vol. 19, pp. 38–48, 1999. Computational Grids, The Grid: Blueprint for a New Computing Infrastructure, Chapter 2, Morgan-Kaufmann, 1999. Globus Toolkit 3 Core White Paper, http://www-unix.globus.org/toolkit/documentation.html Grids and Grid technologies for wide-area distributed computing, Mark Baker, etc. SP&E, 2002. Huang, H.K., Wong, S.T.C., and Pietka, E. Medical Image Informatics Infrastructure Design and Applications. Medical Informatics, Vol. 22, No 4, 279–289, 1997. Huang, H.K. “PACS—Basic Principles and Applications”, 1999, John Wiley and Sons, p. 521, NY, NY. Pietka, E., Pospiech-Kurkowska, S., Gertych, A., and Fao, F. Integration of Computer assisted Bone Age Assessment with Clinical PACS. Comp Med Imaging & Graphics, V27, Issues 2–3, June, 217–222, 2003. Rasmussen, E.M.“Indexing multimedia: Images,” Annual Review of Information Sciences and Technology, Vol. 31, 1997. SAN Technology: http://www.storage.ibm.com/ibmsan/whitepaper.html Tagare, H.D., Vos, F.M., Jaffe, C.C., and Duncan, J.S. “Arrangement: a spatial relation between parts for evaluating similarity of tomographic sections” IEEE Trans Pattern Analysis and Machine Intelligence, Vol. 17, pp. 880–893, 1995. Tagare, H.D., Jaffe, C.C., and Duncan, J.S. “Medical image databases: a content-based retrieval approach,” J. American Medical Informatics Association, Vol. 4, pp. 184–198, 1997.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations. http://www.globus.org/ research/OGSA The Grid: A New Infrastructure for 21st Century Science, http://www.aip.org/pt/vol-55/iss2/p42.html The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration, http://www.globus.org/research/papers.html The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. http://www.globus.org/research/papers.html#OGSA What is grid computing, http://www-1.ibm.com/grid/about_grid/what_is.shtml
Chapter 20 Albanese,A., Hall, C., and Stanhope, R.The use of a computerized method of bone age assessment in clinical practice. Horm Res 3, 2–7, 1995. Al-Taani, A.T., Ricketts, I.W., and Cairns, A.Y. Classification of Hand Bones for bone age Assessment, ICECS ’96, 1088–1091, Vol. 2, ISBN 0-7803-3650-X (soft bound) and ISBN 0-7803-3651-8, (Microfiche). Archibald, R.M., Finby, N., and de Vito, F. Endocrine significance of short metacarpals. J. Clin. Endocr. 1959, 19, 1312–1322. Bach, J., Fuller, C., Gupta, A., Hampapur, A., Horowitz, B., Humphrey, R., Jain, R., and Shu, C. The Virage image search engine: An open framework for image management. Proc. SPIE Storage and Retrieval for Image and Video Databases 1996. Barnes, G.T., Wu, X., and Sanders, P.C. Scanning slit chest radiography: A practical and efficient scatter control design. Radiology, 1994, 190, 525–528. Behrman, R.E., Vaughan, V.C. III, and Nelson, W.E., eds. Textbook of Pediatrics, 13th, ed. Philadelphia, PA: WB Saunders, Co. 1987:1186. Cao, F., Huang, H.K., and Zhou, X.Q. Medical Image Security in a HIPAA Mandated PACS Environment. Computerized Medical Imaging and Graphics, 27:185–196, 2003. Cao, F., Huang, H.K., Pie¸tka, E., and Gilsanz, V. A digital hand atlas for Web-based bone age assessment: System design and implementation. SPIE Medical Imaging, Vol. 3976, 2000, 297–307. Cao, F., Huang, H.K., Pietka, E., and Gilsanz, V. Digital Hand Atlas and Web-based Bone Age Assessment: System Design and Implementation. J Comp Med Imag & Graphics. Vol. 24, 297–307, 2000. Cao, F., Huang, H.K., Pietka, E., and Gilsanz,V. Digital Hand Atlas and computer-Aided Bone Age Assessment via Web. SPIE Medical Imaging, Vol. 3662–59, 178–184, 1999. Cao, F., Huang, H.K., Pietka, E., Gilsanz,V., and Ominsky, S. Diagnostic workstation for digital hand atlas in bone age assessment, SPIE Medical Imaging, 3335:608–614, 1998. Cao, F., Huang, H.K., Pietka, E., Gilsanz, V., Dey, P., Gertych, A., and Pospiech, S. An Image Database for Digital Hand Atlas. SPIE Medical Imaging, Vol. 5033, 461–470, 2003. Cao, F., Huang, Zhou, Z., Mogel, G., et al. End-to-end Internet2 performance for medical imaging applications, Submitted to J. High Speed Networks, 2003. Cao, F., Liu, B.J., Zhou, Z., Huang, H.K., Zhang, J., Zhang, X., and Mogel, G.T. FaultTolerant PACS Server. SPIE Medical Imaging, Vol. 4685–44, 316–325, February 2002. Chakroborty, D.P. Image intensifier distortion correction. Med. Phys, 14, 249–252, 1987. Chernick, M. “Bootstrap Methods: A Practitioner’s Guide”, Wiley, 1999, New York. Darling, D.B. Radiography of Infants and Children (Charles C Thomas Publisher, 1979), 1-st, ed., Chap. 6, pp. 370–372.
ref.qxd 2/12/04 5:23 PM Page 625
REFERENCES
625
Dickhaus, H., Habich, R., Wastl, S., Maier, C., Gilli, G., and Schonberg, D. A PC-based system for bone age assessment, Proc. of the EMBEC ’99, Vienna 1999, 1008–1009. Eklof, O., and Rigertz, H. A method for assessment of skeletal maturity. Ann Radiol 10:330–336, 1967. Fan, Y., Hwang, Gill, M., and Huang, H.K. Some connectivity and security issues of NGI in medical imaging applications, J High Speed Networks, 9, 3–13, 2000. Flickner, M., Sawhney, H., Niblack, W., Ashley, J., Huang, Q., Dom, B., Gorkani, M., Hafner, J., Lee, D., Petkovic, D., Steele, D., and Yanker, P. Query by image and video content: The QBIC system. IEEE Computer, September 1995; pp. 23–32. Gertych, A., and Pie¸tka, E. An automated segmentation and features extraction from hand radiographs. ICCVG—International Conference on Computer Vision and Graphics, Zakopane, Poland 26–29. Sept. 2002 Proceedings of ICCVG 2002, Zakopane, pp. 267–274, 2002. Gertych, A., and Pietka, E. An automated segmentation and features extraction from hand radiographs. ICCVG 2002 (submitted for publication). Gertych, A., Pie¸tka, E., Cao, F., and Huang, H.K. Computer Assisted Bone Age Assessment: Region of Interest Segmentation, Symbiosis 2001, Szczyrk, 67–71, 2001. Giger, M.L., Huo, Z., Kupinski, M.A., and Vyborny, C.J. Computer-aided diagnosis in mammography. Handbook of Medical Imaging, Volume 2. Medical Image Processing and Analysis. M. Sonka and M.J. Fitxpatric, Eds. SPIE 1999; 249–272. Giger, M.L., Karssemeijer, N., and Armato, S.G. Computer-Aided Diagnosis in Medical Imaging. IEEE Transactions on Medical Imaging 2001; 20; 1205–1208. Greulich, W.W., and Pyle, S.I. Radiographic Atlas of Skeletal Development of Hand Wrist. Stanford, CA. Stanford University Press, 1959. Greulich, W.W., and Pyle, S.I. Radiographic Atlas of Skeletal Development of Hand Wrist, Stanford, CA. Stanford University Press. 2-nd, ed. 1971. Hammer, L.D., Kraemer, H.C., Wilson, D.M., Ritter, P.L., and Dornbusch, S.M. Standardized percentile curves of body-mass index for children and adolescents. American Journal of Disease of Child. 1991; 145:259–263. Huang, H.K. Picture Archiving and Communication Systems: Principle and Applications. Wiley & Sons, NY, p. 521, 1999. Huang, H.K., Pietka, E., Cao, F., and Gilsanz, V. A Digital Hand Atlas for Bone Age Assessment of Children—An application of PACS Database. SPIE Medical Imaging, Vol. 3662–23, 178–184, 1999. Huang, H.K., Wong, S.T.C., and Pietka, E. Medical Image Informatics Infrastructure Design and Applications. Medical Informatics 22, 4, 279–289, 1998. Jiang, Y. Classification of breast lesions in mammograms. In: Handbook of Medical Imaging, ed. I. N. Bankman, Academic Press, 2000. Johnston, F.E., and Jahina, S.B. The contribution of the carpal bones to the assessment of skeletal age. Am. J. Phys. Anthrop., 1965, 23, 349—354. Karlberg, P., and Taragner, J. The somatic development of children in a Swedish urban community. Acta Pacdiatr Scand Suppl. 1976. 2:58. Katsuragawa, S., Doi, K., Nakamori, N., and MacMahon, H. Image feature analysis and computer-aided diagnosis in digital radiography. Detection and characterization of interstitial lung disease in digital chest. Med. Phys., 1988, 15, 311–319. Kelcz, F., Zink, F.E., Peppler, W.W., Kruger, D.G., Ergun, D.L., and Mistretta, C.A. Conventional chest radiography vs dual-energy computed radiography in the detection and characterization of pulmonary nodules. AJR, 1994, 162, 271–278.
ref.qxd 2/12/04 5:23 PM Page 626
626
REFERENCES
Kirks, D.R. Practical Pediatric Imaging. Diagnostic Radiology of Infants and Children (Little, Brown & Company, Boston/Toronto, 1984), 1-st, ed., Chap. 6, 198–201. Kosowicz, J. The roentgen appearance of the hand and wrist in gonadal dysgenesis. Radiology, 1965, 93, 354–361. Leventon, M.E., Zhang, M., Liu, L., and Huang, H.K. CAD: Beyond breast care. Advance for Imaging and Oncology Administrators 2002; 12; 69–72. Levitt, T.S., and Hedgcock, M.W. Model-based analysis of hand radiographs, Proc. SPIE, 1989 1093, 563–570. Loder, R.T., Estle, D.T., Morrison, K., Eggleston, D., Fish, D.N., Greenfield, M.L., and Guire, K.E. Applicability of the Greulich and Pyle skeletal age standards to black and white children of today. Amer. J. Diseases of Children, 1993, 147, 1329–1333. MacMahon, H., Montner, S.M., and Doi, K. The nature and subtlety of abnormal findings in chest radiographs. Med. Phys, 1991, 18, 206–210. Mallat, S., and Zhong, S. Characterization of Signals from Multiscale Edges. IEEE Trans. Patt. Anal. Mach. Int., 14, 7, 1992. Mamdani, E.H. Application of fuzzy algorithms for control of simple dynamic plant, Proc. IEEE 121 (1974) 1585–1588. Marshall, W.A., and Tanner, J.M. Variations in the pattern of pubertal changes in boys. Archives of Disease in Childhood. 45, 13–23, 1970. McNitt-Gray, M.F., Pietka, E., and Huang, H.K. Image preprocessing for Picture Archiving and Communication System. Investigative Radiology, 1992, 7, 529–535. McNitt-Gray, M.F., Taira, R.K., Eldredge, S.E., et al. An automatic method for enhancing the display of different tissue densities in digital chest radiographs. J. Digital Imaging, 1993, 6, 95–104. Michael, D.J., and Nelson, A.C. HANDX: A model—based system for automatic segmentation of bones from digital radiographs, IEEE Trans. Medical Imaging, 1989, 8, 64–69. Mora, S., Boechat, M.I., Pieka, E., Huang, H.K., and Gilsanz, V. Skeletal Age Determinations in Children of European and African Descent: Applicability of the Greulich and Pyle Standards. Pediatric Research. Vol. 50, No. 5, 624–628, 2001. Nakamori, N., Doi, K., MacMahon, H., Sabeti, V., and Montner, S. Effect of heart-size parameters computed from digital chest radiographs on detection of cardiomegaly. Potential usefulness for computer-aided diagnosis. Investigative Radiology, 1991, 26, 546–550. National Center for Health Statistics, National Health and Nutrition Examination Survey (NHANES) III, 2000 CDC Growth Chart, http://www.cdc.gov/growthcharts/. Nelson, M., Cao, F., Liu, L., and Huang, H.K. 2002. A Pediatric MR Image Database for Teaching Neuro Anatomy and Diseases, Radiology. 225(P) P. 757, 2002. Nelson, M.D., Cao, F., Nielsen, J.F., Liu, L., and Huang, H.K. Image-matching as a Diagnostic Support Tool for Brain Diseases in Children. Radiographics. (Accepted for publication), 2003. Nielsen, J.F., Nelson, M.D., Cao, F., Liu, L., and Huang, H.K. Image-matching as a diagnostic support tool. Proceedings of SPIE Medical Imaging 2003. Nielsen, J.F., Nelson, M.D., Cao, F., Liu, L., and Huang, H.K. Pediatric MR image database for teaching neuro anatomy and diseases. RSNA 2002 InfoRAD exhibit. Niemeijer, M., Ginneken, B., Maas, C.A., Beek, F.J.A., and Viergever, M.A. Assessing the skeletal age from a hand radiograph: automating the Tanner-Whitehouse method, SPIE Medical Imaging, Vol. 5032, 1197–1205, 2003. Pappas, T. An adaptive segmentation algorithm for image segmentation, IEEE Trans. Signal Proc. Vol. 40, No. 4, April 1992, pp. 901–914.
ref.qxd 2/12/04 5:23 PM Page 627
REFERENCES
627
Pie¸tka, E., Po´spiech, A., Gertych, A., Cao, F., Huang, H.K., and Gilsanz, V. Computer Automated Approach to the extraction of epiphyseal regions in hand radiographs. Journal of Digital Imaging, 14, 165–172, 2002. Pie¸tka, E., Po´spiech, S., Gertych, A., Cao, F., Huang, H.K., and Gilsanz, V. Computer Automated Approach to the extraction of epiphyseal regions in hand radiographs. Journal of Digital Imaging, Vol. 14, No, 4, 2001, 165–172. Pie¸tka, E., Pospiech-Kurkowska, S., Gertych, A., and Cao, F. Integration of Computer assisted bone age assessment with clinical PACS. Comp. Med. Img. Graph, 27(2–3), 217–228, 2003. Pietka, E. Computer-assisted bone age assessment based on features automatically extracted from a hand radiograph. Computerized Medical Imaging & Graphics, 1995, 19, 251–259. Pietka, E. Image standardization in PACS. In: Handbook of Medical Imaging, ed. I. N. Bankman, Academic Press, 2000. Pietka, E., and Huang, H.K. Epiphyseal fusion assessment based on wavelets decomposition analysis. Computerized Medical Imaging & Graphics, 1995, 19, 465–472. Pietka, E., and Huang, H.K. Image Processing Techniques in Bone Age Assessment; In: Image Processing Techniques and Applications (ed. C.T. Leondes), Gordon & Breach Publishers, 1997, Chap 5, 221–272. Pietka, E., and Ratib, O. Dual-energy digital radiography in PACS. Proc. ECR, 1993, 199. Pietka, E., Gertych, A., and Pospiech-Kurkowska, S. Fuzzy Clustering in Evaluation of Radiographic Data of Skeletal Development, Computer Recognition Systems, KOSYR, 111–116, 2001. Pietka, E., Gertych, A., Pospiech, S., Cao, F., Huang, H.K., and Gilsanz, V. Computer Assisted Bone Age Assessment: Image Preprocessing And Epiphyseal/Metaphyseal ROI Extraction. IEEE Trans. on Medical Imaging, 20, 715–729, 2001. Pietka, E., Kaabi, L., Kuo, M.L., and Huang, H.K. Feature extraction in carpal bone analysis. IEEE Trans. Med. Imag., 1993, 12, 44–49. Pietka, E., McNitt-Gray, M.F., and Huang, H.K. Computer-assisted phalangeal analysis of hand radiographs. IEEE Trans. Med. Img. 1991, 10, 616–620. Pietka, E., McNitt-Gray, M.F., Hall, T., and Huang, H.K. Computerized bone analysis of hand radiographs. Proc. SPIE, 1992, 1652, 522–528. Pietrobelli, A., Faith, M.S., Allison, D.B., Gallagher, D., Chiumello, G., and Heymsfield, S.B. Body mass index as a measure of adiposity among children and adolescents: A validation study. Journal of Pediatrics. 1998; 132:204–210. Po´spiech, S., Gertych, A., Pietka, E., Cao, F., and Huang, H.K. Wavelet decomposition based features in description of epiphyseal fusion. In: Analysis of biomedical signals and images. Proc. of BIOSIGNAL, Brno 2000, 246–248. Po´spiech-Kurkowska, S., Gertych, A., Pie¸tka, E., Cao, F., and Huang, H.K. Wavelet decomposition based features in description of epiphyseal fusion”, Analysis of biomedical signals and images—BIOSIGNAL 2000, Brno, pp. 246–248, 2000. Pospiech-Kurkowska, S., Gertych, A., Pietka, E., Cao, F., and Huang, H.K. Wavelet Decomposition Based Features in Description Of Epiphyseal Fusion, Proc. of Biosignal 2000, Brno, pp. 246–248. Pospiech-Kurkowska, S., Pie¸tka, E., Cao, F., and Huang, H.K. Directional Analysis in Assessment of Epiphyseal Fusion. Symbiosis 2001, 61–66, 2001. Pospiech-Kurkowska, S., Pie¸tka, E., Cao, F., and Huang, H.K. “Fuzzy System for the Estimation of the Bone Age from Wavelet Features”, Proc. BIOSIGNAL 2002, Brno, 441–443, 2002. Pospiech-Kurkowska, S., Pie¸tka, E., Cao, F., and Huang, H.K. “Directional Analysis in Assessment of Epiphyseal Fusion” Proc. SYMBIOSIS 2001 pp. 61–66.
ref.qxd 2/12/04 5:23 PM Page 628
628
REFERENCES
Pospiech-Kurkowska, S., Pie¸tka, E., Cao, F., and Huang, H.K. “Directional Analysis in Assessment of Epiphyseal Fusion” (Proc. SYMBIOSIS 2001 pp. 61–66 Szczyrk 2001). Pospiech-Kurkowska, S., Pie¸tka, E., Cao, F., and Huang, H.K. Ocena procesu zrostu nasad i przynasad przy pomocy ka¸ta falkowego. Proc. XII BIB Warszawa, 711–716, 2001. Poznanski, A., Garn, S.M., Nagy, J.M., and Gall, J.C. Metacarpophalangeal pattern profiles in the evaluation of skeletal malformations. Radiology, 1972, 104, 1–11. Rehm, K., Pitt, M.J., Ovitt, T.W., and Dallas, W.J. Digital image enhancement for display of bone radiograph. Proc. SPIE, 1992, 1652, 582–590. Roche, A.F., and Johnson, J.M. A comparison between methods of calculating skeletal age (Greulich-Pyle). Am J Phys Anthropol 30:221–230, 1969. Roche, A.F., Davila, G.H., Pasternack, B.A., and Walton, M.J. Some factors influencing the replicability of assessments of skeletal maturity (Greulich-Pyle). Am. J. Roentgenol, 1970, 109, 299–306. Roche, A.F., et al. National Center for Health Statistics: Skeletal maturity of children 6–11 years, United States, by AF Roche, J Roberts and PVV Hamill. Vital and Health Statistics. Series 11-No. 140. DHEW Pub. No. (HRA) 75-1622. Health Resources Administration. Washington, U.S. Government Printing Office, Nov. 1974. Roche, A.F., et al. National Center for Health Statistics: Skeletal maturity of children 6–11 years: Radcial, geographic area and socioeconomic differentials, United States, by AF Roche, J Roberts and PVV Hamill. Vital and Health Statistics. Series 11-No. 160. D HEW Pub. No. (HRA) 77–1642. Health Resources Administration. Washington, U.S. Government Printing Office, Nov. 1976. Roche, A.F., et al. National Center for Health Statistics: Skeletal maturity of youths 12–17 years: Racial, geographic area and socioeconomic differentials, United States, by AF Roche, J Roberts and PVV Hamill. Vital and Health Statistics. Series 11-No. 167. DHEW Pub. No. (HRA) 79–1654. Health Resources Administration. Washington, U.S. Government Printing Office, Oct. 1978. Roeske, J.C., Giger, M.G., Dixon, L.B., and Doi, K. Computerized analysis of osteoporosis on bone radiographs. Radiology, 1990, 177(P): 317. Rui, Y., Huang, T.S., and Chang, S.F. Image Retrieval: Current techniques, promising directions, and open issues. Journal of Visual Communication and Image Representation 1999; 10; 39–62. Sanada, S., Doi, K., and MacMahon, H. Image feature analysis and computer-aided diagnosis in digital radiography. Automated delineation of posterior ribs in chest images. Medical Physics, Vol. 18, pp. 964–971, 1991. Sato, K., and Mitani, H. Ecaluation of bone maturation with a computer, Cli Pediatr Endoclinol 8(suppl 12):13–16, 1999. Schmid, F., and Moll, H. Atlas Der Nonmalen and Pathologischen Handskeletenwicklung. Springer Verlag, Berlin, 1960. Shyu, C.R., Brodley, C.E., Kak, A.C., Kosaka, A., Aisen, A.M., and Broderick, L.S. ASSERT: A physician-in-the-loop content-based retrieval system for HRCT image databases. Computer Vision and Image Understanding 1999; 75; 111–132. Simmons, K. Brush Foundation Study of Child Growth and Development II: Physical Growth and Development, Periodicals Service, Co, 1972. Sugiura, Y., and Nakazawa, O. Roentgen Diagnosis of Skeletal Development. Chugai-Igaku Company, Tokyo, Japan, 1972. Taira, R.K., Breant, C.M., McNitt-Gray, M.F., Sinha, S., and Huang, H.K. Adding intelligence to PACS. Proc. SPIE, 1992, 1654, 476–484. Talairach, J., and Tournoux, P. Co-planar Stereotaxic Atlas of the Human Brain: 3-
ref.qxd 2/12/04 5:23 PM Page 629
REFERENCES
629
Dimensional Proportional System—an Approach to Cerebral Imaging, Thieme Medical Publishers, New York, NY, 1988. Tanner, J.M. Physical growth and development, in Textbook of Pediatrics, edited by JO Forfar, CC Arnell. Edinburgh, Churchill Livingstone, 1978, 249–303. Tanner, J.M., and Gibbons, R.D. Automatic bone age measurement using computerized image analysis. J. Ped. Endocrinology, 7, 141–145, 1994. Tanner, J.M., and Gibbons, R.D., A Computerized Image System for Estimating Tanner-Whitehouse 2 Bone Age. Horm Res. 1994, 42, 282–287. Tanner, J.M., and Whitehouse, R.H. Assessment of Skeletal Maturity and Prediction of Adult Height (TW2 Method). Academic Press, London, 1975. Tanner, J.M., Healy, M.J.R., Goldstein, H., and Gameron, N. Assessment of Skeletal Maturity and Prediction of Adult Height (TW3 Method). WB Saunders, Co. 2000. Todd-Pokropek, A. Image processing and PACS: Implications and potential. Proc. EUROPACS’93, 1993, 66. Tou, J.T., and Gonzalez, R.C. Pattern recognition Principles Reading MA: Addison-Wesley, 1974. van Bemmel, J.H., and Musen, M.A. Handbook of Medical Informatics, Springer, 1997. Van de Wouwer, G., Scheunders, P., Van Dyck, D. Statistical Texture Characterization from Discrete Wavelet Representations, IEEE Trans. Med. Im., Vol. 8, No. 4, April 1999, pp. 592–598. van Ginneken, B., ter Haar Romeny, B.M., Viergever, Max, A. Computer-Aided Diagnosis in chest radiography: A survey. IEEE Transactions on Medical Imaging 2001; 20; 1228–1241. Zhang, M., Liu, L., and Leventon, M. A Global Coordinate System Based on Large Training Data Sets For Brain Image Registration. Radiology. 225(P) P. 762, 2002. Zhang, M., Liu, L., and Leventon, M.E. A global coordinate system based on large training sets for brain image registration. RSNA 2002 InfoRAD exhibit. Zhou, X.Q., Lou, S.L., and Huang, H.K. Authenticity and Integrity of Digital Mammographic Images. SPIE Medical Imaging, Vol. 3662–16, Feb. 20–26, 1999. Zhou, Z., Law, M.Y., Huang, H.K., Cao, F., Liu, B.J., Zhang, J., et al. Educational RIS/PACS Simulator. SPIE Medical Imaging, Vol. 5033, 139–147, 2003. Zhu X.M., Lee K.N., Levin D.L., et al. Temporal Image Database Design for Outcome Analysis. J Comput. Med. Imaging Graphics, Vol. 20, 1996, pp. 347–356.
Chapter 21 Adler, J.R., Chang, S.D., Murphy, M.J., Doty, J., Ceis, P., and Hancock, S.L. 1997. the Cyberknife: a frameless robotic system for radiosurgery. Stereotaxtic & Functional Neurosurgery, Vol. 69 (1–4 Pt 2): 124–128. Borras, C. Automation in Precision Radiotherapy: From Image Acquisition to Dose Computation—Existing and Needed Standards. In NZ Xie, Medical Imaging and Precision Radiotherapy. Guangzhou, China: The Foundation of International Scientific Exchange, 2000, pp. 168–169. Cheung, K.Y. Precision Radiotherapy in Nasopharyngeal Carcinoma treatment, In NZ Xie, Medical Imaging and Precision Radiotherapy. Guangzhou, China: The Foundation of International Scientific Exchange, 2000, pp. 69–83. DICOM: Digital Imaging and Communication in Medicine. National Electrical Manufacturers’ Association. Rosslyn, VA: NEMA, 1996. Hendee, W.R. Medical Imaging for the 21st Century. In NZ Xie, Medical Imaging and
ref.qxd 2/12/04 5:23 PM Page 630
630
REFERENCES
Precision Radiotherapy. Guangzhou, China: The Foundation of International Scientific Exchange, 2000, pp. 24–30. HL7: Health Level Seven. An Application Protocol for Electronic Data Exchange in Health Care Environments. Version 2.1. Ann Arbor, MI: Health Level Seven, Inc., 1991. Huang, H.K. 2001. PACS, Informatics, and the Neurosurgery Command Module. J. Mini Invasive Spinal Technique. Vol. 1, 62–67. Law, M., and Huang, H.K. 2003. Concept of a PACS and Imaging Informatics-Based Server for Radiation Therapy Comp Med Imaging & Graphics, V. 27, 1–9. Law, Y.Y., and Huang, H.K. 2001. Serving up Integrated Radiation Therapy. Advance Imaging and Oncology Administrator. Vol. 11. No. 11, 46–53. McDonald, C.J. 1997. The Barrier to Electronic Medical Record Systems and How to Overcome them. J Amer Med Informatics Assoc Vol. 4, May/June, 213–221. Neumann, M. The Impact of DICOM in Radiotherapy. Lecture 5th Biennal ESTRO Meeting on Physics for Clinical Radiotherapy, Gottingen, April 8, 1999. http://www.sgsmp.ch/ null983b.htm. Sternick, E.S. Intensity Modulated Radiation Therapy. In NZ Xie, Medical Imaging and Precision Radiotherapy. Guangzhou, China: The Foundation of International Scientific Exchange, 2000, pp. 38–52.
Chapter 22 Beird, L.C. The importance of a picture archiving and communications system (PACS) manager for large-scale PACS installations. J Digital Imaging. 12(2 Suppl 1):37; 1999. Beird, L.C. How to satisfy both clinical and information technology goals in designing a successful picture archiving and communication system. J Digital Imaging. 13(2 Suppl 1):10–2; 2000. Cao, F., Sickles, E.A., and Huang, H.K. An Interactive Digital Breast Imaging Teaching File. RSNA EJ 1:26 pars. Available Online: http://ej.rsna.org//EJ_0_961003797.fin/mamms.html. 18 July 1997. Carrino, J.A., Unkel, P.J., Miller, I.D., Bowser, C.L., Freckleton, M.W., and Johnson, T.G. Large-scale PACS implementation. J Digital Imaging. 11(3 Suppl 1):3–7; 1998. Crivianu-Gaita, D., Babyn, P., Gilday, D., O’Brien, B., and Charkot, E. User acceptability–a critical success factor for picture archiving and communication system implementation. J Digital Imaging. 13(2 Suppl 1):13–6; 2000. Edward, B. How many people does it take to operate a picture archiving and communication system? J Digital Imaging. 14(2 Suppl 1):40–3; 2001. Hirchom, D., Eber, C., Samuels, P., Gujrathi, S., and Baker, S.R. Filmless in New Jersey, The New Jersey Medical School PACS Project. J Digital Imaging. 15(Suppl 1):7–12; 2002. Huang, C. Changing learning with new interactive and media-rich instruction environments: virtual labs case study report. Comp Med Imaging & Graphics V27, Issues 2–3, 157–164; 2003. IHE Demonstration Participants 2001–2002. http://www.rsna.org/IHE/participation/index.shtml. Integrating the Healthcare Enterprise. http://www.rsna.org/IHE/index.shtml. Law, M.Y.Y., Tang, F.H., and Cao, F. A PACS and Image informatics training. Scientific Program, 87th Scientific Assembly and Annual Meeting, The Radiological Society of North America, Nov 25–30; 120; 2001. Law, M.Y.Y., and Hunag, H.K. Concept of a PACS and imaging informatics-based server for radiation therapy. Comp Med Imaging & Graphics, V. 27, 1–9; 2003.
ref.qxd 2/12/04 5:23 PM Page 631
REFERENCES
631
Pilling, J. Problems facing the radiologist tendering for a hospital wide PACS system. European J Radiology. 32(2):101–5; 1999. Protopapas, Z., Siegel, E.L., Reiner, B.I., Pomerantz, S.M., Pickar, E.R., Wilson, M., and Hooper, F.J. Picture archiving and communication system training for physicians: lessons learned at the Baltimore VA Medical Center. J Digital Imaging. 9(3):131–6; 1996. Tang, F.H., LAW, M.Y.Y., Zhang, J., Liu, H.L., Matsuda, K., and Cao, F. Implementation of a PACS for radiography training and clinical service in a university setting through a multinational effort. In: E.L. Siegel; H.K. Huang. Medical Imaging 2001: PACS and integrated medical information systems: design and evaluation, Proceedings of SPIE, Vol. 4323, 67–72; 2001. Watkins, J. A hospital-wide picture archiving and communication system (PACS): the views of users and providers of the radiology service at Hammersmith Hospital. European J Radiology. 32(2):106–12; 1999.
Chapter 23 Bauman, R.A., Gell, G., and Dwyer, S.J. III. Large Picture Arching and Communication Systems of the World—Parts 1 and 2 J. Digital Imaging, Vol. 9, Nos. 3 and 4, 99–103, 172–177, 1996. Dayhoff, R.E., Meldrum, K., and Kuzmak, P.M., Experience Providing a Complete Online Multimedia Patient Record. Session 38. Healthcare Information and Management Systems Society, 2001 Annual Conference and Exhibition, Feb. 4–8, 2001. GE Medical Systems Technical White Paper Cluster PACS “Pilot Project” Proposal For Hong Kong Hospital Authority. February 2002. Ha, D.-H., and Moon, M.-S. Effectiveness of wireless communication using 100 Mbps infrared in PACS. Dept. of Diagnostic Radiology and PACS team, Pundang CHA General Hospital, Pochon CHA University, Sungnam, Korea. Proceedings SPIE Medical Imaging, 2002, to appear. Huang, H.K. PACS—Basic Principles and Applications. John Wiley & Sons, NY, NY. 1999. Huang, H.K. Consultancy Study of Image Distribution and PACS in Hong Kong—Final Report. Ref: (25) in HA 105/26 VII CL, March 21, 2002. Huang, H.K. Development of a Digital Medical Imaging Archiving and Communication system (MIACS) for Services of department of Health, HKSAR Government—Final Report. July 15, 2002. Hur, G., Cha, S.J., et al. The Impact of PACS in Hospital Management. Dept. of Radiology, Inje University Ilsan Paik Hospital, Korea. Proceedings SPIE Medical Imaging, 2002, to appear. Inamura, K., Kousaka, S., Yamamoto, Y., et al. PACS Development in Asia Comp Med Imaging & Graphics V27, Issues 2–3, June, 121–128, 2003. Kim, H.-J. PACS Industry in Korea. Dept. of Radiology, Yonsei University College of Medicine, Seoul, 120–752, Korea. Proceedings SPIE Medical Imaging, 2002, to appear. Kim, J.H. Technical Aspects of PACS in Korea. Dept. of Radiology, Seoul National University Hospital, Seoul National University. Proceedings SPIE Medical Imaging, 2002, to appear. Lemke, H.U., Niederlag, W., and Houser, H. Specification and evaluation of a regional PACS in the SaxTeleMed project. Technical University Berlin, FR 3–3, CG & CAM. Proceedings SPIE Medical Imaging, 2002, to appear. Lim, J.H. Cost-Justification on Filmless PACS and National Policy. Dept. of Radiology,
ref.qxd 2/12/04 5:23 PM Page 632
632
REFERENCES
Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea. Proceedings SPIE Medical Imaging, 2002, to appear. Lim, H., Kim, D.O., Ahn, J.Y., et al. Full PACS Installation in Seoul National University Hospital, Korea. Proceedings SPIE Medical Imaging, 2002, to appear. Neiderlag, W., and Lemke, H.U., Ed. Digital Imaging and Image Communication between Hospitals in the Free State of Saxony, Germany (SaxTeleMed Reference Model Program). Papers Presented t the 15th International Congress and Exhibition on Computer Assisted Radiology and Surgery, June 27–30, 2001. Health Academy 02/2001. Philips Medical Systems. Image Distribution and PACS system for Hong Kong Health Authority. February 2002. Siemens Medical Solutions. Future of Imaging and Beyond in Hong Kong. February, 2002. Sim, J., Kang, K., Lim, M.-K., et al. Connecting and Sharing Medical Images among Three Big Full PACS Hospital in Korea. Proc EuroPACS 20th International Conference, Oulu, Finland Sept 5–7, 2002, 22–24.
PACS AND IMAGING INFORMATICS GLOSSARY
1. Backup archive. The PACS controller and archive server saves every patient's images for five to seven years in the United States. Because the main archive holds only one copy, a backup archive with one extra copy, preferably offsite, is necessary to assure data integrity. If a disaster occurs, the controller can retrieve images immediately from the backup archive to maintain the clinical operation.

2. Communication networks. PACS is a system integration; all components in the system are connected through communication networks. The networks use the TCP/IP (Transmission Control Protocol/Internet Protocol) communication protocols. Networks within the hospital (intranet) are called local area networks (LAN); networks that extend outside the hospital to the metropolitan area or beyond (Internet) are called wide area networks (WAN). (See also Internet/Intranet.)

3. DICOM. Digital Imaging and Communication in Medicine (DICOM) is the de facto standard used in PACS for image and data format and communications. It has two components: a communication protocol and an image data format. The former is based on the TCP/IP protocol. Equipment from all major imaging and related data manufacturers is DICOM compliant, and every hospital should require this standard in its future image-related equipment acquisitions.

4. Diagnostic workstation. Among all PACS components, the diagnostic workstation is the one most familiar to healthcare providers; from it they make diagnoses and review patient images. Two 2000-line LCD monitors are usually needed for radiologists to make a primary diagnosis.

5. Enterprise PACS. PACS was first developed for radiology subspecialties, then extended to the radiology department, and then to the whole hospital. The current trend of hospitals merging and federating creates many healthcare enterprises. Enterprise PACS is a very large-scale PACS supporting these federated or enterprise hospitals. Its key functions are fault tolerance, backup archiving, and streamlined dataflow.

6. ePR. Hospital information systems (HIS) are usually organized by the functions of departments, wards, and specialties; they were designed to facilitate hospital and business operations. This design is not patient oriented: for a patient who has undergone various examinations and procedures, healthcare providers usually cannot trace the medical history from a single information system. The electronic patient record (ePR), or electronic medical record (eMR), is an effort to redesign the HIS with the patient as the focus. Most ePRs in current use contain only textual information; efforts are being made to include image data.
7. Fault-tolerant PACS. PACS has several single points of failure (SPOF). When any SPOF fails, it can cripple the PACS operation or even bring down the whole system. Fault-tolerant (FT) PACS is a design that minimizes the SPOFs and recovers from failures gracefully to avoid a total system catastrophe. The most critical features of FT PACS are the backup of image data and the FT PACS server.

8. HIPAA. HIPAA (Health Insurance Portability and Accountability Act) mandates guidelines that healthcare providers must follow to receive insurance payment; it became effective in April 2003. Its implications for PACS include data privacy, security, and integrity. For these reasons, a PACS design has to be fault tolerant in order to be HIPAA compliant.

9. HIS. A hospital information system (HIS) is a healthcare information system designed to streamline hospital and business operations. The HIS comprises many health-related databases used in the hospital and has grown gradually over the years from several databases to many. In large hospitals, the HIS runs on mainframe computers and employs many staff members to operate the system.

10. HIS, RIS, PACS integration. For PACS to operate effectively and efficiently, the HIS, RIS, and PACS have to be integrated. They are usually integrated through a DICOM broker, in which pertinent patient data and radiology reports are translated from one system to the other. HIS, RIS, and PACS integration is necessary for the success of the PACS operation and for the design of an ePR with image distribution.

11. HL7. Health Level 7 (HL7) is a data format standard for healthcare textual information. Once data are encoded in HL7, they are transparent to any other healthcare system that follows the standard. Data transmission between two HL7-compliant healthcare systems normally uses the TCP/IP communication protocol. (A short parsing sketch appears after this glossary.)

12. IHE. IHE (Integrating the Healthcare Enterprise) consists of hospital workflow profiles, in particular radiology workflows. The number of profiles has grown to more than twelve. The goal of IHE is for manufacturers to follow the workflow profiles so that their equipment is easier to integrate with other systems in the clinical environment.

13. Image acquisition gateway. A radiology operation includes many imaging modalities. In general, they are not connected directly to the PACS controller and server but to an image acquisition gateway that acts as a staging buffer. The gateway checks the image for DICOM compliance and makes any necessary corrections before it forwards the image to the PACS controller. Several imaging modalities can share one gateway.

14. Image archive. After an image is acquired from the modality, it goes to the image acquisition gateway and then to the PACS controller and archive. In the archive, the database acknowledges receipt of the image, updates the patient directory, and stores the image in both RAID (redundant array of inexpensive disks) for short-term storage and tape or disk for long-term storage. Images in the short-term archive are deleted after a set time.

15. Image compression. A 300- to 500-bed hospital can generate 2–5 terabytes of image data per year. For this reason, image compression may be necessary to save storage space and speed up image transmission. Compression can be lossless or lossy; currently lossless compression is mostly used, with a compression ratio of about 2:1, which reduces the image size by 50%. (A worked storage estimate appears after this glossary.)

16. Image quality. Image quality is an inherent property of the imaging modality that generates the image. It is measured by three parameters: spatial resolution, density resolution, and signal-to-noise ratio. As an image flows through the various components of the PACS, its quality has to be preserved.

17. Image size. The size of a medical image and the amount of data generated in an examination vary. A single medical image can range from 0.5 Mbytes (CT) to 40 Mbytes (digital mammogram). A typical chest examination with two images generates 20 Mbytes; a thin-section CT examination can generate a thousand images totaling 500 Mbytes. In a PACS implementation, care has to be taken to investigate the image load when specifying transmission and archive requirements.

18. Image/data security. Image/data security considers three parameters: authenticity (who sent the image), privacy (who can access the image), and integrity (has the image been altered since it was generated?). HIPAA compliance requires healthcare providers to guarantee image/data security. (An integrity-check sketch appears after this glossary.)

19. Image-based surgery and therapy. In surgery and in radiation therapy, images are used to assist the procedure. In surgery, presurgical images can be used to simulate the surgical outcome, and 3-D renderings of CT/MR images can guide the operation. In radiation therapy, CT simulation images are used to plan the therapeutic beams and predict the radiation dose distribution. These images are normally obtained from the PACS archive server.

20. Imaging informatics. Imaging informatics is the branch of science concerned with managing, extracting, and processing pertinent information from large image databases. Its methods include image processing, statistics, data mining, graphical user interfaces, visualization, and display technologies. Large image databases are mostly found in PACS.

21. Internet/Intranet. The Internet is a technology for network communication; TCP/IP is the most commonly used protocol, and all PACS components are connected by it. When all PACS components are connected within the hospital's own networks, the arrangement is called an intranet; the data transmission speed can reach gigabits per second and the cost is low. When the connection requires public networks, it uses the Internet; in this case data transmission is slower and more expensive.

22. PACS. A picture archiving and communication system (PACS) is the system integration of imaging modalities from radiology with various information systems from the hospital for effective and efficient clinical operation. The major components of a PACS are the radiology information system gateway, image acquisition gateway, PACS controller and archive server, display workstations, application servers, communication networks, PACS monitoring, and PACS software.

23. PACS controller and archive server. The PACS controller and archive server has three major functions in PACS operation: it monitors the dataflow of the complete system, archives all images and related data, and distributes them in a timely manner to the proper workstations and application servers for review and other clinical functions. In general, the patient directory is kept in mirrored databases, and images are stored in RAID and long-term storage devices. Image data normally require a second backup copy. In the United States, a patient's images must be stored for five to seven years.
24. PACS monitoring system. Because PACS has many integrated components, the PACS controller needs to keep track of the performance of every component and every transaction in the system. The PACS monitoring system is a software package residing either inside the controller or outside in a monitoring server. Tools implemented in the monitoring system analyze dataflow to streamline the PACS operation; the monitoring system can be used to detect system bottlenecks, speed up network performance, and trace and recover from system or human errors.

25. PACS training. PACS is neither a single modality nor a single information system; it is the system integration of many imaging components in radiology with information systems from the hospital. In general, it takes a team effort to operate a PACS, since no one person can comprehend all of its components. PACS training therefore has to be tailored to the different types of users of the system, from the PACS manager to the radiologist, technologist, clinician, and other healthcare providers.

26. PACS-based CAD. Computer-assisted (or -aided) detection or diagnosis (CAD, CADx) uses computer methods to assist the radiologist or clinician in making a diagnosis. PACS-based CAD uses images and related patient information directly from the PACS to perform CAD. Currently successful CAD systems are in use for digital mammography and for lung tumor detection in chest CT.

27. Projection X-ray image. Digital projection X-ray images are generated by computed radiography (CR) and digital radiography (DR); a conventional X-ray film can also be converted to a digital image with a laser film scanner. The size of these images is generally 2500 × 2000 × 12 bits, or about 10 Mbytes. Projection X-ray images account for about 60–70% of radiology examinations.

28. RIS. The radiology information system (RIS) is an information system tailored to radiology operation. It connects to both the HIS and the PACS for effective and efficient clinical operation. Some radiology information systems are standalone, with a connection to the HIS to obtain the necessary patient information; others are embedded in the HIS.

29. Sectional image. Sectional images in radiology include computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), positron emission tomography (PET), and single photon emission computed tomography (SPECT). Each of these images ranges in size from 256 × 256 × 12 to 512 × 512 × 12 bits. Sectional images account for about 30–40% of radiology examinations.

30. Teleradiology. Teleradiology is the practice of radiology using both radiological images and telecommunication technology. From a system design point of view, teleradiology is very similar to PACS, except that the components are connected over a WAN and the equipment requirements are less demanding than those of a PACS.

31. Web-based image distribution. PACS images can be distributed throughout the hospital or healthcare enterprise using Web-based technology. Because most hospitals are equipped with many low-cost desktop computers throughout the campus, these computers, with some upgrading, can be used as Web clients for receiving PACS images. The image quality with Web-based distribution is not always first-rate, but the cost of distribution is low.

32. Workstation. The workstation is the PACS component with which the everyday user interacts. Various types of image workstation have been introduced to healthcare providers through the different imaging modalities, and users, drawing on that experience, are very critical of the functionality of the PACS workstation. The difficulty in PACS workstation design is that the workstation has to display all types of images as well as the pertinent patient information used in clinical practice. Displaying most diagnostic images requires two 2000-line LCD monitors with a 2- to 3-second display time. The PACS workstation differs from other health-related information displays in that it has to present many high-quality images with large file sizes.
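The storage figures quoted in glossary entries 15 and 17 can be reproduced with a short back-of-the-envelope calculation. The minimal Python sketch below assumes a hypothetical annual study mix; the exam counts and the helper name image_size_mbytes are illustrative, not measured data from any hospital described in this book. It shows how raw image dimensions translate into terabytes per year before and after 2:1 lossless compression.

```python
# Illustrative back-of-the-envelope storage estimate using the figures quoted
# in the glossary (entries 15 and 17). The study mix and exam counts below are
# hypothetical placeholders, not measured hospital data.

def image_size_mbytes(rows, cols, bits_per_pixel):
    """Uncompressed size of one image in megabytes (1 Mbyte = 10**6 bytes)."""
    return rows * cols * (bits_per_pixel / 8) / 1e6

# A projection X-ray image of 2500 x 2000 x 12 bits holds about 7.5 Mbytes of
# raw pixel data; stored as 16-bit words it reaches the ~10 Mbytes cited.
cr_image = image_size_mbytes(2500, 2000, 16)   # ~10 Mbytes
ct_image = image_size_mbytes(512, 512, 16)     # ~0.5 Mbytes

# Hypothetical annual workload: (number of exams, Mbytes per exam).
exams_per_year = {
    "chest (2 CR images)": (40_000, 2 * cr_image),
    "thin-section CT (1000 images)": (5_000, 1000 * ct_image),
}

total_tb = sum(n * mb for n, mb in exams_per_year.values()) / 1e6
print(f"Uncompressed:        {total_tb:.1f} TB/year")
print(f"Lossless 2:1 stored: {total_tb / 2:.1f} TB/year")
```

With these assumed volumes the uncompressed total lands in the 2–5 terabyte range cited in entry 15, and lossless 2:1 compression halves the archive requirement.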
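Glossary entry 11 notes that HL7 encodes textual healthcare data in a standard format. As a rough illustration of what that encoding looks like, the following Python sketch splits a small HL7 version 2 admission message into segments and fields. The sample message content and the parse_hl7 helper are invented for this example, and the field positions shown follow the usual HL7 v2 numbering; a production HIS/RIS/PACS interface would rely on a full HL7 interface engine and the site's own message profile rather than hand-written parsing.

```python
# A minimal sketch of reading an HL7 v2 textual message (glossary entry 11).
# HL7 v2 is delimiter-encoded: one segment per line, fields separated by "|",
# components separated by "^". The message below is a made-up example.

SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|HIS|HOSP|PACS|RAD|200401121200||ADT^A01|MSG0001|P|2.3",
    "PID|1||123456^^^HOSP||DOE^JOHN||19391101|M",
    "PV1|1|I|WARD^101^A",
])

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [list of field lists]}."""
    segments = {}
    for line in message.split("\r"):
        if not line:
            continue
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

seg = parse_hl7(SAMPLE_ADT)
pid = seg["PID"][0]
family, given = pid[5].split("^")[:2]          # PID-5: patient name
print("Message type:", seg["MSH"][0][8])       # MSH-9: e.g. ADT^A01
print("Patient:", given, family, "ID:", pid[3].split("^")[0])
```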
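Glossary entry 18 lists integrity as one of the three parameters of image/data security. The sketch below illustrates only the digest step of that idea, using the Python standard library: hash the pixel data at the sending site, transmit the digest with the image, and recompute it at the receiving site. The synthetic pixel bytes and function names are hypothetical; encrypting the digest into a digital envelope with public-key cryptography, and all DICOM handling, are deliberately omitted.

```python
# A minimal integrity-check sketch for the image digest idea in glossary
# entry 18. This is not the book's implementation, only an illustration.

import hashlib

def image_digest(pixel_bytes: bytes) -> str:
    """Return a SHA-256 digest of the raw pixel data as a hex string."""
    return hashlib.sha256(pixel_bytes).hexdigest()

def verify(pixel_bytes: bytes, transmitted_digest: str) -> bool:
    """True if the received pixels still match the digest computed at the sender."""
    return image_digest(pixel_bytes) == transmitted_digest

# Toy example with synthetic "pixels" (hypothetical data, not a DICOM file).
original = bytes(range(256)) * 100
digest = image_digest(original)

tampered = bytearray(original)
tampered[0] ^= 0x01                      # flip one bit in one pixel value

print(verify(original, digest))          # True  -> integrity preserved
print(verify(bytes(tampered), digest))   # False -> image was altered in transit
```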
INDEX
Acoustic noise, 282–284 Acquisition gateway, 195–198, 200–201, 203, 205, 207–211, 216 component and database management, 198 description of, 195 DICOM compliant, 198 Interface with other PACS modules, 208 Acquisition modality simulator (AMS), 571–572 Adaptive block quantization technique, 127 Admission, discharge, or transfer (ADT), 308 Advanced Study Institute, See also NATO ASI, 5 Air-blown fibers (ABF), 227 Algebraic reconstruction method, 81 Aliasing, artifacts, 57 Ambient illuminance, 283 American College of Radiology (ACR), 8 American College of Radiology-National Electrical Manufacturers Association (ACR-NEMA), 8, 175 American Standard Code for Information Interchange (ASCII), 158 Amplitude, 30, 37, 39–41 Analog communications, See also video, 220 Analog-to-digital (A/D) conversion, 57 Application server ePR-based, 539, 542, 566 Application service provider (ASP), 599 Archive library, 259, 263–264 Archive management, 335–336 Archive server, 255–275 backup, 382, 384–387, 390, 393, 405–406, 408 Area scan, video and CCD camera, 58 Asynchronous transfer mode (ATM), 220, 222
Authenticity, 409, 412, 419, 425, 427–428 Automatic focusing, 113, 115 Automatic image recovery scheme, 196–197, 204 basis, 204 of missing series/image, 205 of missing study, 205 results and the extension, 207 Automatic orientation, 212, 215 Average gradient, film characteristic curve, 58 Average gray level, 292–293 Averaging mode, 102 Back-projection, 79, 81–83, 106 Background removal advantages of, 66 concept, 57 Backup archive, 259 Baltimore VA Medical Center, 13 Bandpass/bandwidth, 101 Bit allocation table, 130–133 Bone age assessment PACS based, 524 with digital hand wrist radiograph, 525 Bridge, 226–227, 235 Bucky, Gustave, See also scatter 49, 58, 70, 73 Building distribution frame (BDF), 228 Burn-in, 401, 404 Cables for input sources, 228 Cables for image distribution, 228–229 Cabling, 437, 439 digital, 220, 235 planning for, 228 video, 220, 228 CAD, 514 CAR/CARS, 4
Computer-aided instruction (CAI), 584 Carrier Sense Multiple Access with Collision Detection (CSMA/CD), 223 Cathode Ray Tube (CRT), 279 Central processing units, 257 Centralized, 583 Charge-coupled device (CCD), 58 CHLA/USC, 241 Chemical shift, 106 Chrominance components, 150 Cine loop, ultrasound imaging, 102 Cine XCT, 86–87, 102 Clerical staff training of, 438, 448 Cluster helical, spiral XCT, 84 Color Doppler ultrasound imaging, 102 Color spaces YCbCr, 150–151 YIQ, 150 Commission Internationale de L’Eclairage (CIE), 150 Communication ACR-NEMA, digital image standard, 175 asynchronous, 220, 225 description of, 220 synchronous, 220 terminology for, 175 within-hospital systems, See also LAN, 220 Component technology, 350 Composite video control, 117 Compressed image file, 119–120, 127, 130–131, 133 Compression, 151 acceptable ratio, 136 block, 119–120, 127, 130–133, 145–146, 152 clinical image, 120 color image, 148–150 composite, 150 error-free, 119–121, 127, 150 full-frame, 127, 130, 132–133, 137, 140 irreversible, 120, 127 lossless, 119–120, 123, 131, 133, 142, 151–152 lossy, 120, 121, 142, 151–152 of color Ultrasound images, 150 ratio, 120, 122, 127, 129–134, 136–137, 142, 145–148, 151 reversible, 119–120, 131 3-D, 119–120, 140, 144–148 visually lossless, 121
wavelet in Teleradiology, 364 Computation, node in PACS, 495 Computed radiography (CR), 61, 63–69, 72–74 advantages of, 66 components of, 63 Computerized tomography (CT), transmission X-ray introduction of, 84 cine XCT, 86–87, 102 multislice XCT, 87, 89–90, 95 spiral XCT, 87, 89 4D XCT, 90 Connectivity, system, 159 Content mapping resource, 176, 187, 193 Continuous available (CA) architecture of, 66 concept of design, 381 merit of, 407 server design and implementation, 393 testing and performance measurement, 401 Contrast ratio Cr, 281 Conventional projection radiography characteristic curve of X-ray film, 55 description of the procedure used in, 59 film optical density, 55–57 image intensifier tube, 52 image receptor, 51 setup of procedure room and diagnostic area, 49 Convolution back-projection method, 82 Convolution method, SPECT, 92 CORROC2, 462 Cost analysis, 431, 436 Cost benefit analysis, 591 Cosine transform using the fast Fourier transform, 129 Cryptography, 411 public-key (asymmetric), 411 private-key (symmetric), 411 Database-to-database transfer, 311 Database management system, Acquisition Gateway, 198 Database system, PACS controller, 258 Data authoring, 375 Data decryption, 411 Data embedding, 409, 411–412, 415, 417, 419–420, 426 Data encryption, 409, 411–413, 416, 419 Data extraction, 412–413, 419
Data formatting, 175 Data integrity, 411–412 Data migration schedule, 438, 442, 444, 447, 453 tools, 448, 450, 453, 460 tuning, 453–454 Data mining, 509, 514, 524 Data presentation, 377 Data privacy, 411 Data security, 409, 411–412, 419, 421, 423–425 DCT coding, 129 Decoding, 119, 125 Density resolution, 25 Determination of the end of image series, algorithm, 202 constant number of images in a given time-interval, 202 presence of the next series, 202, 204 Diagnostic process, 312 Difference image, 120, 135–137, 148 Digital-based operation, 431–432, 436–438, 455, 458 Digital chain, 58–60 Digital communications, 220, 230–231 Digital dictaphone system, 323 Digital envelope, 409, 412–413, 416–417, 419, 423 Digital fluorography, 49, 57–59, 66, 69 advantage of, 63, 73 components of, 58 definition of, 58 other names for, 59 Digital hand atlas, 525 Digital image definition of, 23 of an X-ray film from video scanning, 57 Digital Imaging and Communication in Medicine (DICOM) standard communication, 171–173, 175–176, 181–185, 191 composite commands, 181 conformance, 174, 176, 183 data format, 171–172, 175–176 examples of using, 183 file format, 176, 179, 193 model of the real world, 176 normalized commands, 181 object classes, 176, 179, 180 security, 409, 411–412, 418–419, 421–430
service classes, 176, 179, 181–185 services, 178, 181, 183 Digital Imaging Network and Picture Archiving and Communication System (DIN/PACS), 6 Digital linear tape (DLT) library, 256–268 Digital signature, 409, 411–412, 415, 419, 422 Digital Mammography full field direct, 69 Digital radiological image, definition of, 23 Digital radiology application in clinical environment, 23 integration with PACS, 77 Digital subtraction angiography (DSA), 59 Digital subtraction arteriography (DSA), 59 Digital-to-analog (D/A) conversion, 24 Digital video angiography (DVA), 59 Digital video subtraction angiography (DVSA), 59 Digital voice, 321–323 Digitally generated patterns, 25 Digitization definition of, 23 Disaster recovery, 469–472 Discrete cosine transform 2-D, 119, 127, 129–131, 133, 140–142, 145, 147–148, 151 Display systems display monitor, 278–280, 282–283, 288, 290, 300 distance area and gray level measurements, 292 histogram modification function, 290 image reverse function, 292 types of, 278, 285 window and level functions, 290 zoom and scroll functions, 289 Display workstations, 277–278, 283, 287, 289 basic software, 296 networks, 219–223, 226–228, 230–239, 241, 246–248, 251–253 soft copy, 277–288, 293 Distance calibration, 293 Distance learning, 568, 581–582 Distributed, 597, 604, 609 Distributed computing basic idea of, 495 in a PACS environment, 500 Downtime effects of, 391
experience, 293, 381–382, 384, 390–391, 405 for over 24 hours, 392 impact to clinical operation, 382 DS-0 (Digital service), 354 DSL (Digital subscriber line), 360 DSSS (Direct sequence spread spectrum), 248 Edge spread function (ESF), 36 Electronic medical record (EMR), 307, 323 Electronic noise, 35, 45 Electronic patient record (EPR), 307, 321, 323, 326 Emission computerized tomography (ECT) description of, 92 positron, 79, 81, 92, 94 single photon, 94 Endoscopy, 98, 112, 116–117 Enterprise PACS concept, 591–593, 601, 610 design of, 595, 599, 601–602, 604 early model of, 593 Enterprise teleradiology, 592 Entropy coding, 129–131, 133, 145 Error-free compression, 121 Huffman coding, 125, 127, 131, 145, 147, 152 run-length coding, 151 Ethernet communication system, See also LAN, 220 departmental, 228, 234–237 eV, 55 Extensible markup language (XML), 174, 192 External network, 232–235 False positive (FP), ROC analysis, 459 Fan beam x-ray technique, 84 large fan beam, 84 Fast Ethernet, 545 Fast Fourier transform (FFT), 79 1-D, 79–81 2-D, 79–81, 104, 106, 110–111 Fault-tolerant (FT) PACS, 381–390, 392–394, 405–408 storage system, 398 Fault tolerance, Communication networks, 231 Fiber-optic broadband video system, 228 Fiber-optic cables, 223, 227–228, 231 Field size, X-rays, 51
Film-based operation, 431–437, 455, 458 Film contrast, 57 Film fog level, 57 Film gamma, 56–57 Film latitude, 56–57 Film optical density, 55–57 Film processor, X-ray, 51 Film speed, 56–57 Filtered (convolution) back-projection method, 82 First-in-first-out (FIFO), 386 Flow velocity, 106 Fluorographic procedure, See also Digital fluorography 49, 57–59, 66, 69 Food and drug administration (FDA), USA, 121 Foot-Lamberts (ft-L), 281 Fourier projection theorem, 79, 81 Fourier transform, 80, 81, 106 discrete, 34 fast, 86–87, 89, 95, 101–102, 106, 109 inverse, 33–34, 42 pair, 24, 26, 29, 31, 34–35, 37–38 series, 39 Free induction decay (FID), 104 Frequency components, 27–34, 38–40 spatial, 25, 27–29, 31, 33–34, 37–39, 42–43 Frequency domain, 29, 31, 33–34 Fuji Photo Film, Co., Ltd, 63 Full-frame bit allocation (FFBA) algorithm, 121, 141, 145, 147–148 Functional MRI (fMRI), 109–110 Fusion scanner, 95–96 Fuzzy classifier, 529, 531 Gamma curve correction, 293 g-ray emission profiles, 81 Gateway, 155–156, 158, 164, 167 network, 219–223, 226–228, 230–239, 241, 246–248, 251–253 in a clinical PACS, 216 Geometric mean modification, 93 GIF (Graphics Interchange Format), 344 Gigabit Ethernet, 223–225, 231, 238 Glare, video monitor, 282 Gray level value definition of, 284, 286, 290 Grid computing, 495, 500, 508 H and D curve, 55–56 Half-angle subtended, 115
Hammersmith Hospital, 16 Hand atlas database web-based, 524–525, 533 Health Level Seven (HL7), 171 health care database information exchange, 318 Health maintenance organizations (HMOs), 353 Helical scanning, 86 Heterogeneous databases distributed database system, 311, 317, 321 integration of, 307–308, 317, 320–321, 325–326 High availability (HA), 381 High contrast response, 38 High-pass filtering, 145–146 HIPAA, 409, 411, 423–425, 428–429 Histogram, 290, 292–293 definition of, 24 equalization, 290, 292 modification, 290 Hong Kong Hospital Authority Healthcare Enterprise, 601 Hospital information system (HIS), 171 common data in, 314 Hospital integrated PACS (HI-PACS), 3 Host computer, connection of a film scanner to a, 61 Hounsfield number, 90 Hub room, 227–229, 232 Huffman coding, 125, 127, 131, 145, 147, 152 Huffman tree, 125 Human error, 196, 207 Hydrogen density, 106 ICU, intensive care unit, 434, 440 ID card Reader, 64 ID terminal, 64 IMAC, 3–7 Image archiving, 255, 257, 259–260, 262 capture, 72–74 collection, 432, 460 compression, 119–123, 127, 129–133, 136–137, 140, 142, 145–152 content-based indexing, 490, 492 delivery performance, PACS, 458 display, 434–440, 450–451, 455–460, 462 display and measurement functions, 289, 297 display board, 278, 301 display program, 302, 304
intensifier tube, 52–53, 58 measurement of quality, 34 memory, See also image buffer, 113 prefetching, 260, 263 preprocessing, 107 unsharpness, 35, 44–45 Image database query support, 495 Image digest, 411, 415, 419 Image file server distributed, 336–337, 341–343, 347 Image grouping, 205 Image hashing, 412, 415 Image management, 256 Image matching, 514 an example of, 515–517, 531 as a diagnostic support tool, 518, 520, 522 data collection, 513, 515 summary of, 522, 537 Image query/retrieve, 270 Image quality evaluation, 459–460 Image receiving, 260, 269 Image receptor characteristic curve of X-ray film, 55 digital fluorography, 49, 57–59, 66, 69 film optical density, 55–57 image intensifier tube, 52 screen/film combination, 52 Image reconstruction, 79–81, 84, 89, 92, 95, 102, 109 algebraic reconstruction method, 81 filtered back-projection method, 82 Fourier projection theorem and, 79, 81 irreversible compression, 120–121, 127 Image registration, 515 Image retrieving, 260, 263 Image reverse, 292 Image routing, 260–261, 269 Image security system, 412, 426, 428 Image management algorithm, 313 Image send, 270 Image size, definition of, 23 Image stacking, 260 Image storage, 255–256, 270–271 Image-assisted surgery system (IASS) architecture, 540, 545, 548 concept of, 539–540, 542, 545, 548, 558, 565–566 definition and functionality of, 542 hardware components, 545 software modules, 545 Imaginary components, Fourier transform, 32
Imaging informatics, 539, 542, 545, 548–549, 558, 561 Imaging plate technology laser-stimulated luminescence phosphor plate, See also CR, 62 Implementation management, 444 Industry standards, 156, 159 Input image, 39 integrated image self-scaling network (ISSN), 251 Integrated services digital network (ISDN), 354 Integrating in PACS and MIII, 504 Integrating the healthcare enterprise (IHE) future of, 190 introduction, 175–176 key image note profile, 189 patient information reconciliation profile, 189 presentation of grouped procedures (PGP) profile, 339 profiles, 176, 181–182, 184–190 radiology information integration profile, 320 simple image and numeric report profile, 189 technical framework and integration profiles, 188 Intensifying screen, 53–55 Interactive and media-rich learning, 580 Interactive teaching file, 584, 586–587 Interface engine, 311–312, 325 Interfacing HIS, 307–308, 310–312, 314, 317–318, 320–321, 323–326, 328–330 PACS, 307–308, 310–315, 317–318, 320–326 RIS, 307–308, 310–315, 317–318, 320–321, 323–324, 326, 329 Intermediate distribution frame (IDF), 228 Internal network, 232–233, 235 International Society for Optical Engineering (SPIE), 4 International Standards Organization (ISO), 172 Internet, 219, 226, 233–236, 238, 241, 247, 251, 253 Internet 2 (I2) definition, 241 current performance, 241 Intravenous video arteriography (IVA), See also DSA, 59
Inverse Fourier transform, 33–34, 42 Inverse transform, 80, 81, 106 Irreversible compression description of, 152 examples of, 234, 241 measurement of differences between original and reconstructed image, 133, 136 methodology for, 127 qualitative measurement, 135 theory of, 127 Iterative modification, SPECT, 94 Japan Association of Medical Imaging Technology (JAMIT), 4 Job prioritization, 336–337 Joint photographic experts group (JPEG), 127 Just noticeable differences (JND), 281 Karhunen-Loeve, image compression method, 127 kVp, X-rays generation, 67 LAN, local area network, 220 Laser scanner accuracy of, 59 Laser-stimulated luminescence phosphor plate, See also CR operating characteristics of the, 63, 65 LCD (liquid crystal display), 289 Line pairs, 280 Line spread function (LSF), 31, 36 Local image storage, Gateway, 200 Local storage management, 256 Lookup table, 290 Low contrast response, 38 Luminance, 277, 280–283 MAC (Media Access Control), 422 Magnetic disk, 257, 259–260, 262, 267 Magnetic resonance imaging (MRI), 79, 104 advantages over other modalities, 104 block diagram of, 100, 104, 117 description of, 104 fundamentals of, 104 introduction of, 104 production, 104 pulsing sequences and relaxation, 105 tissue characterization, 109 Mammography CAD, 179, 187
Medical diagnostic imaging support (MDIS), 442 Medical image archiving and communication system (MIACS), 604 Medical imaging informatics infrastructure (MIII) architecture and components, 488 concept, 487–488, 492, 495, 498, 500, 507–508 PACS-based, 485, 487, 491–492, 504 Micro sectional images, 96 Microscopic imaging, 113, 117 real color, 116 Mobile site module, 604, 607–609 Modulation transfer function (MTF), 31, 37 Moiré patterns, 57 Montage, 293–294 Motorized stage, 113 Multimedia medical data, 318, 328 in the radiology department, 320 Multiresolution analysis, wavelet transform, 140 National Electrical Manufacturers Association (NEMA), standard, 175 National Library of Medicine (NLM), 241 National television system committee (NTSC), 150 NATO ASI, 5 Nearest integer function, 130 Network architecture for healthcare IT, 235, 237 for PACS, 231, 235, 237–238, 247–248 general, 230, 235, 247, 251 UCLA PACS, 238 Network distribution center (NDC), 227 Networking, 219–220, 226–227, 251 Network interfaces, 156 peer-to-peer, 156 master-slave, 156 Network management, 335, 338 Noise, measurement of, 45 Normalized mean-square error (NMSE), 133, 151 Nuclear medicine scanning, 92, 97 gamma camera and components for, 98 principles of, 97, 99 image format for, 25, 98 Nuclei, 92, 104
Numerical aperture, 116 Nyquist frequency, 57 Observer viewing sequence, 460 OFDM (orthogonal frequency division multiplexing), 248 Oil immersion lens, 116 One-dimensional projection, 79 1-D FFT, 80 100 Base, network cable, 227 1K monitors, 280, 285, 287 Open architecture, PACS, 159 Open Systems Interconnect (OSI), 171 four-layer, DOD, 223 seven-layer, OSI, 223 Optical Carrier Level 1 (OC-1), 225 OC-3, 224–225 OC-12, 224–225 OC-48, 225 OC-192, 245 Optical density (OD) of film, 55, 72 versus gray level value, 60 Original image, 119–122, 125, 127, 129, 131, 133, 136–137, 141, 144, 146–148, 151 Out-sourcing, 598–599 Outcome analysis, 509–510, 513 Output image, 39, 40, 44 Outright purchase, 598, 600 PACS archive server DICOM compliant, 265, 268, 273–274 functions, 255, 257, 259–260, 262–263, 269–271, 273 system operation, 255, 264 hardware and software, 255, 266 Parallel data transmission, 220 Patient folder manager concept, 333 Pay-per-procedure, 599 Percentage of pixel changed, 420, 426 Performance evaluation, 455 Personal digital assistant (PDA), 249 Personal health data (PHD), 4 Phantoms physical, 24–26, 28–29, 34 Picture archiving and communication system (PACS), 155 automatic monitoring system (AMS), 571 bottlenecks, 463, 473, 477, 481 broker, 196, 210–211 clinical experience, 463, 467, 469, 473, 481
  communication, 219–228, 230–232, 238–239, 241, 251, 253
  components, 155, 157, 159–161, 167–168
  controller, 255–257, 259–260, 264, 267
  current architectures, 162
  database, 257–260, 262–264, 266, 268–271
  display, 278–285, 290, 293–294, 296–297, 300–304
  education and training of, 568
  enterprise, 589, 591–593, 595, 597–602, 604, 609–610
  ePR with images, 324
  evaluation, 520–522, 524–525
  generic workflow, 161
  implementation, 498
  infrastructure, 10–11, 15, 20–21, 154, 332
  interface with other medical database, 318
  large scale, 9
  modules, 431, 458
  network, 219–223, 226–228, 230–239, 241, 246–248, 251–253
  pitfalls, 463, 473, 475, 481–483
  security server, 425, 427–428
  simulator, 549, 552–553, 556, 559–561, 563
  system evaluation, 431, 450, 455
Pitch, helical and multislice XCT, 86, 89
Pixel, 23–29, 44–45, 47
Planning to install a PACS
  air conditioning, 437
  cabling, 437, 439
  cost analysis, 574, 580
  interfacing to imaging modalities and utilization of display workstation, 436
  staffing, 431, 437–438, 442–443, 446–447
  training, 437–440, 443, 446–448, 450, 460
Point spread function (PSF), 31, 35–36
Positron emission computed tomography (PET), 95
Power spectrum, 45
Prefetch algorithm, 263
Processing mode, DSA, 58
Processing, image, 289
Projection radiology, 49
Pulse repetition frequency (PRF), US imaging, 100
Pulsing sequences, MR imaging, 105
Quality assurance, 572
Quantization, 130–131, 133, 142, 145, 147, 152
Quantum statistics, 45
Query protocol, 311, 315, 317
Radiation therapy (RT)
  an example of, 515, 517, 531
  data flow, 548, 558–559, 561, 566
  DICOM objects, 556–557, 563
  dose, 548–550, 556, 558–559, 561
  image, 540, 542–545, 547–553, 556–561, 563, 565–566
  PACS and imaging informatics-based, 487
  plan, 548–551, 553, 556, 558–561, 563
  server, 539–540, 542–544, 548, 558–561, 563–564, 566
  structure set, 556, 559, 561
  workflow of, 549
Radio frequency (RF), 104
Radiological contrast, 45
Radiology information system (RIS), 307
  common data in, 314
  simulator, 571
Radiology reports online, 334
Radiology workflow, 49
Random access memory (RAM), 278
Real component, Fourier transform, 32
Receiver operating characteristic analysis (ROC), 136
  area Az under, 137
Reconstructed image, 119–120, 127, 131, 133–137, 147–148
Receiving site, 409, 412, 417, 419
Redundant array of inexpensive disks (RAID), 257
Redundant storage, 384
Reformatting, image, 211
Refractive index of the medium, 115
Regions of interest (ROI), 515, 519, 527
Repeater, 226–228
Repetition time, MR, 100
Residence time
  acquisition, 434, 438, 441–443, 445, 448, 455–458
  archive, 434, 439, 441–443, 446, 449, 451–458
  distribution, 432, 435, 443–444, 455, 457
  display, 434–440, 450–451, 455–460, 462
  image retrieval, 455, 457
  network, 439, 442–445, 450–452, 455, 457, 459
  total image, 457
Resolution
  MRI, 79, 99, 104, 106, 109–111
  display monitor, 278–280, 282–283, 288, 290, 300
Retrieval, image, 333–345, 347–352
Rose model, 281
Router, 226–227, 235–236, 238
Run-length coding, 123, 125, 131, 147, 151
Run-zero coding, 125
Sampling, 57
Scanners/scanning
  laser, 57, 59–64, 69, 72
  linearity, 60
  specifications of laser, 61
  video and CCD camera, 58
Scatter, 49, 58, 70, 73
Scintillation crystal, 93
Screen/film combination, 52
Sectional images, 79, 86
Security, communication network, 231–234, 248–249
Segmentation, 490
Self-scalar, 251–253
Self-scaling networks
  concept, 234, 251–253
  design of in healthcare environment, 253
Sending site, 409, 412, 416, 419
Sharpness
  measurement of, 34–35
Signal-to-noise power ratio, 45, 47
Signal-to-noise ratio (SNR), 45
  peak, 134, 148
SimPHYSIO, 580–581, 583
Single-points-of-failure (SPOF), 384, 387, 390, 393, 398, 400
Silver halide, 54–55
Single helical, 85–86
Single photon, 94
Single photon emission CT (SPECT), 92
SJHC (St. John’s Health Center), 405, 454
Slice thickness, 86, 89–90, 92, 110
Small Computers Systems Interface (SCSI), 257
SMPTE phantom, 133
Software purchase, 599
Spatial domain, 28
Spatial resolution, 25
Spin-lattice relaxation time (T1), 104
Spin-spin relaxation time (T2), 104
Spinal surgery, 542, 545, 547
Spiral (helical) XCT, 84
Standardization, 171
Standard Ethernet, 223, 231
Static magnetic field B0, MR, 104
Static mode, 102
Stationary scintillation detector array, 85, 87–88, 92–94
Statistical analysis, ROC, 136–137, 139, 459
Statistical power, 137
Stepping motors, 113, 115
Storage of image data, 101
  computed radiography, 61, 63, 72
  magnetic disk, 257
  random access memory (RAM), 278
Structured Query Language (SQL), 258
Structured reporting, 179, 187, 193
Studies grouping, 260, 262
Subsystem throughput analysis, 455
Survey mode, 101
Synchronous optical network (SONET), 226
System efficiency analysis, 457
TCP/IP, 172–173, 183, 191
Technologist
  training of, 438, 448
Teleconsultation
  communication requirements, 373
  hardware configuration, 374
  procedure and protocol, 376
  real-time system, 372–374
  system design, 372–373
Telediagnosis, 353, 368–369
Telemammography, 358, 367–369
Telemanagement, 353–354, 368–369
Telemedicine, 353–354, 369, 378–379
Telemicroscopy, 369, 371
Teleradiology, 353–362, 364–367, 369, 371, 374, 378–379
  components, 354, 358, 361, 369, 371
  definition, 355
  models, 353, 357, 365
  web-based, 362, 364, 366
  trade-off parameters, 366
Temporal assessment, 511
Temporal image database, 509, 511, 513–514
10 Base, network cable
  5, 226
  F, 227
  T, 227
  2, 226
  X, 226
Test objects and patterns, 25
3-D rendering, 490, 493–495, 508
3-D visualization, 494
Threshold contrast, 281
Throughput analysis, 455, 457
T1, WAN, 241, 246–247
Training
  clerical staff, 568
  residents, 577, 579, 587
  technologists, 568, 572, 576–577, 583
Transform coding, 127
Transformed image, 119, 127, 130–133
Transmission
  parallel data, 220
  serial data, 220
  speed of, 223, 231, 238
Transmittance, 55
Transverse (cross) section, 84
Trigger mechanism, between two databases, 315
Triple module redundant (TMR), 395
True, ROC analysis
  determination of, 460
True positive (TP), 137
2-D FFT (fast Fourier transform), 106
2K monitors, 284–285
UCAID (The University Corporation for Advanced Internet Development), 241
UCLA, 6, 9
UCSF, 9
UH, 241
Ultrasound imaging, 98–99, 102
Ultrasound scanning, 99, 102
  cine loop, 102
  color Doppler, 101–102
  principles of B-mode, 97, 99
  sampling modes and image display, 101
  system block diagram and operational procedure for, 100
  three-dimensional, 79, 87, 102, 104
UNIX, operating system, 191
Unsharpness, 35, 44–45
Unshielded twisted pair (UTP), 227
User acceptance, PACS, 459
User friendliness, teleradiology, 360
Video cables, 227
Video camera, 38–39, 43–44, 46
Video RAM (VRAM), 278
Video scanning
  analog/digital (A/D) conversion, 58
  digital image of an X-ray film from, 59
Video/image conferencing, 495
Visible light image, 112
VistA
  hospital and radiology information system, 324
  imaging, 325
  imaging workflow, 326
  imaging operation, 327
  information system architecture, 328–329
Volumetric visualization, 494
Voting logic, 396, 407
Voxel, 23–24
Waveform IOD, 187
Wavelet transform
  decomposition, 142, 145–146, 150
  filter selection, 146
  theory, 140
  3-D, 119–120, 140, 144–148
Web display clients, 344
Web server
  architecture of, 345, 347, 349, 393, 395, 398
  component-based, 345, 347, 349, 350–352
  data flow of, 347–348
  description, 335–339
  for image distribution and display, 333
  in PACS environment, 344
Weber-Fechner law, 281
Whole-body PET image, 95
Wide area network (WAN), 220
Wiener spectrum, 45
Window and level, 290, 296, 304
Wireless LAN (WLAN), 248
Wireless WAN (WWAN), 248, 251
Within-hospital systems, communications, 239
  examples of, 234, 241
  See also LAN, 220
Workstation
  analysis, 285, 287–288
  desktop, 342, 345
  diagnostic, 277, 285, 287–288
  digitizing, 285, 288
  display, 278–285, 290, 293–294, 296–297, 300–304
  emulation, 311
  image, 353–359, 361–362, 364–367, 369, 371–379
  interactive teaching, 285, 288
  printing, 285, 288
  review, 284–285, 287–288
  softcopy, 277
Write once, read many (WORM), 259
X-ray attenuation, 80, 84, 90
  coefficient, 90
  profiles, 80–81, 84, 102, 109
X-ray beam, 80, 86–87, 92
  collimated, 80
  cone beam, 85, 87–90, 92
X-ray computerized tomography (XCT), 84
  block diagram of, 100, 104, 117
  gray level value, 24
  scanning mode, 84, 87, 90
X-ray film characteristic curve, 55–56
X-ray film, digitization of
  laser film scanner, 59
  sampling, 57
  specifications of film scanners, 59
  video scanning system, 58
X-ray photon, 52–53, 55, 62, 70–72
X-ray source, 58
Zoom and scroll, 289
Figure 4.15A Color Doppler US blood flow image showing convergent pulmonary vein inflow. (Courtesy of Siemens Medical Imaging Systems. http://www.siemensmedical.com/webapp/wcs/stores/servlet/PSProductImageDisplay?productId=17966&storeId=10001&langId=-1&catalogId=-1&catTree=100001,12805,12761*559299136.)
Figure 4.15B 3-D ultrasound of a 25-week-old fetal face. (Courtesy of Philips Medical Systems. http://www.medical.philips.com/main/products/ultrasound/assets/images/image_library/3d_2295_H5_C5-2_OB_3D.jpg.)
Figure 4.21 (A) Brain activation map overlaid on the structural T2-weighted image. (B) 3-D reconstruction based on a series of fMRI images. Color codes represent the degree of activation. (Courtesy of Dr. S. Erberich.)
Figure 4.23B A digital telemicroscopic system: (B) Close-up of the acquisition workstation. Pertinent data related to the exam are shown in various windows. Icons on the bottom right (8) are seven simple click-and-play functions: transmit, display, exit, patient information, video capture, digitize, and store. The last captured image (9) is shown on the workstation monitor. [Prototype telemicroscopic imaging system at the Laboratory for Radiological Informatics, UCSF. Courtesy of Drs. S. Atwater, T. Hamill, and H. Sanchez (images); and Nikon Res. Corp. and Mitra Imag. Inc. (equipment).]
Figure 4.26 An endoscopic image of the colon.
Figure 9.11 The Abilene backbone (courtesy of Abilene). Labeled sites include UCLA, UCSF, USC, Stanford, UH, and NLM. Red lines and letters indicate the sites and connections where I2 performance was measured. Results are given in Figures 9.14 and 9.15.
Figure 12.14 (A) VistA Imaging displays the patient record with images, combining images with the traditional patient record components. (B) VistA Imaging displays thumbnail images, microscopic images, MRI, and EKG, providing clinicians with access to all images and EKGs. (Courtesy of Dr. H. Rutherford; Dayhoff, 2000.)
Figure 20.2 The MIII visualization engine in the outcome analysis of lung nodule treatment. (A) Three windows in the graphic user interface. Left: a CT chest image. Lower right: an enlarged segmented nodule; +, lesion pixel. Upper right: segmented nodules from CT scans stored in the chest image database. (B) Processed nodules in a CT image are turned red to confirm that each nodule has been processed and is not processed twice. Two consecutive sections are shown.
Figure 20.2 (C) Graphic user interface (GUI) with which the user can review statistics for each processed nodule.
Figure 20.6 Distribution of subdural hematoma in a database of 107 cases. Bright yellow indicates the highest probability. (Courtesy of Drs. M. Leventon, M. Zhang, and L. Liu.)
Figure 20.7 Distribution of meningioma in a database of 380 cases. Bright yellow indicates the highest probability. (Courtesy of Drs. M. Leventon, M. Zhang, and L. Liu.)
Figure 19.6 (C) Pseudocolor PET image (physiology) overlays the black-and-white CT image (anatomy).
Figure 20.10 A GUI for selecting an ROI in a 3-D brain atlas. The crosshair in the axial view indicates the user-selected point. A 3-D view is shown in the top right panel, along with the 2-D axial, coronal, and sagittal planes containing the user-selected point. (Courtesy of Dr. J. Nielsen.)