IFMBE Proceedings Series Editors: R. Magjarevic and J. H. Nagel
Volume 16/1
The International Federation for Medical and Biological Engineering, IFMBE, is a federation of national and transnational organizations representing internationally the interests of medical and biological engineering and sciences. The IFMBE is a non-profit organization fostering the creation, dissemination and application of medical and biological engineering knowledge and the management of technology for improved health and quality of life. Its activities include participation in the formulation of public policy and the dissemination of information through publications and forums. Within the field of medical, clinical, and biological engineering, IFMBE’s aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. The objectives of the IFMBE are scientific, technological, literary, and educational. The IFMBE is a WHO-accredited NGO covering the full range of biomedical and clinical engineering, healthcare, and healthcare technology and management. Through its 58 member societies, it represents some 120,000 professionals involved in the various issues of improved health and health care delivery.

IFMBE Officers
President: Makoto Kikuchi, Vice-President: Herbert Voigt, Past-President: Joachim H. Nagel
Treasurer: Shankar M. Krishnan, Secretary-General: Ratko Magjarevic
http://www.ifmbe.org
Previous Editions:
IFMBE Proceedings MEDICON 2007 “11th Mediterranean Conference on Medical and Biological Engineering and Computing 2007”, Vol. 16, 2007, Ljubljana, Slovenia, CD
IFMBE Proceedings BIOMED 2006 “Kuala Lumpur International Conference on Biomedical Engineering”, Vol. 15, 2006, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings WC 2006 “World Congress on Medical Physics and Biomedical Engineering”, Vol. 14, 2006, Seoul, Korea, DVD
IFMBE Proceedings BSN 2007 “4th International Workshop on Wearable and Implantable Body Sensor Networks”, Vol. 13, 2007, Aachen, Germany
IFMBE Proceedings ICBMEC 2005 “The 12th International Conference on Biomedical Engineering”, Vol. 12, 2005, Singapore, CD
IFMBE Proceedings EMBEC’05 “3rd European Medical & Biological Engineering Conference, IFMBE European Conference on Biomedical Engineering”, Vol. 11, 2005, Prague, Czech Republic, CD
IFMBE Proceedings ICCE 2005 “The 7th International Conference on Cellular Engineering”, Vol. 10, 2005, Seoul, Korea, CD
IFMBE Proceedings NBC 2005 “13th Nordic Baltic Conference on Biomedical Engineering and Medical Physics”, Vol. 9, 2005, Umeå, Sweden
IFMBE Proceedings APCMBE 2005 “6th Asian-Pacific Conference on Medical and Biological Engineering”, Vol. 8, 2005, Tsukuba, Japan, CD
IFMBE Proceedings BIOMED 2004 “Kuala Lumpur International Conference on Biomedical Engineering”, Vol. 7, 2004, Kuala Lumpur, Malaysia
IFMBE Proceedings MEDICON and HEALTH TELEMATICS 2004 “X Mediterranean Conference on Medical and Biological Engineering”, Vol. 6, 2004, Ischia, Italy, CD
IFMBE Proceedings 3rd Latin-American Congress on Biomedical Engineering “III CLAEB 2004”, Vol. 5, 2004, Joao Pessoa, Brazil, CD
IFMBE Proceedings WC2003 “World Congress on Medical Physics and Biomedical Engineering”, Vol. 4, 2003, Sydney, Australia, CD
IFMBE Proceedings EMBEC’02 “2nd European Medical and Biological Engineering Conference”, Vol. 3, Parts 1 & 2, 2002, H. Hutten and P. Kroesl (Eds.), Vienna, Austria
IFMBE Proceedings 12NBC “12th Nordic Baltic Conference on Biomedical Engineering and Medical Physics”, Vol. 2, 2002, Stefan Sigurdsson (Ed.), Reykjavik, Iceland
IFMBE Proceedings MEDICON 2001 “IX Mediterranean Conference on Medical Engineering and Computing”, Vol. 1, Parts 1 & 2, 2001, R. Magjarevic, S. Tonkovic, V. Bilas, I. Lackovic (Eds.), Pula, Croatia
IFMBE Proceedings Vol. 16/1 T. Jarm, P. Kramar, A. Županič (Eds.)
11th Mediterranean Conference on Medical and Biological Engineering and Computing 2007 MEDICON 2007, 26 – 30 June 2007 Ljubljana, Slovenia
Editors

Tomaž Jarm
University of Ljubljana, Faculty of Electrical Engineering
Trzaska 25, 1000 Ljubljana, Slovenia
E-Mail: [email protected]

Anže Županič
University of Ljubljana, Faculty of Electrical Engineering
Trzaska 25, 1000 Ljubljana, Slovenia
E-Mail: [email protected]

Peter Kramar
University of Ljubljana, Faculty of Electrical Engineering
Trzaska 25, 1000 Ljubljana, Slovenia
E-Mail: [email protected]
Library of Congress Control Number: 2007928834
ISSN: 1680-0737
ISBN: 978-3-540-73043-9 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

The IFMBE Proceedings is an Official Publication of the International Federation for Medical and Biological Engineering (IFMBE)

Springer is a part of Springer Science+Business Media
springer.com

© International Federation for Medical and Biological Engineering 2007

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Data supplied by the authors
Production: Le-Tex Jelonek, Schmidt & Vöckler GbR
Cover design: deblik, Berlin
Printed on acid-free paper
SPIN 12075973
About IFMBE The International Federation for Medical and Biological Engineering (IFMBE) was established in 1959 to provide medical and biological engineering with a vehicle for international collaboration in research and practice of the profession. The Federation has a long history of encouraging and promoting international cooperation and collaboration in the use of science and engineering for improving health and quality of life. The IFMBE is an organization with membership of national and transnational societies and an International Academy. At present there are 52 national members and 5 transnational members representing a total membership in excess of 120,000 worldwide. An observer category is provided to groups or organizations considering formal affiliation. Personal membership is possible for individuals living in countries without a member society. The International Academy includes individuals who have been recognized by the IFMBE for their outstanding contributions to biomedical engineering.
Objectives The objectives of the International Federation for Medical and Biological Engineering are scientific, technological, literary, and educational. Within the field of medical, clinical and biological engineering, its aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. In pursuit of these aims the Federation engages in the following activities: sponsorship of national and international meetings, publication of official journals, cooperation with other societies and organizations, appointment of commissions on special problems, awarding of prizes and distinctions, establishment of professional standards and ethics within the field, as well as other activities which in the opinion of the General Assembly or the Administrative Council would further the cause of medical, clinical or biological engineering. It promotes the formation of regional, national, international or specialized societies, groups or boards, the coordination of bibliographic or informational services, the improvement of standards in terminology, equipment, methods and safety practices, and the delivery of health care. The Federation works to promote improved communication and understanding in the world community of engineering, medicine and biology.
Activities Publications of IFMBE include: the journal Medical and Biological Engineering and Computing, the electronic magazine IFMBE News, and the Book Series on Biomedical Engineering. In cooperation with its international and regional conferences, IFMBE also publishes the IFMBE Proceedings Series. All publications of the IFMBE are published by Springer-Verlag. The Federation has two divisions: Clinical Engineering and Health Care Technology Assessment. Every three years the IFMBE holds a World Congress on Medical Physics and Biomedical Engineering, organized in cooperation with the IOMP and the IUPESM. In addition, annual, milestone and regional conferences are organized in different regions of the world, such as Asia Pacific, Europe, the Nordic-Baltic and Mediterranean regions, Africa and Latin America. The Administrative Council of the IFMBE meets once a year and is the steering body for the IFMBE. The Council is subject to the rulings of the General Assembly, which meets every three years. Information on the activities of the IFMBE can be found on the web site at: http://www.ifmbe.org.
Foreword It is our great pleasure to welcome you to the 11th Mediterranean Conference on Medical and Biological Engineering and Computing – MEDICON 2007. After 24 years, the MEDICON conference is returning to Slovenia, this time to its capital, Ljubljana. The MEDICON conferences are international events of high scientific standard with a long-standing tradition, held every three years in one of the Mediterranean countries under the auspices of the International Federation for Medical and Biological Engineering. Biomedical engineering today is a well-recognized area of research. It brings together bright minds from diverse disciplines, ranging from engineering, physics, and computer sciences on one side to biology and medicine on the other. With the valuable assistance of the members of the International Advisory Committee and the Scientific Program Committee, the co-organizing institutions and societies, our sponsors, and distinguished invited lecturers, we will ensure that the research and development presented at the MEDICON 2007 plenary meetings, scientific sessions, and workshops are truly relevant and up-to-date. MEDICON 2007 is taking place at the premises of the University of Ljubljana, Faculty of Electrical Engineering. The choice was obvious: Ljubljana is accessible and vibrant, and the academic environment is familiar to all of us. As Ljubljana is located in the centre of the country, it also offers an excellent opportunity to explore on your own the remarkable variety of Slovenia’s scenery and national heritage. The Mediterranean coast of Primorska, the Alpine resorts of Gorenjska, the lovely hills and primeval forests of Dolenjska, and the far-reaching plains of Prekmurje are just a few examples of the regional diversity you can encounter here. All these and other regions of Slovenia are easily accessible from Ljubljana within a two-hour drive. We will visit some of our jewels together: Bled and Postojna.
We feel confident that you will enjoy MEDICON 2007 both scientifically and socially, and we will make every effort to make it a memorable event. This is also the place where you will meet old friends and make new ones. Together we will face the new challenges imposed on us by our society and by new technologies. We look forward to meeting you all in Ljubljana.

Professor Damijan Miklavcic
Chairman of the Organizing Committee
Professor Tadej Bajd
President of the Slovenian Society for Medical and Biological Engineering
Conference details
Name: 11th Mediterranean Conference on Medical and Biological Engineering and Computing
Short name: MEDICON 2007
Venue: Ljubljana, Slovenia
Date: June 26–30, 2007
Proceedings editors Tomaz Jarm Peter Kramar Anze Zupanic
Organized by Slovenian Society for Medical and Biological Engineering
In cooperation with:
University of Ljubljana, Faculty of Electrical Engineering http://www.fe.uni-lj.si/
Institute for Rehabilitation of Republic of Slovenia http://www.ir-rs.si/
IFMBE - International Federation for Medical and Biological Engineering http://www.ifmbe.org
Institute of Oncology, Ljubljana, Slovenia http://www.onko-i.si/
Jozef Stefan Institute, Ljubljana, Slovenia http://www.ijs.si/ijsw
Clinical Center Ljubljana, Slovenia http://www2.kclj.si/
European Federation of Organisations for Medical Physics http://www.efomp.org/
University of Ljubljana, Faculty of Computer and Information Science http://www.fri.uni-lj.si/
University of Maribor, Faculty of Electrical Engineering and Computer Science http://www.feri.uni-mb.si/podrocje.aspx
Local Organizing Committee Damijan Miklavcic (Chairman) Tadej Bajd Imre Cikajlo Peter Gajsek Tomaz Jarm Tadej Kotnik Peter Kramar Zlatko Matjacic Matjaz Mihelj Anze Zupanic Robert Cugelj Jadran Lenarcic Sasa Markovic Zvonimir Rudolf Tomaz Slivnik Igor Ticar
Scientific Programme Committee Maria Teresa Arredondo (Spain) Janez Bester (Slovenia) Manfred Bijak (Austria) Wolfgang Birkfellner (Austria) Bozidar Casar (Slovenia) Stelios Christofides (Greece) Igor Emri (Slovenia) Carlo Frigo (Italy) Borut Gersak (Slovenia) Milan Gregoric (Slovenia) Francis X. Hart (USA) William Harwin (UK) Ales Iglic (Slovenia) Paolo Inchingolo (Italy) Franc Jager (Slovenia) Joze Jelenc (Slovenia) Rihard Karba (Slovenia) Nada Lavrac (Slovenia) Ratko Magjarevic (Croatia) Crt Marincek (Slovenia)
Roberto Merletti (Italy) Lluis M. Mir (France) Marko Munih (Slovenia) Mustapha Nadi (France) Joachim Nagel (Germany) Eberhard Neumann (Germany) Franjo Pernus (Slovenia) Dejan Popovic (Denmark) Robert Riener (Switzerland) Gregor Sersa (Slovenia) Franc Solina (Slovenia) Vlado Stankovski (Slovenia) Martin Stefancic (Slovenia) Vojko Strojnik (Slovenia) Johannes Struijk (Denmark) Pascal Verdonck (Belgium) Max A. Viergever (Netherlands) Veljko Vlaisavljevic (Slovenia) Damjan Zazula (Slovenia) Ales Zemva (Slovenia) Tatjana Zrimec (Australia) Anton Zupan (Slovenia) Blaz Zupan (Slovenia)
International Advisory Committee Marcello Bracale (Italy) Ivan Bratko (Slovenia) Mario Cifrek (Croatia) David Elad (Israel) Attilio Evangelisti(Italy) Frederique Frouin (France) Enrique J Gomez (Spain) Akos Jobbagy(Hungary) Prodromos Kaplanis (Cyprus) Nicolas Pallikarakis (Greece) Costantinos S. Pattichis (Cyprus) Laura M. Roa (Spain) Herve Saint-Jalmes (France) Mario Forjaz Secca (Portugal) Thomas Sinkjaer (Denmark) Vesna Spasic Jokic (Serbia) Stanko Tonkovic (Croatia) Jos van der Sloten (Belgium) Peter Veltink (Netherlands)
IFMBE Mediterranean Conferences on Medical and Biological Engineering 1977–2007
MEDICON 1977 – I Mediterranean Conference on Medical and Biological Engineering, 12–17 September 1977, Sorrento, Italy
MEDICON 1980 – II Mediterranean Conference on Medical and Biological Engineering, 15–19 September 1980, Marseilles, France
MEDICON 1983 – III Mediterranean Conference on Medical and Biological Engineering, 5–9 September 1983, Portoroz, Yugoslavia
MEDICON 1986 – IV Mediterranean Conference on Medical and Biological Engineering, 9–12 September 1986, Seville, Spain
MEDICON 1989 – V Mediterranean Conference on Medical and Biological Engineering, 29 August–1 September 1989, Patras, Greece
MEDICON 1992 – VI Mediterranean Conference on Medical and Biological Engineering, 5–10 July 1992, Capri, Italy
MEDICON 1995 – VII Mediterranean Conference on Medical & Biological Engineering, 17–21 September 1995, Jerusalem, Israel
MEDICON 1998 – VIII Mediterranean Conference on Medical & Biological Engineering, 14–17 June 1998, Limassol, Cyprus
MEDICON 2001 – IX Mediterranean Conference on Medical Engineering and Computing, 12–15 June 2001, Pula, Croatia
MEDICON and HEALTH TELEMATICS 2004 – X Mediterranean Conference on Medical and Biological Engineering, 31 July–5 August 2004, Ischia, Italy
MEDICON 2007 – XI Mediterranean Conference on Medical and Biological Engineering and Computing, 26–30 June 2007, Ljubljana, Slovenia
Content Invited Lectures EMITEL – an e-Encyclopedia for Medical Imaging Technology......................................................................................... 1 S. Tabakov, C. A. Lewis, A. Cvetkov, M. Stoeva, EMITEL Consortium
Control for Therapeutic Functional Electrical Stimulation.................................................................................................. 3 Dejan B. Popovic, Mirjana B. Popovic
Patient-Cooperative Rehabilitation Robotics in Zurich ........................................................................................................ 7 Robert Riener
From Academy to Industry: Translational Research in Biophysics .................................................................................. 10 R. Cadossi, M.D.
Information Technology Solutions for Diabetes Management and Prevention: Current Challenges and Future Research Directions............................................................................................................................................. 14 R. Bellazzi
Systemic Electroporation – Combining Electric Pulses with Bioactive Agents................................................................. 18 Eberhard Neumann
Normal Sessions Analysis of ECG Intelligent Internet Based, High Quality ECG Analysis for Clinical Trials ...................................................................... 22 T.K. Zywietz, R. Fischer
Effects of vagal blockade on the complexity of heart rate variability in rats .................................................................... 26 M. Baumert, E. Nalivaiko and D. Abbott
Assessment of the Heart Rate Variability during Arousal from Sleep by Cohen’s Class Time-Frequency Distributions................................................................................................................. 30 M.O. Mendez, A.M. Bianchi, O.P. Villantieri and S. Cerutti
An algorithm for classification of ambulatory ECG leads according to type of transient ischemic episodes................. 34 A. Smrdel and F. Jager
Phase-Rectified Signal Averaging for the Detection of Quasi-Periodicities in Electrocardiogram ................................. 38 R. Schneider, A. Bauer, J.W. Kantelhardt, P. Barthel and G. Schmidt
Relative contribution of heart regions to the precordial ECG-an inverse computational approach .............................. 42 A.C. Linnenbank, A. van Oosterom, T.F. Oostendorp, P.F.H.M. van Dessel, A.C. van Rossum, R. Coronel, H.L. Tan, J.M.T. de Bakker
Classification Methods for Atrial Fibrillation Prediction after CABG.............................................................................. 46 S. Sovilj, R. Magjarević and G. Rajsman
Modelling effects of Sotalol on Action Potential morphology using a novel Markov model of the HERG channel.............................................................................................................................................................. 50 T.P. Brennan, M. Fink, B. Rodriguez, L.T. Tarassenko
Sample Entropy Analysis of Electrocardiograms to Characterize Recurrent Atrial Fibrillation................................... 54 R. Cervigon, C. Sanchez, J.M. Blas, R. Alcaraz, J. Mateo and J. Millet
USB Based ECG Acquisition System .................................................................................................................................... 58 J. Mihel, R. Magjarevic
QT Intervals Are Prolonging Simultaneously with Increasing Heart Rate during Dynamical Experiment in Healthy Horses............................................................................................................... 62 P. Kozelek, J. Holcik
FPGA-based System for ECG Beat Detection and Classification ...................................................................................... 66 M. Cvikl and A. Zemva
Feature extraction and selection algorithms in biomedical data classifiers based on time-frequency and principal component analysis ........................................................................................................................................ 70 P. S. Kostka, E. J. Tkacz
Dynamic Repolarization Assessment and Arrhythmic Risk Stratification........................................................................ 74 E. Pueyo, M. Malik and P. Laguna
Fractal analysis of heart rate variability in COPD patients................................................................................................ 78 G. D’Addio, A. Accardo, G. Corbi, N. Ferrara, F. Rengo
Autonomic Modulation of Ventricular Response by Exercise and Antiarrhythmic Drugs during Atrial Fibrillation ....................................................................................................................................................... 82 VDA Corino, LT Mainardi, D Husser, A Bollmann
Flexible Multichannel System for Bioelectrical Fields Analysis ......................................................................................... 86 P. Kneppo, M. Tysler, K. Hana, P. Smrcka, V. Rosik, S. Karas, E. Heblakova
Neural Networks Based Approach to remove Baseline drift in Biomedical Signals ......................................................... 90 J. Mateo, C. Sanchez, R. Alcaraz, C. Vaya and J. J. Rieta
Non-Linear Organization Analysis of the Dominant Atrial Frequency to Predict Spontaneous Termination of Atrial Fibrillation ............................................................................................................................................................... 94 R. Alcaraz and J. J. Rieta
Using Supervised Fuzzy Clustering and CWT for Ventricular Late Potentials (VLP) Detection in High-Resolution ECG Signal............................................................................................................................................. 99 Ayyoub Jafari, M.H. Morradi
Analysis of Surface EMG An Approach to the Real-Time Surface Electromyogram Decomposition...................................................................... 105 V. Glaser, A. Holobar and D. Zazula
Non-invasive estimation of the degree of motor unit synchronization in the biceps brachii muscle ............................ 109 A. Holobar, M. Gazzoni, D. Farina, D. Zazula, and R. Merletti
EMG Based Muscle Force Estimation using Motor Unit Twitch Model and Convolution Kernel Compensation...... 114 R. Istenic, A. Holobar, R. Merletti and D. Zazula
Model Based Decomposition of MUAPs Into Their Constituent SFEAPs............................................................................ 118 M.G. Xyda, C.S. Pattichis, P. Kaplanis, C. Christodoulou and D. Zazula
Fast-Slow phase separation of Near InfraRed Spectroscopy to study Oxygenation vs sEMG Changes....................... 124 Gian Carlo Filligoi
Analysis of Uterine EMG/EHG Uterine Electromyography in Humans – Contractions, Labor, and Delivery................................................................. 128 R. E. Garfield and W. L. Maner
Predictive value of EMG basal activity in the cervix at initiation of delivery in humans .............................................. 131 D. Rudel, G. Vidmar, B. Leskosek and I. Verdenik
Evaluation of adaptive filtering methods on a 16 electrode electrohysterogram recorded externally in labor............ 135 J. Terrien, C. Marque, T. Steingrimsdottir and B. Karlsson
Abdominal EHG on a 4 by 4 grid: mapping and presenting the propagation of uterine contractions ......................... 139 B. Karlsson, J. Terrien, V. Gudmundsson, T. Steingrimsdottir and C. Marque
Evaluating Uterine Electrohysterogram with Entropy ..................................................................................................... 144 J. Vrhovec, A. Macek Lebar, D. Rudel
Detection of contractions during labour using the uterine electromyogram................................................................... 148 D. Novak, A. Macek-Lebar, D. Rudel and T. Jarm
Artificial Intelligence and Intelligent Data Analysis in Medicine GIFT: a tool for generating free text reports from encoded data..................................................................................... 152 Silvia Panzarasa, Silvana Quaglini, Mauro Pessina, Anna Cavallini, Giuseppe Micieli
Supporting Factors to Improve the Explanatory Potential of Contrast Set Mining: Analyzing Brain Ischaemia Data......................................................................................................................................... 157 N. Lavrac, P. Kralj, D. Gamberger and A. Krstacic
Availability Humanization - The Semantic Model in Occupational Health .................................................................... 162 M. Molan and G. Molan
Analyzing Distributed Medical Databases on DataMiningGrid© .................................................................................... 166 Vlado Stankovski, Martin Swain, Matevz Stimec and Natasa Fidler Mis
Bioimpedance FENOTIP: Microfluidics and Nanoelectrodes for the Electromagnetic Spectroscopy of Biological Cells ................... 170 V. Senez, A. Treizebré, E. Lennon, D. Legrand, H. Ghandour, B. Bocquet, T. Fujii and J. Mazurier
Impedance method for determination of the root canal length ........................................................................................ 174 D. Krizaj, J. Jan and T. Zagar
Separation of electroporated and non-electroporated cells by means of dielectrophoresis ........................................... 178 J. Oblak, D. Krizaj, S. Amon, A. Macek-Lebar and D. Miklavcic
A simple DAQ-card based bioimpedance measurement system....................................................................................... 182 T. Zagar and D. Krizaj
Bioimpedance spectroscopy of human blood at low frequency using coplanar microelectrodes .................................. 186 J. Prado, M. Nadi, C. Margo and A Rouane
Impedance Spectroscopy of Newt Tails .............................................................................................................................. 190 F.X. Hart, J.H. Johnson and N.J. Berner
Dielectric properties of water and blood samples with glucose at different concentrations .......................................... 194 A. Tura, S. Sbrignadello, S. Barison, S. Conti, G. Pacini
Parameter Optimization in Voltage Pulse Plethysmography............................................................................................ 198 M. Melinscak
Inherently Synchronous Data Acquisition as a Platform for Bioimpedance Measurement........................................... 202 G. Poola and J. Toomessoo
Benefits and disadvantages of impedance-ratio measuring method in new generation of apex-locators ..................... 206 T. Marjanovic, Z. Stare
Biological Effects of Electromagnetic Radiation Effect of Modulated 450 MHz Microwave on Human EEG at Different Field Power Densities .................................... 210 R. Tomson, H. Hinrikus, M. Bachmann, J. Lass, and V. Tuulik
Regenerative Effects of (-)-epigallocatechin-gallate Against Hepatic Oxidative Stress Resulted by Mobile Phone Exposure .................................................................................................................................................. 214 E. Ozgur, G. Güler and N. Seyhan
Conducting Implant in Low Frequency Electromagnetic Field ....................................................................................... 218 B. Valic, P. Gajsek and D. Miklavcic
Measurements of background electromagnetic fields in human environment................................................................ 222 T. Trcek, B. Valic and P. Gajsek
Numerical Assessment of Induced Current Densities for Pregnant Women Exposed to 50 Hz Electromagnetic Field............................................................................................................................................ 226 A. Zupanic, B. Valic and D. Miklavcic
The Relation Assessment Between 50 Hz Electric Field Exposure-Induced Protein Carbonyl Levels and The Protective Effect of Green Tea Catechin (EGCG) .............................................................................................. 230 A. Tomruk, G. Guler and N. Seyhan
EMF Monitoring Campaign in Slovenian Communes ...................................................................................................... 234 B. Valic, J. Jancar and P. Gajsek
Biomaterials Surface modification of titanium fiber-mesh scaffolds through a culture of human SAOS-2 osteoblasts electromagnetically stimulated ............................................................................................................................................ 238 L. Fassina, L. Visai, E. Saino, M.G. Cusella De Angelis, F. Benazzo and G. Magenes
Expression of Smooth Muscle Cells Grown on Magnesium Alloys .................................................................................. 242 S.K. Lu, W.H. Lee, T.Y. Tian, C.H. Chen, H.I. Yeh
Coalescence of phospholipid vesicles mediated by β2GPI – experiment and modelling ................................................ 246 J. Urbanija, B. Rozman, A. Iglič, T. Mareš, M. Daniel, Veronika Kralj-Iglič
Advancing in the quality of the cells assigned for Autologous Chondrocyte Implantation (ACI) method ................... 249 A. Barlic, D. Radosavljevic, M. Drobnic and N. Kregar-Velikonja
Mesenchymal Stem Cells: a Modern Approach to Treat Long Bones Defects................................................................ 253 H. Krečič-Stres, M. Krkovič, J. Koder, E. Maličev, M. Drobnič, D. Marolt and N. Kregar-Velikonja
Biomechanics Combination of microfluidic and structure-continual studies in biorheology of blood with magnetic additions ........ 257 E.Yu. Taran, V.A. Gryaznova and O.O. Melnyk
Virtual Rehabilitation of Lower Extremities...................................................................................................................... 262 T. Koritnik, T. Bajd and M. Munih
Rating Stroke Patients Based on Movement Analysis ....................................................................................................... 266 A. Jobbagy, G. Fazekas
Bending stiffness of odontoid fracture fixation with one cortical screw – numerical approach .................................... 270 L. Capek, P. Buchvald
Elastic Moduli and Poisson’s Ratios of Microscopic Human Femoral Trabeculae ........................................................ 274 J. Hong, H. Cha, Y. Park, S. Lee, G. Khang, and Y. Kim
The Dissipation of Suction Waves in Flexible Tubes ......................................................................................................... 278 J. Feng and A.W. Khir
Hip stress distribution may be a risk factor for avascular necrosis of femoral head...................................................... 282 D. Dolinar, M. Ivanovski, I. List, M. Daniel, B. Mavcic, M. Tomsic, A. Iglic and V. Kralj-Iglic
Elasticity Distribution Imaging of Sliced Liver Cirrhosis and Hepatitis using a novel Tactile Mapping System........ 286 Y. Murayama, T. Yajima, H. Sakuma, Y. Hatakeyama, C.E. Constantinou, S. Takenoshita, S. Omata
Changes in Biomechanics Induced by Fatigue in Single-leg Jump and Landing............................................................ 288 J. Stublar, P. Usenik, R. Kamnik, M. Munih
Musculoskeletal Modeling to Provide Muscles and Ligaments Length Changes during Movement for Orthopaedic Surgery Planning...................................................................................................................................... 292 C.A. Frigo and E.E. Pavan
Numerical model of a myocyte for the evaluation of the influence of inotropic substances on the myocardial contractility............................................................................................................................................ 296 Bernardo Innocenti, Andrea Corvi
Biomechanical Analysis of Bolus Processing ...................................................................................................................... 300 T. Goldmann, S. Konvickova and L. Himmlova
Application of Simplified Ray Method for the Determination of the Cortical Bone Elastic Coefficients by the Ultrasonic Wave Inversion ....................................................................................................................................... 304 T. Goldmann, H. Seiner and M. Landa
Model for Muscle Force Calculation Including Dynamics Behavior and Viscoelastic Properties of Tendon................ 308 M. Vilimek
Biomedical Engineering Education and E-learning
New courses in medical engineering, medical physics and bio/physics for clinical engineers, medicine and veterinary medicine specialists in Serbia..................................................................................................................... 310 V. M. Spasic-Jokic, D. Lj. Popovic, S. Stankovic and I. Z. Zupunski
The Education and Training of the Medical Physicist in Europe: The European Federation of Organisations for Medical Physics - EFOMP Policy Statements and Efforts......................................................................................... 313 S. Christofides, T. Eudaldo, K. J. Olsen, J. H. Armas, R. Padovani, A. Del Guerra, W. Schlegel, M. Buchgeister, P. F. Sharp
Assessment of a system developed for virtual teaching ..................................................................................................... 319 M.L.A.Botelho, D.F.Cunha, F.B.Mendonca and S.J.Calil
A Web-Based E-learning Application on Electrochemotherapy ...................................................................................... 323 S. Corovic, J. Bester, A. Kos, M. Papic and D. Miklavcic
The value of clinical simulation-based training.................................................................................................................. 327 Vesna Paver-Erzen, Matej Cimerman
Biomedical Engineering and Virtual Education ................................................................................................................ 329 A. Kybartaite, J. Nousiainen, K. Lindroos, J. Malmivuo
Presentation of Cochlear Implant to Deaf People .............................................................................................................. 332 J. Vrhovec, A. Macek Lebar , D. Miklavcic, M. Eljon and J. Bester
Internet Examination – A New Tool in e-Learning ........................................................................................................... 336 J.A. Malmivuo, K. Lindroos and J.O. Nousiainen
Biomedical Instrumentation and Measurement
Development of a calibration bath for clinical thermometers .......................................................................................... 338 I. Pusnik, J. Bojkovski and J. Drnovsek
Evaluation of non-invasive blood pressure simulators ...................................................................................................... 342 G. Gersak and J. Drnovsek
Development of Implantable SAW Probe for Epilepsy Prediction .................................................................................. 346 N. Gopalsami, I. Osorio, S. Kulikov, S. Buyko, A. Martynov and A.C. Raptis
Accurate On-line Estimation of Delivered Dialysis Dose by Dialysis Adequacy Monitor (DIAMON) ......................... 350 I. Fridolin, J. Jerotskaja, K. Lauri, A. Scherbakov and M. Luman
Clinical implication of pulse wave analysis......................................................................................................................... 354 R. Accetto, K. Rener, J. Brguljan-Hitij, B. Salobir
Ambulatory blood pressure monitoring is highly sensitive for detection of early cardiovascular risk factors in young adults.......................................................................................................... 357 Maja Benca, Ales Zemva, Primoz Dolenc
Simple verification of infrared ear thermometers by use of fixed-point.......................................................................... 361 J. Bojkovski
Control Abilities of Power and Precision Grasping in Children of Different Ages ........................................................ 365 B. Bajd and L. Praprotnik
Bluetooth Portable Device for Continuous ECG and Patient Motion Monitoring During Daily Life .......................... 369 P. Bifulco, G. Gargiulo, M. Romano, A. Fratini and M. Cesarelli
Wearable Wireless Biopotential Electrode for ECG Monitoring ..................................................................................... 373 E.S. Valchinov and N.E. Pallikarakis
Modelling and Simulation of Ultrasound Non Linearities Measurement for Biological Mediums ............................... 377 R. Guelaz, D. Kourtiche and M. Nadi
A Personal Computer as a Universal Controller for Medical-Focused Appliances........................................................ 381 Denis Pavliha, Matej Rebersek, Luka Krevs and Damijan Miklavcic
System Identification of Integrative Non Invasive Blood Pressure Sensor Based on ARMAX Estimator Algorithm ....................................................................................................................................... 385 Noaman M. Noaman, Abbas K. Abbas
Experimental Measurements of Potentials Generated by the Electrodes of a Cochlear Implant in a Phantom.......... 390 G. Tognola, A. Pesatori, M. Norgia, F. Sibella, S. Burdo, C. Svelto, M. Parazzini, A. Paglialonga, P. Ravazzani
Evaluation of muscle dynamic response measured before and after treatment of spastic muscle with a BTX-A − A case study ............................................................................................................................................... 393 D. Krizaj, K. Grabljevec, B. Simunic
Home Care Technologies for Ambient Assisted Living..................................................................................................... 397 Ratko Magjarevic
Development of the ISO standard for clinical thermometers ........................................................................................... 401 I. Pusnik
Hardware optimization of a Real-Time Telediagnosis System .......................................................................................... 405 Muhammad Kamrul Hasan, Md. Nazmus Sayadat, and Md. Atiqur Rahman Sarker
Application of time-gated, intensified CCD camera for imaging of absorption changes in non-homogenous medium. ............................................................................................................................................... 410 P. Sawosz, M. Kacprzak, A. Liebert, R. Maniewski
The impact of the intubation model upon ventilation parameters ................................................................................... 413 B. Stankiewicz, J. Glapinski, M. Rawicz, B. Woloszczuk-Gebicka, M. Michnikowski, M. Darowski
The hybrid piston model of lungs ........................................................................................................................................ 416 M. Kozarski, K. Zielinski, K.J. Palko and M. Darowski
Biomedical Signal Processing
Estimation of Neural Noise Spectrum in a Postural Control Model ................................................................................ 419 A.F. Kohn
Optimized Design of Single-sided Quadratic Phase Outer Volume Suppression Pulses for Magnetic Resonance Imaging ........................................................................................................................................ 423 N. Stikov, A. Mutapcic and J.M. Pauly
Analysis of foveation duration and repeatability at different gaze positions in patients affected by congenital nystagmus ...................................................................................................................................................... 426 M. Cesarelli, P. Bifulco, M. Romano, G. Pasquariello, A. Fratini, L. Loffredo, A. Magli, T. De Berardinis, D. Boccuzzi
Frequency characteristics of arterial catheters – an in vitro study .................................................................................. 430 F. T. Molnar and G. Halasz
Signal Processing methods for PPG Module to Increase Signal Quality ......................................................................... 434 K. Pilt, K. Meigas, J. Lass and M. Rosmann
Detection of the cancerous tissue sections in the breast optical biopsy dataflow using neural networks...................... 438 A. Nuzhny, S. Shumsky, T. Lyubynskaya
On the Occurrence of Phase-locked Pulse Train in the Peripheral Auditory System .................................................... 442 T. Matsuoka, D. Konno and M. Ogawa
A device for quantitative kinematic analysis of children’s handwriting movements...................................................... 445 A. Accardo, A. Chiap, M. Borean, L. Bravar, S. Zoia, M. Carrozzi and A. Scabar
Blood Flow and Oxygenation Measurement
Monitoring of preterm infants during crying episodes...................................................................................................... 449 L. Bocchi, L. Spaccaterra, F. Favilli, L. Favilli, E. Atrei, C. Manfredi and G. P. Donzelli
Measuring Tumor Oxygenation by Electron Paramagnetic Resonance Oximetry in vivo............................................. 453 Z. Abramovic, M. Sentjurc and J. Kristl
Radiotracer and Microscopic Assessment of Vascular Function in Cancer Therapy .................................................... 457 G.M. Tozer and V.J. Cunningham
The Influence of Endurance Training on Brain and Leg Blood Volumes Translocation During an Orthostatic Test ............................................................................................................................................................... 461 A. Usaj
Comparison of two hypoxic markers: pimonidazole and glucose transporter 1 (Glut-1) .............................................. 465 A. Coer, M. Legan, D. Stiblar-Martincic, M. Cemazar, G. Sersa
Effects of vinblastine on blood flow of solid tumours in mice ........................................................................................... 469 S. Kranjc, T. Jarm, M. Cemazar, G. Sersa, A. Secerov, M. Auersperg
Automatic recognition of hemodynamic responses to rare stimuli using functional Near-Infrared Spectroscopy...... 473 M. Butti, A. C. Merzagora, M. Izzetoglu, S. Bunce, A. M. Bianchi, S. Cerutti, B. Onaral
Brain Research and Analysis of EEG
Brain on a Chip: Engineering Form and Function in Cultured Neuronal Networks..................................................... 477 B.C. Wheeler
Identification of Gripping-Force Control from Electroencephalographic Signals ......................................................... 478 A. Belic, B. Koritnik, V. Logar, S. Brezan, V. Rutar, R. Karba, G. Kurillo and J. Zidar
Quantitative EEG as a Diagnostic Tool in Patients with Head Injury and Posttraumatic Epilepsy............................. 482 T. Bojic, B. Ljesevic, A. Dragin, S. Jovic, L. Schwirtlich, A. Stefanovic
BSI versus the Eye: EEG Monitoring in Carotid Endarterectomy.................................................................................. 487 W.A. Hofstra and M.J.A.M. van Putten
Assessing FSP Index Performance as an Objective MLAEP Detector during Stimulation at Several Sound Pressure Levels ........................................................................................................................................ 492 M. Cagy, A.F.C. Infantosi and E.J.B. Zaeyen
The Colorful Brain: Compact Visualisation of Routine EEG Recordings ....................................................................... 497 Michel J.A.M. van Putten
Using ANN on EEG signals to predict working memory task response .......................................................................... 501 V. Logar, A. Belic, B. Koritnik, S. Brezan, V. Rutar, J. Zidar, R. Karba and D. Matko
Comparison of methods and co-registration maps of EEG and fMRI in Occipital Lobe Epilepsy............................... 505 M. Forjaz Secca, A. Leal, J. Cabral and H. Fernandes
Multimodal imaging issues for electric brain activity mapping in the presence of brain lesions .................................. 509 F. Vatta, P. Bruno, F. Di Salle, F. Meneghini, S. Mininel and P. Inchingolo
Proposal and validation of a framework for High Performance 3D True Electrical Brain Activity Mapping ............ 513 S. Mininel, P. Bruno, F. Meneghini, F. Vatta and P. Inchingolo
EEG Peak Alpha Frequency as an Indicator for Physical Fatigue .................................................................................. 517 S.C. Ng, P. Raveendran
Acetylcholine addition and electrical stimulation of dissociated neurons from an extended subthalamic area – A pilot study in the rat.......................................................................................................................................................... 521 T. Heida, K.G. Usunoff and E. Marani
Cross-correlation based methods for estimating the functional connectivity in populations of cortical neurons........ 525 A.N. Ide, M. Chiappalone, L. Berdondini, V. Sanguineti, S. Martinoia
Movement Related Potentials in Spontaneous and Provoked Thumb Movement .......................................................... 529 A.B. Sefer, M. Krbot, V. Isgum and M. Cifrek
Cardiovascular System
Simulation of Renal Artery Stenosis Using Cardiovascular Electronic System.............................................................. 533 K. Hassani, M. Navidbakhsh and M. Rostami
Extracellular ATP-Purinoceptor Signaling for the Intercellular Synchronization of Intracellular Ca oscillation in Cultured Cardiac Myocytes.......................................................................................... 537 K. Kawahara and Y. Nakayama
Computer Assisted Optimization of Biventricular Pacing Assuming Ventricular Heterogeneity................................. 541 R. Miri, M. Reumann, D. Farina , B. Osswald, O. Dössel
Power density spectra of the velocity waveforms in Artificial heart valves..................................................................... 545 A. A. Sakhaeimanesh
Medical Plans as a Middle Step in Building Heart Failure Expert System ..................................................................... 549 Alan Jovic, Marin Prcela and Goran Krstacic
Method for Reducing Pacing Current Threshold at Transesophageal Stimulation ....................................................... 554 A. Anier, J. Kaik and K. Meigas
User–centered system to manage Heart Failure in a mobile environment ...................................................................... 558 E. Villalba, D. Salvi, M. Ottaviano, I. Peinado, M. T. Arredondo, M. Docampo
Comparison of Four Calculation Techniques for Estimation of Local Arterial Compliance ........................................ 562 R. Raamat, J. Talts and K. Jagomägi
The Effect of in vitro Anticoagulant Disodium Citrate on Beta-2-glycoprotein I - Induced Coalescence of Giant Phospholipid Vesicles ............................................................................................................................................ 566 M. Frank, M. Lokar, J. Urbanija, M. Krzan, V. Kralj-Iglic, B. Rozman
Electroporation Based Therapies
Cell membrane fluidity at different temperatures in relation to electroporation effectiveness of cell line V79 ........... 570 Masa Knaduser, Marjeta Sentjurc and Damijan Miklavcic
Voltage commutator for multiple electrodes in gene electrotransfer of skin cells .......................................................... 574 M. Kranjc, P. Kramar, M. Rebersek and D. Miklavcic
Voltage breakdown measurement of planar lipid bilayer mixtures ................................................................................. 578 P. Kramar, D. Miklavcic and A. Macek Lebar
Antitumor effectiveness of electrotransfer of p53 into murine sarcomas alone or combined with electrochemotherapy using cisplatin........................................................................................................................... 582 M. Cemazar, A. Grosel, S. Kranjc, and G. Sersa
Electrochemotherapy in veterinary medicine .................................................................................................................... 586 Natasa Tozon and Maja Cemazar
Tumor electrotransfection progress and prospects: the impact of knowledge about tumor histology ......................... 589 S. Mesojednik, D. Pavlin, G. Sersa, A. Coer, S. Kranjc, A. Grosel, G. Tevz, M. Cemazar
Quantification of ion transport during cell electroporation – theoretical and experimental analysis of transient and stable pores during cell electroporation.................................................................................................. 593 M. Pavlin, and D. Miklavcic
A numerical model of skin electroporation as a method to enhance gene transfection in skin...................................... 597 N. Pavselj, V. Preat and D. Miklavcic
Tumor blood flow modifying and vascular disrupting effect of electrochemotherapy................................................... 602 G. Sersa, M. Cemazar, S. Kranjc and D. Miklavcic
Real time electroporation control for accurate and safe in vivo electrogene therapy..................................................... 606 David Cukjati, Danute Batiuskaite, Damijan Miklavčič, Lluis M. Mir
Electrochemotherapy of equids cutaneous tumors: a 57 case retrospective study 1999-2005 ....................................... 610 Y. Tamzali, J. Teissie, M. Golzio and M. P. Rols
Electrochemotherapy in treatment of solid tumours in cancer patients .......................................................................... 614 G. Sersa for the ESOPE group
Electropulsation, a biophysical delivery method for therapy ........................................................................................... 618 J. Teissie and M. Cemazar
Bases and rationale of the electrochemotherapy................................................................................................................ 622 L.M. Mir
A critical step in gene electrotransfer: the injection of the DNA...................................................................................... 623 F.M. André and L.M. Mir
In vivo imaging of siRNA electrotransfer and silencing in different organs ................................................................... 624 A. Paganin-Gioanni, J.M. Escoffre, L. Mazzolini, M.P. Rols, J. Teissié and M. Golzio
An endoscopic system for gene & drug delivery directly to intraluminal tissue ............................................................. 628 D.M. Soden, M. Sadadcharam, J. Piggott, A. Morrissey, C.G. Collins and G.C. O’Sullivan
The effects of irreversible electroporation on tissue, in vivo............................................................................................. 629 Boris Rubinsky
Equine Cutaneous Tumors Treatment by Electro-chemo-immuno-geno-therapy ......................................................... 630 Y. Tamzali, B. Couderc, M.P. Rols, M. Golzio and J. Teissie
Analysis of Tissue Heating During Electroporation Based Therapy: A 3D FEM Model for a Pair of Needle Electrodes.............................................................................................................................................................. 631 I. Lackovic, R. Magjarevic and D. Miklavcic
The induced transmembrane potential and effective conductivity of cells in dense cell system .................................... 635 M. Pavlin, and D. Miklavcic
An experimental and numerical study of the induced transmembrane voltage and electroporation on clusters of irregularly shaped cells ................................................................................................................................. 639 G. Pucihar, T. Kotnik, and D. Miklavcic
Functional Electrical and Magnetic Stimulation
The effect of afferent training on long-term neuroplastic changes in the human cerebral cortex ................................ 643 R.L.J. Meesen, O. Levin and S.P. Swinnen
An Experimental Test of Fuzzy Controller Based on Cycle-to-Cycle Control for FES-induced Gait: Knee Joint Control with Neurologically Intact Subjects................................................................................................... 647 T. Watanabe, A. Arifin, T. Masuko and M. Yoshizawa
Troubleshooting for DBS patients by a non-invasive method with subsequent examination of the implantable device...................................................................................................................................................... 651 H. Lanmüller, J. Wernisch and F. Alesch
Treating drop-foot in hemiplegics: the role of matrix electrode....................................................................................... 654 C. Azevedo-Coste, G. Bijelic, L. Schwirtlich and D.B. Popovic
FES treatment of lower extremities of patients with upper / lower motor neuron lesion: A comparison of rehabilitation strategies and stimulation equipment.................................................................................. 658 M. Bijak, M. Mödlin, C. Hofer, M. Rakos, H. Kern, W. Mayr
Optimal Control of Walking with Functional Electrical Stimulation: Inclusion of Physiological Constraints............ 661 Strahinja Dosen, Dejan B. Popovic
Magnetic Coils Design for Localized Stimulation .............................................................................................................. 665 L. Cret, M. Plesa, D. Stet and R.V. Ciupa
Gait and Motion Analysis
Vertical unloading produced by electrically evoked withdrawal reflexes during gait: preliminary results................. 669 J. Emborg, E. Spaich and O.K. Andersen
Two-level control of bipedal walking model....................................................................................................................... 673 A. Olensek and Z. Matjacic
Data mining time series of human locomotion data based on functional approximation............................................... 677 V. Ergovic, S. Tonkovic, V. Medved and M. Kasovic
Kinematic and kinetic patterns of walking in spinal muscular atrophy, type III ........................................................... 681 Z. Matjacic, A. Praznikar, A. Olensek, J. Krajnik, I. Tomsic, M. Gorisek-Humar, A. Klemen, A. Zupan
The Gait E-Book – Development of Effective Participatory Learning using Simulation and Active Electronic Books ................................................................................................................................................ 685 A. Sandholm, P. Fritzson, V. Arora, Scott Delp, G. Petersson and J. Rose
A Study on Sensing System of Lower Limb Condition with Piezoelectric Gyroscopes: Measurements of Joint Angles and Gait Phases................................................................................................................. 689 Norio Furuse and Takashi Watanabe
Health Care and Medical Informatics
A standard tool to interconnect clinical, genomic and proteomic data for personalization of cardiac disease treatment................................................................................................................................................. 693 M. Giacomini, F. Lorandi and C. Ruggiero
How do physicians make a decision?................................................................................................................................... 696 Kaiser Niknam, Mahdi Ghorbani Samini, Hedyeh Mahmudi, Sahar Niknam
Informational Internet-systems in Ukrainian healthcare – problems and perspectives................................................. 700 A.A. Lendyak
Reducing time in emergency medical service by improving information exchange among information systems........ 704 A. Jelovsek, M. Stern
Data Presentation Methods for Monitoring a Public Health-Care System ..................................................................... 708 Aleksander Pur, Marko Bohanec, Nada Lavrač, Bojan Cestnik
Adaptive Altered Auditory Feedback (AAF) device based on a multimodal intelligent monitor to treat the permanent developmental stuttering (PDS): A critical proposal ............................................................................... 712 Manuel Prado, Laura M. Roa
Simulation in Medicine and Nursing – First Experiences in Simulation centre at Faculty of Health Sciences University of Maribor........................................................................................................................... 716 D. Micetic-Turk, M. Krizmaric, H. Blazun, N. Krcevski-Skvarc, A. Kozelj, P. Kokol, Š. Grmec, Z. Turk
Open Source in Health Care: a milestone toward the creation of an ICT-based pan-European health facility .......... 719 D. Dinevski, P. Inchingolo, I. Krajnc, P. Kokol
The Open Three Consortium: an open-source, full-service-based world-wide e-health initiative ................................ 723 P. Inchingolo, M. Beltrame, P. Bosazzi, D. Dinevski G. Faustini, S. Mininel, A. Poli, F. Vatta
O3-RWS: a Java-based, IHE-compliant open-source radiology workstation ................................................................. 727 G. Faustini, P. Inchingolo
O3-DPACS: a Java-based, IHE compliant open-source data and image manager and archiver.................................. 732 M. Beltrame, P. Bosazzi, A. Poli, P. Inchingolo
GATEWAY: Assistive Technology for Education and Employment............................................................................... 737 D. Kervina, M. Jenko, M. Pustisek and J. Bester
Reshaping Clinical Trial Data Collection Process to Use the Advantages of the Web-Based Electronic Data Collection...................................................................................................................................................................... 741 I. Pavlovic and I. Lazarevic
Telepathology: Success or Failure? ..................................................................................................................................... 745 D. Giansanti, L. Castrichella and M. R. Giovagnoli
E-learning for Laurea in Biomedical laboratory Technicians: a feasibility study .......................................................... 749 D. Giansanti, L. Castrichella and M.R. Giovagnoli
Health Care Technology Assessment and Management
A hospital structural and technological performance indicators set................................................................................ 752 E. Iadanza, F. Dori and G. Biffi Gentili, G. Calani, E. Marini, E. Sladoievich, A. Surace
Continuous EEG monitoring in the Intensive Care Unit: Beta Scientific and Management Scientific aspects ........... 756 P.M.H. Sanders, M.J.A.M. van Putten
Technology Assessment for evaluating integration of Ambulatory Follow-up and Home Monitoring......................... 758 L. Pecchia, L. Bisaccia, P. Melillo, L. Argenziano, M. Bracale
A Multi Scale Methodology for Technology Assessment. A case study on Spine Surgery ............................................. 762 L. Pecchia, F. Acampora and S. Acampora, M. Bracale
Heart Rate Analysis
Complexity Analysis of Heart Rate Control Using Symbolic Dynamics in Young Diabetic Patients ........................... 766 M. Javorka, Z. Trunkvalterova, I. Tonhajzerova, J. Javorkova and K. Javorka
Recurrence Quantification Analysis of Heart Rate Dynamics in Young Patients with Diabetes Mellitus.................... 769 Z. Trunkvalterova, M. Javorka, I. Tonhajzerova, J. Javorkova, and K. Javorka
Joint Symbolic Dynamic of Cardiovascular Time Series of Rats ..................................................................................... 773 D. Varga, T. Loncar-Turukalo, D. Bajic, S. Milutinovic, N. Japundzic-Zigon
Technical problems in STV indexes application ................................................................................................................ 777 M. Cesarelli, M. Romano, P. Bifulco
2CTG2: A new system for the antepartum analysis of fetal heart rate............................................................................ 781 G. Magenes, M.G. Signorini, M. Ferrario, F. Lunghi
Speeding up the Computation of Approximate Entropy................................................................................................... 785 G. Manis and S. Nikolopoulos
Cardiac arrhythmias and artifacts in fetal heart rate signals: detection and correction ............................................... 789 M. Cesarelli, M. Romano, P. Bifulco, A. Fratini
Medical Imaging
Artery movement tracking in angiographic sequences for coronary flow calculation ................................................... 793 Hanna Goszczynska
Battery powered and wireless Electrical Impedance Tomography Spectroscopy Imaging using Bluetooth................ 798 A.L. McEwan and D.S. Holder
Using Heuristics for the Lung Fields Segmentation in Chest Radiographs..................................................................... 802 D. Gados and G. Horvath
Sampling Considerations and Resolution Enhancement in Ideal Planar Coded Aperture Nuclear Medicine Imaging ................................................................................................................................................................. 806 D.M. Starfield, D.M. Rubin and T. Marwala
Measuring Red Blood Cell Velocity with a Keyhole Tracking Algorithm....................................................................... 810 C.C. Reyes-Aldasoro, S. Akerman and G.M. Tozer
Web-based Visualization Interface for Knee Cartilage..................................................................................................... 814 C.-L. Poh, R.I. Kitney and R.B.K. Shrestha
Markov Chain Based Edge Detection Algorithm for Evaluation of Capillary Microscopic Images............................. 818 G. Hamar, G. Horvath, Zs. Tarjan and T. Virag
Lung Surface Classification on High-Resolution CT using Machine Learning .............................................................. 822 S. Busayarat and T. Zrimec
Evaluation of Tomographic Reconstruction for Small Animals using micro Digital Tomosynthesis (microDTS)............................................................................................................................................................................ 826 D. Soimu, Z. Kamarianakis and N. Pallikarakis
Methods for Automatic Honeycombing Detection in HRCT images of the Lung........................................................... 830 T. Zrimec and J. Wong
Stochastic Rank Correlation - A novel merit function for dual energy 2D/3D registration in image-modulated radiation therapy .................................................................................................................................................................. 834 W. Birkfellner
Evaluation of peptides tagged nanoparticle adhesion to activated endothelial cells....................................................... 835 K. Rhee, H.J.Moon, K.S. Park and G. Khang
Estimation method for brain activities are influenced by blood pulsation effect............................................................ 839 W. H. Lee, J. H. Ku, H. R. Lee, K. W. Han, J. S. Park, J. J. Kim, I. Y. Kim, and S. I. Kim
Classification of Prostatic Tissues using Feature Selection Methods ............................................................................... 843 S. Bouatmane, B. Nekhoul, A. Bouridane and C. Tanougast
Texture Classification of Retinal Layers in Optical Coherence Tomography................................................................. 847 M. Baroni, S. Diciotti , A. Evangelisti, P. Fortunato and A. La Torre
Automatic cell detection in phase-contrast images for evaluation of electroporation efficiency in vitro ................................................................................................................................... 851 Marko Usaj, Drago Torkar, Damijan Miklavcic
Medical Physics
Scattered radiation spectrum analysis for the breast cancer diagnostics ........................................................................ 856 S.A. Belkov, G.G. Kochemasov, N.V. Maslov, S.V. Bondarenko, N.M. Shakhova, I.Yu. Pavlycheva, A. Rubenchik, U. Kasthuri, L.B. Da Silva
Laminar Axially Directed Blood Flow Promotes Blood Clot Dissolution: Mathematical Modeling Verified by MR Microscopy................................................................................................................................................................ 859 J. Vidmar, B. Grobelnik, U. Mikac, G. Tratar, A. Blinc and I. Sersa
Snoring and CT Imaging...................................................................................................................................................... 864 I. Fajdiga, A. Koren and L. Dolenc
Modulation of the beam intensity with wax filter compensators...................................................................................... 867 D. Grabec and P. Strojan
A Model of Flow Mechanical Properties of the Lung and Airways ................................................................................. 871 B. Kuraszkiewicz, T. Podsiadly-Marczykowska and M. Darowski
Standard versus 3D optimized MRI-based planning for uterine cervix cancer brachyradiotherapy – The Ljubljana experience .................................................................................................................................................... 875 R. Hudej, P. Petric, J. Burger
Problems faced after the transition from a film to a DDR Radiology Department ........................................................ 879 S.P. Spyrou, I. Gerogiannis, A.P. Stefanoyiannis, S. Skannavis, A. Kalaitzis, P.A. Kaplanis
XXIV
Content
Verification of planned relative dose distribution for irradiation treatment technique using half-beams in the area of field abutment ................................................................................................................................................ 883 R. Hudej
Experimental verification of the calculated dose for Stereotactic Radiosurgery with specially designed white polystyrene phantom .......................................................................................................... 887 B. Casar, A. Sarvari
In vivo dosimetry with diodes in radiotherapy patients treated with four field box technique ..................................... 891 A. Strojnik
The Cavitational Potential of a Single-leaflet Virtual MHV: A Multi-Physics and Multiscale Modelling Approach ................................................................................................................................... 895 D. Rafiroiu, V. Díaz-Zuccarini, D.R. Hose, P.V. Lawford, A.J. Narracott, R.V. Ciupa
Wavelet-based quantitative evaluation of a digital density equalization technique in mammography ........................ 899 A.P. Stefanoyiannis, I. Gerogiannis, E. Efstathopoulos, S. Christofides, P.A. Kaplanis, A. Gouliamos
Interaction between charged membrane surfaces mediated by charged nanoparticles ................................................. 903 J. Pavlic, A. Iglic, V. Kralj-Iglic, K. Bohinc
Optical biopsy system for breast cancer diagnostics.......................................................................................................... 907 S.A. Belkov, G.G. Kochemasov, S.M. Kulikov, V.N. Novikov, U. Kasthuri, L.B. Da Silva
Implantable brain microcooler for the closed-loop system of epileptic seizure prevention ........................................... 911 I. Osorio, G. Kochemasov, V. Baranov, V. Eroshenko, T. Lyubynskaya, N. Gopalsami
Acetabular forces and contact stresses in active abduction rehabilitation ...................................................................... 915 H. Debevec, A. Kristan, B. Mavcic, M. Cimerman, M. Tonin, V. Kralj-Iglic, and M. Daniel
Time-Frequency behaviour of the a-wave of the human electroretinogram ................................................................... 919 R. Barraco, L. Bellomonte and M. Brai
Studies on the attenuating properties of various materials used for protection in radiotherapy and their effect on the dose distribution in rotational therapy..................................................................................... 923 T. Ivanova, G. Malatara, K. Bliznakova, D. Kardamakis and N. Pallikarakis
Monte Carlo Radiotherapy Simulator: Applications and Feasibility Studies ................................................................. 928 K. Bliznakova, D. Soimu, Z. Bliznakov and N. Pallikarakis
Recovery of 0,1 Hz microvascular skin blood flow in dysautonomic diabetic (type 2) neuropathy by using Frequency Rhythmic Electrical Modulation System (FREMS) ........................................................................ 932 M. Bevilacqua, M. Barrella, R. Toscano, A. Evangelisti
Rehabilitation Engineering
Complementary evaluation tool for clinical instrument in post-stroke rehabilitation ................................................... 936 I. Cikajlo, M. Rudolf, N. Goljar and Z. Matjacic
Electrically Elicited Stapedius Muscle Reflex in Cochlear Implant System fitting ........................................................ 940 A. Wasowski, T. Palko, A. Lorens, A. Walkowiak, A. Obrycka, H. Skarzynski
Use of rapid prototyping technology in comprehensive rehabilitation of a patient with congenital facial deformity or partial finger or hand amputation ........................................................................... 943 T. Maver, H. Burger, N. Ihan Hren, A. Zuzek, L. Butolin, and J. Weingartner
Using computer vision in a rehabilitation method of a human hand ............................................................................... 947 J. Katrasnik, M. Veber and P. Peer
Experimental Evaluation of Training Device for Upper Extremities Sensory-Motor Ability Augmentation .............. 950 J. Perdan, R. Kamnik, P. Obreza, T. Bajd and M. Munih
New Experimental Results in Assessing and Rehabilitating the Upper Limb Function by Means of the Grip Force Tracking Method.................................................................................................................................... 954 M.S. Poboroniuc, R. Kamnik, S. Ciprian, Gh. Livint, D. Lucache and T. Bajd
The “IRIS Home” ................................................................................................................................................................. 958 A. Zupan, R. Cugelj, F. Hocevar
Evaluation of biofeedback of abdominal muscles during exercise in COPD................................................................... 961 M. Tomsic
Robotics and Haptics
Can haptic interface be used for evaluating upper limb prosthesis in children and adults .......................................... 965 H. Burger, D. Brezovar, S. Kotnik, A. Bardorfer and M. Munih
FreeForm modeling of spinal implants ............................................................................................................................... 969 R.I. Campbell, M. Lo Sapio and M. Martorelli
Grip force response in graphical and haptic virtual environment ................................................................................... 973 J. Podobnik and M. Munih
A Hierarchical SOM to Identify and Recognize Objects in Sequences of Stereo Images............................................... 977 Giovanni Bertolini, Stefano Ramat, Member IEEE
Assessment of hand kinematics and its control in dexterous manipulation..................................................................... 982 M. Veber, T. Bajd and M. Munih
A model arm for testing motor control theories on corrective movements during reaching ......................................... 986 D. Curone, F. Lunghi, G. Magenes and S. Ramat
Sports Sessions
Acceleration driven adaptive filter to remove motion artifact from EMG recordings in Whole Body Vibration ..................................................................................................................................................... 990 A. Fratini, M. Cesarelli, P. Bifulco, A. La Gatta, M. Romano, G. Pasquariello
The Influence of Reduced Breathing During Incremental Bicycle Exercise on Some Ventilatory and Gas Exchange Parameters ............................................................................................................................................ 994 J. Kapus, A. Usaj, V. Kapus, B. Strumbelj
A Novel Testing Tool for Balance in Sports and Rehabilitation....................................................................................... 998 N. Sarabon, G. Omejec
Change of mean frequency of EMG signal during 100 meter maximal free style swimming ..................................... 1002 I. Stirn, T. Vizintin, V. Kapus, T. Jarm and V. Strojnik
Telemonitoring of the step detection: toward two investigations based on different wearable sensors? ................... 1006 G. Maccioni, V. Macellari and D. Giansanti
Repeatability of the Mean Power Frequency of the Endurance Level During Fatiguing Isometric Muscle Contractions ........................................................................................................... 1009 I. Stirn, T. Jarm and V. Strojnik
Ultrasound Image Processing
Obtaining completely stable cellular neural network templates for ultrasound image segmentation ........................ 1013 M. Lenic, D. Zazula and B. Cigale
Segmentation of 3D Ovarian Ultrasound Volumes using Continuous Wavelet Transform......................................... 1017 B. Cigale and D. Zazula
Selected Applications of Dynamic Radiation Force of Ultrasound in Biomedicine ...................................................... 1021 A. Alizad, J.F. Greenleaf, and M. Fatemi
Breast Ultrasound Images Classification Using Morphometric Parameters Ordered by Mutual Information......... 1025 A.V. Alvarenga, J.L.R. Macrini, W.C.A. Pereira, C.E. Pedreira and A.F.C. Infantosi
Virtual Reality in Medicine
Development of Knee Control Training System Using Virtual Reality for Hemiplegic Patients and Feasibility Experiment with Normal Participants.................................................................................................... 1030 J.S. Park, J.H. Ku, K.W. Han, S.W. Cho, D.Y. Kim, I.Y. Kim, and S.I. Kim
Development of Alcohol craving induction and measurement system using virtual reality: Craving characteristics to social situation ........................................................................................................................ 1034 S.W. Cho, J.H. Ku, J.S. Park, K.W. Han, Y.K. Choi, K. NamKoong, Y.C. Jung, J.J. Kim, I.Y. Kim, and S.I. Kim
Cataract Surgery Simulator for Medical Education ....................................................................................................... 1038 R. Barea, L. Boquete, J. F. Pérez, M. A. Dapena, P. Ramos, M. A. Hidalgo
Special Sessions and Symposiums
Clinical Engineering and Patient Safety
Patient safety - a challenge for clinical engineering ......................................................................................................... 1043 J.H. Nagel and M. Nagel
MIDS-project – a National Approach to Increase Patient Safety through Improved Use of Medical Information Data Systems.................................................................................................................................................. 1047 H. Terio
Improving Patient Safety Through Clinical Alarms Management ................................................................................ 1051 Y. David, J. Tobey Clark, J. Ott, T. Bauld, B. Patail, I. Gieras, M. Shepherd, S. Miodownik, J. Heyman, O. Keil, A. Lipschultz, B. Hyndman, W. Hyman, J. Keller, M. Baretich, W. Morse and D. Dickey
A Clinical Engineering Initiative within the Irish Healthcare System toward a Safer Patient Environment ............ 1055 P.J.C. Pentony, J. Mahady and R. Kinsella
A Pervasive Computing Approach in Medical Emergency Environments.................................................................... 1058 J. Thierry, C. Hafner and S. Grasser
System for Tracing of blood transfusions and RFID ....................................................................................................... 1062 P. Di Giacomo and L. Bocchi
A preliminary setup model and protocol for checking electromagnetic interference between pacemakers and RFID (Radio Frequency IDentification).................................................................................................................... 1066 R. Tranfaglia, M. Bracale, A. Pone, L. Argenziano, L. Pecchia
Current Status of Clinical Engineering, Health Care Engineering and Health Care Technology Assessment in Austria ............................................................................................................................................................................. 1070 H. Gilly
Clinical Engineering Training Program in Emerging Countries Example from Albania ........................................... 1074 H. Terio
BME Education at the University of Trieste: the Higher Education in Clinical Engineering ..................................... 1077 P. Inchingolo and F. Vatta
Certification of Biomedical Engineering Technicians and Clinical Engineers: Important or Not.............................. 1081 James O. Wear, PhD, CCE, CHSP, FASHE, FAIMBE
Findings of the Worldwide Clinical Engineering Survey conducted by the Clinical Engineering Division of the International Federation for Medical and Biological Engineering .................................................................... 1085 S.J. Calil, L.N. Nascimento and F.R. Painter
Clinical Engineering in Malaysia – A Case Study............................................................................................................ 1089 Azman Hamid
Medical Equipment Inventorying and Installation of a Web-based Management System – Pilot Application in the Periphery of Crete, Greece ..................................................................................................... 1092 Z.B. Bliznakov, P.G. Malataras and N.E. Pallikarakis
A prototype device for thermo-hygrometric assessment of neonatal incubators .......................................................... 1096 P. Bifulco, M. Romano, A. Fratini, G. Pasquariello, and M. Cesarelli
Health Technology Assessment in Croatian Healthcare System .................................................................................... 1100 P. Milicic
A QFD-based approach to quality measurement in health care..................................................................................... 1102 F. Dori, E. Iadanza and D. Bottacci, S. Mattei
EVICAB - European Virtual Campus for Biomedical Engineering
The E-HECE e-Learning Experience in BME Education............................................................................................... 1107 P. Inchingolo, F. Londero and F. Vatta
Web-based Supporting Material for Biomedical Engineering Education ..................................................................... 1111 K. Lindroos, J. Malmivuo, J. Nousiainen
European Virtual Campus for Biomedical Engineering EVICAB................................................................................. 1115 J.A. Malmivuo and J.O. Nousiainen
BIOMEDEA ........................................................................................................................................................................ 1118 Joachim H. Nagel
Biomedical Engineering Education, Virtual Campuses and the Bologna Process ........................................................ 1122 E.G. Salerud and Michail Ilias
How New and Evolving Biomedical Engineering Programs Benefit from EVICAB project....................................... 1126 A. Lukosevicius, V. Marozas
Learning Managements System as a Basis for Virtual Campus Project ....................................................................... 1130 K.V. Lindroos, M. Rajalakso and T. Väliharju
Future of Medical and Biological Engineering
Computer Aided Surgery in the 21st Century ................................................................................................................... 1132 T. Dohi, K. Matsumiya and K. Masamune
Multi-dimensional fluorescence imaging .......................................................................................................................... 1134 P.M.W. French
Nanomedicine: Developing Nanotechnology for Applications in Medicine ................................................................... 1135 Gang Bao
The Physiome Project: A View of Integrative Biological Function ................................................................................ 1137 C.F. Dewey
Synthetic Biology – Engineering Biologically-based Devices and Systems .................................................................... 1138 R.I. Kitney
Biomedical Engineering Clinical Innovations: Is the Past Prologue to the Future?..................................................... 1140 P. Citron
Innovations in Bioengineering Education for the 21st Century ........................................................................................ 1142 J.H. Linehan
Index Authors ......................................................................................................................................1143
Index Subjects .....................................................................................................................................1149
Control for Therapeutic Functional Electrical Stimulation
Dejan B. Popovic 1,2, Mirjana B. Popovic 1,2,3
1 Department of Health Science and Technology, SMI, Aalborg University
2 Faculty of Electrical Engineering, University of Belgrade
3 Center for Multidisciplinary Studies, University of Belgrade
Abstract— We suggest in this review paper that control of assistive systems for individuals with disability caused by injury or disease of the central nervous system has to be approached with rather sophisticated methods capable of dealing with high redundancy, nonlinearities, time variation, adaptation to the environment, and perturbations. The use of three levels that provide interaction with the user, coordination of multi-joint activity, and control of the biological actuators is likely to be the solution for future electrical stimulation assistive systems. This is especially important for therapeutic assistive systems, which must mimic life-like movement. The top control level needs to be discrete and secure the recognition of the intended movement and possibly provide some kind of feedback; the middle control level needs to be discrete and provide multi-joint coordination based on a temporal and spatial synergistic model of the movement. The lowest control level needs to be model-based in order to match the specifics of the musculo-skeletal system. The hierarchical hybrid controller is inherently a predictive adaptive controller that, if properly designed, could result in effective generation of segment movements that lead to life-like function (e.g., walking, standing, manipulation, grasping).
Keywords— Hierarchical hybrid control, Electrical stimulation, Rule-based control, Model-based control.
I. INTRODUCTION
Several clinical trials demonstrated that intensive task-oriented exercise augmented with an assistive system based on electrical stimulation promotes recovery of sensory-motor functions in individuals after central nervous system injury or disease [1,2]. The reasons most likely contributing to the recovery are: 1) the assistive system contributes to near-natural activity of the paretic extremity, thereby playing a part in the re-development of healthy-like movement; 2) proprioception and exteroception are enhanced due to the artificially induced movement; 3) the electrical stimulation activates afferent pathways in parallel with the activation of efferent pathways; 4) the ability to perform a function increases the motivation to use the paretic/paralyzed sensory-motor mechanisms; and 5) voluntary exercise prevents disuse and the development of compensatory strategies. The physiological explanation of the recovery is that central nervous system plasticity is augmented during the described therapy. Studies using Positron Emission Tomography
(PET), functional Magnetic Resonance Imaging (fMRI), transcranial magnetic stimulation (TMS), and magnetoencephalography (MEG) support the concept of functional reorganization after CNS lesions [3,4].
II. CONTROL METHODS FOR THERAPEUTIC FUNCTIONAL ELECTRICAL STIMULATION
We start with the target of this research; that is, a model of biological control that needs to be mimicked (Fig. 1). Electrical stimulation activates muscles with bursts of pulses (electrical charge) delivered via electrodes that excite the intact peripheral nerves, which subsequently leads to the generation of action potentials that propagate along both afferent and efferent pathways. The direct response to the stimulation is the contraction producing joint torques, ultimately leading to movement; in addition, a reflex component of movement follows due to the excitation of afferent pathways. One method to control the bursts of pulses is to apply a model-based open-loop method. In this case no correction can be applied to the electrical stimulation when the produced movement deviates from the desired one. Open-loop control performance was found unsatisfactory for several reasons: parameter variation, inherent time variance, time delays, and strong nonlinearities in the musculo-skeletal system. The engineering method to deal with these problems is to introduce feedback, that is, to correct for errors by using sensory information that assesses the deviations of the trajectory from the desired one. Error-driven control ensures better tracking performance and lower sensitivity to modeling errors, parameter variations, and external disturbances. There are many theoretical studies and some simple applications of closed-loop control systems in FES; yet, none
Fig. 1: Model of control of movement in a human
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 3–6, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
has reached maturity and found use in rehabilitation. The main problem is that the model of the system is far from reality, and that the parameters of the system cannot be determined with the necessary accuracy. An alternative method for control was suggested by Tomovic and McGhee [5] in the form of finite-state control of multi-legged locomotion. This control method evolved into a black-box model, termed Rule-Based Control (RBC), where the structure of the system is not considered, only its inputs and outputs. The RBC is an open-loop control driven by sensory information that switches from rule to rule; the rules are developed through heuristics. The RBC is in its nature an on-off control, meaning that it does not consider the dynamics of the system. Rules for RBC are derived by using computerized classification, typically some kind of artificial neural network. Artificial neural networks have been incorporated into the control schemes as they are able to learn complex nonlinear mappings [6, 7]. The stability issues remain unresolved due to the black-box structure. The development of this control method led to hierarchical hybrid control (HHC), which in principle could yield better performance [8]. The expanded schema of the model in Fig. 1 shows the elements that HHC integrates (Fig. 2).
III. HIERARCHICAL HYBRID CONTROL OF MOVEMENT
Hybrid means, in general, heterogeneous in nature or composition. The term "hybrid systems" is understood to describe systems with behavior defined by entities or processes of distinct characteristics. The hybrid systems of interest here are dynamic systems whose behavior is determined by interacting continuous and discrete dynamics. These systems typically contain variables or signals that take values from a continuous set (e.g., the set of real numbers) and also variables that take values from a discrete and typically finite set (e.g., the set of symbols {a, b, c}).
These continuous or discrete-valued variables or signals depend on independent variables such as time, which may also be continuous or discrete; some of the variables may also be discrete events that are driven in an asynchronous manner.

Fig. 2: The organization of sensory-motor systems leading to movement.

There are several reasons for using hybrid models to represent movement. Reducing the complexity was, and still is, an important reason for dealing with hybrid systems. In hybrid systems this is accomplished by incorporating models of dynamic processes at different levels of abstraction. For example, in order to avoid dealing directly with a set of nonlinear equations, one may choose to work with sets of simpler equations (e.g., linear) and switch among these simpler models. This is a rather common approach in modeling physical phenomena. In control, switching among simple dynamical systems has been used successfully in practice for many decades. Hybrid control systems typically arise from the interaction of discrete planning algorithms and continuous processes, and, as such, they provide the basic framework and methodology for the analysis and synthesis of autonomous and intelligent systems, e.g., planning to move the hand and grasp an object. Hybrid control systems contain two distinct types of components that interact with each other: subsystems with continuous dynamics and subsystems with discrete-event dynamics. Another important way in which hybrid systems arise is from the hierarchical organization of complex control systems. In these systems, a hierarchical organization helps manage complexity, and higher levels in the hierarchy require less detailed models (discrete abstractions) of the functioning of the lower levels, necessitating the interaction of discrete and continuous components. There are analogies between certain current approaches to hybrid control and digital control system methodologies. In digital control, one could carry out the control design in the continuous-time domain, then approximate or emulate the controller by a discrete controller and implement it using an interface consisting of a sample-and-hold device.
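The idea of switching among simpler (e.g., linear) models in place of one nonlinear system can be made concrete with a small sketch. The two linear modes, the switching threshold, and the gains below are hypothetical illustrations (not taken from the paper): a discrete rule selects the active model, and the continuous dynamics of that model are integrated.

```python
# Sketch of a hybrid system: a discrete rule selects one of two simple
# linear models, and the continuous dynamics of the selected model are
# integrated. All modes, thresholds and gains are hypothetical.

A = [-1.0, -3.0]   # pole of each linear mode, dx/dt = A[m]*x + B[m]*u
B = [1.0, 3.0]     # input gain of each mode

def mode(x):
    """Discrete part: select the active linear model from the state."""
    return 0 if x < 1.0 else 1

def step(x, u, dt=0.01):
    """Continuous part: one Euler step of the currently active model."""
    m = mode(x)
    return x + dt * (A[m] * x + B[m] * u)

# A constant input drives the state across the switching threshold;
# the trajectory stays continuous while the governing model changes.
x = 0.0
for _ in range(1000):
    x = step(x, u=2.0)
```

Replacing the two scalar modes with linearizations of the musculo-skeletal dynamics around different operating points gives exactly the kind of switched model discussed above.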
Alternatively, one could first obtain a discrete model of the plant taken together with the interface and then carry out the controller design in the discrete domain. In hybrid systems, in a manner analogous to the latter case, one may obtain a discrete-event model of the plant together with the interface using automata or Petri nets.

Fig. 3: The model of the life-like movement controller

Fig. 4: The organization of a man-made HHC for restoring movement

The schema of an HHC-based controller for movement incorporates three levels (Fig. 3). The top artificial control structure is the interface between the user and the machine. This interface is the principal command channel, and it allows the user to volitionally trigger the operation of their choice. The actual organization of the controller is sketched in Fig. 4. The interface initiates the activity of a discrete, rule-based controller. This rule-based controller operates as a discrete, sampled-data feedback control, and its main role is to distribute the commands to the lowest, actuator levels. The rule-based controller implements the finite-state model of movement, and the rules have to be determined with sufficient generality to be applicable over many assistive systems and to the entire population with a similar level of disability.

The rule-based control (RBC) system is applicable at the coordination level of control (Fig. 5). This level deals with the following: 1) the strategy of how to employ the available resources, and 2) the methodology of how to maximize the efficiency of those resources. The RBC in this multilevel control uses the simulation of movement as the pattern that has to be followed. The RBC takes into account that the system is to be applied in individuals with disability; hence, heuristics is applied to the output of a simulation of optimal control applied to a model customized with parameters that reflect the level of impairment of the potential user and to healthy-like trajectories [9].

Fig. 5: The schema of a rule-based control for the coordination level of the hierarchical hybrid controller.

The RBC comprises the following elements: the “regular” rules that switch sequentially from one action to the next expected action based on sensory input; the “mode” rules that are responsible for selecting the appropriate set of regular rules; and the “hazard” rules that deal with conflict situations. The conflict situations occur due to the uncertainty of the available sensory information and/or hardware limitations, in addition to unexpected gait events. The hazard rules result in a safety “behavior”, which attempts to minimize the eventual catastrophic consequences of the hazard (e.g., falling, obstacles, non-physiological loading). The lower, actuator control level is responsible for executing decisions from the coordination level. The actuator level deals with specific muscle groups responsible for the flexion or the extension of a single joint, or, in other cases, the action of several joints when a multiarticular muscle is externally stimulated. The actuator level implements continuous feedback control and structural modeling.

IV. DISCUSSION

Compared with other classes of dynamical systems, the sensory-motor systems supporting the execution of functional motions in humans have three unique features: 1) they are highly redundant; 2) they are organized in a hierarchical structure, yet with many parallel channels; and 3) they are self-organized, relying, among other things, on an extremely complex connectionism. In spite of this complexity, the movements resulting from the action of the sensory-motor system are, as a rule, deterministic, and they follow the preferred way of performing the intended motor task.
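As a minimal illustration of the hierarchy described in Section III, the following sketch combines a rule-based coordination level (with "regular", "mode" and "hazard" rules) and a continuous actuator level. All sensor names, thresholds, target angles, and gains are hypothetical assumptions chosen only to make the sketch runnable; they are not values from the paper.

```python
# Hypothetical two-level sketch of a hierarchical hybrid controller for
# gait-oriented FES. Sensor names, thresholds, targets and gains are
# illustrative assumptions, not values from the paper.

# "Regular" rules per movement phase: (condition on sensors, next phase).
# The lookup by current phase plays the role of the "mode" rules, which
# select the applicable set of regular rules.
REGULAR_RULES = {
    "STANCE": [(lambda s: not s["heel_contact"], "SWING")],
    "SWING":  [(lambda s: s["heel_contact"], "STANCE")],
    "SAFE_STOP": [],        # hazard state: no regular rules apply
}

def coordination_level(sensors, phase):
    """Rule-based coordination: return the next movement phase."""
    # "Hazard" rules fire first, e.g., on non-physiological loading.
    if sensors["load"] > 1.2:
        return "SAFE_STOP"
    # "Mode" rules: pick the regular-rule set for the current phase.
    for condition, next_phase in REGULAR_RULES[phase]:
        # "Regular" rules: switch to the next expected action.
        if condition(sensors):
            return next_phase
    return phase            # no rule fired: stay in the current phase

TARGET_ANGLE = {"SWING": 20.0, "STANCE": 5.0}   # hypothetical targets (deg)

def actuator_level(phase, joint_angle, gain=2.0):
    """Continuous actuator level: stimulation intensity from angle error."""
    if phase == "SAFE_STOP":
        return 0.0          # hazard behavior: stop stimulating
    error = TARGET_ANGLE.get(phase, 0.0) - joint_angle
    return max(0.0, gain * error)   # stimulation intensity, clipped at 0

# One control cycle: the heel lifts off during stance -> swing phase,
# and the actuator level commands stimulation toward the swing target.
phase = coordination_level({"heel_contact": False, "load": 0.4}, "STANCE")
intensity = actuator_level(phase, joint_angle=10.0)
```

In a full system the proportional law would be replaced by the model-based continuous controller of the actuator level, and the rule conditions would be derived from recorded sensory patterns, as discussed in Section III.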
The technology has spawned many different types of control system; each suited to a particular application. Over a much larger time scale, the biological organisms have evolved control systems to suit many of species and physiological functions. The two streams of development have converged in the area of motor control rehabilitation. This paper proposes the integration of these two control issues. In rehabilitation of movements the aim is to activate joints in a controlled way so as to restore as much motor function as possible in humans with motor disabilities. The control strategies implemented in most of rehabilitation devices have so far been fairly simple, and have been developed largely in relation to the design of machines rather than to the design of nervous systems. Recent neurophysiological data show that some of these strategies have converged, so as to be quite similar to those in analogous natural systems. The question now is, how general is this outcome? In artificial devices, should we always strive to mimic the relevant natural control system on the assumption that it has been optimized in the course of evolution, and could we always mimic nature, given that our abilities to reproduce components of the neuromuscular system are limited? The presented control is of specific interest for neurorehabilitation, that is, a method allowing the preserved structures to find their best use if appropriately trained. The intensive, task -oriented exercise is showing very positive recovery in individuals with disability (e.g., non-ambulating subjects can walk unassisted for some distances). Neural engineering is where the ultimate successes at this stage must come. The development of new sophisticated; yet, simple to use devices that interface the central and peripheral nervous system opens new horizons in rehabilitation. The recent MEMS based technology makes dramatic impact to the development of rehabilitation technology. 
This technology today provides sensing and actuation that were difficult to imagine only months ago. Computing power is rising fast, allowing processing and communication at levels appropriate for use in various implantable systems. The development of power sources is still a somewhat limiting factor for implantable systems. Intelligent control that resembles natural control is the major missing link, essential for the full integration of technology and life-sciences knowledge into viable, effective rehabilitation systems.
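The simple, rule-based control strategies discussed above, which have converged toward finite-state schemes reminiscent of natural motor programs, can be illustrated with a minimal sketch. All state names, sensor thresholds and muscle sets below are hypothetical, invented for the example, and not taken from any actual device:

```python
# Minimal finite-state controller sketch for FES-assisted gait.
# States, thresholds and stimulation maps are illustrative only.

def next_state(state, heel_loaded, knee_angle_deg):
    """Rule-based state transitions driven by simple sensor events."""
    if state == "stance" and not heel_loaded:
        return "swing_flexion"       # heel-off detected -> start swing
    if state == "swing_flexion" and knee_angle_deg > 60:
        return "swing_extension"     # knee flexed enough -> extend leg
    if state == "swing_extension" and heel_loaded:
        return "stance"              # heel strike -> load the leg again
    return state                     # otherwise remain in current state

STIMULATION = {                      # muscle groups activated per state
    "stance": ["quadriceps"],
    "swing_flexion": ["hamstrings", "tibialis_anterior"],
    "swing_extension": ["quadriceps", "tibialis_anterior"],
}

# Walk one gait cycle from heel-off to the next heel strike:
state = "stance"
for heel_loaded, knee_deg in [(False, 10), (False, 70), (True, 20)]:
    state = next_state(state, heel_loaded, knee_deg)
# state is back to "stance" after one full cycle
```

The point of the sketch is only that a handful of sensor-triggered rules, not a machine-style analytic controller, can already reproduce the coarse structure of a natural gait cycle.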
ACKNOWLEDGMENT The Danish National Research Foundation, Copenhagen, Denmark, and the Ministry for Science and Environmental Protection, Belgrade, Serbia, partly supported this work.
REFERENCES
1. Yan TB, Hui-Chan CWY, Li LSW (2005) Functional electrical stimulation improves motor recovery of the lower extremity and walking ability of subjects with first acute stroke – a randomized placebo-controlled trial. Stroke 36(1):80–85
2. Popovic MB, Popovic DB, Sinkjaer T, Stefanovic A, Schwirtlich L (2003) Clinical evaluation of functional electrical therapy in acute hemiplegic subjects. J Rehabil Res Dev 40(5):443-45
3. Khaslavskaya S, Sinkjær T (2005) Motor cortex excitability following repetitive electrical stimulation of the common peroneal nerve depends on the voluntary drive. Exp Brain Res 162(4):497–502
4. Ridding MC, Brouwer B, Miles TS, Pitcher JB, Thompson PD (2000) Changes in muscle responses to stimulation of the motor cortex induced by peripheral nerve stimulation in human subjects. Exp Brain Res 131(1):135–143
5. Tomovic R, McGhee RB (1968) A finite state approach to the synthesis of bioengineering control systems. IEEE Trans Human Factors Eng HFE-7:65–69
6. Kawato M, Furukawa K, Suzuki R (1987) A hierarchical neural-network model for control and learning of voluntary movement. Biol Cybern 57:169–185
7. Kawato M (1990) Feedback-error-learning neural network for supervised motor learning. Advanced Neural Computers, pp 365–372
8. Popovic MB (2003) Control of neural prostheses for grasping and reaching. Med Eng Phys 25(1):41–50
9. Popovic DB (2003) Control of walking in humans with impact to standing and walking. J Aut Control 13:1–34
Author: Dejan B. Popovic
Institute: SMI, Aalborg University, Denmark (also University of Belgrade, Faculty of Electrical Engineering, Belgrade, Serbia)
Street: Fredrik Bajers Vej 7D3
City: 9220 Aalborg
Country: Denmark
Email:
[email protected]
EMITEL – an e-Encyclopedia for Medical Imaging Technology
S. Tabakov, C. A. Lewis, A. Cvetkov, M. Stoeva, EMITEL Consortium
King's College London, Dept. Medical Engineering and Physics, UK and EMITEL Consortium – www.emerald2.eu
Abstract— The paper gives a brief explanation of the new international project EMITEL and its associated multilingual e-Dictionary. The project is developing the first web-based e-Encyclopedia in the profession. EMITEL will address the lifelong learning of a wide range of specialists and will be available free on the Internet. The project's advanced work-in-progress, the e-Dictionary, is already functioning at www.emitdictionary.co.uk
Keywords— Education and Training, e-Learning
I. INTRODUCTION
Contemporary medicine is impossible without medical technology, and medical imaging equipment is among the most complex technologies of our time. Its effective and safe use requires a well-educated and trained workforce of Medical Physicists and Engineers. The field of medical imaging and related technologies develops rapidly. The last 20 years introduced revolutionary methods such as Magnetic Resonance, Molecular Imaging, etc. All of these enter healthcare quickly, and often only limited information is available about the new methods and the respective technology. Previous projects, such as EMERALD and EMIT, developed training materials (e-books and image databases) to address the initial training of young medical physicists. However, most "mature" specialists (who already work in healthcare) do not have "the luxury" of free special training time to learn all these new methods and technologies. This places the "older" specialists in a disadvantaged position, as no suitable material is available for their timely lifelong education. The new project EMITEL aims to improve this situation by providing an Internet-based tool (Medical Imaging Technology e-Encyclopaedia for Lifelong Learning – EMITEL), which will allow flexible use of the limited time of these specialists to acquire knowledge about the newest developments in the field. The need for such material was discussed and assessed at the two International Conferences on Medical Physics/Engineering Education and Training (ICTP, Trieste, 1998, 2003), described in the book 'Towards a European Framework for Education and Training in Medical Physics and Biomedical Engineering', IOS Press.
II. E-DICTIONARY
The project was initiated some 5 years ago with an original multilingual Dictionary of Medical Imaging Technology Terms, which quickly grew to a full Medical Physics Dictionary cross-translating terms between each of its languages. Initially the e-Dictionary was CD-based and included English, French, German, Swedish and Italian. Later it was expanded with Spanish and Portuguese. The volume of the Dictionary is approx. 4000 terms (either a single word, e.g. dose; combined words, e.g. absorbed dose; or complex terms, e.g. linear dose-response curve). Through EMITEL the Dictionary was developed as a web-based tool. It was also further updated with Polish, Hungarian, Estonian, Lithuanian, Romanian and Turkish. The possibility to include and search different alphabets allowed the Dictionary to further expand into Thai and another 5 languages, which are in preparation for inclusion during 2007. The original Search Engine of the Dictionary also allows partial-word search and search within complex terms (i.e. a word anywhere in the term). One can see the advanced work in progress on the Dictionary at www.emitdictionary.co.uk; the interface of the web e-Dictionary is shown in Fig. 1. The EMITEL Consortium encourages contacts with all specialists who would like their language to be included in the Dictionary, and also with colleagues who could suggest terms that have been omitted.
III. EMITEL E-ENCYCLOPEDIA
EMITEL's task is to add explanatory articles to each of the 4000 terms. The articles will aim at MSc level and above. Each article will include images, graphs, examples and other additional information. The typical size of an article is 150–300 words. The articles will be in English, but will be linked to the appropriate language in the Dictionary. The e-Encyclopedia will aim at a larger number of shorter articles, which will provide sufficient information on each term, together with related diagrams and practical figures.
This departure from large topical articles will facilitate its Web use and multilingual term translation. At the moment the EMITEL Consortium includes 20+ specialists from King's College London and King's College Hospital, the University of Lund and Lund University Hospital, the University of Florence, and AM Studio, Plovdiv. The Consortium also includes the IOMP (International Organisation for Medical Physics) as an international partner, through which colleagues from other countries will additionally be included. After the end of the project, the Web-based e-Encyclopedia EMITEL will be available free to all colleagues, and IOMP will take care of its constant and quick future updating.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1–2, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
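The partial-word and anywhere-in-the-term search described for the e-Dictionary in Section II can be sketched roughly as follows; the term list, translations and function name are illustrative inventions, not the actual EMITEL implementation:

```python
# Sketch of a multilingual term lookup with partial-word matching.
# Entries map an English term to translations; the data is illustrative.
TERMS = {
    "dose": {"fr": "dose", "de": "Dosis"},
    "absorbed dose": {"fr": "dose absorbée", "de": "Energiedosis"},
    "linear dose-response curve": {"fr": "courbe dose-effet linéaire",
                                   "de": "lineare Dosis-Wirkungs-Kurve"},
}

def search(fragment):
    """Return every term containing the fragment anywhere,
    including inside a word of a complex term."""
    frag = fragment.lower()
    return sorted(term for term in TERMS if frag in term.lower())

# e.g. search("dose") matches all three entries above,
# while search("respon") matches only the complex term.
```

A plain substring match like this is what makes "a word anywhere in the term" queries possible, at the cost of occasional spurious hits on short fragments.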
IV. CONCLUSION
The previous EU projects, carried out by the core of the present consortium, developed the world's first structured training packages for young specialists in this field – EMERALD (focusing on X-ray, Nuclear Medicine and Radiotherapy) and EMIT (focusing on Ultrasound and Magnetic Resonance Imaging) [1]. These materials are now used in some 70 countries worldwide. As a result, the EMIT Consortium received the inaugural EU Award for Education – the Leonardo da Vinci Award (Maastricht, Dec. 2004). The new project EMITEL addresses the lifelong learning of a wider audience. It also includes Radiotherapy Physics, Radiation Protection and Hospital Safety terms, which makes it useful for various other healthcare specialists. No such product exists at the moment. In order to collect wide feedback and include more colleagues in the project, the EMITEL Consortium is organizing an International Conference of Experts during May 2008. Information on the development of the project can be found at its web site www.emerald2.eu. EMITEL builds another layer on the innovative developments of e-Learning in Medical Engineering and Physics, described in the Special Issue of the Journal of Medical Engineering and Physics. The EMITEL Consortium [2] believes that this new web-based tool will be a valuable contribution to the lifelong learning of many colleagues around the world and will help the development of the workforce in Medical Engineering and Physics.
ACKNOWLEDGMENT The authors express their gratitude to the EU Leonardo Programme and their own Institutions.
Fig. 1 e-Dictionary interface (example from English-French translation)
REFERENCES
1. Tabakov S, Roberts C, Jonsson B, Ljungberg M, Lewis C, Strand S, Lamm I, Milano F, Wirestam R, Simmons A, Deane C, Goss D, Aitken V, Noel A, Giraud J (2005) Development of educational image databases and e-books for medical physics training. Journal of Medical Engineering and Physics, Elsevier, vol. 27, no. 7, pp. 591–599
2. EMITEL Consortium members: S Tabakov, C A Lewis, C Deane, D Goss, G Clarke, V Aitken, A Simmons, S Keevil, J Coward, C Deehan, P Smith, F Milano, S-E Strand, F Stahlberg, B-A Jonsson, M Ljungberg, I-L Lamm, R Wirestam, M Almqvist, T Jonsson, A Cvetkov, M Stoeva
Author: Dr Slavik Tabakov
Institute: King's College London, Dept. Medical Engineering and Physics
Street: 124 Denmark Hill, London SE5 9RS, U.K.
Email:
[email protected]
From Academy to Industry: Translational Research in Biophysics
R. Cadossi, M.D.
Laboratory of Clinical Biophysics, IGEA, Carpi, Italy
Abstract— The translation "from bench to bedside" of a scientific discovery, proof of principle or simple idea that originated within academia into a successful industrial product is a complex, long and costly process. Many factors need to be accounted for, and careful planning and protection of intellectual property are essential to retain the value of the idea. This analysis, based on more than twenty-five years of experience in the biomedical field at IGEA, is presented to outline the different steps involved in such an endeavour. Critical factors defining the different phases, from initial evaluation of the idea to marketing and post-marketing monitoring, are described, focusing on development processes.
Keywords— Translational research, intellectual property, marketing, Cliniporator.
I. INTRODUCTION
This analysis intends to illustrate how it is possible, starting from a scientific discovery, a proof of principle or a simple idea matured in the academic field, to successfully create and market an industrial product. The basis of this analysis is the hands-on experience gained by IGEA, a company that for over twenty-five years has been designing, manufacturing and marketing electromedical devices. The decision processes described below are used during the development of our products; as an example, the focus will be on the experience collected during the development of the Cliniporator. The Cliniporator project, conducted with the support of European Union research grants, has allowed IGEA to gather a large interdisciplinary international experience, teaching valuable lessons in what a translational research project entails. The phases identified and explained below for an effective and optimized industrialization process are heavily influenced by the peculiarities of the medical field. New product development in the medical field is usually a long process that requires large and long-term investments. Product validation and marketing follow rules that are specific to this area, further adding to the complexity of projects. Nevertheless, the steps described below can easily be adapted and put into effect in any field of industry.
II. INITIAL EVALUATION OF THE IDEA AND INTELLECTUAL PROPERTY PROTECTION
In the medical field, the value of an idea, and the reasons why one should develop such an idea, reside in its ability to: a) improve the quality of care; b) reduce medical costs; c) simplify procedures; d) provide financial return and/or personal recognition. Thus, the suitability of an idea for industrial development should be evaluated on the basis of factors that involve different company departments: the Finance Department, the Marketing Department and R&D. The factors to be taken into consideration include: a) the originality and/or novelty of the idea; b) the innovative content of the final product with respect to those already on the market; c) results from market analysis; d) the price competitiveness of the final product with respect to products already on the market; e) the financial and human resources available in the company to support the project; f) the capacity to undertake the project on the basis of company know-how; g) the availability of human, material and logistic resources; h) analysis of the feasibility of the clinical validation process. Following this initial evaluation of the idea, steps should be taken to ensure adequate protection of the intellectual property inherent in the idea and/or that will be developed during the project. Ways to protect intellectual property are: a) following the patent process; b) correct use of Confidentiality Disclosure Agreements (CDA) and Non-disclosure Agreements (NDA).
III. FEASIBILITY STUDY: DEFINITION OF A WORKABLE PROJECT
The R&D department undertakes the preliminary research activity to identify the state-of-the-art technological solutions available to develop the initial idea in an industrial context. This feasibility study, and where possible the presentation of an early prototype, allows the Marketing Department and the Finance Department to evaluate whether the workable project meets the initial idea and the purposes of the different company functions and therefore whether the industrial development of the idea is viable.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 10–13, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
To obtain an effective feasibility study the R&D department should: a) control and integrate the initial project specifics (the initial idea); b) research technical and manufacturing solutions; c) present the workable project, using sketches, diagrams or physical prototypes; d) present an initial evaluation of the cost of the final product. If the project needs to be developed with out-of-house expertise (i.e. subcontractors), care should be taken to have some form of patent protection in place. Once all these steps have been successfully completed, the R&D department will obtain approval from the Finance department to prepare a design plan. At this time the Project Leader will be identified.
IV. DESIGN PLAN
The Project Leader studies the work program and plans the workflow of the project using a planning document that defines: a) the specifics of the project; b) a precise description of the tasks to be completed, and their assignment to appropriate workgroups; c) the time plan for each task; d) predefined control points to verify that the solutions developed are correct and compatible with the specifics of the design; e) the resources for the development of the project, as approved and made available by the Finance department; f) a contingency plan. Of note is the importance of good communication and coordination among the different units participating in the program, be they different divisions within the same company or completely separate entities such as other companies, research centers or hospitals. Such a condition will facilitate problem solving and drastically decrease delays due to miscommunication.
V. VALIDATION OF A WORKING PROTOTYPE
The Project Leader oversees and coordinates the workgroups involved in the programmed phases to obtain a working prototype, respecting the scheduled time frames and check points. As the project progresses, the prototype will be subjected to functionality tests of the technology that is being developed.
Specifically for electro-medical devices, the prototypes will also be used to carry out scientific research that should include in vitro and in vivo studies to characterize the mechanisms of action. In vivo studies will also be a relevant part of the pre-clinical validation of the project. Final clinical validation of the device will be attained through clinical trials.
VI. CLINICAL VALIDATION
The Project Leader co-ordinates the scientific research necessary for the clinical validation of the developed technology. At this time it is extremely important to define the expected results and the parameters on the basis of which the efficacy of the treatment will be assessed. In the event of a negative outcome, one should consider either varying the specifics of the project or closing it down to limit financial and time losses. Provided the results of the clinical trial are positive, the technology's cost/benefit ratio will be considered and compared to that of already existing alternative treatments, to further assess the viability and profitability of the project as a whole.
VII. APPROVAL OF THE PROTOTYPE
The Finance and Marketing Departments and R&D evaluate the prototype whose technology has been positively tested and recognized as clinically valid. It is then necessary to select the characteristics to be included in the product that will be manufactured and marketed. Finally, those specifications that have not yet been discussed should be defined, i.e. user interface, subsidiary functions and aesthetics. To carry out this evaluation the following are analyzed: a) features of the prototype; b) results of the functionality tests and of the scientific research; c) any new requirements identified by marketing; d) proposals for boxes, accessories and user interface; e) requisites for manufacturing. In this phase, considerations about localization of the product, such as interface language and cultural differences that could influence product acceptance and diffusion, are identified and implemented in the project. Once the prototype and the new additional specifics are approved, the development and marketing activity of the new product can be initiated.
VIII. PREPARATION OF THE MARKETING STRATEGY
The attainment of a "working" prototype, or rather the awareness of the performance of the new product and its particularities with respect to the competition or market alternatives, allows the Marketing Department to put together a precise strategy aimed at introducing the new product on the market.
The Marketing Plan should be studied together with the Finance department, which provides the necessary resources, and with the Sales Management. For an effective marketing activity, it is necessary to carefully define: a) the target, i.e. the indications for use and who should use the product; b) the communication, i.e. the message; the instruments: brochures, leaflets, congresses, round tables, training centers, newspapers; c) the opinion leaders; d) the competition, i.e. the analysis of the competition, the alternatives to the new product; e) the resources, i.e. the investment needed for promotion, training and/or integration with the Sales Force; f) the price strategy. IX. IDENTIFICATION OF THE DEFINITIVE CONSTRUCTION SOLUTIONS
The R&D department develops the prototype, introducing modifications and integrations to fulfil specification changes determined during initial testing or identified by the Marketing Department, and choosing construction solutions suitable for manufacturing which meet all the standards that apply to the product in question. To be able to develop the definitive solutions, effective coordination is needed between those working on the hardware, the software and the user interface, those writing the manuals and the forms, and those working on obtaining the certification of the product. This co-ordination is managed and guaranteed by the Project Leader, who approves the solutions identified and transfers them to the Marketing Department, which is responsible for the design of a precise and effective promotional strategy. The Project Leader also guarantees that every variation requested or proposed with respect to the already approved prototype is freshly discussed and approved by the Marketing Management. The product obtained by the development activity must meet the specifics of the project plan, the requisites defined subsequently and the standards defined in the different markets. The product is presented to the Marketing and Sales Division which, on the basis of the promotional strategy worked out, defines the "supplementary" aspects that are needed before sales begin: refining of the interface (colors, signs, messages); design of the labeling and packaging; commercial name, appearance and contents of the illustrative material. The approval of the final prototype, together with the definition of all the aspects necessary for manufacturing, gives the go-ahead for the construction of products for sale.
X. CERTIFICATION ATTAINMENT AND TRANSFER TO MANUFACTURING
The R&D department co-ordinates the activities for the development of the first-series models, and therefore: a) completes the certification process, the Technical File, the Device Master Record (DMR) and the Risk Management File (RMF); b) prepares the registration reports required by the compulsory norms and by the company Quality System; c) transfers to manufacturing the necessary data for the management of the components; d) transfers to manufacturing the know-how and the operational methods for the construction, testing and management of the product; e) co-ordinates the tests on the first-series models, necessary for the validation of the design and manufacturing process. The attainment of certification, the transfer into manufacturing and the successful completion of the tests on the first-series models signal the end of the planning process.
XI. MARKETING ACTIVITY AND SALES STRATEGY
The start of the marketing activity, i.e. the preparation of the market for the introduction of the new product, together with its final development, provides the Sales Force with all the tools to start the detailed promotion in the field (leaflets, price lists, communication, opinion leaders). The Sales Force should prepare, together with the Finance and Marketing Departments, a Sales Plan which defines: a) the priorities; b) the resources to be allocated to each objective; c) the return expected from operations; d) the contingency plan.
XII. DISSEMINATION
The members of the Marketing division who followed the development and validation of the product and who took care of the promotional strategy, the Core Group, select and prepare the Opinion Leaders in the countries where the new product will be introduced. In this sense the International Marketing Division creates and instructs the National Marketing Divisions, providing them with resources and tools; these divisions have their points of reference in the local Opinion Leaders.
XIII. CUSTOMER CARE
After the product has been marketed, Customer Care records every type of complaint, request for assistance and
feedback from the client. This information is monitored to identify problems and to initiate corrective action or make improvements to the product. Such solutions may involve the R&D process (updating the product), the manufacturing process and/or the promotional strategy. Any change to the product needs to be approved by the Marketing and Sales Department before it is implemented in products for sale.
XIV. CONCLUSIONS
The development of a product to be used to treat patients, a biomedical device, is certainly a long and costly process that involves all strategic functions of a company from the very onset of the project. The target and the potential diffusion of the new product have to be clearly identified from the beginning of the process. Moreover, peculiar to the development of a medical device is a further variable: the clinical validation, whose outcome cannot be controlled. In fact, even though in vitro and in vivo results may be satisfactory, they do not necessarily guarantee translation into an effective clinical application. The above project steps were followed, for example, in the successful experience we had with the Cliniporator project. Nevertheless, we should not forget that from the early experimental data on electrochemotherapy, which envisioned its use in clinical
practice, to the introduction into the market of a CE-marked device validated through the ESOPE clinical trial, 15 years have elapsed.
REFERENCES
1. IGEA Quality Manual, Chapter 7.3: ISO 9001:2000
2. Bonutti PM, Seyler TM, Marker DR, Plate JF, Mont MA (2007) Inventing orthopedics: from basic design to working product. AAOS Annual Meeting Scientific Exhibit, San Diego, USA
3. Mir LM, Orlowski S, Belehradek J Jr, Paoletti C (1991) Electrochemotherapy, potentiation of antitumour effect of bleomycin by local electric pulses. Eur J Cancer 27(1):68–72
4. Lebar AM, Sersa G, Kranjc S, Groselj A, Miklavcic D (2002) Optimisation of pulse parameters in vitro for in vivo electrochemotherapy. Anticancer Res 22(3):1731–1736
5. Gothelf A, Mir LM, Gehl J (2003) Electrochemotherapy: results of cancer treatment using enhanced delivery of bleomycin by electroporation. Cancer Treat Rev 29(5):371–387
Author: Ruggero Cadossi, M.D.
Institute: Laboratory of Clinical Biophysics, IGEA
Street: Via Parmenide 10/A
City: 41012 Carpi (MO)
Country: Italy
Email:
[email protected]
Information Technology Solutions for Diabetes Management and Prevention: Current Challenges and Future Research Directions
R. Bellazzi
Dipartimento di Informatica e Sistemistica, Università di Pavia, Italy
Abstract— This paper presents a list of current and future challenges for information technology in Diabetes Mellitus management. Three main research areas are identified: supporting patients; supporting health care providers, including specialists, GPs and case managers; and finally supporting organizations and health care policy makers. New technological advances in context awareness, user modelling, data mining and integrated information systems enable the design and implementation of new decision support strategies which may be effective in improving the management and prevention of chronic diseases in general and of Diabetes Mellitus in particular.
Keywords— Diabetes Management, decision support, information and communication technologies
I. INTRODUCTION
Diabetes Mellitus (DM), a major metabolic disease associated with a reduced or impaired capability of the body to regulate the blood glucose level, is now reaching worldwide epidemic proportions. The prevalence of DM for all age groups was recently estimated to be 2.8% in 2000 and to reach 4.4% in 2030, so that the total number of people with DM, currently 171 million, is projected to rise to 366 million in 2030 [1]. The International Diabetes Federation estimates that the majority of patients, namely 85–95% of diabetics, are affected by type 2 Diabetes (DM-2), while about 0.09% of the population is affected by type 1 Diabetes (DM-1). Europe has the highest number of DM-1 patients (1.27 million) (http://www.idf.org). Given its epidemic proportions, it is not surprising that DM is having a strong impact on the health care systems of western countries. The DCCT and UKPDS studies [2,3] have shown that DM-related complications, including cardiovascular diseases, can be delayed or prevented through strict metabolic control. For this reason a number of guidelines and best-practice procedures have been defined to improve the delivery of care. Moreover, it has recently been demonstrated that DM-2 can be delayed or prevented, too, by means of intensive interventions on diet, exercise or medication; such interventions have also been proved to be cost-effective [4]. The epidemiological dimension of DM-2 is forcing national health authorities to launch programs for the prevention,
early diagnosis and management of DM-2, based on informative campaigns targeted at citizens. There are, however, several problems in the implementation of such prevention and disease management programs, mainly related to the timely identification of citizens at risk of diabetes and cardiovascular diseases, and then to the implementation of personalized lifestyle and clinical interventions. Nowadays it has become clear that the difficulties in achieving a satisfactory level in the delivery of health care to DM patients are not related to the availability of knowledge on the best diagnostic and therapeutic procedures; rather, they are system and organizational problems [5]. Over the last thirty years, strong interest has been devoted to the design and implementation of systems based on Information and Communication Technology (ICT) aimed at supporting the management of DM, mainly in the areas of electronic patient records, decision support systems and telemedicine [6-8]. In particular, diabetes care is probably one of the areas in which telemedicine, e-Health and consumer-health solutions have been most widely tested [9-11]. The chronic nature of the disease and the need for patient empowerment in performing glucose self-monitoring and insulin delivery make DM a "natural" context in which to test ICT as a means for the provision of home care. Some of the proposed systems are now running large clinical trials, although very few of them have become part of disease management programs supporting multi-faceted interventions for patient care [10, 12]. In this paper we will describe some of the most important research areas which may enable the effective implementation of ICT solutions for DM and may thus provide very useful tools for improving the delivery of care.
II. CHALLENGES AND RESEARCH DIRECTIONS FOR ICT IN DIABETES MANAGEMENT
A number of unparalleled opportunities are nowadays available to implement disease management and prevention programs based on the current advances of research in ICT:
• The availability of centralized electronic data repositories on drug and specialized-visit prescriptions and on citizens' hospital admissions and discharges. Such data
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 14–17, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
are a crucial source of information to perform citizen stratification for DM and cardiovascular risk.
• The availability of new research results on DM risk assessment, coming from epidemiology and from genetics/genomics/proteomics research.
• The availability of new ICT solutions based on user-centered design, mobile communication and the theory of behavior change.
• The improvement of research in the areas of context awareness and wearable systems.
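As a purely hypothetical illustration of how such repository data could feed citizen stratification, the following sketch flags citizens for DM-2 screening from prescription and admission records. All field names, rules and thresholds are invented for the example and are not drawn from any real stratification program:

```python
# Toy rule-based stratification of citizens for DM-2 screening,
# combining prescription and hospital-admission data. Rules are invented.

def risk_level(record):
    """Flag a citizen as 'high', 'medium' or 'low' priority for screening."""
    score = 0
    if "antihypertensive" in record["prescriptions"]:
        score += 1                  # hypertension treatment: known risk factor
    if record["bmi"] >= 30:
        score += 2                  # obesity weighs more in this toy rule set
    if record["cardio_admissions"] > 0:
        score += 2                  # prior cardiovascular admission
    if score >= 3:
        return "high"
    return "medium" if score >= 1 else "low"

citizen = {"prescriptions": ["antihypertensive"], "bmi": 31,
           "cardio_admissions": 0}
# risk_level(citizen) -> "high" (score 1 + 2 = 3)
```

Real stratification models would of course be statistically validated rather than hand-written, but the sketch shows why centralized repositories matter: the inputs come from routinely collected administrative data rather than from dedicated screening visits.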
Those advances enable the design of new ICT-based decision support systems devoted to supporting all the actors involved in DM management and prevention: patients, diabetologists, GPs and case managers, and health care policy makers.

A. Supporting patients

Traditionally, decision support systems have been classified as visit-by-visit systems and day-by-day systems, the first aimed at supporting physicians and the second at helping DM patients in their self-management activities. The availability of telemedicine solutions has changed this paradigm, potentially providing patients and physicians with the same kind of self-monitoring information, though with different roles and responsibilities. The majority of past efforts have been devoted to the design of systems for insulin management. We believe that the current research frontier lies instead in citizen and patient empowerment through user modeling and context awareness. In the area of user modeling, there has been tremendous growth in the diversity of ICT solutions over the last two decades, with particular reference to lifestyle behavior change. These systems provide health behavior change information to citizens on the basis of a variety of health behavior theories [13,14], using different communication media, ranging from Web sites to computer telephony interfaces (CTI). From a clinical viewpoint they have been applied to a wide range of behaviors, including DM-related ones such as physical activity promotion, diet adherence and medication regimen adherence. Overall, these systems have been shown to be effective in a number of randomized clinical trials [15]. Recently, an interesting paper has been published by Ma and colleagues on the delivery of information and communication support to DM patients on the Internet [16].
The system is able to select patient-specific information, prioritizing diabetes learning topics and defining individualized agendas for patient-physician encounters on the basis of the so-called “Diabetes Information Profile” (DIP). The DIP is a model, i.e. a multi-faceted profile, of
the user, which is progressively updated on the basis of the clinical data and of the patient's interaction with the system. The technology has been evaluated in a small clinical study, which showed its potential effectiveness in providing useful information to patients. Web interfaces can also be used to implement embodied conversational agents and relational agents, which are animated computer-based characters that emulate human face-to-face conversation [17,18]. Promising future directions also involve wearable computers, PDAs and mobile phones as platforms for health behavior change interventions [19]. Other solutions, such as Short Message Service (SMS) or natural language interaction systems integrated within CTI, would be of interest in order to seamlessly integrate support into users' everyday lives and to let the system initiate the interaction with the user. The availability of such technologies can help to better define the "context" in which the conversation between the user and the helper application occurs. By context we mean the "interrelated conditions in which something exists or occurs", including anything known about the participants in the (potential) communication relationship. Therefore, context concerns not only the user data and information collected during the dialog but also the "presence" concept. Presence denotes availability to communicate, but proposed extensions include elements derived from user activity, sensors and other sources. The knowledge about the user (user modeling) and the context (context awareness) in which the monitoring activity is carried out opens interesting research questions concerning the personalization and distribution of decision support interventions.

B. Supporting specialists, GPs and case managers

Several computer-based systems for diabetes management have been proposed since the early eighties [20,21,22].
Many of the systems described in the literature have been primarily designed to manage DM-1 patients, in accordance with the "specialist-patient" model of health care; current interests, on the contrary, are directed towards the management of DM-2 within a "specialist-GP-patient" model. The current trend is to integrate guidelines and decision support systems as reminders within Electronic Patient Records (EPR) to support complex primary care interventions. The need for integrated solutions is also advocated by the substantial lack of clinical evidence that stand-alone guideline-augmented EPR are effective in clinical practice [23]. As an interesting example of an integrated system we can refer to the Diabetes Audit and Research in Tayside (DARTS) project, a validated population-based diabetes information system that collects data coming from different
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
sources, including hospital admissions, diabetes clinical visits and diabetes medication. DARTS has been redesigned to overcome the problem of "inertia to change", which is considered the main reason for the sporadic uptake and partial use of ICT-based systems. DARTS combines different technologies to allow for universal data collection, correlation, dissemination, guideline provision and guideline implementation, among others. Currently DARTS is used in one thousand general practice clinic sites and fifty major hospital clinics, routinely managing a diabetic population of more than 160,000 patients [24]. Another relevant integrated ICT intervention is the IDEATel project, funded for eight years since the year 2000. IDEATel is designed to provide a telemedicine service in both urban and rural economically disadvantaged areas within New York State. The project involved 1,500 patients, half of them managed through a telemedicine intervention. Patients, GPs, case managers and specialists are connected by means of an Internet service; the telemedicine service is fully integrated with a health-care information system and is empowered by guideline-based reminders and alerts. After the first year of implementation, some improvements in the clinical outcomes were observed, in particular regarding blood pressure and LDL [25]. Finally, an interesting research effort has been represented by the European project M2DM (Multi-Access Services for Managing Diabetes Mellitus). The main goal of the project was to develop and test a multi-access service for managing all types of diabetic patients. The basic concept is to collect data in a central database server that can be accessed through the Web, through the phone or through dedicated software for downloading data from glucometers. The M2DM system comprised a Web access, a CTI service based on an interactive voice response system and a smart modem located at home.
The Web pages were optimized for different access modalities, including mobile devices. A distinguishing feature of M2DM is the exploitation of technology for managing the knowledge available to patients and physicians. To this end, the information flow is regulated by a scheduler, called the Organizer, which, on the basis of knowledge about the health-care organization, is able to automatically send e-mails and alerts as well as to commit activities, such as data analysis, to software agents. Many decision support tools are integrated in the system, including case-based and rule-based reasoning, as well as modeling and simulation software. Four medical centers and more than 60 patients were involved in a one-year randomized controlled evaluation, which showed promising clinical and evaluation results, although not statistically significant in all medical centers [11].
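The Organizer's rule-driven routing of alerts can be sketched as follows. The paper does not specify its rule set; the thresholds, action names and message texts below are invented purely to illustrate the idea of mapping incoming self-monitoring data to scheduled actions.

```python
from statistics import mean

def organizer_actions(glucose_mg_dl, days_since_last_upload):
    """Map self-monitoring data to scheduled actions (illustrative rules only).

    glucose_mg_dl: recent glucometer readings in mg/dL.
    days_since_last_upload: days since the patient last sent data.
    Returns a list of (recipient_or_agent, message) tuples.
    """
    actions = []
    # Rule 1: nag the patient if data transmission has lapsed
    if days_since_last_upload > 7:
        actions.append(("email_patient",
                        "reminder: please upload your glucometer data"))
    # Rule 2: escalate a hypoglycemic episode to the physician
    if glucose_mg_dl and min(glucose_mg_dl) < 60:
        actions.append(("alert_physician", "hypoglycemic episode detected"))
    # Rule 3: commit a data-analysis task to a software agent on high averages
    if glucose_mg_dl and mean(glucose_mg_dl) > 180:
        actions.append(("commit_agent",
                        "run trend analysis on recent readings"))
    return actions

acts = organizer_actions([55, 140, 210, 250], days_since_last_upload=2)
```

A production scheduler would additionally track organizational knowledge (who is on duty, escalation paths) as the paper describes; this sketch covers only the data-to-action mapping.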
C. Supporting health care policy makers

Policy makers are now dealing with the problem of DM risk stratification, in order to plan tailored lifestyle and disease management interventions. A large number of studies are available on the definition of DM risk calculators based on simple data which can be collected in primary care visits, such as age, BMI, waist circumference and antihypertensive drug treatment; those calculators are often coupled with cardiovascular risk calculators to compute the overall health risk profile [26]. Furthermore, the investigation of insulin resistance has received considerable attention over the last twenty years, leading to quantitative indexes like the quantitative insulin-sensitivity check index (QUICKI) [27] and the so-called "minimal model" [28,29]. Moreover, in the last few years research has been actively working to find novel markers of diabetes risk based on genetic patient profiling; current studies are looking for genes associated with phenotypic traits of the metabolic syndrome [30]. Less attention has been devoted to the exploitation of the information coming from patients' administrative data [31] collected by the health care system, such as hospital admissions and discharges, drug prescriptions and ambulatory visits. This challenge seems particularly relevant for policy makers, who seldom rely on health care information systems integrated with clinical EPR for DM management. The exploitation of data mining techniques, and in particular of temporal data mining tools, may lead to the definition of temporal patterns associated with the disease diagnosis. Such patterns may highlight precedence relationships between physicians' prescriptions and patients' outcomes which may allow a proper stratification of the available population [32]. Finally, health care decision makers have recently been provided with new tools for decision analysis based on so-called DM modeling, i.e.
mathematical models of the disease progression which can be used to simulate the effect of novel strategies on a large population of patients and to take decisions by formally evaluating the projected costs and benefits of such strategies [33]. Up to now there are no integrated tools for risk stratification and decision making able to combine the available data sources, the current research results and the more recent modeling efforts into a comprehensive solution for identifying subjects who need tailored interventions. Research and technical implementations are needed to improve the implementation of prevention programs.

III. CONCLUSIONS

The need for new kinds of interventions for chronic care management is related to two concurrent factors: the increase in the number of elderly chronic patients and the
difficulty of improving their clinical outcomes. The availability of integrated health care information systems is enabling the implementation of novel disease management and prevention programs, heavily relying on communication between citizens and health care providers. Current research in ICT may provide suitable tools and instruments to increase the quality of such programs by empowering patients and by optimizing the required organizational efforts.
ACKNOWLEDGMENTS

I acknowledge Dr. Timothy Bickmore and Toni Giorgino for their help in writing this review.
REFERENCES

1. Wild S, Roglic G, Green A, Sicree R, King H. Global prevalence of diabetes: estimates for the year 2000 and projections for 2030. Diabetes Care. 2004;27(5):1047-53.
2. Kilpatrick ES, Rigby AS, Atkin SL. Insulin resistance, the metabolic syndrome, and complication risk in type 1 diabetes: "double diabetes" in the Diabetes Control and Complications Trial. Diabetes Care. 2007;30(3):707-12.
3. UKPDS 33. Lancet. 1998;352:837-53.
4. Herman W, et al. The cost-effectiveness of lifestyle modification or metformin in preventing type 2 diabetes in adults with impaired glucose tolerance. Annals of Internal Medicine. 2005;142(5):323-332.
5. Glasgow RE, et al. Report of the health care delivery work group: behavioral research related to the establishment of a chronic disease model for diabetes care. Diabetes Care. 2001;24(1):124-30.
6. Cavan DA, et al. Preliminary experience of the DIAS computer model in providing insulin dose advice to patients with insulin dependent diabetes. Comput Methods Programs Biomed. 1998;56(2):157-64.
7. Hetlevik I, Holmen J, Kruger O, Kristensen P, Iversen H, Furuseth K. Implementing clinical guidelines in the treatment of diabetes mellitus in general practice. Int J Technol Assess Health Care. 2000;16(1):210-27.
8. Montani S, Bellazzi R, Portinale L, d'Annunzio G, Fiocchi S, Stefanelli M. Diabetic patients management exploiting case-based reasoning techniques. Comput Methods Programs Biomed. 2000;62:205-218.
9. Gomez EJ, Del Pozo F, Hernando E. Telemedicine for diabetes care: the DIABTel approach. Medical Informatics. 1996;21:283-295.
10. Starren J, et al. Columbia University's Informatics for Diabetes Education and Telemedicine (IDEATel) project: technical implementation. J Am Med Inform Assoc. 2002;9(1):25-36.
11. Larizza C, Bellazzi R, Stefanelli M, Ferrari P, De Cata P, Gazzaruso C, Fratino P, D'Annunzio G, Hernando E, Gomez EJ. The M2DM Project: the experience of two Italian clinical sites with clinical evaluation of a multi-access service for the management of diabetes mellitus patients. Methods Inf Med. 2006;45(1):79-84.
12. Sperl-Hillen J, O'Connor PJ, Carlson RR, Lawson TB, Halstenson C, Crowson T, Wuorenma J. Improving diabetes care in a large health care system: an enhanced primary care approach. Jt Comm J Qual Improv. 2000;26(11):615-22.
13. Prochaska, J., & Marcus, B. (1994). The Transtheoretical Model: Applications to Exercise. In R. Dishman (Ed.), Advances in Exercise Adherence (pp. 161-180).
14. Glanz, K., Lewis, F., & Rimer, B. (1997). Health Behavior and Health Education: Theory, Research, and Practice. San Francisco, CA: Jossey-Bass.
15. Owen, N., Fotheringham, M., & Marcus, B. (2002). Communication technology and health behavior change. In K. Glanz, B. Rimer, F. Lewis (Eds.), Health Behavior and Health Education. San Francisco, CA: Jossey-Bass.
16. Ma C, Warren J, Phillips P, Stanek J. Empowering patients with essential information and communication support in the context of diabetes. Int J Med Inform. 2006;75(8):577-96.
17. Cassell, J., Sullivan, J., Prevost, S., & Churchill, E. (2000). Embodied Conversational Agents. Cambridge: MIT Press.
18. Bickmore, T., & Picard, R. (to appear). "Establishing and Maintaining Long-Term Human-Computer Relationships". ACM Transactions on Computer-Human Interaction (ToCHI).
19. Koch S. Meeting the challenges: the role of medical informatics in an ageing society. Stud Health Technol Inform. 2006;124:25-31.
20. Lehmann ED, Deutsch T. Application of computers in diabetes care: a review. Parts I and II. Med Inform. 1995;20(4):281-329.
21. Lehmann ED, Deutsch T. Computer assisted diabetes care: a 6-year retrospective. Comput Methods Programs Biomed. 1996;50(3):209-30.
22. Dorr D, Bonner LM, Cohen AN, Shoai RS, Perrin R, Chaney E, Young AS. Informatics systems to promote improved care for chronic illness: a literature review. J Am Med Inform Assoc. 2007;14(2):156-163.
23. O'Connor PJ, Crain AL, Rush WA, Sperl-Hillen JM, Gutenkauf JJ, Duncan JE. Impact of an electronic medical record on diabetes quality of care. Ann Fam Med. 2005;3(4):300-6.
24. Leese G, Boyle D, Morris A. The Tayside Diabetes Network. Diabetes Research and Clinical Practice. 2006;74:S197-S199.
25. Shea S, et al. A randomized trial comparing telemedicine case management with usual care in older, ethnically diverse, medically underserved patients with diabetes mellitus. J Am Med Inform Assoc. 2006;13(1):40-51.
26. Lindstrom J, Tuomilehto J. The Diabetes Risk Score: a practical tool to predict type 2 diabetes risk. Diabetes Care. 2003;26:725-731.
27. Katz A, Nambi SS, Mather K, Baron AD, Follmann DA, Sullivan G, Quon MJ. Quantitative insulin sensitivity check index: a simple, accurate method for assessing insulin sensitivity in humans. J Clin Endocrinol Metab. 2000;85:2402-2410.
28. Magni P, Sparacino G, Bellazzi R, Toffolo GM, Cobelli C. Insulin minimal model indexes and secretion: proper handling of uncertainty by a Bayesian approach. Ann Biomed Eng. 2004;32(7):1027-37.
29. Bergman R, Ider Y, Bowden C, Cobelli C. Quantitative estimation of insulin sensitivity. Am J Physiol. 1979;236:E667-E677.
30. Sladek R, et al. A genome-wide association study identifies novel risk loci for type 2 diabetes. Nature. 2007;445(7130):881-5.
31. Kristensen JK, Sandbaek A, Lassen JF, Bro F, Lauritzen T. Use and validation of public data files for identification of the diabetic population in a Danish county. Dan Med Bull. 2001;48(1):33-7.
32. Bellazzi R, Larizza C, Magni P, Bellazzi R. Temporal data mining for the quality assessment of hemodialysis services. Artif Intell Med. 2005;34(1):25-39.
33. Herman W. Diabetes modeling. Diabetes Care. 2003;26(11):3182.

Author: Riccardo Bellazzi
Institute: Dipartimento di Informatica e Sistemistica, Università di Pavia
Street: via Ferrata 1
City: Pavia
Country: Italy
Email:
[email protected]
Patient-Cooperative Rehabilitation Robotics in Zurich

Robert Riener 1,2

1 Sensory-Motor Systems Laboratory, ETH Zurich, Switzerland
2 Spinal Cord Injury Center, University Hospital Balgrist, Zurich, Switzerland
Abstract— This paper gives a short overview of new patient-cooperative robotic approaches applied to the rehabilitation of gait and upper-extremity functions in patients with movement disorders. So-called patient-cooperative controllers take into account the patient's intention and efforts rather than imposing a predefined movement. Audiovisual displays in combination with the robotic device can be used to present a virtual environment and let the patient perform different gait tasks and activities of daily living. Furthermore, the sensors implemented in the robots make it possible to measure and assess patient performance and thus to evaluate the therapy status. It is hypothesized that such patient-cooperative robotic approaches can improve patient motivation and the quality of the therapy compared to conventional approaches.

Keywords— Rehabilitation Robotics, Gait Therapy, Treadmill Training, Arm Therapy, Patient-Cooperative
I. ROBOT-AIDED REHABILITATION

Task-oriented repetitive movements can improve muscular strength and movement coordination in patients with impairments due to neurological or orthopaedic problems. A typical repetitive movement is human gait. Treadmill training has been shown to improve gait and lower-limb motor function in patients with locomotor disorders. Similarly, repetitive arm therapy is used for patients with paralysed upper extremities after stroke or spinal cord injury (SCI). Several studies show that arm therapy has positive effects on the rehabilitation progress of stroke patients. It has been observed that more and longer training sessions per week, as well as longer total training periods, have a positive effect on motor function. The finding that rehabilitation progress depends on training intensity motivates the application of robot-aided therapy [1-5]. In contrast to manually assisted movement training, with automated, i.e. robot-assisted, gait and arm training the duration and number of training sessions can be increased while reducing the effort spent by the therapists per patient. Furthermore, the robot provides quantitative measures, thus allowing the observation and evaluation of the rehabilitation process [1]. An example of a typical arm therapy robot is presented in Fig. 1.
Fig. 1 The Zurich arm rehabilitation robot ARMin [3-5]

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 7–9, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

II. PATIENT-RESPONSIVE CONTROL

So-called "patient-responsive" strategies will recognize the patient's movement intention and motor abilities in terms of muscular effort and adapt the robotic assistance to the patient's contribution. The best control strategy will do the same as a qualified human therapist: it will assist the patient's movement only as much as needed. This will allow the patient to actively learn the spatiotemporal patterns of muscle activation associated with normal gait and arm/hand function. The term "patient-responsive" comprises the meanings of compliant, because the robot behaves softly and gently and reacts to the patient's muscular effort; adaptive, because the robot adapts to the patient's motor abilities; and supportive, because the robot helps the patient and does not impose a predefined movement. It is assumed that patient-responsive strategies will maximize the therapeutic outcome. Among the many patient-responsive controllers developed so far [2], one of the most promising is a "path controller", in which not the robot but the patient controls the timing of the movement. The patient can move freely along a path corresponding to a physiological walking pattern and is corrected by a surrounding force field when he/she deviates from the path. An additional supportive force can assist the patient's efforts as much as needed. The strategy can be adapted to the individual patient's capabilities.

III. BIOFEEDBACK AND VIRTUAL REALITY

Optimal training effects during gait therapy depend on appropriate feedback about performance. For the patient, the quality of movement and extent of activity are significant measures of performance that are not easily assessed subjectively, particularly when there are also deficits in sensation, proprioception and cognition. The robotic devices ARMin and Lokomat are instrumented with potentiometers and force transducers and are thus capable of providing online feedback about joint movement and joint moment production, respectively. The feedback values can easily be presented to the patient through graphical, acoustic or tactile displays, motivating him/her to improve the gait pattern during therapy. Special Virtual Reality (VR) techniques are being established that allow patients to perform specific gait or reach-and-grasp tasks. For example, with the Lokomat virtual obstacles can be displayed that must be crossed by the patient (Fig. 2). An acoustic display generates the step sound and other environmental sound sources. Hitting the obstacle can be seen, heard and felt through a 3D screen, a surround sound system and a force displayed by the Lokomat, respectively. A fan produces a wind whose intensity increases with increasing gait speed. Similarly, with the ARMin system any virtual scenario can be generated by multi-modal (visual, acoustic and haptic) displays, allowing the patient to solve any virtual activity of daily living (ADL). These VR tasks can increase the motivation of the patients to participate.

Fig. 2 Lokomat equipped with a multimodal display comprising a stereo projection system, a surround sound system, a wind-generating fan, and the Lokomat providing haptic feedback. Virtual objects can be projected, motivating the patient to lift the leg.

IV. ROBOT-AIDED PATIENT ASSESSMENT

Compared to manual treadmill therapy, there is a loss of physical interaction between therapist and patient with robotic gait retraining. Thus, it is difficult for the therapist to assess the patient's contribution and to provide the necessary feedback and instructions. The values recorded by the robot sensors can provide feedback not only to the patient but also to the therapist, allowing him/her to evaluate the patient's effort and assess the therapeutic progress. Important measures to be assessed are primary and secondary impairments originating from brain or spinal cord injuries, including muscle weakness/strength, muscle tone/spasticity, and active and passive joint range of motion. These measures provide important outcome indicators for the therapist in assessing functional improvement with any therapy. Performing each of these tests during every rehabilitation session would be time-consuming; however, tests that measure these parameters could be implemented through appropriate instrumentation of the robotic devices. Enhancing the robotic trainers in this way is a viable approach because no additional acquisitions are required. For example, force transducers offer a means to evaluate muscle strength and voluntary force. Potentiometers offer a convenient method to extract joint range of motion information. Last but not least, imposing joint movements at different speeds while measuring with the force transducers offers a possibility to evaluate passive joint stiffness as well as active and passive muscle properties.

V. CONCLUSION

The aim of patient-responsive control strategies is to consider voluntary efforts and exploit the remaining natural control capabilities of the central nervous system after damage to the brain or spinal cord. The force information is used to adapt the robotic assistance to the patient's motor abilities, enabling the patient to contribute as much as possible to the movement.
At the same time, force and movement recordings can be displayed to the patient for biofeedback purposes and can serve the therapist in evaluating the long-term results of the movement therapy. The effects of the responsive strategies on the patient can be compared to the behavior of a qualified human therapist,
who moves the patient's limbs with some amount of compliance. It is expected that the above-mentioned patient-cooperative strategies will stimulate active participation by the patient and maximize the therapeutic outcome in terms of reduced therapy duration and improved gait quality. The high potential of future robot-aided treadmill training lies in the combination of robot-assisted training with robot-assisted assessment: only one device is required for both training and assessment. No additional efforts of donning and doffing are necessary, because the patient can use the training device also for the assessment before, during or after the therapy. Furthermore, the instrumented robotic actuation makes training as well as assessment not only repeatable but also recordable. This is an important prerequisite for the intra- and inter-subject comparisons required to assist the therapist in the evaluation of the rehabilitation process. In summary, patient-cooperative rehabilitation robotics has a high potential to make future gait and arm therapy easier, more comfortable and more efficient. However, broad clinical testing is still required to prove these assumptions.
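The path-controller idea described in Section II can be sketched numerically. The reference path, gains and two-dimensional joint space below are invented for illustration; the actual Lokomat controller [2] operates on measured joint angles and interaction forces, not on this toy geometry.

```python
import math

# Toy sketch of a path controller in a 2D joint space (hip, knee angles).
# The timing parameter s is left to the patient: the force field only pulls
# the measured posture q back toward the nearest point of the reference path
# and optionally pushes along the path. All numbers are illustrative.

def reference_path(s):
    """Hypothetical cyclic gait path in joint space, s in [0, 1)."""
    hip = 0.4 * math.sin(2 * math.pi * s)
    knee = 0.3 * (1 - math.cos(2 * math.pi * s))
    return (hip, knee)

def path_controller_force(q, k_path=50.0, f_support=2.0, n_samples=200):
    """Corrective plus supportive force for a posture q = (hip, knee)."""
    # Nearest point on the path: the patient implicitly controls the timing,
    # because correction is position-based, not time-based.
    s_star = min((i / n_samples for i in range(n_samples)),
                 key=lambda s: (q[0] - reference_path(s)[0]) ** 2 +
                               (q[1] - reference_path(s)[1]) ** 2)
    p = reference_path(s_star)
    # Spring-like field pulling the posture back onto the path
    f_corr = (k_path * (p[0] - q[0]), k_path * (p[1] - q[1]))
    # Supportive force along the path tangent (finite-difference direction)
    ds = 1.0 / n_samples
    p_next = reference_path((s_star + ds) % 1.0)
    norm = math.hypot(p_next[0] - p[0], p_next[1] - p[1]) or 1.0
    f_supp = (f_support * (p_next[0] - p[0]) / norm,
              f_support * (p_next[1] - p[1]) / norm)
    return (f_corr[0] + f_supp[0], f_corr[1] + f_supp[1])

force = path_controller_force((0.0, 0.35))
```

On the path the corrective term vanishes and only the small supportive force remains; far from the path the spring-like field dominates, which mirrors the "assist only as much as needed" principle.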
ACKNOWLEDGEMENTS

I thank my team of the Sensory-Motor Systems Lab, ETH and University Zurich, and the team of the SCI Center, Balgrist University Hospital, for their contributions to this work. Special thanks go to Dr. Matjaz Mihelj, who contributed major parts of the ARMin developments. This project was partially supported by the Swiss National Science Foundation NCCR Neuro (project 8), the European Union (Marie-Curie Project "MIMARS") and the Bangerter-Rhyner-Foundation, Switzerland.

REFERENCES

[1] Riener, R., Lünenburger, L., Colombo, G. (2006) Human-centered robotics applied to gait training and assessment. Journal of Rehabilitation Research & Development 43, pp. 679-694.
[2] Riener, R., Lünenburger, L., Jezernik, S., Anderschitz, M., Colombo, G., Dietz, V. (2005) Cooperative subject-centered strategies for robot-aided treadmill training: first experimental results. IEEE Transactions on Neural Systems and Rehabilitation Engineering 13, pp. 380-393.
[3] Riener, R., Nef, T., Colombo, G. (2005) Robot-aided neurorehabilitation for the upper extremities. Medical & Biological Engineering & Computing 43, pp. 2-10.
[4] Mihelj, M., Nef, T., Riener, R. (2006) A novel paradigm for patient-cooperative control of upper limb rehabilitation robots. Advanced Robotics, in press.
[5] Nef, T., Mihelj, M., Colombo, G., Riener, R. (2006) ARMin – Robot for rehabilitation of the upper extremities. IEEE Int. Conference on Robotics and Automation, ICRA 2006, Orlando, pp. 3152-3157.
Author: Robert Riener
Institute: ETH Zürich, SMS-Laboratory
Street: Tannenstrasse 1, TAN E 4
City: 8092 Zürich
Country: Switzerland
Email:
[email protected]
Systemic Electroporation – Combining Electric Pulses with Bioactive Agents Eberhard Neumann Physical and Biophysical Chemistry, Faculty of Chemistry, University of Bielefeld, Bielefeld, Germany Abstract— In the year 2007, the documentation of the first functionally effective electrotransfer of naked DNA by electroporation, with stable gene expression, is 25 years old. This first functional electro-uptake has been preceded, in 1972, by the first documentation of controlled electrorelease of cellular components from bovine medullar chromaffin granules. - In the meantime, the electroporation field pulse techniques combined with the application of bioactive agents, have culminated in the new clinical disciplines of electrochemotherapy and electrogenetherapy. There are continuously ongoing efforts to improve the pulse protocols by optimizing equipment and cell biological strategies, relying heavily on the increasing knowledge on molecular-mechanistic details derived from the various electroporation data. - The digression here is restricted to a survey-like appreciation of the early functional electroporation data and their thermodynamic and mechanistic interpretation. We briefly touch the potential to explore systemic electroporation for the treatments of tissue, in particular, tumors. The goal is to provide tools in order to optimize systemic electroporation protocols and the design of electrode arrays for clinical use. Keywords— DNA electrotransfer, electrorelease, dipole rearrangements, systemic electroporation
I. INTRODUCTION

Medicon 2007 also happens to be the frame of the 25th anniversary of the first documentation of "The electric pulse method for the stable transformation of biological cells with naked gene DNA", celebrating the first functionally effective electro-transfer of nonviral DNA by electroporation with stable gene expression in 1982 [1,2]. Complementary to electro-uptake, ten years earlier, in 1972, the electric pulse technique had been used to achieve the first non-destructive electro-release of cellular components such as catecholamines, ATP and chromogranin proteins from isolated chromaffin granules of bovine adrenal medullae [3]. These initial physical-chemical studies on the field-controlled electroporative uptake and release of molecules have recently been valued in Nature Methods [4] as seminal for the various biotechnological and medical applications of what now may be called "systemic electroporation", i.e., the application of voltage pulses combined with bioactive agents.
The recent developments of the techniques of systemic electroporation culminate in the new clinical disciplines of electrochemotherapy and gene electrotherapy (L. Mir, R. Heller, D. Miklavcic, J. Teissie, G. Sersa, et al.). The following digression is restricted to a critical appreciation of the early data of functional electroporation and the early molecular-mechanistic proposals for pore formation in high electric fields, along some of the original references (E. Neumann et al.).

II. PORATION THERMODYNAMICS

A. Electrochemical ansatz

Already in 1982, a simple chemical scheme for the overall structural transition from a cascade of field-sensitive closed membrane states (C) to the sequence of porous states (P) was formulated as:

$$ \mathrm{(C)} \;\rightleftharpoons\; \mathrm{(P)} \qquad (1) $$
The overall distribution equilibrium constant of this scheme is specified as the state density ratio K = (P)/(C) = f/(1 − f), where the fraction of pore states is f = (P)/[(P) + (C)]. It is found that both K and f increase with increasing field, maximally up to a few tenths of a percent in cells and a few percent in curved lipid bilayers. A general analytical ansatz has been specified in terms of a generalized van 't Hoff relationship, covering, as a total differential, changes dT in temperature T (in kelvin), changes dp in pressure p, and changes dE in the strength E of the locally active electric field, relative to the molar energy unit RT:

RT · d ln K = (ΔH°/T)_{p,E} dT − (ΔV°)_{T,E} dp + (ΔM°)_{p,T} dE    (2)
Here R = k·N_A is the gas constant, k the Boltzmann constant and N_A the Loschmidt-Avogadro constant. Note that Eq. (2) refers to three physical poration phenomena. Electroporation is characterized by the standard value ΔM° = M(P) − M(C) of the reaction dipole moment for the C to P transition, sono-poration by the standard reaction volume ΔV°, and thermo-poration, including laser optoporation, by the standard reaction enthalpy ΔH°. Note that ΔH° is the total energy at constant pressure p, at a given
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 18–21, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
temperature T and field strength E. Note that, for field effects, we have to specify the reaction enthalpy as

ΔH = ΔĜ + T·ΔS    (3)
where the work potential ΔĜ = ΔG − E·M is the Legendre-transformed Gibbs reaction energy, ΔG the ordinary Gibbs reaction energy in the field E, M the projection of the total electric moment vector M onto the direction of E, and ΔS the reaction entropy, all at constant outside pressure. It is worth mentioning that Eq. (3) reflects the first law of electro-thermodynamics at constant pressure in a canonical ensemble, i.e., a reactive system neighboured by other molecules. The electro-thermodynamic standard term ΔĜ° is the reversible extra work, and T·ΔS° refers to the reversible, i.e., exchangeable, heat energy.

B. Chemical Electrothermodynamics

The connection between the experimental signal S, the fractional effect f = S/S_max, and the electrothermodynamic quantities is via the relationship K = f/(1 − f). The field dependence of K (and thus of the fraction f) is given by:

K(E) = K(0) · exp X(E)    (4)
Eq. (4) reflects the electrochemical equilibrium condition ΔĜ = 0, such that for dipolar molecular organizations RT · ln K = −ΔĜ° results. The field effect factor is specified as

X(E_loc) = (1/RT) ∫ ΔM · dE_loc    (5)
where E_loc is the local field. If we refer to permanent dipoles, E_loc is the directing field E_dir, calculated from the induced membrane field given by:

E_m = − Δϕ_m / d_m    (6)
where d_m is the membrane thickness and Δϕ_m the potential difference across the membrane. According to Maxwell, the electric current density vector for cross-membrane cation and anion flows is given by:

j_m = σ_m (−∇ϕ_m) = σ_m E_m    (7)
where σ_m is the membrane conductivity (of all the pores). The stationary value of the (Maxwell-Wagner) polarization-induced electric potential difference Δϕ_m (or induced membrane potential) for spherical membranes with the scalar value a of the radius vector r, under the angle θ to the direction of the homogeneous external field vector E, is given by:
Δϕ_m(θ) = − (3/2) · a · E · f_σ · cosθ    (8)
applicable for the description of current flows, consistent with Eq. (7), through the two hemispheres of a spherical membrane shell. Integration within the boundaries θ = 0, π yields the average membrane potential:

⟨Δϕ_m⟩ = − (3/2) · ⟨cosθ⟩ · E · a ≈ − E · a    (9)
as a practically useful approximation for isolated cells as well as for densely packed cells in cell pellets or tissue. The average membrane field amplification is then given by:

E_m ≈ E · a / d_m    (10)
For instance, the amplification factor of a sphere with radius a = 10 µm and d_m = 10 nm is a/d_m = 10³. Finally, substitutions into Eqs. (4) and (5) result in expressions which permit determination of the reaction dipole moment or the polarization volume (giving the pore radius). Substitution of the respective expressions for K = f/(1 − f) yields the relationship for the field-induced fractional change, Δf = f(E) − f(0), relative to the zero-field fluctuation term f(0):

Δf = K₀ exp X / (1 + K₀ exp X) − K₀ / (1 + K₀) ≈ K₀ (exp X − 1) / (1 + K₀ exp X)    (11)
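Eqs. (10) and (11) are simple enough to evaluate numerically. A minimal Python sketch, with illustrative parameter values (the field strength, K₀ and X below are assumptions for demonstration, not values from the paper):

```python
import math

def membrane_field(E, a, d_m):
    """Average membrane field, Eq. (10): E_m ~ E * a / d_m."""
    return E * a / d_m

def delta_f(K0, X):
    """Field-induced fractional change of pore states, Eq. (11),
    in the small-K0 form K0*(exp X - 1)/(1 + K0*exp X)."""
    return K0 * (math.exp(X) - 1.0) / (1.0 + K0 * math.exp(X))

# Illustrative values (assumptions, not taken from the paper):
a, d_m = 10e-6, 10e-9   # cell radius 10 um, membrane thickness 10 nm
E = 1e5                 # external field of 1 kV/cm, in V/m
E_m = membrane_field(E, a, d_m)   # amplified by a/d_m = 1e3
df = delta_f(1e-4, 5.0)           # K0 and X chosen arbitrarily
print(E_m, df)
```

For small K₀ the expression reduces to Δf ≈ K₀(exp X − 1), illustrating the strongly supralinear field dependence of the pore fraction.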
In the case that, at E = 0, the condition K₀ ≪ 1 holds true, the second part of Eq. (11) is a very good approximation of the first part. Eq. (11) has been frequently used to determine reaction dipole moments and/or reaction volumes, and thus average pore radii, independent of the selective assumption of a balance of electric free energy and line/surface tension energies. It is also important to note that the exponential time courses indicate that it is the number of defined pores of a given type which increases with increasing field strength, and that the data do not indicate continuous pore expansion.

C. Reaction couplings and normal modes
In practical work, it has turned out that the overall state (C) in scheme (1) represents a whole cascade of several steps (C₁ ⇌ C₂ ⇌ …) and that (P), too, must be replaced with (P₁ ⇌ P₂ ⇌ …). The electro-thermodynamic analysis is then more complex as well. For instance, if the kinetic data are consistent with the state transition cascade C ⇌ P₁ ⇌ P₂, the respective terms are K₁ = P₁/C and f₁ = P₁/(P₁ + C), and K₂ = P₂/P₁ and f₂ = P₂/(P₁ + P₂), both steps associated with the respective separate reaction moments and polarization volumes. The single normal modes are characterized by the field-dependent normal mode relaxation times and amplitudes, yielding the molecular polarization volume of each mode, and thus the respective pore radii and fractions of pores. The data on cells, isolated or in densely packed cell pellets, and on lipid vesicles, are consistent with at least two types of pores. The first type is a kind of perm-selective "Nernst-Planck" pore, permitting transport either of cations or of anions, which thus go separately through different pores. So, on average, half of the pores transport cations and, in parallel, the other half counter-transport anions. Transport of this type is a kind of ion exchange, e.g., cations go in on one hemisphere and go out on the other, such that there is no net transport of ions, for instance, out of the compartment. The data further suggest that these ion-exchange pores (of radius r = 0.8 ± 0.2 nm) can, at a higher pore density, develop into larger pores at the expense of the smaller ones. The larger pores (r ≥ 1 nm) permit net transport of ions, for instance net outflow from a cell of higher internal ion concentration than the outside. Further progress in the theory shows that the observation of field strength thresholds for measurable signals, as quantified in strength-duration curves, can be rationalized as a limit in the experimental detection (a "visibility" threshold). The experimental threshold certainly does not just reflect the energetic balance of electric polarization and surface tension and (counter-acting) line tension.

III. ELECTROPORATIVE TRANSPORT

In essence, the field primarily acts on the structure, leading to pores of defined size. Concomitant transport is thus structurally controlled: the permeability changes are based both on the poration steps and on the resealing processes.
Therefore the kinetics of electroporative transport also reflects the underlying structural changes, i.e., the number and type of transport passages. Proper analysis provides, on the one hand, permeability coefficients for the diffusion of molecules across concentration gradients; on the other hand, we obtain the kinetic parameters of the field-dependent pore formation and resealing processes. It is recalled that the mole flow (mol/s) of ion transport must be formulated in terms of the recently introduced concept of time-dependent transport coefficients. These coefficients explicitly reflect the, usually exponential, kinetics of the poration processes, and the flow coefficient of resealing indicates the much slower process of pore resealing. In any case, the measured transport curves, like the time courses of conductance changes, are therefore exponentials of exponentials. These can be mistaken for smeared exponentials of the "Kohlrausch type". Specifically, the time courses reflect, in a folded form, the change in the fraction of pores, because the mole flow is proportional to the flow area, i.e., to the increasing or decreasing number of pores. Therefore, proper analysis starts with the mole flow, and not with the mole flow density (mol/(s·m²)) or mole flux (= flow per area), in order to rationalize time-dependent flow coefficients. In addition, the value of the flow coefficient at the time point of the end of the applied pulse yields the kinetic parameters for the rate-limiting (primary) structural transitions preceding the (secondary) transport processes.

IV. LOCAL LIPID REARRANGEMENTS (PORES)
In molecular-mechanistic terms, it had been presumed and explicitly indicated already in 1982 that the field forces cause lipid rearrangements such that locally the charged groups and dipolar lipid head groups form a specific pore wall, as in hydrophilic or inverted pores. The dipolar head groups were drawn as aligned parallel to the external field direction [1]. This presumption has recently been supported both by relaxation kinetic data obtained with small lipid bilayer vesicles and by molecular dynamics simulations of the molecular rearrangements of lipid and water molecules involved in membrane electroporation. The technique of cell electroporation has recently been extended to ultra-short pulses with nominally very high external electric field strengths. At these high external field strengths and short rise times, probably the rapid dielectric polarization of interfaces, besides the slower Maxwell-Wagner ionic polarization, may rapidly induce locally larger fields which, in turn, primarily affect the intracellular organelles. Ultra-electroporation is supposed to have powerful clinical potential, for instance for inducing cell apoptosis in malignant tissue (K. Schoenbach, J.C. Weaver). Analytically, if the experimental electroporation times are expressed as a function of the respective directing field (calculated from the external field using the dielectric constant of the polar environment), the entire field strength range, from moderate field strengths and short pulse times up to the very high external fields of the ultra-short pulses, can be consistently described with one and the same permanent dipole moment. The electro-thermodynamic analysis yields a mean dipole moment of 91.7 (± 5.0)·10⁻³⁰ C·m (27.4 ± 1.4 D) associated with the elementary unit involved in a defined dipolar rotation process.
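The unit conversion behind these dipole moment values can be checked directly, using 1 D ≈ 3.33564 × 10⁻³⁰ C·m; a small sketch:

```python
DEBYE = 3.33564e-30  # C*m per debye

def to_debye(p_cm):
    """Convert a dipole moment from C*m to debye units."""
    return p_cm / DEBYE

# Values quoted in the text:
print(to_debye(91.7e-30))  # elementary poration unit, about 27.5 D
print(to_debye(70e-30))    # phosphatidylcholine head group, about 21 D
```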
If we compare this value with the dipole moment of the zwitterionic phosphatidylcholine head group of (70 ± 5)·10⁻³⁰ C·m (21 ± 2 D), we may conclude that the hydrated ionic and dipolar lipid head groups, where the water molecules in the asymmetric hydration shells of the ionic groups lead to higher dipole moments, are the molecular receptors for the interaction of the local field with the membranes. Further on, we quantify rotations of these dipolar field receptors into field-parallel positions in the walls of the hydrophilic (inverted) electropores as one type of the dominant elementary processes in membrane electroporation. Besides the structural (C) ⇌ (P) model and the suggestion of direct field effects on the dipolar lipid head groups, the membrane electroporation processes have been viewed as a structural hysteresis with rapid in-field electric pore formation and much slower field-off resealing [5]. Since the late seventies, purely physical theories have been developed, and they continue to provide additional deeper insights (K. Kinosita, Jr., T.Y. Tsong, Y.A. Chizmadzhev, J.C. Weaver) into the physical details of membrane electroporation. On the level of densely packed cells and tissue, conductivity data have been interpreted in terms of tissue models. For instance, field distributions across tissue between electrodes have been calculated (D. Miklavcic) and successfully applied to the design of new electrode arrays and medical electroporators.

V. CONCLUSIONS

In summary, the first functional electrotransfer of DNA in 1982 was the starting point of many fruitful studies, also aiming at the optimization of the guidelines for cell biological and clinical electroporation protocols and at the development of new electrode arrays and electroporators such as the semi-automatic EU Cliniporator©.
ACKNOWLEDGMENT We gratefully acknowledge financial support by the DFG, Bonn, grants Ne-227/1-9; by the ministry MSWF of the Land NRW for grant ELMINOS; by the European Union, Brussels, center grant QLK3-CT-1999-00484.
REFERENCES

1. Neumann E, Schaefer-Ridder M, Wang Y, Hofschneider PH (1982) Gene transfer into mouse lyoma cells by electroporation in high electric fields. EMBO J 1:841-845
2. Wong TK, Neumann E (1982) Electric field mediated gene transfer. Biochem Biophys Res Commun 107:584-587
3. Neumann E, Rosenheck K (1972) Permeability changes induced by electric impulses in vesicular membranes. J Membr Biol 10:279-290; and (1973) J Membr Biol 14:194-196
4. Eisenstein M (2006) A look back: a shock to the system. Nature Methods 3:66
5. Neumann E (1989) The relaxation hysteresis of membrane electroporation. In: Neumann E, Sowers AE, Jordan CA (eds) Electroporation and electrofusion in cell biology. Plenum Press, pp 61-82

Author: Eberhard Neumann
Institute: Physical and Biophysical Chemistry, Faculty of Chemistry, University of Bielefeld
Street: P.O. Box 100 131
City: Bielefeld
Country: Germany
Email:
[email protected]
An algorithm for classification of ambulatory ECG leads according to type of transient ischemic episodes

A. Smrdel¹ and F. Jager¹

¹ Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
Abstract— We developed and evaluated an algorithm for classification of ECG leads in ambulatory records according to the type of transient ischemic ST segment episodes, using the Long-Term ST Database. The algorithm robustly generates the ST level function in each ECG lead, tracks non-ischemic ST changes to construct the ST reference function, and subtracts it from the ST level function to obtain the ST deviation function. Then, using a statistical moment of the histogram of the ST deviation function of a given lead, the algorithm classifies the lead according to the type of transient ischemic ST episodes (elevations, depressions). The algorithm correctly classified all 9 ECG leads with elevated and 89 out of 90 ECG leads with depressed transient ischemic ST episodes.
Keywords— Ambulatory electrocardiogram, transient ischemia, ECG lead classification, Long-Term ST Database.

I. INTRODUCTION

Ischemia is one of the most common heart diseases. It is a state in which the supply of oxygenated blood to the heart muscle is insufficient; it can lead to myocardial infarction and, in the worst case, death. In the electrocardiogram (ECG), ischemia is manifested as a change of ST segment morphology (ischemic ST segment changes). In addition, non-ischemic changes of ST segment morphology are similar to true ischemic changes and include: heart-rate related ST segment changes, changes of ST segment level due to sudden shifts of the electrical axis of the heart (axis shifts) caused by body position changes, and slow drifts of ST segment level due to diurnal changes or effects of medications. Ischemic changes and non-ischemic heart-rate related changes constitute transient ischemic and transient heart-rate related ST segment episodes, respectively. According to the shift of the ST segment level (positive or negative), the ST segment episodes are elevated or depressed. If we observe ambulatory ECG records, we notice that the majority of ECG leads contain only one type of episode, elevations or depressions, while only a small number of leads contain both types. We can therefore define the lead orientation as: positive (only elevations present), negative (only depressions), mixed (elevations and depressions), and no orientation (no episodes). Elevated ischemic ST episodes, as we can observe in ambulatory ECG records of the Long-Term ST Database (LTST DB) [1], usually appear in patients with severe cases of ischemic heart disease, such as Prinzmetal's angina, while depressed ischemic episodes appear in patients with milder cases of ischemic heart disease, such as stable angina. Furthermore, an elevated ST segment indicates high risk of mortality [2], an ischemic injury [3], or an acute myocardial infarction [4]. As a diagnostic tool, it would be beneficial to be able to determine the severity of the heart disease by analyzing ambulatory ECG records and determining the orientation of leads. In this paper we present an algorithm to classify ambulatory ECG leads according to their orientation. The algorithm was developed and evaluated using the LTST DB.
II. MATERIAL AND METHODS

The LTST DB includes 86 two- and three-lead 24-hour ambulatory records (190 ECG leads), obtained during daily clinical practice. Each record contains reference annotations (set by human expert annotators) for ischemic and heart-rate related ST segment episodes, and human expert reference annotations that define the time-varying ST segment reference level (non-ischemic path). By subtracting the time-varying ST segment reference level from the actual ST segment level, the ST segment deviation level, or the ST segment deviation function, is obtained, in which transient ischemic and heart-rate related ST segment episodes are annotated. These ST segment episodes were annotated according to annotation criteria, which state that:

• an episode begins when the magnitude of the ST deviation function first exceeds 50 μV;
• the ST deviation function must reach a magnitude of Vmin or more throughout an interval of at least Tmin s;
• the episode ends when the ST deviation function becomes smaller than 50 μV, provided that it does not exceed 50 μV in the following 30 s.
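The three criteria above can be sketched as a simple scan over a sampled ST deviation function. This is an illustrative reading of the criteria, not the LTST DB annotators' actual procedure; Vmin and Tmin are passed in as parameters:

```python
def find_episodes(d, dt, v_min, t_min, v_on=50.0, t_off=30.0):
    """Scan an ST deviation function d (uV, sampled every dt seconds) for
    episodes: onset when |d| exceeds v_on; validity requires |d| >= v_min
    continuously for at least t_min s; offset when |d| stays below v_on
    for t_off s.  Illustrative sketch only.  Returns (start, end) indices."""
    episodes, i, n = [], 0, len(d)
    off_run = int(round(t_off / dt))
    while i < n:
        if abs(d[i]) > v_on:                      # candidate onset
            j = i
            while j < n:
                if abs(d[j]) < v_on:
                    # episode ends only if |d| stays below v_on for t_off s
                    if all(abs(x) <= v_on for x in d[j:j + off_run]):
                        break
                j += 1
            seg = d[i:j]
            # longest continuous run with |d| >= v_min must last >= t_min
            best = run = 0
            for x in seg:
                run = run + 1 if abs(x) >= v_min else 0
                best = max(best, run)
            if best * dt >= t_min:
                episodes.append((i, j))
            i = j + off_run
        else:
            i += 1
    return episodes
```

With protocol B parameters (v_min=100, t_min=30), a 40 s excursion of 120 μV qualifies as an episode, while a 10 s excursion does not.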
Values for Vmin and Tmin differ according to three annotation protocols:

• Protocol A: Vmin = 75 μV, Tmin = 30 s;
• Protocol B: Vmin = 100 μV, Tmin = 30 s;
• Protocol C: Vmin = 100 μV, Tmin = 60 s.

For this study we chose reference annotations according to protocol B. For each lead of each record of the LTST DB, we
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 34–37, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
visually verified the episodes in order to determine the lead orientation: if the extrema of all episodes in the lead were positive, the lead has positive orientation; if the extrema of all episodes were negative, the lead has negative orientation; if the episodes had negative and positive extrema, the lead orientation is mixed; and if there were no episodes, the lead has no orientation. In our study we included leads with transient ischemic ST episodes only (104 leads), and leads without any ischemic or heart-rate related ST episodes (42 leads). Of the former, 9 have elevated episodes only, 90 have depressed episodes only, while 5 leads have both elevated and depressed episodes. Leads which contain heart-rate related episodes, or both ischemic and heart-rate related episodes, were excluded (44 leads). The algorithm developed for classification of leads according to their orientation is based on our previous work [5,6]. It is composed of the following modules.

A. Preprocessing

The algorithm initially robustly preprocesses the raw ECG data, and detects noise and abnormal heart beats. To further reduce the effects of noise, average heart beats are constructed: each normal heart beat in the raw ECG signal is replaced with an average heart beat, constructed from the normal non-noisy heart beats in the 16 s neighborhood of the current heart beat. To construct the ST level function of a given lead, the algorithm first searches for the isoelectric levels and J points in each average heart beat. To determine the isoelectric level, I(j), where j denotes the average heart beat number, the algorithm searches for the "flattest" part of the signal from the fiducial point, FP(j), backwards [5]. For the position of the J point, J(j), the algorithm searches forward from FP(j) for a part of the waveform that "starts to flatten" [5].
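The averaging step can be sketched as follows, assuming beats are stored as equal-length arrays with known fiducial times and a noise mask (this data layout is an assumption, not the paper's implementation):

```python
import numpy as np

def average_beats(beats, t, noisy, half_win=8.0):
    """Replace each heart beat with the average of the non-noisy beats
    whose fiducial times lie within +/- half_win s (a 16 s window) of it.
    beats: (n_beats, n_samples) array; t: fiducial times in s;
    noisy: boolean mask marking beats to exclude from averaging."""
    beats = np.asarray(beats, float)
    t, noisy = np.asarray(t, float), np.asarray(noisy, bool)
    out = beats.copy()
    for j in range(len(beats)):
        sel = (np.abs(t - t[j]) <= half_win) & ~noisy
        if sel.any():
            out[j] = beats[sel].mean(axis=0)   # average of the neighborhood
    return out
```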
After that, the ST segment amplitude is measured in each average heart beat and in each lead as the difference between the amplitudes at the point of measurement of the ST segment level, J(j) + 80 ms, and the isoelectric level, I(j). These measurements constitute the 'fine' ST level function, s(i,j), where i denotes the lead number. This function is then resampled (ΔT = 2 s) and smoothed to obtain the 'raw' ST level function, s(i,k), where k denotes the sample number. Examples of the ST level function derived by the human expert annotators of the LTST DB and derived by the algorithm are shown in Fig. 1.b and 1.d, respectively.

B. Tracking of slow drifts and detection of axis shifts

Non-ischemic events have to be excluded from the ST level function. The algorithm tracks the non-ischemic path in the ST level function to create the ST reference function,
which is then subtracted from the ST level function to obtain the ST deviation function. In the first step, a slowly varying reference trend is tracked. Filters of 6 h 40 min (h_g) and 5 min (h_l) in duration are applied to obtain the time-varying global, r_g(i,k), and local, r_l(i,k), ST reference level trends, respectively. Then the estimate of the reference level, r₁(i,k), which tracks slow drifts but skips faster events and episodes, is obtained using:

r₁(i,k) = { r_g(i,k) : if |r_g(i,k) − r_l(i,k)| > 50 μV
          { r_l(i,k) : otherwise.    (1)
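A minimal sketch of Eq. (1), substituting plain moving averages for the filters h_g and h_l (an assumption; the paper does not specify the filter type):

```python
import numpy as np

def reference_trend(s, dt=2.0, t_g=6*3600 + 40*60, t_l=5*60, k=50.0):
    """Eq. (1): choose between a global (6 h 40 min) and a local (5 min)
    trend of the ST level function s (uV, sampled every dt s).  Moving
    averages stand in for the paper's filters h_g, h_l (assumption)."""
    def movavg(x, width_s):
        # kernel width in samples, clipped to the signal length
        w = max(1, min(len(x), int(round(width_s / dt))))
        return np.convolve(x, np.ones(w) / w, mode="same")
    rg, rl = movavg(s, t_g), movavg(s, t_l)
    # take the global trend only where the two disagree by more than k uV
    return np.where(np.abs(rg - rl) > k, rg, rl)
```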
In the second step, axis shifts due to body position changes are detected. To detect step changes of the ST level function, the ST level function and the first-order Mahalanobis distance functions of the QRS complex and ST segment Karhunen-Loève (KL) coefficient feature vectors are used. Axis shifts are detected in all three functions as a step change which has a flat interval before and after it. In each of the three functions, the algorithm first searches for a flat interval of T_f = 216 s in length, which has to have its mean absolute deviation from its own mean less than K_f = 20 μV for the ST level function, and less than Σ_F = 0.33 SD (SD - standard deviation) for both first-order Mahalanobis distance functions. This has to be followed by a step change, characterized by the moving average value over T_a = 72 s in length, which has to change by at least K_S = 50 μV for the ST level function, by at least Θ_QRS = 0.5 SD for the QRS KL distance function, and by at least Θ_ST = 0.4 SD for the ST KL distance function, within the next 2T_a = 144 s. This step change has to be followed by another flat interval in all three functions, defined the same as the first flat interval. In the intervals surrounding the step change, the ST reference function is updated, following:

r₂(i,k) = { r₁(i,k) : if |r_g(i,k) − s(i,k)| < 50 μV ∧ |r_l(i,k) − s(i,k)| < 50 μV
          { s(i,k)  : otherwise.    (2)
It means that those parts of the ST reference function coinciding with sudden step changes are replaced by the ST level function. By subtracting the ST reference function of the lead from the ST level function, the ST deviation function, d(i,k), is finally constructed:

d(i,k) = s(i,k) − r₂(i,k).    (3)
An example of an ST deviation function, which resembles the one constructed by the human experts (Fig. 1.c), is shown in Fig. 1.f.
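Eqs. (2) and (3) can be sketched as follows, with the detected axis-shift intervals supplied as a boolean mask (a simplification of the three-function detector described above):

```python
import numpy as np

def deviation_function(s, r1, rg, rl, shift_mask, k=50.0):
    """Eqs. (2)-(3): around detected axis shifts (shift_mask True), keep
    r1 only where both the global and local trends stay within k uV of
    the ST level function s; otherwise follow s itself.  Then d = s - r2.
    All inputs are equal-length numpy arrays (assumed layout)."""
    r2 = r1.copy()
    near = shift_mask & ~((np.abs(rg - s) < k) & (np.abs(rl - s) < k))
    r2[near] = s[near]          # step change absorbed into the reference
    return s - r2               # Eq. (3)
```

Around a detected step the deviation is forced to zero, so the axis shift does not masquerade as an ST episode.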
Fig. 1 Time trends of a three-lead record s30661 (shown is the first lead) from the LTST DB (a 6-hour excerpt from the 24-hour record is shown, starting 10 hours after the start of the recording). Legend: (a) heart rate [bpm]; (b) ST level function as derived by the annotators of the LTST DB [μV]; (c) ST deviation function as derived by the annotators [μV]; (d) ST level function as derived by the algorithm [μV]; (e) ST reference function as derived by the algorithm [μV]; (f) ST deviation function as derived by the algorithm [μV]; (g) an axis shift as detected by the algorithm (a vertical tic above the line), and two ST segment episodes and two axis shifts (vertical tics below the line) as annotated by the expert annotators.

C. Determination of lead orientation

To determine the lead orientation, the samples of the ST deviation function of an ECG lead, d(i,k), are considered as samples of a random variable used to construct a histogram of this function. An example of such a histogram for the first lead of record s30661 is shown in Fig. 2. Next, the z-th statistical moment above the threshold K_S = 50 μV,

m_z+(i, K_S) = (1/M) Σ_{x = K_S .. B} (x − K_S)^z N(i,x),    (4)

and the z-th statistical moment below the threshold −K_S,

m_z−(i, −K_S) = (1/M) Σ_{x = −B .. −K_S} (x + K_S)^z N(i,x),    (5)

are constructed, where z denotes the moment used, N(i,x) represents the number of samples with value x in the histogram, M is the number of samples of d(i,k), and B = 1500 μV defines the bounds between which the histogram is constructed. From these two moments the algorithm determines the lead orientation using the following rule:

O(i) = { p : if m_z+(i, K_S) − m_z−(i, −K_S) > (1/M) K_C
       { n : if m_z+(i, K_S) − m_z−(i, −K_S) < −(1/M) K_C    (6)
       { u : otherwise,

where p, n, and u denote positive, negative and uncertain orientation, respectively, and K_C is the threshold for lead classification. To optimize the algorithm, the first, second, and third moments were investigated for various values of K_C. As the optimization constraint, the minimum number of leads containing only elevations falsely classified as containing depressions, together with the minimum number of leads containing only depressions falsely classified as containing elevations, was chosen. The optimal values for K_C according to the optimization constraints are: 2 × 10³ (μV) for the first, 75 × 10³ (μV)² for the second, and 3.75 × 10⁶ (μV)³ for the third moment. An example in Fig. 2 demonstrates detection of a lead orientation.

Fig. 2 Histogram of the ST deviation function of the first lead, d(1,k), of record s30661 of the LTST DB. Ischemic episodes in this lead are depressions. See text.
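Eqs. (4)-(6) can be sketched directly on the samples of d(i,k) rather than on an explicit histogram, which is equivalent. The magnitude form of the negative moment is an interpretive assumption, chosen so that depressions yield a negative difference; the default k_c corresponds to the paper's third-moment optimum:

```python
import numpy as np

def lead_orientation(d, z=3, k_s=50.0, k_c=3.75e6):
    """Eqs. (4)-(6): z-th moments of the ST deviation function d (uV)
    above +k_s and below -k_s, compared against the threshold k_c/M.
    Negative moment taken in magnitude form (interpretive assumption)."""
    d = np.asarray(d, float)
    m = len(d)
    m_pos = np.sum((d[d > k_s] - k_s) ** z) / m     # Eq. (4)
    m_neg = np.sum((-d[d < -k_s] - k_s) ** z) / m   # Eq. (5), magnitudes
    diff = m_pos - m_neg
    if diff > k_c / m:
        return "p"                                  # positive orientation
    if diff < -k_c / m:
        return "n"                                  # negative orientation
    return "u"                                      # uncertain
```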
III. RESULTS

Table 1 summarizes the results of the ECG lead classification using the first, second and third moments and the optimal thresholds K_C. The results show that the algorithm, using the first, second or third moment, correctly classified all 9 leads with elevations as having positive orientation. Using the first moment, the algorithm correctly classified 86 out of 90 leads with depressions as having negative orientation, while three leads were classified as having positive orientation and one as uncertain. Using the second (third) moment, the algorithm correctly classified 87 (89) leads with depressions as having negative orientation, while three (one) leads were classified as uncertain. Regardless of the moment used, the classification of the leads with the mixed type of episodes was poor. The classification of the leads without episodes showed that the best results were obtained using the third moment, when the algorithm classified 15 out of 42 leads as uncertain.
Table 1 Results of the ECG lead classification using the first, second and third moment with the optimal threshold, according to the reference annotations for protocol B of the LTST DB. See text.

                 Positive  Negative  Uncertain
First moment
  Elevations         9         0         0
  Depressions        3        86         1
  Mixed              2         2         1
  No episodes       10        24         8
Second moment
  Elevations         9         0         0
  Depressions        0        87         3
  Mixed              3         2         0
  No episodes       10        20        12
Third moment
  Elevations         9         0         0
  Depressions        0        89         1
  Mixed              3         2         0
  No episodes        9        18        15

IV. DISCUSSION

The results showed that the algorithm developed is efficient in classifying ECG leads with only elevated or only depressed transient ischemic ST segment episodes. Using the third moment, almost all leads with either elevations or depressions were correctly classified. The main reason for uncertain classification of leads with mixed types of episodes is the different number of episodes of each type in such leads: the larger number of episodes of one type prevails over the other, so that the orientation is shown as either positive or negative. The developed algorithm also did not perform well when classifying leads containing no ischemic episodes. To better evaluate leads with mixed types of episodes and leads containing no episodes, rule (6) will need to be improved, and a more sophisticated method for determining the lead orientation will be investigated. Although the best results were achieved using the third moment, the choice of the moment to use depends predominantly on the intended application. In applications that would follow protocol A of the LTST DB, the use of lower amplitude thresholds might be preferable; in this case the use of the second or the first moment might be desirable. We conclude that the algorithm developed showed good performance in classification of ECG leads, will be improved further, and could be a valuable tool to determine the severity of ischemic heart disease.

REFERENCES

1. Jager F, Taddei A et al (2003) Long-term ST database: a reference for the development and evaluation of automated ischaemia detectors and for the study of the dynamics of myocardial ischaemia. Med Biol Eng Comput 41:172-182
2. Berger PB (2006) Acute Myocardial Infarction. ACP Medicine, http://www.acpmedicine.com/dxrx/dxrx0108.htm
3. Kleber AG (2000) ST segment elevation in the electrocardiogram: a sign of myocardial ischemia. Cardiovasc Res 45:111-118
4. Wang K et al (2003) ST-Segment Elevation in Conditions Other Than Acute Myocardial Infarction. N Engl J Med 349:2128-2135
5. Smrdel A, Jager F (2004) Automated detection of transient ST-segment episodes in 24 h electrocardiograms. Med Biol Eng Comput 42:303-311
6. Smrdel A (2004) Robustno avtomatsko odkrivanje prehodnih epizod segmenta ST v 24-urnih elektrokardiogramih. PhD Thesis, Faculty of Computer and Information Science, University of Ljubljana

Author: Ales Smrdel
Institution: Faculty of Computer and Information Science
Address: Trzaska 25
City: Ljubljana
Country: Slovenia
E-mail: [email protected]
Assessment of the Heart Rate Variability during Arousal from Sleep by Cohen's Class Time-Frequency Distributions

M.O. Mendez¹, A.M. Bianchi¹, O.P. Villantieri¹ and S. Cerutti¹

¹ Bioengineering Department, Politecnico di Milano, Milano, Italy
Abstract— Arousal from sleep is a normal physiologic event which produces well defined changes in the sympatho-vagal balance. Arousal from sleep is related to sleep fragmentation and to some sleep disorders such as obstructive sleep apnea. When repetitive arousals occur during sleep, however, they are associated with poor sleep quality and, as a consequence, daytime sleepiness. We studied the dynamics of the HRV during arousals accompanied by muscular activity. Ten isolated arousals free from any pathologic event were studied. Three Time-Frequency Distributions (TFDs), the Born-Jordan, Choi-Williams and Smoothed Pseudo Wigner-Ville distributions, were analyzed in order to evaluate their performance during arousal episodes. The three TFDs showed the same performance when the analytic HRV signal is used. The LF component suggests a major participation of the sympathetic activity at the beginning of the arousal episode, while the HF component suggests a major role of the parasympathetic drive after the arousal episode.

Keywords— Sympatho-vagal balance, Time-Frequency Distribution, Obstructive Sleep Apnea.
I. INTRODUCTION

Sleep is an unconscious process in which the human being interacts with his inner world, while interaction with the real world basically vanishes. When we have restorative sleep, we wake with energy to carry out all our daily tasks. However, sleep can be disturbed by different causes that range from stress to physiological pathologies such as obstructive sleep apnea. Arousal from sleep is a natural physiological event that appears during both normal sleep and pathologic episodes. Arousal from sleep has a very close relationship with sleep fragmentation, since repetitive arousals from sleep produce low sleep quality. Classical symptoms caused by disrupted normal sleep are daytime sleepiness, memory impairment and low concentration. In addition, when arousals from sleep are related to noxious respiratory pathologies such as obstructive sleep apnea, normal physiological levels can be altered even during wakefulness; consequences ranging from diurnal hypertension to heart failure can occur. Arousals from sleep have two main functions: activating sensory organs to monitor the environment, and bolstering physiological systems to overcome noxious events [1]. Arousal from sleep is defined from the cortical electroencephalogram waves. Some definitions found in the literature include spindles and K-complexes in the arousal hierarchy as arousal precursors. An arousal episode is defined as a sudden shift in the EEG frequency lasting at least three seconds and not longer than 10 seconds; the frequency shift includes theta, alpha or frequencies higher than 16 Hz. In REM sleep an arousal must also be accompanied by muscular activity. A complete arousal description is found in [2].

Arousals from sleep produce changes in heart rate. Heart rate increases during arousal episodes due to a withdrawal of the parasympathetic flow and a strong activation of sympathetic activity. The effects of arousal from sleep on the sympatho-vagal balance have been widely studied, both during NREM sleep and with induced arousals [1, 3-4]. On the other hand, in the past decades a tight relationship has been found between the autonomic nervous system and the spectral components of the heart rate variability. Therefore, methods of spectral decomposition represent a non-invasive tool to evaluate the behavior of the autonomic nervous system. Heart rate fluctuations range between 0 and about 0.5 Hz. The frequency range between 0.15 and 0.5 Hz is directly correlated with the vagal flow, while the range between 0.02 and 0.15 Hz mainly represents the sympathetic activity [5]. The classical method to analyse the spectral components of a signal is the Fourier transform. However, this approach does not take into account the temporal evolution of the spectral components. In addition, arousal episodes produce a transitory sympathetic reflex which appears as a non-stationarity in the heart rate. Approaches such as the wavelet transform, time-variant autoregressive models and Time-Frequency Distributions (TFD) allow evaluation of the temporal dynamics of the signal fluctuations and are suitable for transitory changes such as those present in the heart rate variability during arousal events. TFDs evaluate the Fourier transform of the temporal autocorrelation.

In other words, they evaluate the Fourier transform of the autocorrelation function without the expectation operator (Wigner-Ville Distribution). This enables the TFD to capture the evolution of the spectrum at each sample, but in multi-component signals spurious frequency components appear. In order to reduce these spurious terms, smoothing functions (kernels) have been incorporated into the Wigner-Ville Distribution. Each kernel defines a Time-Frequency Distribution and attains specific properties such as time-frequency invariance, finite support, etc. The Cohen's class TFD is a well defined group of
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 30–33, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
kernels that satisfy the time-frequency invariance property. This means that if a signal x(t) is delayed in time or modulated in frequency, then its TFR will be correspondingly delayed in time and shifted in frequency. This is an important property when physiological signals are analyzed [6].

The aim of this study is to evaluate beat-by-beat the dynamic evolution of the autonomic nervous system. The Born-Jordan, Choi-Williams and Smooth Pseudo Wigner-Ville Distributions, which are part of the Cohen's class, are tested on a synthetic signal and on HRV sequences during arousal episodes. We evaluated the ability of each distribution to assess the dynamic changes of the HRV fluctuations in arousal episodes.

II. MATERIAL AND METHODS

Five overnight polysomnographic recordings were obtained from five healthy subjects. The subjects were 48±5 years old, with a Body Mass Index of 36±2 kg/m2. Data were obtained using a Heritage Digital PSG Grass Telefactor polysomnograph. All signals were acquired with a sampling frequency of 100 Hz. Sleep stages were evaluated according to the standard clinical criteria [7]. Arousals were identified from the EEG C4/A1 channel during stage 2 by expert personnel, in agreement with the definition given in [2]. Ten arousals with muscular activity, free of noise and distant from any pathologic event (OSA or PLM), were selected. The RR intervals were then detected from the ECG channel by a derivative algorithm. Due to the low sampling frequency, a better estimation of the R peak positions was obtained by parabolic interpolation. RR time series were verified and manually corrected where misdetections occurred, and when extrasystoles happened the corresponding portion of the signal was discarded. Intervals of 2 min 30 s as baseline and the same interval as recovery after the arousal were taken.

A. Method Development

The Cohen's class Time-Frequency Distribution (TFD) of a signal x(t) is defined as:
C_x(t, f) = \iint \phi(t - t', \tau) \, x^*(t' - \tau/2) \, x(t' + \tau/2) \, e^{-j 2\pi f \tau} \, dt' \, d\tau    (1)

where \phi(t, \tau) is a function called the kernel. By choosing different kernels, different features of the distributions are obtained; hence an infinite number of distributions can be generated, and each TFD is defined by its kernel. The Smooth Pseudo Wigner-Ville Distribution is characterized by independent smoothing functions in time and in frequency, originated by the windows \gamma(t) and \eta(\tau/2)\eta^*(-\tau/2) respectively, and its kernel function is:

\phi(t, \tau) = \gamma(t) \, \eta(\tau/2) \, \eta^*(-\tau/2)    (2)
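As an illustration of the separable kernel in Eq. (2), the sketch below builds it from two window functions. This is our construction, not code from the paper: the 21- and 129-sample Hamming windows anticipate the settings used later for the synthetic-signal test, and the outer-product form assumes real, even windows.

```python
import numpy as np
from scipy.signal.windows import hamming

# Separable SPWVD kernel of Eq. (2): independent time and frequency smoothing.
g = hamming(21)      # gamma(t): time smoothing window (21 samples)
h = hamming(129)     # eta: lag (frequency) smoothing window (129 samples)

# For a real, even window, eta(tau/2) * conj(eta(-tau/2)) reduces to the
# product of the window with its reverse, so phi(t, tau) is an outer product:
phi = np.outer(g, h * h[::-1])
```

Because both windows peak at their centers, the kernel is maximal at the time-lag origin, which is what concentrates the smoothing around the auto-terms.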
The Choi-Williams Distribution is defined by the kernel:

\phi(t, \tau) = \frac{1}{|\tau|} \sqrt{\frac{\sigma}{4\pi}} \exp\left[ -\frac{\sigma}{4} \left( \frac{t}{\tau} \right)^2 \right]    (3)
The scaling factor σ determines the cross-term suppression, the time and frequency resolution and the concentration of the auto-terms. A high value of σ gives a good definition of the auto-terms but low cross-term suppression, while a low value of σ reduces the cross-terms and spreads out the auto-terms. The Born-Jordan Distribution (BJD) maintains most of the attractive properties because it associates mixed products of time and frequency. The distribution has been used as a basis for creating other distributions, and as a reference for evaluating newly proposed ones. The BJD is defined as:
\phi(t, \tau) = \begin{cases} \dfrac{1}{|\tau|}, & |t/\tau| < 1/2 \\ 0, & |t/\tau| \geq 1/2 \end{cases}    (4)
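A minimal discrete implementation of the family in Eq. (1) is the plain Wigner-Ville case (kernel reduced to a delta, i.e. no smoothing). The sketch below is ours, assumes an analytic input signal, and uses the symmetric form x(n+τ)x*(n−τ), which doubles the effective frequency axis.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.

    Returns an (N, N) array (frequency x time). This is the unsmoothed
    member of Cohen's class: the Fourier transform, along the lag axis,
    of the instantaneous autocorrelation r(n, tau) = x(n+tau) x*(n-tau).
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        # largest lag keeping both n+tau and n-tau inside the signal
        taumax = min(n, N - 1 - n)
        tau = np.arange(-taumax, taumax + 1)
        W[tau % N, n] = x[n + tau] * np.conj(x[n - tau])
    # Fourier transform along the lag axis gives the frequency dimension
    return np.real(np.fft.fft(W, axis=0))
```

For a complex exponential at normalized frequency f0, the distribution concentrates at bin 2·f0·N on each column, illustrating why, for a real multi-component signal, cross-terms appear between every pair of components.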
Arousal from sleep produces a strong and rapid change in the HRV signal, and we are interested in the ability of these TFDs to follow such fast changes. In order to test the capacity of the different TFDs, we generated a three-component real synthetic signal in the following way:
x(n) = \begin{cases} \sin(2\pi f_1 n), & 32 \leq n \leq 192 \\ \sin(2\pi f_2 n), & 193 \leq n \leq 319 \\ \sin(2\pi f_3 n), & 320 \leq n \leq 480 \\ 0, & \text{otherwise} \end{cases}    (5)
where f_1 = 0.1 Hz, f_2 = 0.25 Hz and f_3 = 0.4 Hz. In order to reduce the cross-terms produced by the negative frequencies, the Hilbert transform was applied to the synthetic sequence. The sampling frequency of the synthetic signal was 4 Hz and its length 512 samples. The TFR parameters for the SPWVD were: time smoothing window, Hamming, 21 samples; frequency smoothing window, Hamming, 129 samples. For the CWD a fixed σ = 1 was used. Three frequency bands, A (0.35 –
0.45 Hz), B (0.15 – 0.25 Hz) and C (0.05 – 0.1 Hz), spanning all time, were created in order to have a clear indication of the instantaneous power in each band. These bands reveal the attenuation of the interference terms and negative components. Each band and the total frequency range (0 – 0.5 Hz) were integrated along the frequency axis with the intention of comparing the true time evolution of the instantaneous power of the signal with the one obtained from the TFR. The results of the TFRs are shown as images, with the energy range plotted in a gray scale of 256 levels.

B. Arousal Data

The resulting RR sequences were re-sampled at 2 Hz by cubic spline interpolation and detrended. The Hilbert transform was then applied to each RR sequence in order to obtain an analytic signal. After that, the SPWVD was used to obtain the evolution of the RR power at different frequencies and times. Then the time evolution of the classical heart rate variability indexes was computed: total power (PT, 0.005 – 0.5 Hz); very low frequency (VLF, 0.005 – 0.04 Hz); low frequency (LF, 0.04 – 0.15 Hz); high frequency (HF, 0.15 – 0.4 Hz); and the low to high frequency ratio (LF/HF). All spectral powers were computed in absolute units. The representation and the spectral indexes were obtained using the absolute values of the distribution. The data were synchronized with the occurrence of the minimum RR value. Thereafter, an ensemble average was obtained for each spectral index and for the RR intervals. Each sequence was normalized as the percentage of change with respect to the mean of the first 20 seconds of that sequence. All index values are given as mean ± standard deviation. Segments of 180 points from the data and the spectral indexes were analyzed.

III. RESULTS

Figure 1 shows three sinusoids at different times and frequencies according to Equation 5. The signal components are clearly separated and finely represented by the three distributions in the time-frequency plane (upper panel).
The results are so similar that it is almost impossible to distinguish among them by their time-frequency representations. From the middle panel, we can observe that the instantaneous power in the A, B and C bands is smoothed by all three distributions. Finally, when we compare the instantaneous power of the real signal with the instantaneous power evaluated from the time-frequency energy plane, all TFDs follow the real instantaneous power on average (lower panel). Most importantly, the TFDs followed with high time resolution the changes in frequency present in the synthetic signal.

Figure 1. Time-frequency analysis of a synthetic signal composed of three sinusoids after the Hilbert transformation. The first row shows the time evolution of the PSD of the signal obtained by the three time-frequency representations: Smooth Pseudo Wigner-Ville Distribution (SPWVD), Choi-Williams Distribution (CWD) and Born-Jordan Distribution (BJD). The second row depicts the instantaneous power for the A (dashed line), B (black line) and C (gray line) frequency bands. The third row presents the instantaneous power for the whole frequency axis; the gray line represents the theoretical instantaneous power while the black line is the one obtained after integrating the PSD along the frequency axis.

Figure 2 presents the mean and SE of the time evolution of the spectral indexes of HRV for arousals from sleep during stage 2 sleep. The series are presented as percentage variations with respect to the baseline (see Methods). The RR intervals present a fast decrement which reaches its minimum value around seven seconds after the beginning of the arousal episode; after that a recovery phase begins, overshooting the baseline and returning to it 20 seconds later. The RR intervals have significantly lower values with respect to the baseline from 9 to 15 seconds. The HF component decreases immediately during the first 3 seconds of the arousal and then increases steadily, reaching its maximum value close to the end of the arousal; however, no significant differences were found. The LF component presents a strong increment during the arousal episode, showing its maximum value close to the RR minimum and going back to baseline levels after 25 seconds; significant differences with respect to the baseline occur from 9 to 20 seconds. The VLF component and the LF to HF ratio showed behavior analogous to the LF component.
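The synthetic-signal test of Eq. (5) and the band-power integration used throughout this section can be sketched as follows. This is an illustration under our own assumptions: `band_power` accepts any (n_freqs x n_times) PSD array in place of the SPWVD, and the helper names are ours.

```python
import numpy as np
from scipy.signal import hilbert

# Three-component test signal of Eq. (5), sampled at 4 Hz, 512 samples.
fs = 4.0
n = np.arange(512)
t = n / fs
f1, f2, f3 = 0.1, 0.25, 0.4
x = np.zeros(512)
x[32:193] = np.sin(2 * np.pi * f1 * t[32:193])    # 32 <= n <= 192
x[193:320] = np.sin(2 * np.pi * f2 * t[193:320])  # 193 <= n <= 319
x[320:481] = np.sin(2 * np.pi * f3 * t[320:481])  # 320 <= n <= 480
xa = hilbert(x)  # analytic signal: suppresses negative-frequency cross-terms

def band_power(tfr, freqs, band):
    """Integrate a time-frequency representation over a frequency band,
    giving the instantaneous power in that band (one value per time sample)."""
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(tfr[mask, :], freqs[mask], axis=0)
```

Integrating the full 0 – 0.5 Hz range with the same routine yields the total instantaneous power compared against the theoretical one in Figure 1.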
Figure 2. Time evolution of the heart rate variability indexes during an arousal from sleep episode. Spectral indexes were evaluated by the Smooth Pseudo Wigner-Ville distribution. Values are presented as mean value and standard error of the percentage changes with respect to the baseline. From top to bottom: RR intervals (RR), high frequency (HF), low frequency (LF), very low frequency (VLF) and low to high frequency ratio (L/H). The window with * represents significant differences.

IV. DISCUSSION AND CONCLUSIONS

The Born-Jordan, Choi-Williams and Smooth Pseudo Wigner-Ville distributions, belonging to the quadratic Cohen's class of Time-Frequency Distributions, were used to evaluate the behavior of the autonomic nervous system during arousal from sleep episodes. Our main observations are: a) the three TFDs allow evaluation, with large time-frequency resolution, of the behavior of the heart rate variability even during transitory events; b) when an analytic signal is used, the three TFDs applied during arousal episodes are equivalent; c) arousal episodes produce a large increment in the LF/HF balance, mainly caused by a rise in LF.

Physiological signals are real in nature. However, when working with Time-Frequency Distributions it is recommended to use analytic signals in order to reduce the interference terms generated by the quadratic nature of this approach. Nevertheless, the interference terms are necessary to retain fine properties of the BJD and CWD such as finite time support and marginals; those properties are lost when analytic signals are used. If we integrate along the frequency axis, we do not obtain the instantaneous power of the signal but a smoothed version of it (see Figure 1). When dealing with the HRV signal we are interested in the instantaneous frequency content, which describes the sympatho-vagal balance; in that case, when the analytic version of the HRV signal is used, the choice of distribution is indifferent.

Changes in the sympatho-vagal balance during an arousal event are produced both by an increment in the sympathetic activity (LF increment) and by a reduction in the parasympathetic activity (HF reduction). The fast and large change is caused by the activation of the sympathetic system, while the vagal tone seems not to play an important role during the episode. On the contrary, at the arousal end a tachycardia is observed in which the vagus nerve plays a major role.

In conclusion, TFDs are a fine tool to evaluate the dynamics that the autonomic nervous system presents during an arousal episode. When working with the analytic version of the HRV signal, the results suggest that any of the three TFDs can be applied indifferently. The sudden tachycardia observed at the beginning of an arousal seems to be caused mainly by sympathetic activation, while in the tachycardia after the arousal episode parasympathetic activity seems to play the major role.
ACKNOWLEDGMENT

This work was supported by the European project MY HEART.
REFERENCES

1. Halasz P, Terzano M, Liborio P, Bodizs R (2004) The nature of arousal in sleep. J Sleep Res 13:1-13.
2. Atlas Task Force of the American Sleep Disorders Association (1992) EEG arousal: scoring rules and examples. Sleep 15:174-184.
3. Catcheside et al. (2001) Acute cardiovascular responses to arousal from non-REM sleep during normoxia and hypoxia. Sleep 24:895-902.
4. Blasi A et al. (2003) Cardiovascular variability after arousal from sleep: time-varying spectral analysis. J Appl Physiol 95:1394-1404.
5. Malliani A (1999) The pattern of sympathovagal balance explored in the frequency domain. News Physiol Sci 14:111-117.
6. Cohen L (1989) Time-frequency distributions – a review. Proc IEEE 77:941-981.
7. Rechtschaffen A, Kales A (1968) A manual of standardized terminology, techniques and scoring system for sleep stages of human subjects. Brain Information Service/Brain Research Institute, UCLA.

Author: Martin Mendez
Institute: Politecnico di Milano
Street: P.zza Leonardo da Vinci 32
City: 20133 Milan
Country: Italy
Email: [email protected]
Autonomic Modulation of Ventricular Response by Exercise and Antiarrhythmic Drugs during Atrial Fibrillation

V.D.A. Corino1,2, L.T. Mainardi1, D. Husser2 and A. Bollmann2

1 Department of Biomedical Engineering, Politecnico di Milano, Milano, Italy
2 Department of Cardiology, University Hospital Magdeburg, Magdeburg, Germany
Abstract— Ventricular response (VR) during atrial fibrillation (AF) is a complex process that is influenced by the autonomic nervous system (ANS). Although antiarrhythmic drugs are known to have ANS-modulating effects, these are not routinely evaluated in the clinical setting. Therefore, the purpose of this study is to characterize VR to exercise as an ANS stimulus during AF and to evaluate possible modulating effects of the antiarrhythmic drugs flecainide and amiodarone. Seventeen patients (11 males, mean age 61±11 years) with persistent AF underwent bicycle exercise testing before and after 3 – 5 days of oral flecainide (9 patients) or amiodarone (8 patients) loading. RR series were derived from ECG recordings and analyzed by means of time domain parameters (mean, SDNN, pNN50 and rMSSD) and non-linear methods assessing the predictability of the time series (level of predictability (LP) and regularity (R)). The effect of exercise on VR modulation was evident both with and without antiarrhythmic drugs (p<.05). Neither antiarrhythmic drug had significant effects at rest, but under exercise conditions flecainide decreased pNN50 and rMSSD, while amiodarone increased the mean NN interval. In addition, flecainide amplified the response to exercise, resulting in a pronounced pNN50 reduction, while no differences were observed with amiodarone administration. In conclusion, antiarrhythmic drugs exhibit ANS-modulating effects during exercise that are not apparent during resting conditions.

Keywords— atrial fibrillation, autonomic nervous system, flecainide, amiodarone, exercise testing.
I. INTRODUCTION

Atrial fibrillation (AF) is characterized by an irregular ventricular response (VR) that is often described as chaotic and without any form of patterning. Previous studies have shown, however, that this process is not completely random [1, 2]. Moreover, a reduced VR variability may predict an adverse prognosis in patients with advanced heart failure [3]. The autonomic nervous system (ANS) [4] is one of the most important factors influencing VR in AF. Its effect is largely mediated by the modulation of atrioventricular node refractoriness, which is mainly dependent on vagal tone [5]. Exercise is associated with complex responses of sympathetic and vagal control mechanisms, and consequently represents a natural model to evaluate the effects of autonomic modulation on VR dynamics.
In many patients antiarrhythmic drugs are prescribed to terminate AF and to prevent its recurrence [6]. It is well known that, in addition to the desired electrophysiological effects, these drugs may influence autonomic control mechanisms. However, these effects are not routinely evaluated in the clinical setting. Thus, the purpose of this study was twofold: (1) to characterize VR in response to changes of the autonomic balance induced by exercise testing, and (2) to elucidate the influence of two commonly used antiarrhythmic drugs, flecainide and amiodarone, on the dynamics of VR at rest and during exercise in patients with persistent AF.

II. METHODS

A. Study protocol

Seventeen patients (11 men, 6 women, mean age 61 ± 11 years), with persistent AF (mean AF duration 21 ± 29 months) and referred for cardioversion, were included in this study. Echocardiographic characteristics were: left atrial diameter 47 ± 5 mm and left ventricular ejection fraction 56 ± 15 %. The pharmacological therapy included digitalis in 8 patients, calcium channel blockers in 4 and beta blockers in 6 (more than one drug was possible for each patient). Patients underwent symptom-limited bicycle exercise stress testing using a 3-minute step-up protocol. The workload increase was chosen according to age- and gender-predicted values, aiming for a test duration of 8 to 12 minutes. Exercise testing was repeated after 3 – 5 days of flecainide (in 9 patients, 200 mg/day) or amiodarone (in 8 patients, 1200 mg/day) loading. An ECG (Predictor, Dr. Kaiser; sampling rate 2 kHz) was continuously recorded with the subject in supine position at baseline and immediately after termination of exercise. Thus, four clinical conditions were defined for each patient (Figure 1). QRS detection and RR interval measurements were automatically performed on two-minute segments at baseline and immediately after termination of exercise.
The RR interval series were visually checked and missed or misdetected beats were corrected using interactive software.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 82–85, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Fig. 1 Protocol phases

Premature ventricular contractions or beats with aberrant conduction were excluded from the time domain analysis, while they were substituted by linear interpolation of the preceding and the subsequent RR intervals for the series predictability analysis [7].

B. VR analysis

Time domain parameters: Time domain parameters were computed following the recommendations for heart rate variability measurements [8] that have also been applied during AF [9]. The time domain analysis includes the mean (M) and the standard deviation (SDNN) of all normal-to-normal intervals (NN), the root mean square of the differences of successive NN intervals (rMSSD) and the percentage of interval differences of successive NN intervals greater than 50 ms (pNN50).

Level of predictability: A discrete time series x(n) can be modeled as the output of an autoregressive model of order p:
x(n) = \sum_{k=1}^{p} a_k x(n-k) + w(n)    (1)
where n is the discrete-time index, the a_k are the model coefficients and w(n) is a Gaussian white noise process of variance \sigma^2 feeding the model. The actual sample differs from its model prediction, thus generating the prediction error:

e(n) = x(n) - \sum_{k=1}^{p} a_k x(n-k)    (2)
An index of the level of predictability (LP) may be defined as follows:

LP = \left( 1 - \frac{\sigma_e}{\sigma_x} \right) \cdot 100    (3)

where \sigma_e is the standard deviation of e(n) and \sigma_x is the standard deviation of the process x. LP measures the percentage of the standard deviation which may be predicted by the autoregressive model. In the case of a purely random signal (\sigma_e is quite close to \sigma_x) LP tends to 0, while in the case of a linearly predictable signal (\sigma_e tends to zero) the index tends to 100%; it assumes intermediate values for those processes that may be partially predicted by the model.

Regularity: The regularity (R) index is related to the degree of recurrence of a pattern in a time series and is based on the Conditional Entropy (CE), i.e. the amount of information carried by the most recent sample x(i) of a normalized realization of x when its past L-1 samples are known. For a given signal, M different patterns of length L can be obtained (the J-th of them is indicated as x_L^J). CE is defined as [10]:

CE(L) = -\sum_{i=1}^{N} p(x_i / x_{L-1}^J) \log p(x_i / x_{L-1}^J)    (4)

where p(x_{L-1}^J) represents the probability of the pattern x_{L-1}^J and p(x_i / x_{L-1}^J) the conditional probability of the sample x(i) given the pattern x_{L-1}^J. The estimation of CE(L) is no longer statistically consistent [10] when L increases, therefore the Corrected Conditional Entropy (CCE), the sum of CE(L) and a corrective term, must be introduced to perform a reliable measure over short data series. The CCE is then normalized by the Shannon entropy of the process to derive an index independent of the probability distribution of the process, obtaining the Normalized Corrected Conditional Entropy (NCCE). The R index may be defined as:

R = 1 - \min_L (NCCE(L))    (5)
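A simplified numerical sketch of the regularity idea in Eq. (5), under our own assumptions: the series is quantized, the conditional entropy is estimated from pattern counts, and the result is normalized by the Shannon entropy of the quantized series. The corrective term of the CCE [10] and the minimum over L are omitted, so this illustrates the mechanism rather than reproducing the full index.

```python
import numpy as np
from collections import Counter

def regularity_sketch(x, L=3, bins=6):
    """Simplified regularity index in the spirit of Eq. (5).

    Quantize x into `bins` levels, estimate the conditional entropy of the
    newest sample given the previous L-1 samples from pattern counts, and
    normalize by the Shannon entropy of the quantized series.
    """
    x = np.asarray(x, dtype=float)
    edges = np.linspace(x.min(), x.max(), bins + 1)[1:-1]
    q = np.digitize(x, edges)
    patterns = [tuple(q[i:i + L]) for i in range(len(q) - L + 1)]
    joint = Counter(patterns)                    # counts of L-length patterns
    prefix = Counter(p[:-1] for p in patterns)   # counts of (L-1)-prefixes
    n = len(patterns)
    # empirical conditional entropy, Eq. (4)
    ce = -sum(c / n * np.log(c / prefix[p[:-1]]) for p, c in joint.items())
    probs = np.bincount(q, minlength=bins) / len(q)
    shannon = -sum(pi * np.log(pi) for pi in probs if pi > 0)
    nce = ce / shannon if shannon > 0 else 0.0
    return 1.0 - nce
```

A periodic series yields a value near 1 (each prefix fully determines the next sample), while an unpredictable series yields a value near 0.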
In the case of a purely periodic signal (a fully predictable process) the observation of a few samples makes it possible to predict the successive ones (i.e. the new samples do not carry any new information and simply replicate the previous ones). In general, R tends to 0 if the series is a fully unpredictable process, it tends to 1 if the series is a periodic signal, and it assumes intermediate values for those processes that can be partially predicted from the knowledge of the past samples.

Statistical analysis: Data are reported as mean ± one standard deviation. Rest vs. exercise at baseline and after drug loading was compared using Student's t-test for paired data. In order to assess the response to exercise, differences between the exercise and rest conditions were computed and compared for the drug-free state and after drug loading using Student's t-test for paired data. To evaluate the differences between the two antiarrhythmic drugs, Student's t-test for unpaired data was used. A p value of less than .05 was considered statistically significant.
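The VR indexes of this section can be sketched as follows. This is our illustration, not the authors' code: `time_domain` implements M, SDNN, rMSSD and pNN50, and `level_of_predictability` implements Eqs. (1)-(3) via a least-squares AR fit; the model order p = 8 is our choice, not stated in the paper.

```python
import numpy as np

def time_domain(nn):
    """Time domain VR parameters from an NN interval series (ms)."""
    nn = np.asarray(nn, dtype=float)
    d = np.diff(nn)
    return {
        "M": np.mean(nn),                          # mean NN interval
        "SDNN": np.std(nn, ddof=1),                # standard deviation of NN
        "rMSSD": np.sqrt(np.mean(d ** 2)),         # RMS of successive differences
        "pNN50": 100.0 * np.mean(np.abs(d) > 50),  # % successive diffs > 50 ms
    }

def level_of_predictability(x, p=8):
    """LP index of Eq. (3): fit an AR(p) model by least squares and compare
    the prediction-error standard deviation with the signal's."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # regression matrix of past samples: column k holds x(n-k)
    A = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    b = x[p:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    e = b - A @ a                                  # prediction error, Eq. (2)
    return (1.0 - np.std(e) / np.std(x)) * 100.0
```

A pure sinusoid is linearly predictable and gives LP close to 100, while white noise gives LP near 0, matching the interpretation given above.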
III. RESULTS

A. Effects of Exercise on VR

Exercise effects on VR modulation were evident in patients with and without antiarrhythmic drug administration, with both time domain and non-linear methodologies. A significant decrease (p<.001) was noted in all time domain parameters, reflecting a reduction in the RR beat-to-beat variability. In addition, an increase (p<.05) of the LP and R indexes, reflecting linear and non-linear series predictability respectively, was observed (Tables 1 and 2).

Table 1. Mean values ± one SD during baseline and flecainide administration

                Baseline                       Flecainide
          Rest           Exercise        Rest           Exercise
M         689 ± 145      635 ± 93**      702 ± 121      525 ± 93**
SDNN      156 ± 43       119 ± 38**      161 ± 55       112 ± 44**
pNN50     74 ± 13        71 ± 12         80 ± 6         55 ± 22*
rMSSD     224 ± 73       164 ± 56**      227 ± 84       140 ± 58**
LP        1.63 ± 0.93    2.85 ± 4.03     1.31 ± 0.8     5.27 ± 4.21*
R         0.06 ± 0.06    0.07 ± 0.07     0.06 ± 0.05    0.15 ± 0.17*

* p<.05, ** p<.01 (exercise effect)

Table 2. Mean values ± one SD during baseline and amiodarone administration

                Baseline                       Amiodarone
          Rest           Exercise        Rest           Exercise
M         775 ± 156      565 ± 164†      826 ± 177      596 ± 176†
SDNN      200 ± 52       143 ± 40†       191 ± 53       149 ± 51†
pNN50     85 ± 9         69 ± 10†        82 ± 9         68 ± 12†
rMSSD     288 ± 87       174 ± 59†       276 ± 80       183 ± 78†
LP        2.6 ± 2.8      9.1 ± 10.0*     2.3 ± 1.4      12.5 ± 16.8
R         0.05 ± 0.03    0.09 ± 0.08     0.05 ± 0.04    0.16 ± 0.15*

* p<.05, † p<.01 (exercise effect)
B. Antiarrhythmic Drug Effects

Neither flecainide nor amiodarone altered the VR parameters under resting conditions. However, significant differences in a few parameters were noted under exercise conditions, as shown in Tables 1 and 2. In particular, after flecainide loading pNN50 and rMSSD decreased, while the mean NN interval increased with amiodarone. In addition, the differences between parameters with and without drugs were computed and compared. Some significant changes were noted during exercise and are shown in Figure 2: flecainide decreased M, pNN50 and rMSSD, while amiodarone increased those parameters (p<.05).
Fig. 2 Box plots of the difference (d) with drug vs. without drug during exercise phases. Only VR parameters with significant differences are shown. See text for details. * p< .05
To assess drug effects on the VR response to exercise, exercise-induced changes of the VR parameters at baseline and after drug administration were compared. After amiodarone administration the change in all assessed variables was similar to the baseline measurements. On the contrary, flecainide amplified the response to exercise, resulting in a pronounced (p<.05) pNN50 reduction (Figure 3).

IV. DISCUSSION

In this study, VR has been characterized according to the changes of the autonomic balance induced by exercise testing in patients with persistent AF. In addition, the effect of two antiarrhythmic drugs, flecainide and amiodarone, during rest and exercise, and their influence on the exercise response, has been assessed.
Fig. 3 Mean ± one SD and individual values of pNN50 in the flecainide group. See text for details.

A. Exercise effects

After exercise, a significant decrease was noted in all time domain parameters (i.e. mean RR, SDNN, pNN50 and rMSSD), evidencing a reduction in the RR beat-to-beat variability. In addition, an increase of the LP and R indexes, reflecting linear and non-linear series predictability respectively, was observed. It should be noticed that both the LP and R values are very low compared to sinus rhythm [11], thus the degree of predictability of VR is very small. Nevertheless, both parameters succeed in underlining the increased predictability of VR during exercise. These results regarding VR during exercise stress the relevant role played by the ANS in patients with AF. These findings are consistent with an increase of the sympathetic tone and a decrease in vagal tone.

B. Drug effect

During the rest condition, the computed parameters were not different after drug (flecainide or amiodarone) administration, while during exercise testing some significant differences were observed. Flecainide decreased pNN50 and rMSSD, thus suggesting a possible enhanced vagolytic effect, as both these indexes are highly related to vagal activity. Amiodarone increased the mean of the NN intervals, thus highlighting its antiadrenergic activity. These three parameters are reduced after flecainide administration, while they are increased after amiodarone loading. This distinct behavior stresses the differences in the ANS-modulating effects of the two different classes of antiarrhythmic drugs.

C. Conclusions

Monitoring of exercise-induced and antiarrhythmic drug effects on the ANS during AF is possible with parameters derived from time domain and non-linear analysis. This may be useful for better characterization of AF modulating factors in the individual patient and for the assessment of antiarrhythmic drug effects.

ACKNOWLEDGMENT

This study was supported by the Volkswagen Foundation.

REFERENCES

1. Stein KM, Walden J, Lippman N et al. (1999) Ventricular response in atrial fibrillation: random or deterministic? Am J Physiol 277:H452-H458.
2. Hayano J, Yamasaki F, Sakata S et al. (1997) Spectral characteristics of ventricular response to atrial fibrillation. Am J Physiol 273:H2811-H2816.
3. Frey B, Heinz G, Binder T et al. (1995) Diurnal variation of ventricular response to atrial fibrillation in patients with advanced heart failure. Am Heart J 129:58-65.
4. Nagayoshi H, Janota T, Hnatkova K et al. (1997) Autonomic modulation of ventricular rate in atrial fibrillation. Am J Physiol 272:H1643-H1649.
5. Toivonen L, Kadish A, Kou W et al. (1990) Determinants of the ventricular rate during atrial fibrillation. J Am Coll Cardiol 16:1194-1200.
6. Husser D, Stridh M, Sornmo L et al. (2005) Time-frequency analysis of the surface electrocardiogram for monitoring antiarrhythmic drug effects in atrial fibrillation. Am J Cardiol 95:526-528.
7. Huikuri HV, Niemelä MJ, Ojala S et al. (1994) Circadian rhythms of frequency domain measures of heart rate variability in healthy subjects and patients with coronary artery disease: effects of arousal and upright posture. Circulation 90:121-126.
8. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology (1996) Heart rate variability: standards of measurement, physiological interpretation, and clinical use. Circulation 93:1043-1065.
9. Van den Berg MP, Haaksma J, Brouwer J et al. (1997) Heart rate variability in patients with atrial fibrillation is related to vagal tone. Circulation 96:1209-1216.
10. Porta A, Baselli G, Liberati D et al. (1998) Measuring regularity by means of a corrected conditional entropy in sympathetic outflow. Biol Cybern 78:71-78.
11. Mainardi LT, Corino VDA, Lombardi L et al. (2004) Assessment of the dynamics of atrial signals and local atrial period series during atrial fibrillation: effects of isoproterenol administration. Biomed Eng Online 3:37.

Author: Ing. Valentina D.A. Corino
Institute: Dipartimento di Ingegneria Biomedica, Politecnico di Milano
Street: via Golgi 39
City: 20133, Milano
Country: Italy
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Classification Methods for Atrial Fibrillation Prediction after CABG

S. Sovilj1, R. Magjarević1 and G. Rajsman2

1 University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
2 Clinical Hospital Center Zagreb, Department of Cardiac Surgery, Zagreb, Croatia
Abstract— The aim of this study is to compare different classification methods for predicting Atrial Fibrillation (AF) after Coronary Artery Bypass Grafting (CABG). The prediction/classification model predicts a categorical dependent variable (whether a patient belongs to the group that developed AF or to the group that did not) from one or more continuous and/or categorical predictor variables derived from the patients' history, their electrocardiograms and, in particular, from the P wave. The parameters were obtained from ECG recorded continuously after surgery.

Keywords— classification, prediction, atrial fibrillation, CABG, discriminant analysis, decision tree.
I. INTRODUCTION

Atrial fibrillation (AF) is the most common postoperative arrhythmia after CABG and occurs in 30–40 % of patients [1]. AF may cause complications such as hemodynamic disturbances, cerebral and other thromboembolisms, all potentially dangerous for the patient. In previous studies that searched for predictors of AF in similar groups of patients, the authors analyzed relatively short segments of recorded multichannel ECG and also included other measured physiological parameters or data from the patient history in the analysis [2, 3]. These studies have not reached a consensus either on which parameters should be recorded or on the values that could be used in clinical practice as a general procedure. We therefore decided to analyze the parameters of the patients' ECGs recorded continuously after surgery (while the patients are still in the ICU). We enlarged the number of ECG parameters entering the analysis and used less demanding instrumentation for ECG acquisition. The aim of this study is to develop a statistical classification model for the prediction of AF based on parameters obtained from the patients' ECGs continuously acquired after CABG. Early risk assessment for AF, several hours before its onset, would allow timely medication of patients prone to AF and would reduce the incidence of the arrhythmia, while the other patients could be excluded from prophylactic anti-arrhythmic medication, thus reducing possible drug contraindications in that group.
II. METHODS

In the period from 2005 to 2006, we continuously recorded the standard lead II ECG of fifty patients, typically for 48 hours after CABG or until the onset of AF. The ECG was acquired with an HP Patient Monitor 78330A and digitized using a standard ADC card (Measurement Computing CIO-DAS08/JR) in a PC. The sampling frequency was 1 kHz and the amplitude resolution 12 bits. The electrocardiograms were segmented with a QRS and P wave detector based on the wavelet transform [2], and numerous parameters were calculated, especially in the P wave segment of the ECG.

Every recorded ECG was divided into time segments of one-hour duration. In each one-hour ECG record, the number of detected QRS complexes had to exceed 2000 in order to exclude segments with many artifacts superimposed on the signal. Furthermore, records in which the number of detected P waves was less than 75% of the total number of detected QRS complexes in that hour were also excluded from the analysis.

We statistically analyzed the data obtained from the recorded ECG segments and identified the parameters that best discriminate the two patient groups (AFP and NAFP). Different AF prediction/classification models were proposed and compared in terms of accuracy and possible applicability: linear discriminant analysis models, classification and regression tree (C&RT) and CHAID statistical tree models, Boosting Tree Classifier models, and Binomial Logit Regression models. Statistica 7 (StatSoft Inc.) was used for the statistical analysis [4].

A. Measured ECG parameters

After QRS and P wave detection in the recorded ECG segments, 110 different parameters dealing predominantly with atrial activity were measured or calculated. Only those considered important for the discrimination between the groups are presented in Table 1, although later analysis excluded some of them from the models.
An hourly average value and standard deviation were calculated for each of these parameters. The parameters were grouped into three categories: time parameters, wavelet parameters and other parameters, as presented in Table 1.
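The hourly segment screening and the AVR/DEV feature computation described above can be sketched as follows (a hypothetical helper for illustration, showing only one parameter; the thresholds are the ones stated in the text):

```python
import numpy as np

def hourly_features(p_durations, qrs_count, p_count,
                    min_qrs=2000, min_p_ratio=0.75):
    """Quality-check one one-hour ECG segment and compute AVR/DEV features.

    Illustrative sketch of the exclusion rules in the text: a segment is
    kept only if it contains more than `min_qrs` detected QRS complexes
    and at least 75% as many detected P waves.
    """
    if qrs_count <= min_qrs:
        return None          # too many artifacts: segment excluded
    if p_count < min_p_ratio * qrs_count:
        return None          # unreliable P wave detection: segment excluded
    values = np.asarray(p_durations, dtype=float)
    return {"PonPoffAVR": values.mean(),   # hourly mean (suffix AVR)
            "PonPoffDEV": values.std()}    # hourly std dev (suffix DEV)
```

The same pattern would be repeated for each of the 110 measured parameters.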
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 46–49, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Table 1 Measured parameters.

Time parameters:
PonPoffAVR, PonPoffDEV – P wave duration (from P wave onset to P wave offset)
PonPpeakAVR, PonPpeakDEV – 1st half P wave duration (from P wave onset to P wave peak)
PpeakPoffAVR, PpeakPoffDEV – 2nd half P wave duration (from P wave peak to P wave offset)
PpeakRpeakAVR, PpeakRpeakDEV – PR interval duration (from P peak to R peak)
PonQonAVR, PonQonDEV – PQ interval duration (from P onset to Q onset)
RRAVR, RRDEV, HRAVR, HRDEV – RR interval duration and heart rate

Wavelet parameters:
Pslope1AVR5, Pslope1DEV5 – P wave rising slope (value of wavelet coefficient detected at the 5th wavelet scale)
Pslope2AVR5, Pslope2DEV5 – P wave falling slope (value of wavelet coefficient detected at the 5th wavelet scale)
Pslope1Pslope2AVR5, Pslope1Pslope2DEV5 – duration between the points of highest and lowest P wave slope
WenergyAVR5, WenergyDEV5 – energy measured between P wave onset and offset at the 5th wavelet scale
relWenergyAVR5 – relative P wave energy (ratio between energy at the 5th wavelet scale and total energy)
Wentropy – measure of P wave energy dispersion over the wavelet scales
relPslope1Pslope2AVR, relPslope1Pslope2DEV – relative ratio between rising and falling P wave slope

Other parameters:
ampAVR, ampDEV – P wave amplitude
AonoffAVR, AonoffDEV – surface area below the P wave
RecordHour – number of the hour after CABG
PpeakRpeakAVR_RR, PpeakRpeakDEV_RR – PR interval duration normalized by the RR interval
PonQonAVR_RR, PonQonDEV_RR – PQ interval duration normalized by the RR interval
PonPoffAVR_RR, PonPoffDEV_RR – P wave duration normalized by the RR interval

The suffix AVR denotes the mean value of the measured parameter over a 1-hour period; the suffix DEV denotes its standard deviation over 1 hour.
The statistical analysis included 360 hours of ECG from patients who developed AF and 1003 hours of ECG from patients who did not. Approximately two thirds of the cases (930 hours) were randomly selected as the learning sample, and the remaining third (433 hours) was used for cross-validation as the testing sample. The prior probability of each class was estimated from the learning sample.
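The random learning/testing split and the estimation of class priors from the learning sample can be sketched as follows (helper names are hypothetical; the exact randomization used in the study is not specified):

```python
import numpy as np

rng = np.random.default_rng(0)

def split_learning_testing(n_cases, learning_fraction=2/3, rng=rng):
    """Randomly split case indices into learning and testing samples,
    mirroring the roughly 2/3 vs 1/3 split used in the study (sketch)."""
    idx = rng.permutation(n_cases)
    n_learn = int(round(learning_fraction * n_cases))
    return idx[:n_learn], idx[n_learn:]

def class_priors(labels):
    """Estimate class prior probabilities from the learning-sample labels."""
    labels = np.asarray(labels)
    return {c: float(np.mean(labels == c)) for c in np.unique(labels)}
```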
B. General discriminant analysis model

General Discriminant Analysis (GDA) is a method for building a multivariate linear model used to determine the variables that discriminate between two or more naturally occurring groups. A categorical dependent (criterion) variable labeled AF determines whether a patient belongs to the group that developed AF (PAF) or to the group that did not (nPAF); it was predicted from several continuous independent (predictor) variables using the model obtained by GDA. Using GDA, a model for the prediction of AF based on a number of parameters measured from the ECG was built (labeled GDA1). In a forward stepwise analysis, all variables were evaluated at every step to determine which contributed most to the discrimination between the two groups, and they were included in the model step by step. In each step, variables with statistical significance p < 0.05 in the discrimination entered the model GDA1, while the others were removed from the discriminant function. Finally, 16 variables entered the discriminant function. The variables contributing most to the discrimination, in order of importance, are: RRAVR, PonPoffAVR, PonQonAVR, PpeakRpeakAVR, Pslope2AVR5, Pslope2DEV5, PonPoff23AVR (for a more detailed description of these predictors see Table 1). It remains difficult to interpret the model and to explain why observations are classified or predicted in a particular manner. In particular, the GDA1 model assumes a linear relation between the predictor variables and the dependent variable. The quality measures for the model GDA1 are presented in Table 2.

Table 2 Quality measures for GDA1 model (TP – true positive, FN – false negative, TN – true negative, FP – false positive).

                         learning sample   testing sample
TP cases                 138               47
FN cases                 109               65
TN cases                 637               289
FP cases                 46                32
sensitivity              55.9%             42.0%
specificity              93.3%             90.0%
positive predictivity    75.0%             59.5%
negative predictivity    85.4%             81.6%
accuracy                 83.3%             77.6%

C. General classification and regression tree model

The General Classification and Regression Tree algorithm (GC&RT) is used to build a classification tree for predicting a categorical dependent variable.
Table 3 Quality measures for CRT1 model.

                         learning sample   testing sample
TP cases                 140               42
FN cases                 107               70
TN cases                 645               291
FP cases                 38                30
sensitivity              56.7%             37.5%
specificity              94.4%             90.7%
positive predictivity    78.7%             58.3%
negative predictivity    85.8%             80.6%
accuracy                 84.4%             76.9%
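The univariate split search at the heart of C&RT-style tree building can be illustrated with a small sketch (a toy version for illustration only, not the Statistica implementation; function names are hypothetical):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a binary (0/1) label array."""
    if labels.size == 0:
        return 0.0
    p = labels.mean()
    return 2.0 * p * (1.0 - p)

def best_split(x, y):
    """Exhaustively search thresholds on one predictor for the split
    that minimizes the weighted Gini impurity of the two children."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_t, best_g = None, np.inf
    for t in np.unique(x)[:-1]:          # candidate thresholds
        left, right = y[x <= t], y[x > t]
        g = (left.size * gini(left) + right.size * gini(right)) / y.size
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g
```

The full algorithm repeats this search over all predictors at each node and grows the tree recursively, applying the misclassification costs and priors mentioned above.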
The GC&RT algorithm determines a set of if-then logical, univariate split conditions and tries to achieve the maximal possible prediction accuracy. Tree models are nonparametric and nonlinear and can reveal non-monotonic relationships between the variables by using multiple splits on the same variable. The interpretation of results summarized in a tree is very simple, which is useful for rapid classification as well as for physiological evaluation and interpretation [4]. The misclassification cost was assumed equal for both the PAF and nPAF groups, and the a priori knowledge about the group sizes, estimated from the analyzed sample, was used by the GC&RT algorithm. The classification tree labeled CRT1 was designed (Fig. 1) and the classification properties of the obtained model are presented in Table 3.

Fig. 1 CRT1 classification tree model. N denotes the total number of cases that entered the classification tree at a certain node, PAF denotes cases (hours) the tree classified as belonging to a patient prone to AF, nPAF denotes cases (hours) the tree classified as belonging to a patient not prone to AF, Pi denotes the name of the predictor (see Table 1 for description); TP, TN, FP, FN are the same abbreviations as in Table 2, and the numbers above TP, TN, FP, FN designate the validity of the classification in the particular node.

D. General CHAID model

The CHI-squared Automatic Interaction Detector (CHAID) is one of the oldest algorithms for classification tree design. CHAID classification trees do not have to be binary: a single node can have more than two branches. Because of its simplicity, the CHAID design algorithm can be used for the analysis of very large data sets.

Table 4 Quality measures for CHAID model labeled CHAID1.

                         learning sample   testing sample
TP cases                 180               65
FN cases                 67                47
TN cases                 515               238
FP cases                 168               81
sensitivity              72.9%             58.0%
specificity              75.4%             74.6%
positive predictivity    51.7%             44.5%
negative predictivity    88.5%             83.5%
accuracy                 74.7%             70.3%

The CHAID algorithm first divides a continuous predictor into a number of categories with approximately equal
number of observations, so that it becomes a categorical predictor. For each predictor, the pair of categories that is least significantly different with respect to the dependent variable (Pearson chi-square test) is determined. The algorithm then chooses the predictor variable that yields the most significant split. Terminal nodes are defined as the points where no more splits can be performed because the p-value for the selected predictor is greater than a preset threshold. Observations with missing data in any of the predictor variables are excluded from the analysis. Quality measures of the CHAID classification tree model labeled CHAID1 are presented in Table 4.

E. Boosting Tree Classifiers Model

Gradient Boosting Trees is a method that repeatedly applies a predictive function in series and weights the output of each function so that the total prediction error is minimized. The predictive accuracy of such a series greatly exceeds the accuracy of the base function used alone. After the first tree is designed, the residuals (error values) from the first tree are fed into a second tree, which attempts to reduce the error. This process is repeated through a chain of successive trees. The final predicted value is formed by adding the weighted contribution of each tree.
Table 5 Quality measures for Boosting Tree model labeled BOOST1.

                         learning sample   testing sample
TP cases                 297               91
FN cases                 144               38
TN cases                 741               272
FP cases                 25                26
sensitivity              67.3%             70.5%
specificity              96.7%             91.3%
positive predictivity    92.2%             77.8%
negative predictivity    83.7%             87.7%
accuracy                 86.0%             85.0%
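The stagewise residual-fitting idea behind boosted trees can be sketched with one-split regression stumps (a simplified illustration under assumed shrinkage settings, not the Statistica Boosting Trees implementation):

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a one-split regression stump to the current residuals."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = residual[x <= t], residual[x > t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, n_trees=20, shrinkage=0.3):
    """Each stump is fitted to the residuals of the running prediction,
    and its shrunken (weighted) output is added to the ensemble."""
    pred = np.zeros_like(y, dtype=float)
    for _ in range(n_trees):
        stump = fit_stump(x, y - pred)   # fit the current residuals
        pred += shrinkage * stump(x)     # weighted contribution of each tree
    return pred
```

With each round the residual shrinks geometrically, which is why the summed ensemble can be far more accurate than any single base tree.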
Models designed as an additive series of trees are among the most accurate and achieve better results than any other known modeling technique [4]; our results in Table 5 support that statement. The primary disadvantage of the Boosting Tree is that the model is complex and cannot be visualized like the C&RT or CHAID models. After building the boosting tree, predictor statistics can be calculated. The predictors, in order of importance, are: relPslope1Pslope2AVR5, WenergyAVR4, Pslope2AVR5, WenergyAVR5, RRAVR, Pslope2DEV5, PonQonAVR4, PonPoffAVR_RR (for a more detailed description of these predictors see Table 1).

F. Binomial Logit Regression Models

The Binomial Logit Regression Model estimates the relationship between several continuous independent variables (predictors) and a binary dependent variable that specifies the class of each case. Cases belonging to the class of patients who developed AF were coded 1, and cases belonging to the class of patients who did not develop AF were coded 0. Because of the logit (logistic) transformation, the predicted values of the dependent variable always lie strictly between 0 and 1, regardless of the values of the independent variables [4]. After building the logit regression model (labeled LOGIT1), whose quality measures are presented in Table 6, a test of significance for the predictors was performed; in order of importance for the model they are: PonPoffAVR5, PonPoffAVR_RR, relPslope1Pslope2AVR5, Pslope2DEV5, Pslope2AVR5, PonQonAVR4 (for a more detailed description of these predictors see Table 1).

Table 6 Quality measures for Binomial Logit model labeled LOGIT1.

                         learning sample   testing sample
TP cases                 156               55
FN cases                 91                57
TN cases                 626               293
FP cases                 57                28
sensitivity              63.2%             49.1%
specificity              91.7%             91.3%
positive predictivity    73.2%             66.3%
negative predictivity    87.3%             83.7%
accuracy                 84.1%             80.4%

III. CONCLUSIONS
In our previous work we found that several additional P wave ECG parameters may be relevant for early prediction of AF after CABG [1]. However, handling a large number of parameters does not allow easy and simple decision making and demands formal blind studies. We evaluated five classification models on our data samples. The Boosting Trees classifier showed the best results, i.e. the highest overall sensitivity and accuracy, which might have been expected given its highest complexity. Our results are comparable with those obtained in previous studies, and we expect that the inclusion of additional parameters from the patient history will improve the classification.
ACKNOWLEDGMENT

This study was supported by the Ministry of Science, Education and Sport of the Republic of Croatia under grant no. 036-0362979-1554.
REFERENCES

1. Sovilj S, Rajsman G, Magjarević R (2006) Multiparameter prediction model for atrial fibrillation after CABG. Proc Computers in Cardiology 33:489-492.
2. Sovilj S, Rajsman G, Magjarević R (2005) Continuous multiparameter monitoring of P wave parameters after CABG using wavelet detector. Proc Computers in Cardiology 32:945-948.
3. Poli S, Barbaro V, Bartolini P, Calcagnini G, Censi F (2003) Prediction of AF from surface ECG: review of methods and algorithms. Ann Ist Super Sanità 39(2).
4. StatSoft Inc. at http://www.statsoft.com

Author: Siniša Sovilj
Institute: University of Zagreb, Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Dynamic Repolarization Assessment and Arrhythmic Risk Stratification

E. Pueyo1, M. Malik2 and P. Laguna1

1 Aragón Institute for Engineering Research (I3A), University of Zaragoza, Spain
2 Department of Cardiological Sciences, St. George's Hospital Medical School, London, UK
Abstract— A dynamic model is proposed to study the relationship between the QT and RR intervals of the surface electrocardiogram. The model accounts for the influence of a history of previous RR intervals on each QT interval, allowing that influence to vary along the recording time. For identification of the model parameters, an adaptive methodology based on the regularized Kalman filter is developed. A set of risk markers is derived from the estimated model parameters and tested on ambulatory recordings of post-myocardial infarction patients randomized to treatment with amiodarone or placebo. The results of our study show that amiodarone substantially modifies the QT interval response to heart rate changes. Furthermore, the way amiodarone acts on QT adaptation makes it possible to identify patients in whom treatment is effective and to separate them from those in whom it is not and who, consequently, are at higher risk of arrhythmic death.

Keywords— Electrocardiogram, repolarization, QT/RR, arrhythmic death.
I. INTRODUCTION

Numerous studies have pointed out the tight relationship that exists between the QT interval, which expresses the entire duration of ventricular depolarization plus repolarization, and the RR interval, which is the inverse of heart rate. It has been suggested that characteristics derived from this relationship can be used to detect or predict states associated with high arrhythmic risk [1]. The use of long-term electrocardiographic (ECG) recordings is recommended for risk stratification studies based on repolarization analysis [2]. These recordings contain sharp changes in heart rate and, consequently, QT hysteresis needs to be considered when exploring the QT/RR relationship. Hysteresis refers to the fact that the QT interval cannot follow RR interval changes instantaneously; there is a time lag in the adaptation. In [3] a method was developed to investigate QT changes after RR in ambulatory recordings of post-myocardial infarction (MI) patients of the EMIAT database who were followed up for a mean time of two years. Using the proposed method, several indices characterizing QT/RR adaptation, including the time that QT needs to follow RR changes, were evaluated. Those indices showed strong capacity to discriminate between patients at low and high risk of arrhythmic death while on therapy with amiodarone.

Although in [3] the time and profile of QT adaptation were considered specific to each patient, it was assumed that those characteristics did not vary along the recording time. In order to account for the dynamic properties of the QT/RR relationship, the present paper uses a time-variant methodology that extends the one described in [3]. Based on that method, we have investigated QT dependence on RR over the same recordings of the EMIAT database and derived new markers characterizing QT dynamicity. The potential of those markers for arrhythmic risk stratification is presented.

II. METHODS

A. Population and data measurements

The study population comprises 939 patients of the EMIAT database. Patients were survivors of acute MI randomized to treatment with amiodarone or placebo. All of them were aged less than 75 years and had a left ventricular ejection fraction (LVEF) below 40%. Meaningful clinical data were available for 866 patients, who were followed up for a mean time of 620 (±176) days. Of these patients, 404 received placebo and 462 were treated with amiodarone. There were 26 arrhythmic deaths in the placebo group and 18 in the amiodarone group.

The electrocardiographic recordings were obtained one month after randomization. All of them were 24-hour Holter ECGs with 3 recorded leads. In each lead, individual QT and RR intervals were measured using the commercial software of the Pathfinder 700 Holter system (Reynolds Medical, Hertford, UK). These measurements were checked by an expert and, where necessary, corrected manually or deleted. For each patient and lead, the number of cardiac cycles in which both the RR and QT measurements could be determined was counted. The lead with the largest number of accepted beats was selected for further analysis.
Potential outliers in the RR and QT series were removed by applying a procedure based on a Median Absolute Deviation (MAD) filter. The clean series were interpolated linearly at a sampling frequency of 1 Hz and low-pass filtered (0.05 Hz) to remove the sympathetic and parasympathetic influences of the Autonomic Nervous System. The final series are denoted by x_RR(n) and y_QT(n), respectively.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 74–77, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
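The MAD-based outlier rejection can be sketched as follows (the rejection threshold is an assumption for illustration; the paper does not specify one):

```python
import numpy as np

def mad_filter(series, n_mad=5.0):
    """Flag samples farther than `n_mad` robust standard deviations from
    the median, using the Median Absolute Deviation (MAD).
    The threshold n_mad is illustrative, not taken from the paper."""
    x = np.asarray(series, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    sigma = 1.4826 * mad                 # MAD -> std dev for Gaussian data
    keep = np.abs(x - med) <= n_mad * sigma
    return x[keep], keep
```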
B. Model Composition

The QT/RR relationship is modeled by considering a nonlinear system with memory that has x_RR(n) as its input signal and y_QT(n) as its output. The system is assumed to be composed of two blocks (see Fig. 1). The first block is a linear time-variant FIR filter of order N:

    h(n) = [h_0(n) … h_{N−1}(n)]^T ∈ ℜ^{N×1},    (1)

whose output is z_RR(n) = h^T(n)·x_RR(n), where

    x_RR(n) = [x_RR(n) x_RR(n−1) … x_RR(n−N+1)]^T.    (2)

The second block is a time-varying nonlinearity represented by a first-order polynomial:

    g(z_RR(n), a(n)) = a^T(n) z_RR(n),    (3)

with a(n) = [a_0(n) a_1(n)]^T ∈ ℜ^{2×1} and z_RR(n) = [1 z_RR(n)]^T ∈ ℜ^{2×1}.

Fig. 1 Block diagram of the dynamic model proposed in this study: the input x_RR(n) feeds the FIR filter h(n), whose output feeds the nonlinearity g(·, a(n)); additive noise v(n) is summed to give the output y_QT(n).

The order of the linear filter is defined as N = 50 based on the results reported in [3], where it is shown that the initial 40–50 RR intervals preceding each QT are the most clinically relevant. The objective of our study is to identify the described system only from knowledge of the input and output signals. In order to guarantee uniqueness in the determination of the filter weights and the polynomial coefficients, a normalization constraint on the weights is imposed: h^T(n)·1 = 1, ∀n, with 1 denoting the N×1 vector of ones. Constraints requiring the weights to be positive are also introduced so as to allow physiologically plausible interpretations. With all the above constraints, the output of the first block can be interpreted as a weighted-averaged RR measurement (optimally defined at each time n), while the output of the second block expresses the evolution of the QT interval as a function of that averaged RR measurement. Finally, the output of the global system is considered to be contaminated with additive white noise v(n), which can include delineation errors and/or inaccuracies due to the modeling assumptions:

    y_QT(n) = a^T(n) z_RR(n) + v(n).    (4)

C. System identification

State-space formulation. Denoting

    θ(n) = [a_0(n) a_1(n)h_0(n) … a_1(n)h_{N−1}(n)]^T,    (5)
    s(n) = [1 x_RR(n) … x_RR(n−N+1)]^T,    (6)

the output y_QT(n) can be expressed as

    y_QT(n) = s^T(n) θ(n) + v(n),    (7)

where s(n) is the known observation vector and θ(n) is the parameter vector to be estimated, both of dimension (N+1)×1. In addition to the observation equation (7), a second equation describing the time-varying nature of the system state is incorporated:

    θ(n+1) = θ(n) + w(n).    (8)
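Under equations (1)–(4), a noise-free QT series can be generated from an RR history by a weighted average followed by the first-order polynomial. A small forward-simulation sketch, with illustrative (not estimated) parameter values:

```python
import numpy as np

N = 50                                    # FIR filter order, as in the paper
rng = np.random.default_rng(1)

# Illustrative (not estimated) parameter values:
h = np.exp(-np.arange(N) / 10.0)          # exponential-like weight profile
h /= h.sum()                              # normalization h^T 1 = 1, weights > 0
a = np.array([0.25, 0.15])                # [a0, a1] of the first-order polynomial

rr = 0.8 + 0.05 * rng.standard_normal(500)    # synthetic RR series (seconds)
qt = np.empty(500 - N)
for n in range(N, 500):
    x = rr[n - N + 1:n + 1][::-1]         # x_RR(n): current and N-1 previous RR
    z = h @ x                             # z_RR(n) = h^T(n) x_RR(n), eqs. (1)-(2)
    qt[n - N] = a[0] + a[1] * z           # y_QT(n), eqs. (3)-(4), noise-free
```

Because z_RR(n) is a smoothed RR average, the simulated QT responds to rate changes with exactly the kind of lag the hysteresis discussion describes.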
The two equations (7) and (8) constitute a state-space representation of the system to be identified. In this representation, the noises v(n) and w(n) are assumed to be uncorrelated zero-mean white processes, with the variance of v(n) denoted by σ_v²(n) and the covariance of w(n) denoted by Q_w(n). The initial state of the system, θ(0), is assumed to be uncorrelated with v(n) and w(n). The mean μ_{0,θ} of θ(0) is defined using equation (5) and vectors μ_{0,h} and μ_{0,a} initially obtained by simple fitting procedures applied to the initial samples of the data series. The covariance matrix Π_{0,θ} of θ(0) is taken as the identity matrix.

Kalman filter with regularization. Estimation of the system vector θ(n) is performed using the Kalman filter (KF). The KF is a linear adaptive MMSE filter able to deal with nonstationary environments, which are the type of environments encountered when analyzing the QT/RR relationship over ambulatory recordings. Direct application of the KF to our problem implies that, at each time n, N+1 parameters need to be estimated from a single observation, as can be seen from equation (7). In order to make the solution more robust
against noise or imprecision that may be present at the output y_QT(n), regularization is incorporated into the problem. This means that additional a priori information on the solution is added. In our study this is performed by augmenting the observation equation (7) as follows:

    ȳ_QT(n) = S(n) θ(n) + v̄(n),    (9)

where

    ȳ_QT(n) = [y_QT(n); 0],  S(n) = [s^T(n); β(n)D(n)],  v̄(n) = [v(n); v′(n)].    (10)

In equation (10), 0 denotes the N×1 vector of zeros and β(n) is a scalar called the regularization parameter, which is selected using the so-called L-curve criterion [4]. The matrix D(n) is defined so as to force the filter weights to follow a relation close to an exponential one. That selection is based on our experience with QT dependence on RR [3] and, in any case, the strength of that type of smoothing is determined along with the state estimation. The noise v′(n) is a fictitious zero-mean noise uncorrelated with θ(n) and v(n), with covariance matrix taken as the identity. In the application of the KF, the variance σ_v²(n) of the measurement noise v(n) and the covariance Q_w(n) of the process noise w(n) are estimated following the approach proposed in [5].

Constraints. With the objective of obtaining an estimate of θ(n) satisfying the constraints described in section II.B, the constraint space Ω is built and the unconstrained solution θ̂(n) is projected onto it. Ω is defined by the condition that all elements of the estimated state vector, except the first one, have the same sign. This guarantees that the estimated weights are positive. Once the projection

    θ̂̂(n) = arg min_{θ(n)∈Ω} { (θ(n) − θ̂(n))^T (θ(n) − θ̂(n)) }    (12)
is obtained, the constrained solution is renamed θ̂(n). Estimates ĥ(n) and â(n) are then readily derived from θ̂(n).

D. Clinical markers
A number of indices are proposed for clinical comparisons. Some of them are defined from the variable L90(n), which measures the time required by the QT interval to complete 90% of its adaptation in response to RR changes. The variable L90(n) is calculated at each instant n using the weight profile ĥ(n) estimated in section II.C [5]. The proposed indices are L90,acc, L90,dec and L90,sta, defined as the mean of L90(n) in response to heart rate accelerations, decelerations and stable rate periods, respectively. In each recording, those types of periods are identified following the approach proposed in [5]. Other indices are defined from the slope of the line that fits the [y_QT(n), z_RR(n)] data in small neighborhoods around each value of z_RR(n_0). The proposed markers are s_acc, s_dec and s_sta, calculated as the mean of the estimated â_1(n) in periods of accelerating, decelerating and stable rate, respectively.
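One way to read L90(n) off an estimated weight profile is as the number of most recent RR intervals needed to accumulate 90% of the total weight (a sketch of one plausible computation; the exact definition used in [5] may differ, and conversion to minutes would use the local RR duration):

```python
import numpy as np

def l90_from_weights(h):
    """Smallest number of most recent RR intervals whose cumulative
    normalized weight reaches 90% (one plausible reading of L90)."""
    h = np.asarray(h, dtype=float)
    cum = np.cumsum(h / h.sum())          # cumulative weight, most recent first
    return int(np.searchsorted(cum, 0.90) + 1)
```

For an exponentially decaying profile with a time constant of 5 beats, for example, roughly a dozen of the most recent beats carry 90% of the weight.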
III. RESULTS AND DISCUSSION

A. Dynamic QT adaptation
Evaluation of the QT lag behind RR changes revealed substantial differences along the 24-hour recordings. On average over the recordings, the time required by the QT interval to complete 90% of its adaptation was L90,dec = 2.1 min when measured after a heart rate deceleration, L90,acc = 1.6 min after a rate acceleration, and L90,sta = 1.9 min under stable rate conditions. The slope s described in section II.D was also evaluated in episodes of decelerating, accelerating and stable heart rate; the mean slope values were s_dec = 0.152, s_acc = 0.135 and s_sta = 0.127, respectively. These results corroborate the hypothesis that QT dependence on RR is not constant along the recording time but changes in response to heart rate variations. Specifically, we found that QT adaptation after a sudden rate acceleration is more rapid than after a stable or decelerating rate period. This can be explained by the fact that the readjustment of cellular mechanisms needs to be completed faster after a heart rate acceleration so as to avoid beat overlapping.

B. Clinical risk stratification
The risk markers described in section II.D were separately assessed in the placebo and amiodarone arms. In each of the two arms, independent analysis was performed for the group of patients who suffered arrhythmic death while on therapy and the group of those who survived. Results are presented in Table I. It can be observed that survivors treated with amiodarone have prolonged QT adaptation times as compared to those treated with placebo, either when the adaptation time is measured in accelerating (L90,acc), decelerating (L90,dec) or stable (L90,sta) rate periods.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Dynamic Repolarization Assessment and Arrhythmic Risk Stratification
77
Table 1 Mean and standard error of the mean for the markers described in section II.D. Units are min for L90,acc, L90,dec and L90,sta, and n.u. for s_acc, s_dec and s_sta.

           Placebo                                              Amiodarone
           Survivors        Victims          p-value            Survivors        Victims          p-value
           Mean ± SEM       Mean ± SEM                          Mean ± SEM       Mean ± SEM
L90,acc    67.89 ± 2.77     110.55 ± 27.04   0.133              112.31 ± 6.43    75.68 ± 7.21     4·10⁻⁴
L90,dec    99.25 ± 4.18     153.50 ± 28.61   0.075              146.35 ± 7.98    111.93 ± 14.34   0.047
L90,sta    88.76 ± 6.08     190.59 ± 73.91   0.186              134.36 ± 9.08    115.42 ± 32.35   0.581
s_acc      0.059 ± 0.003    0.091 ± 0.024    0.200              0.084 ± 0.005    0.045 ± 0.011    0.006
s_dec      0.085 ± 0.003    0.122 ± 0.025    0.159              0.113 ± 0.005    0.064 ± 0.011    0.001
s_sta      0.066 ± 0.004    0.114 ± 0.029    0.120              0.095 ± 0.006    0.063 ± 0.014    0.049
On the other hand, victims on amiodarone show reduced adaptation times with respect to the values found in the placebo arm. Similar observations can be made using the variables that measure the slope of the QT/RR relationship: under amiodarone, survivors exhibit increased slope values, while victims have reduced values. These results confirm our previous observations that amiodarone modifies QT adaptation and that this modification differs between victims and survivors of arrhythmic death [3]. The advantage of the dynamic method presented in this study is that local repolarization heterogeneities can be effectively detected even if they occur only in isolated episodes of the recording where the heart rate experiences sudden changes. On the contrary, under the assumption that QT adaptation preserves constant characteristics in one and the same patient, local heterogeneities can be masked if, on average, the adaptation process is not severely altered. Arrhythmic death has usually been associated with rate exertion [6]. Our results suggest that amiodarone improves repolarization adaptation by delaying the response of the QT interval to rate accelerations. Those patients in whom amiodarone is not able to provoke such a delay are at higher risk of suffering arrhythmic death. In a similar manner, when a heart rate deceleration occurs, amiodarone increases the QT adaptation time so as to prevent excessive QT lengthening in the early phase of rate decelerations, which could trigger ventricular arrhythmias [7]. Patients in whom amiodarone is not effective show shorter adaptation times, indicating higher vulnerability to arrhythmic death.
ACKNOWLEDGMENT

This study was supported by Ministerio de Ciencia y Tecnología and FEDER under Project TEC2004-05263-C02-02, in part by the Diputación General de Aragón (DGA), Spain, through Grupos Consolidados GTC ref. T30, and by CIBER through ISCIII CB06/01/0062.
REFERENCES

1. Chevalier P, Burri H, Adeleine P, Kirkorian G, Lopez M, Leizorovicz A, Andre-Fouet X, Chapon P, Rubel P, Touboul P (2003) QT dynamicity and sudden death after myocardial infarction: results of a long-term follow-up study. J Cardiovasc Electrophysiol 14(3):227–233
2. Lux R, Kirchhof P, Cygankiewcz I, Brockmeier K (2007) Electrocardiographic markers of sudden cardiac death. J Electrocardiol 40(1 Suppl):S9–S10
3. Pueyo E, Smetana P, Caminal P, Bayés de Luna A, Malik M, Laguna P (2004) Characterization of QT interval adaptation to RR interval changes and its use as a risk-stratifier of arrhythmic mortality in amiodarone-treated survivors of acute myocardial infarction. IEEE Trans Biomed Eng 51(9):1511–1520
4. Golub GH, Hansen PC, O'Leary DP (1999) Tikhonov regularization and total least squares. SIAM J Matrix Anal Appl 21(1):185–194
5. Pueyo E (2006) Detección de heterogeneidades en la depolarización y repolarización cardiacas a partir del electrocardiograma como mejora en la predicción del riesgo frente a arritmias. PhD dissertation, University of Zaragoza, Spain
6. Cobb LA, Weaver WD (1986) Exercise: a risk for sudden death in patients with coronary heart disease. J Am Coll Cardiol 7(1):215–219 (review)
7. Singh JP, Johnston J, Sleight P, Bird R, Ryder K, Hart G (1997) Left ventricular hypertrophy in hypertensive patients is associated with abnormal rate adaptation of QT interval. J Am Coll Cardiol 29(4):778–784
Esther Pueyo
Aragon Institute for Engineering Research (I3A), University of Zaragoza
C/ Maria de Luna, 1
Zaragoza, Spain
Email: [email protected]
Effects of vagal blockade on the complexity of heart rate variability in rats

M. Baumert1, E. Nalivaiko2 and D. Abbott1

1 Centre for Biomedical Engineering, School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, Australia
2 Department of Human Physiology and Centre for Neuroscience, Flinders University, Adelaide, Australia
Abstract— In this paper we investigate the influence of vagal blockade on heart rate variability complexity measures. Nine conscious rats are injected with methyl-scopolamine bromide (50 µg/kg s.c.). We analyze 10-minute segments of beat-to-beat intervals before and after injection by standard time and frequency domain methods, compression entropy, sample entropy, Poincaré plots, detrended fluctuation analysis and symbolic dynamics. All parameter domains show changes in heart rate variability after vagal blockade, indicating a decrease in heart rate complexity. In conclusion, vagal modulation plays an important role in the generation of heart rate complexity in rats or, in other words, heart rate complexity measures are sensitive to vagal heart rate modulation.
dance with the European Community Council Directive of 24 November 1986 (86/609/EEC), and are approved by the Flinders University Animal Welfare Committee. During preliminary surgery, telemetric ECG transmitters (TA11CA-F40, Data Sciences International, USA) are implanted into the peritoneal cavity under isoflurane (1.5% in 100% oxygen) anesthesia. On the day of the experiment, ECG is recorded before and after administration of methyl-scopolamine bromide (50 µg/kg s.c., Sigma, USA), a vagal blocker that does not cross the blood-brain barrier. The analogue signal is acquired using the MacLab interface and Chart software (ADInstruments, Sydney, Australia).
Keywords— heart rate variability, complexity, vagal blockade, rat
B. Heart rate variability analysis
The heart rate exhibits beat-to-beat variations, reflecting modulations mediated by the vagal and sympathetic branches of the autonomic nervous system. Heart rate variability (HRV) analysis has shown prognostic significance in patients after acute myocardial infarction [1] and in the diagnosis of autonomic neuropathy [2]. Furthermore, it is used in various research settings such as sports [3] or obstetrics [4]. The quantification of HRV is basically a time series analysis task, and numerous approaches have been proposed, including traditional time and frequency domain measures [5], but also measures from complex systems science [4,6,7,8]. Although the sensitivity of some of these new measures has often appeared superior to that of standard time and frequency domain measures, their physiological meaning is hardly understood and their interpretation remains difficult. To assess their sensitivity to vagal heart rate modulation, we investigate the impact of vagal blockade on HRV complexity measures in a rat model.
Pre-processing: RR interval series are extracted from the ECG recording using the Chart software (ADInstruments, Sydney, Australia). Subsequently, the RR time series are scanned and manually edited. Artifacts and ectopic beats are filtered, resulting in a normal-to-normal (NN) interval time series. For further analysis we select a segment of 10 minutes length, starting 15 minutes prior to the vagal blockade injection, in order to obtain HRV baseline values. To analyze HRV during vagal blockade, we select ten-minute epochs beginning five minutes after injection.
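A minimal sketch of the artifact/ectopic filtering step is given below. Note that the paper edits the series manually; the median-based 20% rejection rule used here is our illustrative assumption, not the authors' procedure.

```python
import numpy as np

def filter_nn(rr, tol=0.2):
    """Reject RR intervals deviating more than a fraction `tol` from the
    median of the last 5 accepted intervals; returns the NN series."""
    nn = [rr[0]]
    for x in rr[1:]:
        ref = np.median(nn[-5:])             # local reference interval
        if abs(x - ref) <= tol * ref:        # keep plausible beats only
            nn.append(x)
    return np.asarray(nn, dtype=float)
```

An ectopic-like outlier (e.g. a compensatory pause) is dropped while the surrounding normal beats are retained.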
I. INTRODUCTION
II. METHODS

A. Animal preparation and experimental protocol
Fig. 1 Beat-to-beat (NN) interval time series (in ms, against time in min) in a conscious rat prior to and after injection of methyl-scopolamine. Both the NN interval and the heart rate variability decrease.
The study is performed on nine male Wistar Hooded rats weighing 250-300 g. Experiments are conducted in accor-
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 26–29, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Time domain analysis: For traditional time domain analysis of HRV we compute meanNN, the mean beat-to-beat interval of normal heart beats, its standard deviation sdNN, and the root-mean-square of successive beat-to-beat differences rmssd.
Frequency domain analysis: For frequency domain analysis of HRV we generate equidistant time series using linear interpolation at 20 Hz. Subsequently, the power spectrum is estimated using an FFT and a Blackman-Harris window. Total power (P: 0–3 Hz), very low frequency power (VLF: 0.03–0.25 Hz), low frequency power (LF: 0.25–1 Hz) as well as high frequency power (HF: 1–3 Hz) are computed.
Poincaré plots: Poincaré plots provide a visual way to study the dynamics underlying HRV. As commonly used, NN intervals are plotted against the previous ones (i.e. NNn+1 vs. NNn). Although this approach is somewhat simplified with regard to its non-linear systems theoretical intention, it is a useful tool for HRV analysis. Usually, an ellipsoid shape is fitted to the points and the short axis SD1 and long axis SD2 are taken as measures. Although the Poincaré plot itself may capture non-linear characteristics of HRV, SD1 as well as SD2 capture only linear characteristics [9].
Compression entropy: To study the short-term complexity of beat-to-beat fluctuations we recently introduced a compression-based complexity measure [8]. From the point of view of information theory, the length of the smallest algorithm that produces a string is the entropy of that string (Chaitin-Kolmogorov entropy). Although it is theoretically impossible to develop such an algorithm, data compression techniques may provide a good approximation. We apply a modified version of the LZ77 algorithm for lossless data compression introduced by Ziv and Lempel in 1977 [10]. The algorithm is based on a sliding window technique and searches for matching sequences. It keeps the w most recently encoded source symbols (sliding window of size w).
The not-yet-encoded sequence of symbols is stored in the look-ahead buffer of size b. The encoder positioned at p looks for the longest match of length n between the not-yet-encoded n-string x_p^(p+n−1) (the symbols at positions p to p+n−1) in the look-ahead buffer and the already encoded string x_(p−w+v)^(p−w+v+n−1) in the window, beginning at position v. Thus, the matching string of n symbols is simply encoded by encoding the integer numbers n and v, i.e. a pointer to the previous occurrence of this string in the sliding window. Then the position and length of the matching sequence are stored. The ratio of the compressed to the uncompressed file size, called compression entropy Hc, is used as complexity measure. We set b = 3 and w = 7 as previously published [8].
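The sliding-window parse can be sketched as follows. This is a simplified illustration: tokens are counted rather than bit-coded, so the resulting ratio only approximates the published Hc.

```python
def compression_entropy(s, w=7, b=3):
    """Greedy LZ77-style parse with sliding window w and look-ahead b.
    Hc is approximated as the number of emitted tokens divided by the
    number of input symbols (1.0 = incompressible)."""
    tokens, p = 0, 0
    while p < len(s):
        window = s[max(0, p - w):p]
        best = 0
        for v in range(len(window)):          # candidate match starts
            n = 0
            while (n < b and p + n < len(s) and v + n < len(window)
                   and window[v + n] == s[p + n]):
                n += 1
            best = max(best, n)
        tokens += 1                            # one (n, v) pair or one literal
        p += max(best, 1)
    return tokens / len(s)
```

A constant symbol sequence compresses well (low Hc), whereas a non-repetitive one does not, mirroring the drop in Hc with reduced heart rate complexity.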
Symbolic dynamics: The concept of symbolic dynamics allows a simplified description of the dynamics of a system with a limited amount of symbols. Methods based on symbolic dynamics have already been successfully applied to HRV analysis, providing more global information about the underlying system. In this study we employ the technique proposed by Voss et al. [6]. The difference between each NN interval and meanNN is transformed into an alphabet of 4 symbols {0, 1, 2, 3}. Symbols '0' and '2' reflect low deviation (decrease or increase) from the mean NN interval, whereas '1' and '3' reflect a stronger deviation (decrease or increase beyond a predefined limit). Subsequently, the symbol string is transformed into words (bins) of three successive symbols. The distribution of word types reflects some nonlinear properties of HRV (see [6] for detailed information). From these symbolic dynamics the following parameters are calculated: WPSUM13: words that contain only symbols '1' and '3', reflecting high variability; WPSUM02: words that contain only symbols '0' and '2', reflecting low variability; FORBWORD: number of word types that occur seldom, i.e. with a probability of less than 0.001. Using a modified symbol transformation, consecutive NN differences of less than 2 ms are coded as '0' and otherwise as '1'. In this way two further parameters are obtained: PLVAR2: percentage of words of length 6 that contain only '0', reflecting low variability; PHVAR2: percentage of words of length 6 that contain only '1', reflecting high variability.
Sample entropy: Sample entropy (SampEn) estimates the probability that epochs of window length m that are similar within a tolerance r remain similar at the next point. SampEn is precisely the negative natural logarithm of the conditional probability that a dataset of length N, having repeated itself within a tolerance r for m points, will also repeat itself for m + 1 points, without allowing self-matches.
In agreement with previously published studies we choose values of r = 0.25 and m = 2 [4].
Detrended fluctuation analysis: The DFA technique has been developed to analyze long-range correlations (long-memory dependence) in non-stationary data, where conventional fluctuation analyses such as power spectra and Hurst analysis cannot be reliably used [7]. The method works as follows:
1. Compute the cumulative sum c(k) = Σ_{i=1}^{k} [s(i) − s̄] of the time series s, where s̄ is the mean of s (using the concept of random-walk analysis).
2. Compute the local trend cn(k) within boxes of varying sizes n (least-squares fit).
3. Compute the root mean square of the detrended time series in dependence on box size n as
F(n) = sqrt( (1/N) Σ_{k=1}^{N} [c(k) − cn(k)]² ), where N denotes the size
of S.
4. Plot log10 F(n) against log10 n.
If the data displays long-range dependence then F(n) ~ n^α, where α is the scaling exponent. For stationary data with scale-invariant temporal organization, the Fourier power spectrum is S(f) ~ f^(−β), where the scaling exponent β is related to α by β = 2α − 1. Values of 0 < α < 0.5 are associated with anti-correlation (i.e. large and small values of the time series are likely to alternate). For Gaussian white noise α = 0.5. Values of 0.5 < α ≤ 1 indicate long-range power-law correlations (i.e. large values of the time series are likely to be followed by large values). Values 1 < α ≤ 1.5 represent stronger long-range correlations that are different from power-law behavior, with α = 1.5 for Brownian motion. We compute two scaling exponents, αLF and αVLF, related to the LF and VLF frequency ranges as defined above. In order to estimate frequency values fn, in Hertz, from the segment size n of the DFA, the segment sizes are related to the mean heart rate, i.e. fn ≈ 1/(meanNN · n) [11].

Table 1 Heart rate variability measured in nine rats before (baseline) and after injection of methyl-scopolamine (vagal blockade), displayed as medians and inter-quartile ranges, with p-values of the Wilcoxon test for paired comparisons
HRV measure | baseline              | vagal blockade        | p
meanNN      | 179 [178 – 198]       | 144 [131 – 155]       | 0.0039
sdNN        | 7.6 [5.0 – 10.1]      | 4.3 [4.0 – 5.0]       | 0.098
rmssd       | 2.5 [2.1 – 2.9]       | 0.8 [0.8 – 1.0]       | 0.0039
P           | 33.9 [15.8 – 45]      | 13.6 [8.6 – 21.7]     | 0.30
VLF         | 7.2 [6.2 – 11.9]      | 1.7 [1.1 – 2.3]       | 0.0039
LF          | 1.54 [0.74 – 2.33]    | 0.11 [0.10 – 0.18]    | 0.0039
HF          | 1.63 [0.90 – 2.33]    | 0.22 [0.21 – 0.32]    | 0.0039
Hc          | 0.41 [0.39 – 0.43]    | 0.31 [0.30 – 0.31]    | 0.0039
SD1         | 1.8 [1.5 – 2.1]       | 0.6 [0.5 – 0.7]       | 0.0039
SD2         | 10.6 [7.0 – 14.3]     | 6.1 [5.6 – 7.1]       | 0.098
WPSUM13     | 0.11 [0.04 – 0.24]    | 0.06 [0.04 – 0.18]    | 0.57
WPSUM02     | 0.73 [0.58 – 0.94]    | 0.92 [0.75 – 0.92]    | 0.30
FORBWORD    | 42 [42 – 42]          | 42 [42 – 44]          | 1
PLVAR2      | 0.15 [0.04 – 0.17]    | 0.94 [0.87 – 0.96]    | 0.0039
PHVAR2      | 0.016 [0.003 – 0.024] | 0.000 [0.000 – 0.000] | 0.0039
SampEn      | 1.00 [0.84 – 1.19]    | 0.42 [0.33 – 0.73]    | 0.074
αLF         | 0.96 [0.85 – 1.04]    | 1.01 [0.98 – 1.18]    | 0.16
αVLF        | 1.31 [1.22 – 1.34]    | 1.41 [1.38 – 1.44]    | 0.0039
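The sample entropy statistic described in the methods (r = 0.25 of the series SD, m = 2) can be sketched as follows; this is a direct, unoptimized illustration with our own function name.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.25):
    """SampEn: -ln of the conditional probability that templates matching
    for m points (Chebyshev distance <= r * SD) also match at m + 1
    points, excluding self-matches."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        tmpl = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(tmpl)):
            d = np.max(np.abs(tmpl - tmpl[i]), axis=1)  # Chebyshev distance
            c += int(np.sum(d <= tol)) - 1              # drop the self-match
        return c
    return -np.log(matches(m + 1) / matches(m))
```

A regular series yields a lower SampEn than white noise, in the same direction as the post-blockade reduction reported in Table 1.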
III. RESULTS

After injection of methyl-scopolamine, a vagal blocker, the heart rate increases, i.e. meanNN is reduced in all rats. Fig. 1 shows an example of the NN interval time series before and after injection. For statistical comparison of HRV measures we compute group medians and inter-quartile ranges and apply the Wilcoxon test. Results are summarized in Table 1. The overall HRV, as measured in the time domain via sdNN, is reduced in trend, but the reduction does not reach statistical significance. A more detailed analysis of HRV reveals drastically reduced beat-to-beat variability (rmssd). The frequency domain analysis shows that the high frequency (HF), low frequency (LF), and very low frequency (VLF) power is reduced, whereas the overall power is decreased in trend only. The Poincaré plot based analysis shows significantly reduced short-term fluctuations (SD1) and a nearly significant reduction in SD2. The compression entropy is also significantly reduced after vagal blockade. Symbolic dynamics based analysis reveals an increase of low variability patterns (PLVAR2) paralleled by a decrease in high variability patterns (PHVAR2), whereas the other parameters are not significantly changed. The sample entropy is reduced after vagal blockade, but does not reach statistical significance. Scaling analysis by means of DFA shows increased correlations after vagal blockade in the VLF range.

IV. DISCUSSION

In this paper we study HRV, particularly its complexity measures, and the changes in these measures caused by vagal blockade in conscious rats. It is well known that vagal blockade leads to an increase in heart rate paralleled by a decrease in variability in the high frequency range [12]. Little is known, however, about the effect of vagal blockade on the complexity of HRV or, in other words, about the sensitivity of complexity measures to vagal modulation of the heart rate. Our data show the typical increase in heart rate paralleled by a decrease in HRV after vagal blockade.
Besides the decrease in HF and rmssd, we find the LF oscillations to be almost vanished, suggesting that without external stress, LF fluctuations in heart rate are almost completely caused by vagal mechanisms. The VLF oscillations are also significantly reduced during vagal blockade. This broad reduction in HRV also affects most of the complexity measures, showing a decreased complexity. The compression based measure Hc assesses fluctuations of
heart rate within a short time window and is therefore sensitive to vagal modulations. Looking at the Poincaré plot, SD1 reflects beat-to-beat changes that are exclusively mediated by the vagal pathway, whereas SD2 also assesses slower dynamics that might be caused by other regulatory systems, external stimuli or the animal's movements, and consequently shows less sensitivity to vagal modulations. The opposing behavior of the symbolic dynamics measures PLVAR2 and PHVAR2, which assess beat-to-beat dynamics over 7 consecutive heart beats, also reflects the low HRV after vagal blockade. In particular, there are no longer any heart rate patterns with consecutive beat-to-beat changes higher than 2 ms (PHVAR2). Sample entropy is less sensitive to vagal blockade, since this regularity statistic is based on the overall variability of the NN time series, which is also influenced by slow trends that are not caused by vagal modulations. The DFA shows a steeper slope in the VLF range and therefore increased long-range correlations after vagal blockade, which suggests that vagal modulations cause a certain amount of irregularity in HRV and that, consequently, vagal blockade leads to a more regular behavior at larger scales. Given that heart rate modulations mediated by vagal efferents mainly reflect cardio-respiratory coupling, it could be speculated that the irregularity of respiration, including factors such as respiratory frequency, tidal volume, the ratio between inspiration and expiration, etc., is the major source of the complexity found and assessed with the above described measures. This emphasizes the necessity of recording respiration in HRV analysis studies.
REFERENCES

1. Kleiger RE, Miller JP, Bigger JT et al. (1987) Decreased heart rate variability and its association with increased mortality after acute myocardial infarction. Am J Cardiol 59:256–262
2. Vinik AI, Maser RE, Mitchel BD et al. (2003) Diabetic autonomic neuropathy. Diabetes Care 26:1553–1579
3. Baumert M, Brechtel L, Lock J et al. (2006) Heart rate variability, blood pressure variability, and baroreflex sensitivity in overtrained athletes. Clin J Sport Med 16:412–417
4. Lake DE, Richman JS, Griffin MP et al. (2002) Sample entropy analysis of neonatal heart rate variability. Am J Physiol Regul Integr Comp Physiol 283:R789–R797
5. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology (1996) Heart rate variability. Standards of measurement, physiological interpretation, and clinical use. Eur Heart J 17:354–381
6. Voss A, Kurths J, Kleiner HJ et al. (1996) The application of methods of nonlinear dynamics for the improved and predictive recognition of patients threatened by sudden cardiac death. Cardiovasc Res 31:419–433
7. Peng CK, Havlin S, Stanley HE et al. (1995) Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos 5:82–87
8. Baumert M, Baier V, Haueisen J et al. (2004) Forecasting of life threatening arrhythmias using the compression entropy of heart rate. Methods Inf Med 43:202–206
9. Brennan M, Palaniswami M, Kamen P (2001) Do existing measures of Poincaré plot geometry reflect nonlinear features of heart rate variability? IEEE Trans Biomed Eng 48:1342–1347
10. Ziv J, Lempel A (1977) A universal algorithm for sequential data compression. IEEE Trans Inf Theory 23:337–343
11. Baumert M, Brechtel LM, Lock J et al. (2006) Scaling graphs of heart rate time series in athletes demonstrating the VLF, LF and HF regions. Physiol Meas 27:N35–N39
12. Japundzic N, Grichois ML, Zitoun P et al. (1991) Spectral analysis of blood pressure and heart rate in conscious rats: effects of autonomic blockers. J Auton Nerv Syst 30:91–100

Author: Dr Mathias Baumert
V. CONCLUSIONS

Vagal blockade of heart rate control in rats results in the typical increase in heart rate paralleled by a decrease in HRV. Several complexity measures are decreased and are therefore sensitive to vagal heart rate modulation.
Institute: School of Electrical and Electronic Engineering, The University of Adelaide
City: Adelaide, SA 5005
Country: Australia
Email: [email protected]
ACKNOWLEDGMENT

This study was supported by grants from the Australian Research Council (DP0663345).
Feature extraction and selection algorithms in biomedical data classifiers based on time-frequency and principal component analysis

P. S. Kostka1, E. J. Tkacz1

1 Institute of Electronics, Div. of Microelectronics and Biotechnology, Silesian University of Technology, Gliwice, Poland
Abstract— Methods proposed for the feature extraction and selection stages of a biomedical pattern recognition system are presented. Time-frequency signal analysis based on the adaptive wavelet transform and the Principal Component Analysis (PCA) algorithm are used for extracting and selecting from the original data the input features that are most predictive for a given outcome. From the discrete fast wavelet transform coefficients, an optimal feature set based on the energy and entropy of the wavelet components is created. PCA is then used to shrink this feature group by creating the most representative parameter subset for the given problem, which is the input for the final neural classifier stage. The system was positively verified on a set of clinically classified ECG signals from control subjects and atrial fibrillation (AF) patients taken from the MIT-BIH database. The measures of specificity and sensitivity, computed for the set of 20 AF patients and 20 patients from the control group divided into learning and verifying subsets, were used to evaluate the presented pattern recognition structure. Different types of basic wavelet function for the feature extraction stage as well as supervised (multilayer perceptron) and unsupervised (self-organizing map) neural network classification units were tested to find the best system structure.
ables is not always significant. This problem may undermine the success of machine learning, which is strongly affected by data quality: redundant, noisy or unreliable information may impair the learning process. Proposed feature extraction tools almost always must depend on the specificity of the classification task, so as to be sensitive to features able to distinguish between healthy and pathological cases. The application field of the presented multi-domain feature extraction and selection is the trial of
Keywords— feature extraction, feature selection, principal component analysis, pattern recognition, wavelet transform.
I. INTRODUCTION

The pattern recognition system structure (Fig. 1), after preliminary data preparation, consists of two major stages [1]:
• Feature extraction and selection
• Classification
The pattern to be recognized is first converted to some features believed to carry the class identity of the pattern, and then the set of features is classified as one of the possible classes. To achieve high recognition accuracy, the feature extractor is required to discover salient characteristics suited for classification and the classifier is required to set class boundaries accurately in the feature space [1]. Progress made in sensor technology and data management allows researchers to gather data sets of ever increasing sizes, particularly with respect to the number of variables. However, the incremental informative content of such vari-
Fig. 1 Pattern recognition system structure.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 70–73, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 Feature extraction stage based on ECG signal analysis in time (T), frequency (F) and mixed T-F domains.
detection of atrial fibrillation (AF), which is a supraventricular tachyarrhythmia characterized by uncoordinated atrial activation with consequent deterioration of atrial mechanical function. The purpose of feature or variable selection is to eliminate irrelevant variables to enhance the generalization performance of a given learning algorithm. The selection of relevant variables may also be useful to gain some insight about the concept to be learned. Other advantages of feature selection include cost reduction of data gathering and storage and computational speedup [2]. In this paper we investigate the efficiency of criteria derived from support vector machines (SVMs) for variable selection in application to classification problems.

II. METHODS

A. Feature extraction

Before selection of the most representative features (II.B), the set of parameters characteristic for atrial fibrillation (AF) detection was composed from the time, frequency and mixed time-frequency domains (Fig. 2) [3]. This multi-domain feature set covers a wide spectrum of possible AF activity occurrence.
1. Time domain: duration of atrial activation (P wave time) (tP).
2. Frequency domain: frequency of oscillations after ventricular activity cancellation (FAF).
3. Mixed T-F domain: T-F analysis carried out by fast Mallat wavelet decomposition was used to compute the following parameters based on energy and entropy, which correspond to the measure of information included in every frequency sub-band of the jth Mallat decomposition level (Fig. 3) [4],[5],[6]:
0 -0.05
0
500
1000
1500
2000
2500
Fig. 3 Multilevel Mallat decomposition components (details d3–d7, shown with the original ECG signal) of the lead II ECG of a patient with AF (fig. 1).
• Energy of the wavelet component:

E1,j{ci,j} = (ci,j)²  ⇒  E1,j{s(n)} = Σi (ci,j)²

• The (non-normalized) Shannon entropy:

E2,j{ci,j} = −(ci,j)² log(ci,j)²  ⇒  E2,j{s(n)} = −Σi [(ci,j)² log(ci,j)²]
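Given the detail-coefficient arrays ci,j of each sub-band j (e.g. as produced by a discrete wavelet decomposition), the two feature families can be sketched as below; the coefficient arrays here are hypothetical inputs and the function name is ours.

```python
import numpy as np

def subband_features(coeffs):
    """Per sub-band energy E1,j and non-normalized Shannon entropy E2,j,
    both computed from the squared wavelet coefficients."""
    feats = []
    for c in coeffs:
        c2 = np.asarray(c, dtype=float) ** 2
        energy = c2.sum()
        nz = c2[c2 > 0]                      # skip zero coefficients (log(0))
        entropy = -(nz * np.log(nz)).sum()
        feats.append((energy, entropy))
    return feats
```

Each sub-band thus contributes one (energy, entropy) pair to the multi-domain feature vector.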
The full list of proposed T-F parameters is included in [7].

B. Feature selection

There are two approaches to the feature selection problem:
I. Feature subset selection.
II. Feature projection, which tries to find an optimal combination of the original features (a projection into a new domain) forming a smaller set of new features. Principal component analysis (PCA) [8] and projection pursuit [9] are often used feature projection methods [10].
In the presented algorithm, the most relevant features were obtained as an arbitrary number of the most principal components computed for the multi-domain features (from point II.A) characteristic for AF detection.
Principal Component Analysis: PCA realizes a linear mapping of the input data (an M-dimensional space) into a new feature space (L-dimensional). In pattern recognition tasks it enables
to eliminate uncorrelated noise and linear dependences in the data.
Every feature vector v(j) = [v1(j), v2(j), ..., vM(j)]T, where v(j) ∈ V ⊂ ℝM, taken from the dataset Γ = [v(1), ..., v(P)] consisting of P feature vectors, is mapped into a reduced, L-dimensional feature vector z(j) = [z1(j), z2(j), ..., zL(j)]T, where z(j) ∈ Z ⊂ ℝL, L < M, so as to fulfill an MMSE minimization criterion.
According to [11], each vector v(j) can be approximated by the following sum with reduced dimension:

v~(j) = Σ_{i=1}^{L} zi(j) ui + Σ_{i=L+1}^{M} bi ui

To choose the orthonormal basis vectors ui and the set of coefficients bi that achieve the best approximation for every feature vector, the sum of squared errors over the whole data set is formed:

EL = (1/2) Σ_{j=1}^{P} ||v(j) − v~(j)||² = (1/2) Σ_{j=1}^{P} Σ_{i=L+1}^{M} (zi(j) − bi)²

After some modifications [11], the minimum of the measure EL is achieved for the following form of the basis vectors:

Σv ui = λi ui

where Σv is the covariance matrix of the learning set of feature vectors v; ui are the eigenvectors and λi are the eigenvalues of Σv.
PCA with the theoretical background outlined above was carried out in practice according to the algorithm presented in Fig. 4. Neural classification of the newly selected feature vector F2 is the last stage of the pattern classifier (Fig. 1), realized by:
• a supervised multilayer perceptron (MLP);
• an unsupervised structure of Kohonen self-organizing maps (SOMs).
Fig. 4 PCA algorithm structure.
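The PCA mapping described above (covariance-matrix eigenvectors as the basis, keeping the components with the largest eigenvalues) can be sketched as follows; the function and variable names are ours.

```python
import numpy as np

def pca_project(V, L):
    """Map the P x M data matrix V onto its L principal components:
    the eigenvectors u_i of the covariance matrix with the largest
    eigenvalues lambda_i."""
    Vc = V - V.mean(axis=0)                  # center the learning set
    eigval, eigvec = np.linalg.eigh(np.cov(Vc, rowvar=False))
    order = np.argsort(eigval)[::-1][:L]     # largest eigenvalues first
    return Vc @ eigvec[:, order]             # P x L reduced features z
```

With L chosen so that the retained eigenvalues dominate, the projection discards directions that mostly carry uncorrelated noise and linear dependences, while approximately preserving the data variance.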
III. RESULTS

To verify the presented method, ECG signals containing AF episodes, taken from the MIT-BIH database, were tested. The whole data set, consisting of 40 cases with long-term ECG recordings, was divided into a learning and a verifying set. The performance of the presented pattern recognition system was evaluated based on the classical measures of classifier sensitivity and specificity. The first group of tests (Table 1) was carried out for different types of basic wavelet structure in the feature extraction stage (T-F analysis) and for both supervised (multilayer perceptron) and unsupervised (self-organizing map) neural structures used for final feature vector classification.
Fig. 5 Comparison of sensitivity and specificity for four classifier structures (SOM DM, MLP DM, SOM PCA, MLP PCA): the feature selection stage realized with the PCA-based method versus a different approach using a feature Discriminity Measure [3], which expresses the separability of a given feature.
Table 1 Comparison of AF detection results for two structures of the neural classifier part (multilayer perceptron, MLP, and Kohonen self-organizing maps, SOMs) and two types of basic wavelet function used in the feature extraction stage.

Neural network classifier structure                            Sensitivity [%]   Specificity [%]
MLP + feature extraction using db5 wavelet + PCA                     92                90
SOMs + feature extraction using db5 wavelet + PCA                    81                80
MLP + feature extraction using bior2.1 wavelet + PCA                 90                86
SOMs + feature extraction using bior2.1 wavelet + PCA                80                80
MLP classifier, no preliminary feature extraction/selection          65                60
SOMs classifier, no preliminary feature extraction/selection         70                71

The second group of verifying tests (Fig. 5) compared the use of PCA in the feature selection stage with the parameter selection algorithms presented in our previous work [3], based on computing the i-th feature Discriminity Measure, which expresses the value of its separability.

IV. CONCLUSIONS

After feature extraction from different time (T), frequency (F) and T-F domains, Principal Component Analysis was used to transform the extracted features into a new space of reduced size. The presented article focuses on PCA used to reveal the features with maximal weight in the classification process. It allowed finding the optimal feature subset among the different-domain T-F features. Atrial fibrillation detector tests gave, for the optimal structure, a classifier sensitivity S = 92% and a specificity SP = 90% for AF with different degrees of organization (atrial flutter, AF1, AF2 and AF3). PCA used in the feature selection stage gave better results than the other type of feature selection, based on the particular-feature Discriminity Measure.

To conclude, the obtained results showed that before a pattern classifier can be properly designed and effectively used, it is necessary to consider the feature extraction and data reduction problems. Feature extraction should consist of choosing those features which are most effective for preserving class separability. Principal Component Analysis appeared to be an effective tool for selecting the most representative features, improving the whole classification process. The presented classification procedure gave satisfactory results, so the described algorithm can be considered a contribution to atrial fibrillation detection at the preliminary screening examination stage.

REFERENCES

1. Duda R.O., Hart P.E. (1973) Pattern Classification and Scene Analysis. John Wiley & Sons, New York.
2. Rakotomamonjy A. (2003) Variable Selection Using SVM-based Criteria. Journal of Machine Learning Research 3:1357-1370.
3. Kostka P.S., Tkacz E.J. (2006) Hybrid Feature Vector Creation for Atrial Fibrillation Detection Improvement. Proc. of World Congress of Medical Physics and Biomedical Engineering, Seoul.
4. Mallat S. (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. on Pattern Analysis and Machine Intelligence 11(7):674-693.
5. Akay M. Time-Frequency and wavelet analysis. IEEE EMB Magazine 14(2).
6. Thakor N.V., Sherman D.L. (1994) Biomedical problems in time-frequency-scale analysis: new challenges. Proceedings of the IEEE-SP, pp. 536-539.
7. Kostka P.S., Tkacz E.J. (2005) Feature extraction optimization in neural classifier of heart rate variability signals. Proceedings of the 4th International Conference on Computer Recognition Systems CORES 2005, Advances in Soft Computing, Springer-Verlag, pp. 585-594.
8. Karhunen J., Joutsensalo J. (1995) Generalizations of principal component analysis, optimization problems, and neural networks. Neural Networks 8(4):549-562.
9. Friedman J.H., Tukey J.W. (1974) A projection pursuit algorithm for exploratory data analysis. IEEE Trans. on Computers 23:881-889.
10. Fukunaga K. (1990) Introduction to Statistical Pattern Recognition, 2nd edition. Academic Press, San Diego, CA.
11. Bishop C.M. (1996) Neural Networks for Pattern Recognition. Oxford University Press, New York.

Corresponding author:
Author: Pawel Kostka
Institute: Institute of Electronics, Silesian University of Technology
Street: Akademicka 16
City: Gliwice
Country: Poland
Email: [email protected]
Flexible Multichannel System for Bioelectrical Fields Analysis

P. Kneppo1, M. Tysler2, K. Hana1, P. Smrcka1, V. Rosik2, S. Karas2, E. Heblakova2
1 Czech Technical University in Prague, Faculty of Biomedical Engineering, Kladno, Czech Republic
2 Institute of Measurement Science, Slovak Academy of Sciences, Bratislava, Slovakia
Abstract— A powerful multichannel system for the measurement of surface biosignals is introduced and its use for cardiac electric field mapping is presented. The system can use up to 128 active electrodes; active neutralization of the patient and battery-powered operation facilitate optimal signal quality. The amplification and measuring unit, containing an ADSP-BF537 processor with embedded network connectivity, enables easy wired or wireless connection to the host personal computer. Measurement and data analysis PC software based on the .NET platform enables fast high-resolution multichannel measurement, processing and visualization of surface bioelectrical fields. Another application enables noninvasive identification of local ischemia using measured QRST integral maps and a model of the patient's torso.

Keywords— DSP based measuring system, Ethernet connectivity, biosignal measurement, body surface potential mapping, noninvasive ischemia identification
I. INTRODUCTION

Multichannel measurement of surface bioelectric potentials and computation of body surface potential (BSP) maps is a noninvasive procedure used in cardiac or brain investigations that enables detailed analysis and more precise diagnostics of cardiac or brain disorders, based on detailed high resolution registration of the surface biopotential distribution. In electrocardiography, forty years of experience with BSP have shown that the information content of maps obtained from 24 to 240 leads is greater than that of the commonly used standard 12-lead ECG or Frank VCG. Measured maps can be used for immediate analysis and diagnostics; moreover, model studies prove that BSP maps, together with information on the torso or head structure obtained from imaging techniques such as MRI, CT or ultrasound systems, can be used for more advanced diagnostic methods [6] enabling noninvasive assessment of normal or abnormal electrical sources in the underlying tissues. Besides simple surface potential maps, 2D or 3D distributions of other parameters derived from the measured potentials are often used to quantify the properties of the biopotential fields. As an example, body surface integral maps
displaying the surface distribution of integrals of surface ECG potentials over the ventricular depolarization-repolarization period (the QRST interval in the ECG) depend practically only on the action potentials and not on the ventricular activation sequence [1]. Moreover, measured changes in surface QRST integral maps, together with knowledge of the torso geometry and electrical properties, can thus serve as input data for noninvasive assessment of ischemic heart regions. In this paper, a powerful high resolution biopotential measuring system that can be used for advanced BSP-based cardiac or brain studies is presented, and its ability to noninvasively locate an ischemic heart region with changed repolarization, using differences in surface integral maps and a model of the patient torso, is demonstrated.

II. METHODS AND MATERIALS

Multichannel biopotentials were measured from the chest surface using a new high resolution BSP mapping device. Differences in integral maps of ECG potentials and a dipole model of the cardiac electric generator in a realistic inhomogeneous human torso were used to identify local ischemic changes in the myocardium.

A. Biopotential mapping device

Based on previous experience with BSP mapping devices [2,3], a battery powered biopotential mapping system ProBio-8 (Fig. 1) was developed to obtain high quality multichannel ECG recordings. The system consists of a data acquisition unit and a standard personal computer used for measurement control, data processing and analysis, diagnostic data interpretation and the human interface. The data acquisition system is modular and can be configured for up to 128 channels; one input module with two 24-pin Centronix connectors contains 16 input channels. The multi-channel amplifying and measuring unit is placed in a patient terminal box and connected over Ethernet (wired or wireless) to the host personal computer.
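As background for the QRST integral maps mentioned above, the per-lead integral is simply the ECG integrated over the QRST interval; a minimal sketch (the window indices passed in are hypothetical, detection of the interval bounds is not shown):

```python
import numpy as np

def qrst_integral(ecg_lead, qrs_onset, t_end, fs):
    """Integrate one ECG lead over the QRST interval.

    ecg_lead : samples of a single surface lead in mV
    qrs_onset, t_end : QRST interval bounds as sample indices
    fs : sampling frequency in Hz
    Returns the integral in mV*ms, the unit used for the maps.
    """
    dt_ms = 1000.0 / fs                   # sample spacing in milliseconds
    window = ecg_lead[qrs_onset:t_end]
    return float(np.sum(window) * dt_ms)  # rectangle-rule integration
```

Computing this value for every lead yields one map sample per electrode position.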
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 86–89, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1. Block scheme of the biopotential mapping device

Fig. 2. Active electrodes, formed by a disposable Ag-AgCl electrode and an active adapter with a snap connection and a flexible multi-wire cable, are used as biopotential sensors.
Small geometric dimensions and metallic shielding of the patient terminal minimize the capacitive coupling with the environment in the examination room and allow a high quality of the measured signals. Biosignals are sensed by disposable Ag-AgCl electrodes with active adapters connected to the electrode snaps (Fig. 2). The low output impedance of the active electrodes reduces possible disturbing signals induced in the electrode cables. Signals are measured relative to a common mode sense (CMS) electrode that can be attached to the patient so that the interference from the common mode is minimal. Active neutralization of the patient using a driven electrode (DRL) is also employed. Besides further reduction of the common mode voltage, the current limiting resistor in the DRL protects the patient against defects in the amplifiers. In the worst case that one of the active electrodes would
break down and become shorted to the power supply voltage, the function of the DRL results in a maximum error current of 50 μA, which complies with the value specified for the IEC-601 CF type isolation used in Europe (the value of the resistor can be changed to achieve a maximum error current of 10 μA and to comply with the US standards). Additional patient protection has been implemented in the data acquisition module. It prevents dangerous currents from flowing through the patient body, despite the DRL current limiter, in the very unlikely case that two electrodes would fail one after the other and be connected to opposite power supply rails: this protection only enables the power supply if no errors are detected. Each measuring channel is equipped with a DC-coupled ECG amplifier with a fixed gain of 100 and a 22-bit Δ-Σ A/D converter. A sampling frequency of up to 2000 Hz can be selected. The data acquisition system is equipped with a 16/32-bit 600 MHz high performance Analog Devices ADSP-BF537 processor with embedded network connectivity. The processor streams the sampled serial data from selected channels to the host PC, and its DSP properties enable additional real-time biosignal processing. Selection of measured channels, real-time operations and proper formatting of acquired data with several possible byte lengths are controlled by commands received from the host PC. Communication with the host PC over Ethernet provides easy data transfer to or from the host with data rates of up to 10/100 Mbps (wired) or 54 Mbps (wireless). The patient terminal is powered by a rechargeable Li-Ion battery module. Due to the advanced power management controlled by the on-board controller, the system can work up to one whole working day before the battery has to be recharged. The application software, developed on the Microsoft .NET platform, runs under Windows XP.
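Given the fixed gain of 100 and the 22-bit converter described above, raw A/D codes can be converted to input-referred microvolts; a small illustrative sketch (the reference voltage and bipolar coding are our assumptions, not the device's documented data format):

```python
def code_to_microvolts(code, vref=2.5, bits=22, gain=100):
    """Convert a signed A/D output code to input-referred microvolts.

    Assumes a bipolar converter spanning +/- vref (an assumption) and
    the fixed amplifier gain of 100 stated in the text.
    """
    lsb = 2 * vref / (1 << bits)     # volts per least significant bit
    return code * lsb / gain * 1e6   # undo the gain, express in microvolts
```

With these assumed values, one LSB corresponds to roughly 0.012 μV at the electrode, comfortably below ECG noise floors.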
The real-time ECG data acquisition program makes it possible to set the working mode of the data acquisition subsystem, to check the electrode contacts and to read and store the stream of measured data. Biosignals from selected channels can simultaneously be processed on-line and monitored on the PC screen. Data acquisition is followed by modular off-line application-specific data processing written in several environments (Matlab, C#). It includes signal processing with time and frequency analysis, 2D or 3D biopotential mapping and consequent biophysical evaluation and medical interpretation of the measured data.

B. Noninvasive assessment of cardiac ischemia

As an example of using the measured ECG data, a noninvasive method for assessment of local ischemic regions is
presented. It interprets the measured surface ECG maps and evaluates changes in QRST integrals caused by local ischemia. These changes can be interpreted as being caused by sources originating in the ischemic heart region due to changed action potentials during the repolarization phase. For a small heart region, such sources can be represented by a single equivalent current dipole (ECD) located at the centre of the region, and a fixed dipole model located at one of the predefined positions within the ventricular myocardium can be used. The equivalent dipole can be inversely estimated as

Mi = Ti+ Φ,  for i = 1, 2, ..., n  (1)

where Mi is the estimated integral of the dipole moment of a dipole located at the i-th predefined position in the myocardium, Φ are the differences of QRST integrals in m measured surface points, and Ti+ is the pseudo-inverse of the transfer matrix between the i-th dipole and the potentials in the m surface points, which depends only on the geometry and electrical properties of the torso. The minimal rms difference between the measured QRST integrals and the integrals produced by the dipoles estimated at each of the i positions can be used as the criterion for finding the best ECD representing the ischemic changes in the QRST integrals.

C. Experimental data

Using the presented measuring device, the possibility to assess the heart region with changed repolarization was tested on a group of patients after myocardial infarction (8 men and 3 women, age 45−69) who later underwent successful percutaneous coronary intervention (PCI) on one of the main coronary arteries (8 on the left anterior descending artery, 1 on the right circumflex artery and 2 on the right coronary artery). ECG signals were measured from 32 leads according to Lux before and after the PCI. Surface QRST integral maps were computed from both measurements and difference maps were calculated. It was hypothesized that this difference appeared because some ischemic regions around the infarcted tissue disappeared after the treatment due to the revascularization. To assess the approximate position of the changed region, an inhomogeneous torso model with lungs and heart (Fig. 3) was used for all patients, and the ECD representing the region with changed repolarization was located by dividing the ventricular volume into 28 segments and supposing possible positions of the ECD at their gravity centers (see Fig. 5).

III. RESULTS

Measured and calculated surface distributions can be viewed as numerical tables or 2D and 3D maps; animation of maps as well as display of 3D maps at a desired viewing angle is possible (Fig. 4). In the patients measured before and after PCI there were noticeable changes in the QRST integral maps. In 8 of them the measured difference integral maps could be reasonably represented by maps generated by a single current dipole. In 6 cases the relative rms error between the measured and the approximated difference integral map was < 35%; in another 2 patients it was <
Fig. 3. Inhomogeneous torso model with lungs and heart used to locate the ischemic lesions.

Fig. 4. Two examples of body surface distributions displayed by the measuring device. Top: 2D QRST integral map of a patient with an old inferior myocardial infarction; the step in the map is in [mV.ms]. Bottom: a 3D view of a body surface potential map in a patient with an old anterior MI; the step in the map is in [mV].
56%. In the remaining 3 patients the rms error of the approximation was higher and they were excluded from further analysis. Despite the use of a common torso model, in 7 of the 8 analyzed patients the positions of the estimated ECD matched the region supplied by the treated vessel, or at least they were located near the anterior or postero-lateral wall of the left ventricle. The dipole moments were directed inward into the ventricle and suggested changes near the endocardial surface. In 1 patient after PCI on the RCA the ECD was incorrectly located in the mid-anterior left ventricular wall. Fig. 5 shows an example of measured integral maps before and after the PCI on the left anterior descending artery (LAD) and the corresponding difference integral map. The estimated ECD location on the anterior left ventricular wall, as identified using an analytical heart model, is shown in the right panel.
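The position scan with the pseudo-inverse estimate of Eq. (1) can be sketched numerically; a minimal illustration with synthetic transfer matrices (function name, matrix sizes and data are ours, not the authors'):

```python
import numpy as np

def best_ecd(transfer_mats, phi):
    """Scan predefined dipole positions and pick the one whose estimated
    dipole moment best reproduces the measured QRST integral differences.

    transfer_mats : list of (m, 3) transfer matrices T_i (m surface points,
                    3 dipole moment components), one per candidate position
    phi           : (m,) vector of QRST integral differences
    Returns (best position index, estimated moment, rms error at that position).
    """
    best = (None, None, np.inf)
    for i, T in enumerate(transfer_mats):
        M = np.linalg.pinv(T) @ phi         # Eq. (1): M_i = T_i^+ Phi
        resid = phi - T @ M                 # map not explained by this dipole
        rms = np.sqrt(np.mean(resid ** 2))  # rms criterion from the text
        if rms < best[2]:
            best = (i, M, rms)
    return best
```

On synthetic data generated from one of the candidate positions, the scan recovers that position and its dipole moment exactly.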
IV. DISCUSSION AND CONCLUSIONS

Experience with ischemic changes in the surface potential distribution shows that they are relatively small compared with normal inter-individual fluctuations and can hardly be detected as departures from mean normal maps [3]. Therefore the possibility to identify ischemic heart regions using model-based interpretation of differences in individual QRST integral maps was tested on simulated data [4]. The simulations showed that changes in QRST integral maps were greater than the observed intra-individual variability, which, in principle, allows their identification. Using an inhomogeneous torso model, small ischemic lesions could be located with a mean error of 9±4 mm; in larger or transmural lesions the error was 17±14 mm, which suggests that a more sophisticated model should be used. ECD localization from 62 or 192 leads provided slightly better results than from 32 ECG leads, so more than 32 leads should be used in the future. Our experiments indicate that the ProBio-8 device with its application software can be a useful tool for noninvasive cardiac diagnostics. Using model-based interpretation of differences in QRST integral maps, it enables identification of small ischemic heart regions with changed repolarization. Using this method, acceptable localization of the equivalent current dipole representing the revascularized region was obtained in 7 of 11 MI patients after PCI.

ACKNOWLEDGMENT

This work was supported by grants 2/4089/24 from the VEGA grant agency, APVV-20-059005 from the Slovak APVV agency and MSM 6840770012 from the Ministry of Education, Youth and Sports of the Czech Republic.
REFERENCES

1. Trudel MC et al. (2004) Simulation of QRST Integral Maps with a Membrane-Based Computer Heart Model Employing Parallel Processing. IEEE Trans on BME 51:1319-1329.
2. Rosik V, Tysler M, Jurko S, Raso R, Turzova M (2002) Cardio 7 Portable System for High Resolution ECG Mapping. Studies in Health Technology and Informatics 90:41-46.
3. Tysler M, Rosik V, Kneppo P (2006) Multichannel ECG Measurement for Noninvasive Identification of Heart Regions with Changed Repolarization. Proceedings of the XVIII IMEKO World Congress, Rio de Janeiro, Brazil (in press).
4. Filipova S, Tysler M, Turzova M, Rosik V (2003) Reference ECG-mapping etalons improve the diagnostic accuracy of myocardial ischemia according to departure isointegral surface maps. Int. J. of Bioelectromagnetism 5(1):369-370.
5. Tysler M, Turzova M, Svehlikova J, Heblakova E, Filipova S (2005) Noninvasive detection of ischemic regions in the heart. IFMBE European Conference on BME, IFMBE Proceedings 11:2207.
6. Zivcak J, Hudak R (2001) Biomechanizmy. Grafotlac, Presov.

Fig. 5. Left: QRST integral maps in a 69-year-old female patient after MI (95% stenosis of the RIA) before and after the PCI on the LAD (top, center left) and the corresponding difference map (bottom left). The step in the maps is 12 mV.ms; the zero isointegral line is marked in black. Right: realistic heart model with the marked position of the calculated equivalent current dipole representing the region with repolarization changes caused by the PCI.

Address of the corresponding author:
Author: Prof. Peter Kneppo
Institute: Faculty of Biomedical Engineering, Czech Technical University in Prague
Street: Nam. Sitna 3105
City: 272 01 Kladno
Country: Czech Republic
Email: [email protected]
FPGA-based System for ECG Beat Detection and Classification

M. Cvikl1 and A. Zemva2
1 Iskra Sistemi, d.d. / Power System Protection and Control, Ljubljana, Slovenia
2 Faculty of Electrical Engineering / Department of Electronics, Laboratory for Integrated Circuit Design, Ljubljana, Slovenia
Abstract— We present a Field Programmable Gate Array-based system for single-lead electrocardiogram signal processing which performs beat detection and classification into normal and ventricular beats. Geometrical properties of a phase-space portrait of an ECG signal are used for QRS complex detection, while classification is done with a modified classification algorithm that is part of the Open Source ECG Analysis Software. The chosen Field Programmable Gate Array has an embedded PowerPC processor and is very suitable for mixed hardware and software designs. Beat detection is implemented in hardware and the classification is executed on the embedded PowerPC 405 core. The algorithm was developed on the MIT-BIH Arrhythmia Database resampled to 250 samples per second. A sensitivity of 99.80% and a positive predictivity of 99.84% were achieved for QRS complex detection, and a sensitivity of 92.59% and a positive predictivity of 95.55% were achieved for the identification of premature ventricular complexes. A comparison of processing speed between a personal computer and the embedded system shows that a personal computer running at an 18-times faster clock speed processes the data only six times faster.

Keywords— ECG, detection, classification, FPGA

I. INTRODUCTION

The electrocardiogram (ECG) is an important clinical tool for the diagnosis of heart diseases. Reliable real-time detection and classification of heartbeats is an important task in the detection of cardiac arrhythmias, which can save one's life but requires a lot of processing speed. In this paper we describe a system with high processing speed which detects heartbeats and classifies them as either normal or ventricular. The entire system is built on a Field Programmable Gate Array (FPGA) with an external Double Data Rate (DDR) memory used for testing purposes. QRS complex detection and beat classification performance on the MIT-BIH Arrhythmia Database [1] is presented and its processing speed is compared with that of a personal computer (PC).

II. THE ALGORITHM

The algorithm is partitioned into a QRS complex detection algorithm and a classification algorithm and is designed for block processing of an ECG signal sampled at 250 Hz.

A. QRS Complex Detection Algorithm

The algorithm is suitable for hardware (HW) implementation and is designed to process blocks of the ECG signal [2]. QRS complex detection is based on geometrical properties of the two-dimensional (2D) phase-space portrait of the ECG signal. QRS complexes are found based on the size of the polygon bounded by a number of consecutive data points. To minimize the effects of noise and artifacts on QRS complex detection, a prefiltered ECG signal is used for phase-space portrait generation. A nonrecursive band-pass filter with integer multipliers [3] was chosen that emphasizes QRS complexes, attenuates high-frequency and low-frequency components and has zero gain at 50 Hz and its multiples at a 250 Hz sampling rate. An input ECG signal and the filtered signal are shown in Figure 1(a) and Figure 1(b).

Fig. 1 ECG signal processing stages: (a) input ECG signal, (b) band-pass filtered ECG signal, (c) phase-space portrait, (d) calculated areas
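The cited filter design [3] is not reproduced here, but the stated properties (integer coefficients, zero gain at 50 Hz and its multiples for a 250 Hz sampling rate) can be illustrated with a comb-style sketch of our own, not the authors' filter:

```python
import numpy as np

def comb_bandpass(x):
    """Integer-coefficient comb filter illustration for fs = 250 Hz.

    The (1 - z^-5) factor places zeros at 0, 50 and 100 Hz, so mains
    interference and DC drift are nulled; a short moving sum
    (1 + z^-1 + z^-2) attenuates high frequencies. Illustrative only.
    """
    d = x - np.concatenate((np.zeros(5), x[:-5]))       # 1 - z^-5
    return (d
            + np.concatenate((np.zeros(1), d[:-1]))
            + np.concatenate((np.zeros(2), d[:-2])))    # 1 + z^-1 + z^-2
```

A 50 Hz sine sampled at 250 Hz has a period of exactly 5 samples, so the comb term cancels it sample for sample.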
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 66–69, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The 2D phase-space portrait is created using the method of delays, where the same data is plotted in each dimension, but with a certain time delay (τ) between them. A phase-space portrait of a band-pass filtered signal using a time delay of 20 ms is shown in Figure 1(c). Two larger polygons created by the two QRS complexes and two smaller polygons created by P waves can be observed. The size of the area formed by eight data points is used as a detection function for locating QRS complexes. For this purpose, the coordinates of each data point in the 2D space are written as:
x[nT] = ECG[nT],  y[nT] = ECG[(n − τ)T]  (1)

The area is calculated using the plane geometry equation for a planar non-self-intersecting polygon:

Area = (1/2) (|x1 x2; y1 y2| + |x2 x3; y2 y3| + ... + |x7 x8; y7 y8|) = (1/2) Σ_{k=1}^{7} (x_k·y_{k+1} − x_{k+1}·y_k)  (2)

where |x_k x_{k+1}; y_k y_{k+1}| denotes a 2×2 determinant.
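As an illustration of Eqs. (1) and (2), the detection function over a sliding window of eight phase-space points can be sketched as follows (a simplified software sketch of our own, not the authors' HW implementation):

```python
import numpy as np

def detection_function(ecg, tau, n_points=8):
    """Phase-space area detection function from Eq. (2).

    ecg      : 1-D band-pass filtered ECG samples
    tau      : delay in samples used for the phase-space portrait
    n_points : number of consecutive points bounding the polygon
    Returns the area sequence; large values mark QRS complexes.
    """
    x = ecg[tau:]    # x[n] = ECG[n]
    y = ecg[:-tau]   # y[n] = ECG[n - tau]
    n = len(x) - n_points + 1
    areas = np.empty(n)
    for s in range(n):
        xs, ys = x[s:s + n_points], y[s:s + n_points]
        # shoelace sum over the 7 consecutive point pairs of Eq. (2)
        areas[s] = 0.5 * np.sum(xs[:-1] * ys[1:] - xs[1:] * ys[:-1])
    return np.abs(areas)
```

A flat signal yields zero area everywhere, while a QRS-like deflection traces a loop in phase space and produces a sharp local maximum.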
The detection function obtained using the above equation for the ECG signal from Figure 1(a) is shown in Figure 1(d), where the two QRS complexes are easily spotted and the time of their occurrence is easily determined. QRS complexes are detected by comparing the detection function to a detection threshold level that is calculated in the next step. The level is set to four times the average value of the detection function in the current block. All peaks that exceed this level are treated as candidates for being created by QRS complexes. Once the peaks are located, the following two sets of criteria are used to improve QRS complex detection:
1. Search-back rule: if the distance between two consecutive peaks is greater than 150% of the previous R-to-R interval, an additional search is performed to discover whether any peaks with lower amplitude were missed in the initial search. This search is performed with the QRS complex detection threshold level lowered to 50% of its initial value.
2. Peak distance and peak height: if consecutive peaks are less than 200 ms apart, then the highest of the peaks is considered to originate from a QRS complex. If the distance between peaks is more than 200 ms, both peaks are considered to originate from QRS complexes.
To correctly process blocks of the ECG signal and to minimize the risk of missing QRS complexes that might be located at the end of a block, an additional mechanism is used for block overlapping. A block length of 700 samples is used for processing, and the mechanism sets the last detected QRS complex in the current block as the starting point of the next block. If no QRS complexes are detected,
consecutive blocks are overlapped by 250 samples, so the last part of the block is processed again in the next block.

B. Classification Algorithm

The classification algorithm is taken from the Open Source ECG Analysis software (OSEA) [4]. Characteristics like QRS complex width, R-to-R interval, noise presence and matching to previously classified beat types are combined with a set of rules and used to classify the beats as either normal or ventricular (PVC). Due to a different approach to beat detection from the one used in the OSEA, the classification algorithm had to be modified. The one-second frame of the ECG signal originally used for classification cannot be provided for every detected beat, so the frame is reduced to half of that length. For similar reasons, the low frequency noise measurements originally used in the algorithm are excluded. Additionally, the embedded PowerPC processor (PPC) does not support floating point operations in HW, therefore only integer arithmetic is used in the algorithm; all floating point variables and constants are multiplied by 256, and all functions that perform calculations with such variables are altered in such a way that the calculation error caused by type translation is minimized, i.e. assuring that multiplications are performed before divisions, especially if the result of a multiplication is used as a denominator. These modifications somewhat degraded the classification performance, yet this was mitigated by changing some other parts of the classification algorithm; the intervals checked by the RRmatch and RRshort2 functions for rhythm classification were slightly extended, while the estimation of high frequency noise was modified to use a shorter interval.

III. SYSTEM DESIGN

A development system with a Virtex-II PRO XC2VP30FF896-7 [5] Field Programmable Gate Array (FPGA) was used for the system implementation.
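The integer-only arithmetic adopted for the classification algorithm (all floating-point values scaled by 256, multiplications performed before divisions) can be illustrated with a toy Q8 fixed-point example; the function and values are ours, not OSEA's actual rules:

```python
SCALE = 256  # Q8: every former floating-point value v is stored as round(v * 256)

def ratio_scaled(a_scaled, b_scaled):
    """Divide two Q8 values, returning a Q8 result.

    Multiplying by SCALE *before* the integer division keeps the eight
    fractional bits; dividing first would truncate them away.
    """
    return (a_scaled * SCALE) // b_scaled
```

For 0.75 / 0.5 the scaled computation returns 384, i.e. 1.5 in Q8, whereas dividing first would collapse the result to a whole number.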
The detection algorithm is designed in such a way that band-pass filtering, detection function calculation and QRS complex detection threshold level calculation can be formed into a pipeline. FPGAs make it possible for the elements of such a pipeline to be executed independently and simultaneously. Moreover, the 14 multiplications per data point required for the area calculation can also be executed in parallel, which additionally increases the processing speed. For these reasons the QRS complex detection algorithm was implemented in HW, while the classification was assigned to one of the embedded PPCs.
The system architecture is shown in Figure 2. PPC program and data are stored in a Block RAM (BRAM) connected to a high-performance 64-bit Processor Local Bus (PLB). The DDR memory is also connected to the PLB bus and is used for ECG signal storage and also as storage for beat detection and classification results. Peripheral devices like Universal Asynchronous Receiver-Transmitter (UART) and SysACE controller are connected to a low-performance 32-bit On-chip Peripheral Bus (OPB) and the PPC can access them through a PLB to OPB bridge. The UART interface provides communication with a personal computer (PC). Information about processing speed, QRS complex detection and classification, as well as information used for debugging are sent through it. The SysACE controller enables read/write operations from or to a CompactFlash (CF) device. Currently, the system is designed so that a CF memory card serves as a source of ECG data. For easier evaluation and comparison of processing speed, every ECG record used for beat detection and classification performance testing is first copied from the CF to the DDR memory. In the QRS complex detection phase the PPC copies a block of data from the DDR memory to a dual-port BRAM connected to the Data-Side On-Chip Memory (DSOCM) bus. The PPC then starts the beat detection process through the user logic control module connected on the OPB. The detection logic reads the ECG data from the second data port, executes beat detection and writes the results back to the BRAM for the PPC to access them. Additional information about task completion and the number of detected beats, if any, is passed to the user logic control module.
After task completion is detected, the detected beats are classified and the results are written back to the DDR memory.

IV. RESULTS AND DISCUSSION

Two types of performance tests were made for our algorithm. The QRS complex detection and classification performance of our algorithm was evaluated on a PC and on the embedded system, and the results were compared with those of the OSEA beat detection and classification algorithm. Several versions of our algorithm were tested to see how the performance is influenced by parameter reductions or optimizations. ECG data from all 48 records of the MIT-BIH Arrhythmia Database was resampled to 250 samples per second and used for the performance evaluation of all algorithms and their implementations. The QRS complex detection sensitivity (Q Se), positive predictivity (Q +P), PVC sensitivity (V Se) and PVC positive predictivity (V +P) of the algorithms were evaluated using the ecgeval tool available in the WFDB library [6]. Table 1 shows the QRS complex detection and classification performance results as reported by the ecgeval tool. The algorithms DetClas PC1 and DetClas PC2 are combinations of our QRS complex detection algorithm and different versions of the beat classification algorithm. Both algorithms were tested on a PC. The DetClas PC1 algorithm includes an unmodified classification algorithm, which has access to all data that is originally needed for beat classification. Better QRS complex detection and somewhat better classification than with the OSEA algorithm can be observed. The DetClas PC2 algorithm includes a modified version of the beat classification algorithm, where the classification algorithm can only access the ECG data in the processed block. A slight improvement in PVC sensitivity, but a decrease in predictivity can be observed. DetClas Embedded presents the results of the algorithm that was implemented in the FPGA. QRS complex
Fig. 2 System architecture

Table 1 QRS complex detection and classification performance on the MIT-BIH Arrhythmia Database
Algorithm          Q Se [%]   Q +P [%]   V Se [%]   V +P [%]
OSEA               99.76      99.80      91.87      95.93
DetClas PC1        99.84      99.81      91.85      96.27
DetClas PC2        99.84      99.81      92.05      95.19
DetClas Embedded   99.80      99.84      92.59      95.55
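The four statistics in Table 1 follow the standard beat-by-beat definitions. A minimal sketch of those definitions (this is an illustration only, not the WFDB ecgeval implementation; the match counts are hypothetical):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Se = TP / (TP + FN): fraction of annotated beats that were detected."""
    return tp / (tp + fn)

def positive_predictivity(tp: int, fp: int) -> float:
    """+P = TP / (TP + FP): fraction of detections matching annotated beats."""
    return tp / (tp + fp)

# Q Se / Q +P count all beats; V Se / V +P count only the PVCs.
```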
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
FPGA-based System for ECG Beat Detection and Classification
Table 2 Processing speed for two MIT-BIH records
Record No.   PC [s]   Embedded [s]   Time Factor
117          0.344    1.057          3.07
208          0.594    5.850          9.85
detection performance differs from that of the algorithm implemented in SW, which also causes changes in classification performance. The HW implementation of the QRS complex detection algorithm is not an exact copy of the SW algorithm: the area calculation part of the QRS complex detection algorithm is partitioned into several parts to achieve higher processing speed. As a result, a different signal delay is introduced, which affects the content of a data block and thus causes a difference in the calculated QRS complex detection levels. A processing speed comparison test was also performed, in which the time needed to process one ECG record was measured. The processing speed of the embedded system, with the operating frequency of all components set at 100 MHz, was compared with the processing speed of an AMD Athlon 2500+ processor with the operating frequency set at 1.83 GHz and the operating frequency of the DDR memory set at 166 MHz. Both systems fetched the ECG data from the DDR memory and returned the beat detection and classification results to the DDR memory. A time factor was calculated as the ratio between the processing times of the embedded system and of the PC. The test results for two records are shown in Table 2. Among the 48 records, these two exhibit the largest difference between the time factors. The reason for this lies in the functions used for calculation of beat matching in the classification algorithm; in contrast to record 117, these functions are heavily utilized in record 208, which favors the AMD Athlon processor. Even though the AMD Athlon runs at an 18-times higher clock frequency and its DDR memory interface at a 1.6-times higher frequency, the embedded system processed the same data only 9.85-times slower in the worst case and 5.95-times slower on average. Considering that the operating frequency of the PPC can be raised threefold and the operating frequency of the DSOCM interface by 50%, we expect to reduce this difference.
Additionally, several tasks performed by the PPC could be implemented in HW and performed in parallel, including the BeatCompare function that consumes most of the processing time.
V. CONCLUSION

This paper presents an FPGA-based system for beat detection and classification in electrocardiograms, with the QRS complex detection algorithm implemented in hardware and beat classification performed on an embedded PowerPC processor. Beat detection and classification performance was evaluated on the MIT-BIH Arrhythmia Database, where good results were achieved. Although the processing speed is lower than that of a personal computer, it is still high and can be further improved in several ways. Due to FPGA reconfigurability, our system can be upgraded with additional communication and processing blocks and therefore used in various low-cost systems where high-speed processing of the ECG signal is required.
ACKNOWLEDGEMENT

This work is supported by the Ministry of Higher Education, Science and Technology of the Republic of Slovenia under grant 3211-05-000588.
REFERENCES
1. Mark R G, Schluter P S, Moody G B et al. (1982) An annotated ECG database for evaluating arrhythmia detectors. Proc. 4th Annu. Conf. IEEE EMBS on Frontiers of Engineering in Health Care, IEEE Computer Society Press, pp 205–210
2. Cvikl M, Jager F, Žemva A (2007) Hardware implementation of a modified delay-coordinate mapping-based QRS complex detection algorithm. EURASIP Journal on Advances in Signal Processing, in press
3. Jager F (1982) QRS Complex Detection in Electrocardiogram. M.Sc. Thesis, University of Ljubljana, Ljubljana
4. Hamilton P (2002) Open Source ECG Analysis. Computers in Cardiology, vol 29, pp 101–104
5. Xilinx XUP Virtex-II Pro Development System at http://www.xilinx.com/univ/xupv2p.html
6. Moody G B (2005) WFDB Programmer's Guide (Tenth Edition) at http://www.physionet.org/physiotools/wpg/wpg.htm

Author: Andrej Zemva
Institute: Faculty of Electrical Engineering
Street: Trzaska ulica 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Fractal analysis of heart rate variability in COPD patients

G. D’Addio1, A. Accardo2, G. Corbi3, N. Ferrara1, F. Rengo1
1 S. Maugeri Foundation, IRCCS, Rehabilitation Institute of Telese-Campoli, Italy
2 DEEI, University of Trieste, Italy
3 Dpt of Health Sciences, University of Molise, Italy
Abstract— Non-linear parameters obtained from HRV analysis have recently been recognized to provide valuable information for the physiological interpretation of heart rate fluctuation. Among the numerous non-linear parameters related to the fractal behaviour of the HRV signal, two classes have gained wide interest in recent years: the beta exponent, based on the 1/f-like relationship of the spectral power, and the fractal dimension (FD). In order to evaluate whether the FD is capable of discriminating HRV impairment between COPD patients and normal subjects, 16 pathological and 10 healthy subjects were studied. All subjects underwent 24-hour Holter recording analysed by fractal and 1/f-like techniques. Unlike the methods usually used in the literature to evaluate the fractal dimension, in this work the FD was extracted using Higuchi's algorithm, which permits calculating the parameter directly from the HRV sequences in the time domain. Results show that fractal analysis contains relevant information related to different HRV dynamics that permits separating normal subjects from COPD patients.

Keywords— HRV, COPD patients, fractal analysis
I. INTRODUCTION

Respiratory arrhythmia, which is the cyclical decrease and increase in heart period synchronous with breathing, disappearing after pharmacological blockade of autonomic ganglion transmission [1], represents the most recognizable evidence of a functional link between neural cardiac and respiratory controls [2]. The analysis of heart rate variability (HRV) is a well-recognized tool in the investigation of the autonomic control of the heart [3]. Limited data, however, are available on the use of HRV in the assessment of autonomic imbalance in patients with chronic obstructive pulmonary disease (COPD). The respiratory modulation of the RR interval can be explored with various methodologies, and it has been shown that respiratory arrhythmia is modulated by both the amplitude and the frequency of respiration [4,5] and is decreased in clinical conditions such as myocardial infarction [6] and congestive heart failure [7]. The little that is known about cardiac autonomic function in patients with COPD suggests that it is adversely affected, as reflected in reduced HRV [8].
Thus, Holter recordings in a younger COPD patient population without coronary artery disease would help clarify the effect of COPD by itself on cardiac rate, rhythm, and autonomic tone. It is also possible that measurement of HRV will have prognostic value in patients with COPD. Decreased HRV associated with altered cardiac autonomic modulation is associated with an increased risk of cardiac events in clinically disease-free subjects, even after adjusting for known risk factors [9]. Decreased HRV is also associated with increased mortality in patients after myocardial infarction and in many other patient populations [6]. Changes in respiratory patterns and lung volumes may also influence the autonomic outflows through complex reflex adjustments, mediated by both vagal and sympathetic efferent activity [10]. Little is known about the effects on HRV of long-term changes in respiratory patterns and volumes, such as those occurring in COPD. Among the non-linear methods proposed so far to measure the fractal behaviour of the HRV signal, the one based on the beta exponent of the 1/f-like relationship of the spectral power [11-14] and the one based on the fractal dimension (FD) have gained wide interest in recent years. The latter has traditionally been approached following chaos theory, with the aim of modelling the attractor extracted from HRV sequences [15], and the FD parameter has usually been estimated from the slope of the 1/f relationship [16]. However, the FD can also be extracted directly from HRV sequences by means of many methods [17,18]. In this work we followed this approach, using the FD estimated by the Higuchi algorithm [17]. This method allows a better fractal estimation, eliminating the errors due to the indirect estimation of FD from the spectral power. The aim of this study was to assess whether Higuchi's FD is capable of discriminating COPD patients from normal subjects. The results were compared with those obtained from the classical beta exponent.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 78–81, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. MATERIALS AND METHODS

A. Subjects

We studied 16 male patients consecutively admitted to the Pneumology Rehabilitation Division of the “Salvatore Maugeri” Foundation, institute of "Telese Terme" (Table 1). All enrolled subjects had a positive medical history for COPD. Patients were considered to be affected by COPD if they fulfilled either of the following criteria: 1) they had an FEV1/FVC of <70% and no change or an FEV1 increase of >12%, but not FEV1 normalisation, after 100 mg fenoterol; or 2) they reported no history of wheeze in the last year and had an FEV1/FVC of <70%, an FEV1 of <80% and an FEV1 increase of <12% after 100 mg fenoterol. The control group (N) consisted of 10 healthy subjects (age 45±5).

Table 1: Descriptive statistics of some variables in the COPD population (age; gender; SO2, oxygen saturation; FEV1, forced expiratory volume in one second; HR, heart rate; EF, ejection fraction; SBP, systolic blood pressure; DBP, diastolic blood pressure)
variable          value
age               72±9
gender (M) (%)    13 (82%)
SO2               88±6
FEV1              2.4±0.5
HR                83±8
EF                60±7
SBP               127±15
DBP               74±6
B. Holter analysis

The study population underwent a 24-hour Holter ECG recording with a portable three-channel tape recorder, processed by a Marquette 8000 T system with a sampling frequency of 128 Hz. In order to be considered eligible for the study, each recording had to have at least 12 hours of analyzable RR intervals in sinus rhythm. Moreover, this period had to include at least half of the nighttime (from 00:00 AM through 5:00 AM) and half of the daytime (from 7:30 AM through 11:30 AM) [19]. Before analysis, the identified RR time series were preprocessed according to the following criteria: 1) RR intervals associated with single or multiple ectopic beats or artifacts were automatically replaced by means of an interpolating algorithm; 2) RR values differing from the preceding one by more than a prefixed threshold were replaced in the same way as for artefacts. The RR time series were finally interpolated by a piecewise cubic spline and resampled at 2 Hz.
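A minimal sketch of this preprocessing (illustrative only: the 20% replacement threshold is an assumption, since the prefixed threshold value is not stated, and the interpolating replacement is simplified to carrying the previous value forward):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def preprocess_rr(rr, rel_threshold=0.2, fs=2.0):
    """Replace outlier RR intervals, then spline-resample the series at fs Hz."""
    rr = np.asarray(rr, dtype=float).copy()
    # criterion 2: an RR value differing from the preceding one by more than
    # the threshold is replaced (simplified stand-in for the interpolation)
    for i in range(1, len(rr)):
        if abs(rr[i] - rr[i - 1]) > rel_threshold * rr[i - 1]:
            rr[i] = rr[i - 1]
    t = np.cumsum(rr) - rr[0]            # beat times of the cleaned series
    t_new = np.arange(t[0], t[-1], 1.0 / fs)
    # piecewise cubic-spline interpolation, resampled at fs = 2 Hz
    return t_new, CubicSpline(t, rr)(t_new)
```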
C. Fractal dimension analysis

The fractal dimension was calculated using Higuchi's algorithm [17]. From a given time series X(1), X(2), ..., X(N), the algorithm constructs k new time series; each of them, Xmk, is defined as Xmk: X(m), X(m+k), X(m+2k), ..., X(m+int((N-m)/k)*k), where m = 1, 2, ..., k and k are integers indicating the initial time and the time interval, respectively. Then the length, Lm(k), of each curve Xmk is calculated, and the length of the original curve for the time interval k, L(k), is estimated as the mean of the k values Lm(k) for m = 1, 2, ..., k. If the value of L(k) is proportional to k^(-D), the curve is fractal-like with dimension D. Thus, if L(k) is plotted against k, for k ranging from 1 to kmax, on a double logarithmic scale, the data should fall on a straight line with a slope equal to -D. By means of a least-squares linear best-fitting procedure applied to the series of pairs (k, L(k)), obtained by increasing the k value, the angular coefficient of the linear regression of the graph ln(L(k)) vs. ln(1/k), which constitutes the estimate of D, is calculated (Fig. 1).

D. Beta exponent analysis

The power-law beta exponent was calculated from the power spectral density function estimated by the Blackman-Tukey method after linear trend removal. The beta index represents the slope (Figure 2) of the linear fit, in the very low frequency band (<0.05 Hz), of the log(power) vs. log(frequency) relationship [20].
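Both estimators can be sketched in a few lines of Python (an illustrative implementation, not the authors' code; the value of kmax and the use of a plain periodogram instead of the Blackman-Tukey estimator are assumptions):

```python
import numpy as np

def higuchi_fd(x, kmax=16):
    """Higuchi fractal dimension: slope of ln L(k) vs. ln(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks, Ls = [], []
    for k in range(1, kmax + 1):
        Lm = []
        for m in range(k):                     # m = 1..k in the paper (0-based here)
            idx = np.arange(m, N, k)           # X(m), X(m+k), X(m+2k), ...
            if len(idx) < 2:
                continue
            norm = (N - 1) / ((len(idx) - 1) * k)   # Higuchi's normalisation
            Lm.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        ks.append(k)
        Ls.append(np.mean(Lm))
    D, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(Ls), 1)
    return D

def beta_exponent(x, fs=2.0, fmax=0.05):
    """Slope of log(power) vs. log(frequency) below fmax Hz, after detrending."""
    x = np.asarray(x, dtype=float)
    n = np.arange(len(x))
    x = x - np.polyval(np.polyfit(n, x, 1), n)      # linear trend removal
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (f > 0) & (f < fmax)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
    return slope
```

As a sanity check, higuchi_fd approaches 1 for a smooth curve and 2 for white noise, matching the interpretation of D in the text.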
Fig. 1. Example of the hi sequence determination on a curve for the length calculation. The values of hi were calculated as |X(m+i*k) - X(m+(i-1)*k)|.
Fig. 2. Example of the beta exponent evaluation by means of the slope of the linear best fitting (dashed line) of the power spectrum for frequencies < 0.05 Hz.
III. RESULTS

Descriptive statistics for the FD and beta exponent parameters in the two study groups are reported in Table 2. The normality of the distribution of the HRV variables was assessed by the Shapiro-Wilk test. Between-group comparisons were carried out by Student's t-test for unpaired data. The Higuchi's FD parameter showed a marked, highly significant increase in the mean value passing from normal subjects to patients with COPD. Conversely, the beta exponent did not reach statistical significance between the two studied groups.

Table 2: Mean±SD of Higuchi's fractal dimension (FD) and beta exponent calculated on the two study groups. N: normal subjects; COPD: chronic obstructive pulmonary disease patients.

       N                COPD             p
FD     1.354 ± 0.057    1.684 ± 0.111    <0.0001
Beta   -1.016 ± 0.138   -1.061 ± 0.197   0.53

IV. DISCUSSION

These preliminary results indicate that fractal indexes reflect the impairment of the autonomic nervous system in patients with COPD. The sensitivity of the FD and beta exponent parameters to the severity of the central nervous system damage, however, appears to be different. Indeed, the Higuchi index changes strongly in passing from normal to pathological subjects. The beta exponent, on the contrary, seems rather insensitive to the changes in autonomic cardiovascular regulation brought about by COPD. These findings suggest that, although the two algorithms try to measure the same fractal property of HRV, they provide non-superimposable results. This could be due to the fact that the beta exponent is usually calculated considering only the low band of the signal (<0.05 Hz); the changes in autonomic cardiovascular regulation probably affect a higher-frequency band much more. Few studies in COPD patients, using linear and spectral indexes of HRV, have found an association between COPD and impaired autonomic regulation [21,22], while it has not yet been completely investigated whether nonlinear analysis of HRV might provide more valuable information for the physiological interpretation of heart rate fluctuations and for risk assessment in these patients. A major limitation of this study is the low sample size of the studied groups. Therefore our findings should be interpreted as purely exploratory. Nevertheless, they clearly suggest that fractal analysis contains relevant information related to the different heart rate variability dynamics of COPD subjects, making this approach a candidate for future risk assessment studies of these patients.

REFERENCES
1. Rimoldi O, Pierini S, Ferrari A, Cerutti S, Pagani M, Malliani A. Analysis of short-term oscillations of R-R and arterial pressure in conscious dogs. Am J Physiol 1990;258:H967-H976
2. Koepchen HP, Abel HH, Kliibendorf D. Central cardiorespiratory organisation. In: Miyakawa K et al (Eds.), Mechanisms of Blood Pressure Waves: History of Studies and Concepts. Japan Sci Soc Press, Tokyo / Springer Verlag, Berlin, 1984, pp 3-23
3. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability - standards of measurement, physiological interpretation and clinical use. Circulation 1996;93:1043-65
4. Angelone A, Coulter NA Jr. Respiratory arrhythmia: a frequency dependent phenomenon. J Appl Physiol 1964;19:479-482
5. Hirsch JA, Bishop B. Respiratory sinus arrhythmia in humans: how breathing pattern modulates heart rate. Am J Physiol 1981;241:H620-H629
6. Kleiger RE, Miller JP, Bigger JT, Moss AJ, Multicenter Post-Infarction Research Group. Decreased heart rate variability and its association with increased mortality after acute myocardial infarction. Am J Cardiol 1987;59:256-262
7. Saul JP, Arai Y, Berger RD, Lilly LS, Colucci WS, Cohen RJ. Assessment of autonomic regulation in chronic congestive heart failure by heart rate spectral analysis. Am J Cardiol 1988;61:1292-1299
8. Volterrani M, Scalvini S, Mazzuero G, et al. Decreased heart rate variability in patients with chronic obstructive pulmonary disease. Chest 1994;106:1432-37
9. Tsuji H, Larson MG, Venditti FJ Jr, et al. Impact of reduced heart rate variability on risk for cardiac events: the Framingham Heart Study. Circulation 1996;94:2850-55
10. De Burgh Daly M. Interactions between respiration and circulation. In: Fishman AP, Cherniack NS, Widdicombe JG, Geiger SR (Eds.), Handbook of Physiology, Section 3: The Respiratory System. American Physiological Society, Bethesda, MD, 1986, pp 529-594
11. Saul JP, Albrecht P, Berger RD, Cohen RJ. Analysis of long term HRV: methods, 1/f scaling and implications. In: Computers in Cardiology 1987. IEEE Computer Society Press, 1987, pp 419-22
12. Bigger JT, Steinman R, Rolnitzky L, Fleiss J, Albrecht P, Cohen R. Power law behavior of RR-interval variability in healthy middle-aged persons, patients with recent acute myocardial infarction, and patients with heart transplants. Circulation 1996;93:2142-51
13. Makikallio TH, Hoiber S, Kober L, Torp-Pedersen C, Peng CK, Goldberger AL, Huikuri HV. Fractal analysis of heart rate dynamics as a predictor of mortality in patients with depressed left ventricular function after acute myocardial infarction. TRACE Investigators. TRAndolapril Cardiac Evaluation. Am J Cardiol 1999;83(6):836-9
14. Makikallio TH, Huikuri HV, Hintze U, Videbaek J, Mitrani RD, Castellanos A, Myerburg RJ, Moller M. Fractal analysis and time- and frequency-domain measures of heart rate variability as predictors of mortality in patients with heart failure. Am J Cardiol 2001;87(2):178-82
15. Cerutti S, Carrault G, Cluitmans PJ, Kinie A, Lipping T, Nikolaidis N, Pitas I, Signorini MG. Non-linear algorithms for processing biological signals. Comp Met Prog Biomed 1996;1:5173
16. Butler GC, Yamamoto Y, Xing HC, Northey DR, Hughson RL. Heart rate variability and fractal dimension during orthostatic challenges. J Appl Physiol 1993;75(6):2602-12
17. Higuchi T. Approach to an irregular time series on the basis of the fractal theory. Physica D 1988;31:277-83
18. Goldberger AL. Fractal mechanisms in the electrophysiology of the heart. IEEE Eng Med Biol 1992;11:47-52
19. Bigger JT, Fleiss J, Rolnitzky L, Steinman R. Stability over time of heart period variability in patients with previous myocardial infarction and ventricular arrhythmias. Am J Cardiol 1992;69:718-23
20. Huikuri HV, Makikallio TH, Airaksinen KEJ, Seppanen T, Puukka P, Raiha IJ, Sourander LB. Power-law relationship of heart rate variability as a predictor of mortality in the elderly. Circulation 1998;97:2031-36
21. Pagani M, Lucini D, Pizzinelli P, Sergi M, Mela GS, Malliani A. Effects of aging and of chronic obstructive pulmonary disease on RR interval variability. J Auton Nerv Syst 1996;59:125-132
22. Volterrani M, Scalvini S, Mazzuero G, Lanfranchi P, Colombo R, Clark AL, Levi G. Decreased heart rate variability in patients with chronic obstructive pulmonary disease. Chest 1994;106:1432-37
Address of the corresponding author:
Author: Gianni D’Addio
Institute: S. Maugeri Foundation - Bioengineering Dpt
Street: Via Bagni Vecchi
City: 82037 Telese Terme (BN)
Country: Italy
Email: [email protected]
Intelligent Internet Based, High Quality ECG Analysis for Clinical Trials

T.K. Zywietz1, R. Fischer2
1 Biosigna – Medical Diagnostics, Munich, Germany
2 Dept. of Biometrics, MHH, Hannover, Germany
Abstract— Drugs in general can cause a delay of cardiac repolarization, which can be measured as a prolongation of the QT interval in the ECG. A delay in cardiac repolarization can lead to so-called Torsade de Pointes (TdP), a ventricular tachycardia that may eventually lead to sudden cardiac death. All drugs must therefore be evaluated in clinical trials for their potential to delay cardiac repolarization. However, measuring the QT interval with the required statistical accuracy is difficult. We have therefore developed a solution based on our intelligent analysis algorithm, which allows the results of QT/QTc studies to be managed and analyzed centrally, with an efficient mechanism for results to be confirmed by a cardiologist.

Keywords— Central ECG analysis and interpretation, QT/QTc study, delay of cardiac repolarization, sudden cardiac death, clinical trial solution, electronic data capturing
I. INTRODUCTION

An undesirable property of some drugs is their potential for secondary effects, which may cause critical adverse reactions in patients. Some drugs have the ability to delay cardiac repolarization, which can be measured as a prolongation of the QT (QTc) interval in the electrocardiogram (ECG). A delay in cardiac repolarization favors the development of arrhythmias, e.g., Torsade de Pointes (TdP). A main feature of TdP is a significantly prolonged QT interval; unfortunately, TdP may degenerate into ventricular fibrillation, leading to sudden cardiac death. The Food and Drug Administration (FDA) of the United States and the European Medicines Agency therefore require the clinical evaluation of QT/QTc interval prolongation and pro-arrhythmic potential even for non-antiarrhythmic drugs. The threshold level of regulatory concern is around 5 ms, as evidenced by an upper bound of the 95% confidence interval around the mean effect on QTc of 10 ms [1]; this requires very accurate acquisition and analysis of the 12-lead surface ECG. Up to now, most central ECG laboratories work with multiple ECG acquisition devices (with varying, usually not validated, accuracy) and involve human readers for the analysis of the ECG intervals. However, it is well known that inter- and intra-reader variability is usually high. This may reduce statistical accuracy and, in the worst case, even lead to a
total loss of comparability of results within and across clinical trials. We have therefore developed, implemented and patented a new intelligent internet-based concept for conducting and analysing QT/QTc studies.

II. INTELLIGENT CLINICAL TRIAL SOLUTION

Our concept for high quality, highly accurate ECG analysis for clinical trials is based on four main components [2]:
1. A high quality, low cost 12-lead PC-based ECG acquisition device. This is applied at the trial site and connected via the internet.
2. A graphical user interface for doctors, trial managers, core lab personnel and confirming cardiologists, which can be accessed via any internet browser and contains all processes needed in a central ECG core lab.
3. A database, where all patient data, all ECGs and other diagnostic measurements (e.g., blood pressure, SPO2) can be stored and used for further analysis (electronic data capturing).
4. An intelligent algorithm to automatically analyze incoming ECGs. For this we apply our well-known Hannover ECG System HES® with specific improvements in measuring the QT/QTc interval.
Fig. 1 Clinical Trial Solution.
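The regulatory criterion quoted in the introduction can be illustrated numerically: a study raises concern when the upper bound of the confidence interval around the mean QTc effect reaches 10 ms. A minimal sketch under a normal approximation (the numbers and function names are hypothetical, not from an actual study):

```python
import math

def qtc_upper_bound(mean_delta_ms, sd_ms, n, z=1.645):
    """Upper one-sided 95% confidence bound for the mean QTc effect (ms)."""
    return mean_delta_ms + z * sd_ms / math.sqrt(n)

def of_regulatory_concern(mean_delta_ms, sd_ms, n, limit_ms=10.0):
    """True if the upper confidence bound reaches the 10 ms limit."""
    return qtc_upper_bound(mean_delta_ms, sd_ms, n) >= limit_ms

# e.g., a 5 ms mean effect measured precisely enough stays below the limit
```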
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 22–25, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Intelligent Internet Based, High Quality ECG Analysis for Clinical Trials
The process of collecting and analyzing ECGs is shown in Fig. 1. First, the 12-lead ECGs are collected at the trial site using a PC-based acquisition device. Simple acquisition software running on the PC contains an automatic secure upload function, provided that the PC is connected to the internet. The ECG data (with patient ID) are then automatically uploaded to the central data server, where the 12-lead ECG is measured and interpreted with our HES algorithm in real time. All data are then automatically stored in the database and can be reviewed either by the trial site itself or by a confirming cardiologist. Before official approval by a human reader, the ECG is marked “red” as “unconfirmed”. Usually, the confirming cardiologist will then do an overreading of the ECG (using, e.g., a caliper tool to correct wave points, if necessary); the ECG is then confirmed and marked “green”. All steps performed by any user are tracked and can be followed by the trial manager (administrator). The main advantage of this procedure is the fact that all ECGs are in principle analyzed with the same mathematical method, without the inevitable variability of human readers. Statistical accuracy is therefore significantly higher, which is essential to detect, e.g., the small drug-induced prolongations of the QT/QTc interval. Once a clinical trial is finished, all data can be exported to other databases or statistical analysis tools. For the ECG, we also offer the FDA-XML format for direct submission to the FDA ECG warehouse. Access to the central server platform can be strictly limited: the administrator or trial managers can set up accounts for every user of the platform and thereby prevent any unintended use of data.

III. USER MODEL OF CLINICAL TRIAL SOLUTION

The clinical trial solution maps typical processes of a central ECG core lab onto internet-based user models and processes.
The platform is designed to deal with multicentre studies and can be accessed via any internet browser without the need for any locally installed software. We have implemented three main account types:
1. Trial manager (administrator): The trial manager is responsible for the complete set-up of a clinical trial. This means creating the required logins, uploading patient data and setting the relationships between patients, physicians at trial sites, and confirming cardiologists. The trial manager also controls all settings, e.g., whether patient names are shown or only patient IDs for blinded studies. The trial manager is also responsible for data export once a study is finished, for further analysis and/or data upload to the FDA.
2. Physicians at trial sites: The physicians record the ECG and perform other examinations (e.g., blood pressure, temperature, weight) at the trial site. They have a password-protected login and upload all data to the central server. Physicians may also add new patients to the trial. They can access all saved data for their patients (electronic patient record), to compare, e.g., an ECG before and after medication.
3. Confirming cardiologists: Confirming cardiologists log into the platform via a web browser as well. The confirming cardiologist is assigned to a physician at the trial site and automatically receives all ECGs for overreading from the assigned physician. Once an ECG is confirmed, the physician gets status “green” for the respective ECG. All changes and comments of the cardiologist are tracked; once confirmed, an ECG cannot be changed anymore.

All patient records (examinations) and actions performed on the platform receive an automatic time and date stamp, which cannot be changed by physicians or confirming cardiologists and is saved in the database. This allows seamless tracking of every “click” performed on the platform, to prevent manipulation or loss of information.

IV. INTELLIGENT ALGORITHMS FOR ECG-ANALYSIS

The core of our clinical trial solution is the HES algorithm for the analysis of 12-lead resting ECGs or Holter ECGs with 2, 3, or 12 leads. The HES program was originally developed in 1971 and has since been continuously improved together with cardiologists. We follow a complex strategy to analyze the ECG signals: starting from the unfiltered signals, we locate all beats in the first step. The second step comprises typing of beats; we distinguish several types of normal beats, 20 different types of premature ventricular contractions (PVCs), supraventricular premature contractions (SPVCs), aberrant beats and artifacts.
Using only the normal beats and applying a mathematically complex averaging, we obtain the representative cycle for the ECG under evaluation. With this technique we remove most of the noise from the signal and can accurately locate all relevant wave points for the P wave, the QRS complex and the T wave. Using the accurate wave points, we then calculate all relevant intervals and amplitudes, such as the QT interval. Within the HES algorithm we apply a multivariate statistical analysis for the interpretation of ECGs; with this method, the algorithm can be trained to very high sensitivity or specificity for detecting certain diseases within a given population.
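The representative-cycle step can be sketched as a simple synchronized average (an illustration only; the window sizes are arbitrary, and HES uses a more elaborate averaging than a plain mean):

```python
import numpy as np

def representative_cycle(signal, r_peaks, pre=80, post=120):
    """Average fixed windows around the normal-beat fiducial points.

    Averaging n beats reduces uncorrelated noise by about sqrt(n)."""
    beats = [signal[r - pre:r + post] for r in r_peaks
             if r - pre >= 0 and r + post <= len(signal)]
    return np.mean(beats, axis=0)
```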
Since the QT interval has an inverse relationship to heart rate, the measured QT intervals must be corrected for heart rate. Various correction formulae have been developed, of which Bazett's and Fridericia's corrections are the most common. The best correction formula is still a subject of controversy. We have therefore implemented most of the currently available approaches [3, 4, 5, 6]:
1. Fridericia's correction
2. Bazett's correction
3. Framingham correction
4. Rautaharju correction
The QTc formulae should be applied in parallel to identify the optimal formula for the population under testing. In most clinical trials, precordial leads and lead II are used to measure the QT/QTc interval. In our HES algorithm, we apply all available leads for the analysis of the intervals, increasing the accuracy of detecting Q and T end. The algorithm also analyzes morphological changes of the T wave and the ST-T segment. The performance of the HES program was tested against the 100 biological 12-lead ECGs of the CTS/CSE database as required by the EN / IEC 60601-2-51 norm (Table HH.2). The performance of the algorithm was measured for each of the 100 ECGs and compared with the reference. The reference of the CSE database is provided by the norm as the average of the measurements of 10 internationally recognized cardiologists [7]. The summarized results over all ECGs are provided in Table 1.
Table 1: Performance of the HES algorithm against the CSE data base (values in ms)

                 PQ interval               QT interval
                 Reference   Difference    Reference   Difference
Median           158.0       0.0           398.0       2.0
Mean             162.8       -0.6          400.6       0.2
Std. deviation   27.6        5.9           43.8        8.9
Std. error                   0.6                       0.9
Std. error (%)               0.4                       0.2
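The four corrections can be written down directly (QT in ms, RR in s). These are commonly published forms; the Rautaharju correction in particular exists in several variants, and the exact forms used in HES are not given here:

```python
import math

def qtc(qt_ms, rr_s, method="fridericia"):
    """Heart-rate-corrected QT interval in ms."""
    if method == "bazett":                      # QTc = QT / RR^(1/2)
        return qt_ms / math.sqrt(rr_s)
    if method == "fridericia":                  # QTc = QT / RR^(1/3)
        return qt_ms / rr_s ** (1.0 / 3.0)
    if method == "framingham":                  # QTc = QT + 154 * (1 - RR)
        return qt_ms + 154.0 * (1.0 - rr_s)
    if method == "rautaharju":                  # one published variant
        hr = 60.0 / rr_s
        return qt_ms * (120.0 + hr) / 180.0
    raise ValueError("unknown method: " + method)
```

At RR = 1 s (60 bpm) all four formulae leave the QT interval unchanged, which is a quick sanity check when applying them in parallel.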
The deviation of the HES algorithm is very small. Based on these results we conclude that the accuracy of absolute wave duration and interval measurements is sufficient for application to QT/QTc studies. More detailed results on the performance of HES are available on request from the authors.

V. INTEROPERABILITY – THE OLD PROBLEM

A prerequisite for a central ECG analysis platform is interoperability with the acquisition devices. Unfortunately, many manufacturers still use proprietary formats. However, for computer-assisted electrocardiography a specific standard (“SCP-ECG”) was developed and approved by CEN (current version EN 1064:2005 [8]). This standard specifies the interchange format and a messaging procedure for ECG cart-to-host communication and for the retrieval of SCP-ECG records from the host (to the ECG cart). The SCP standard was implemented by a number of European and American manufacturers and is also used within our central platform. Practical experience during implementation and in the field revealed its superior performance, e.g., for telemetric applications as well as for data-volume-efficient storage and retrieval (e.g., in the OEDIPE project [9]). Despite the approved standard, most ECG system manufacturers still do not comply completely with it. A central ECG laboratory must therefore check the compliance of incoming ECG records to avoid critical errors. We have developed a content and format checker for the SCP standard [10, 11] and have implemented it into our central ECG platform. All incoming ECGs are format-checked, and only completely compliant records are accepted. Additionally, other formats, such as the FDA-XML format, are implemented as well, especially for the export of ECG data.

VI. ADVANTAGES OF CONSISTENT ECG ANALYSIS

Detecting the small drug-induced changes within the ECG in general, and specifically in the QT/QTc interval of 5 ms, is, as mentioned above, a considerable task. Studies such as the European CSE project [8] have revealed the problem of variability of ECG measurement and interpretation between different human readers, and even the variability of ONE human reader if the same ECG is analyzed on different days. Although in difficult ECG cases experienced cardiologists demonstrate the superior capability of humans in pattern recognition, automatic analysis using high-performance algorithms reveals a number of advantages:
1. The consistency of analysis results.
If the recording is correct and the noise level is low enough, the computer does not get tired and will always provide the same results for the same ECG.
2. Especially for clinical trials, the computer delivers consistent, statistically meaningful results. This is very helpful for serial ECG comparison, e.g., if changes are to be monitored after medication.
3. For each ECG analyzed by the computer, objective, human-independent measurements are obtained, which can be used in long-term observation and serial comparison.
4. Usually the diagnostic criteria applied by the programs are carefully selected and not based upon just a single "ECG school".
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Intelligent Internet Based, High Quality ECG Analysis for Clinical Trials
5. Documentation of ECG analysis results is provided automatically (and always in the same format); the physician does not need to dictate results for typing into a doctor's letter.
6. All ECG and patient data are automatically saved in a database and can be statistically analyzed over the complete patient population.
7. In case of a "wrong" analysis by the algorithm, confirming cardiologists using the internet-based trial solution can easily correct measurements and interpretations of ECGs in a consistent manner.
However, using automatic analysis without the necessary care is dangerous: many of the available automatic algorithms do not have the accuracy required for clinical trials. Algorithms should fulfil at least the international standard IEC 60601-2-51, clauses 50.101 and 50.102 (measurement and interpretation accuracy). Clinical trial sponsors should pay extreme attention to the quality of the ECG recorders and analysis software applied.

VII. CONCLUSIONS AND PERSPECTIVES

The HES Clinical Trial Solution is a very accurate and efficient tool to support thorough QT/QTc studies for the approval of drugs. The centrally applied algorithms for 12-lead ECG analysis lead to a higher statistical accuracy of interval measurements and avoid inter- and intra-reader variability during clinical trials. The efficiency of a central ECG laboratory can be significantly enhanced by applying modern internet-based technologies to support or completely automate typical processes of QT/QTc studies. The platform will be systematically enhanced to also allow electronic data capture for other diagnostic methods such as echocardiograms or magnetic resonance imaging.
ACKNOWLEDGMENT We thank Andreas Bulling, Eugen Tarita and Andreas Krüger for implementing a prototype of our internet based ECG management platform.
REFERENCES
1. ICH E14, EMEA, European Medicines Agency, 2005.
2. Bulling A, Master Thesis, Karlsruhe, Germany, 2006.
3. Bazett HC, Heart 1920;7:353-370.
4. Fridericia LS, Acta Med Scand, 1920.
5. Sagie A, Larson MG, Goldberg RJ, et al., Am J Cardiol 1992;70:797-801.
6. Rautaharju PM, Warren JW, Calhoun HP, J Electrocardiol 1990;23 Suppl:111-117. PMID 2090728.
7. CSE study, New England J Med, Vol. 325, No. 25, p. 1769, 1991.
8. Health informatics – Standard communication protocol – Computer-assisted electrocardiography. CEN European Standard EN 1064, 2005.
9. OEDIPE – AIM 2026, Open European Data Interchange and Processing for Computerised Cardiography, 1993-08-19, Deliverable 7.
10. Fischer R, Zywietz C, Widiger B, IFMBE Proc. Vol. 6, MEDICON and Health Telematics, Ischia (Naples), Italy, 2004, paper 225.
11. Fischer R, Chiarugi F, Zywietz TK, Enhanced Integrated Format and Content Checking for Processing of SCP ECG Records, Computers in Cardiology 2006, Vol. 33, pp. 413-416.

Address of the corresponding author:
Author: Dr. Tosja K. Zywietz
Institute: Biosigna – Medical Diagnostics
Street: Lindwurmstrasse 109
City: 80337 Munich
Country: Germany
Email: [email protected]
Modelling effects of Sotalol on Action Potential morphology using a novel Markov model of the HERG channel
T.P. Brennan1, M. Fink2, B. Rodriguez3, L.T. Tarassenko1
1 Department of Engineering Science, University of Oxford, United Kingdom
2 Department of Physiology, University of Oxford, United Kingdom
3 Computing Laboratory, University of Oxford, United Kingdom
Abstract— In this paper, we present a simulation study of the effects of Sotalol, a known anti-arrhythmic drug, on the rapid delayed rectifier potassium current (IKr). The current is encoded by the Human Ether-a-go-go Related Gene (HERG), which plays a major role in repolarization in mammalian ventricles. HERG is also the target of class III anti-arrhythmic drugs such as Sotalol. Due to its unique structure and electrophysiological properties, non-cardiac drugs readily bind to residues inside HERG's intracellular cavity. A novel Markov model was developed to describe Sotalol's interaction with HERG. The model was validated using experimental data from HERG expressed in Human Embryonic Kidney (HEK) cells and integrated into the ten Tusscher (2006) human ventricular cell model. The simulation results show that an increase in Sotalol concentration decreases the overall conductance of IKr over time, resulting in prolongation of the action potential duration. This effect is larger in mid-myocardial than in endocardial and epicardial cells. Therefore, Sotalol-induced effects on cardiac repolarization may result in enhanced transmural dispersion of repolarization in the ventricles, and also in changes in the T wave.
Keywords— HERG, Sotalol, Markov model, Computer Simulation.

I. INTRODUCTION

Drug-induced effects on cardiac behaviour are generally quantified as changes in the QT interval of the surface electrocardiogram. However, alternative biomarkers such as T-wave morphology might be more efficient in assessing the arrhythmic potential of new pharmaceuticals. Although the origins of T-wave morphology remain incompletely understood, they are linked to cardiac repolarization. Therefore, alterations in cardiac repolarization due to drug-induced effects or genetic mutations are likely to result in changes in T-wave morphology. Thus, studies of congenital Long QT Syndrome (LQTS) have identified distinct T-wave morphologies in patients with different phenotypes [1]. The rapid delayed rectifier potassium current (IKr), encoded by the Human Ether-a-go-go Related Gene (HERG), is known to play an important role in cardiac repolarization. Mutations of HERG result in alterations in action potential duration (APD) and are responsible for LQTS2, the most common phenotype of LQTS [2]. Furthermore, drugs such as Sotalol that inhibit IKr prolong the APD and the QT interval, which might result in an increased propensity to develop Torsades de Pointes, a potentially fatal ventricular tachycardia historically linked with LQTS. However, changes in ventricular repolarization and in the morphology of the T wave caused by drug-induced IKr inhibition are not well investigated. This simulation study aims to investigate changes in action potential (AP) morphology caused by Sotalol binding to human IKr. To do so, a Markov model of the human IKr was adapted to represent the binding kinetics of Sotalol. The model was incorporated in the ten Tusscher human AP model [17] and changes in IKr and AP morphology with varying drug concentration were investigated. The results of this study represent a first step towards understanding the effect that drug inhibition of HERG has on the morphology of the T-wave in the human electrocardiogram.
II. METHODS A. Markov model of IKr inhibition by Sotalol The processes of Sotalol binding to IKr, channel activation and inactivation are not independent [3], and thus cannot be modelled using a Hodgkin-Huxley ion channel formulation as in the ten Tusscher model [17]. Therefore, in this study we used a continuous-time Markov model of IKr, which has the ability to represent ion channel gating characteristics and ligand-binding [4]. The Markov model is defined by a set of state transition rates, which are the elements of the transition rate matrix Q. Each element qij defines the transition rate between any two states i and j [5]. In the presence of a large number of identical channels the change in the fraction of channels can be expressed in terms of state occupancy probability and state transition rates [6]. The probability that the channel is in the open state O is then included into the ohmic equation for IKr, i.e. IKr = gKr·O·(Vm – Ekr), where gKr is the conductance of the channel, Vm is the transmembrane potential and Ekr is the Nernst potential for potassium.
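The state-occupancy computation described above can be sketched generically. The three-state chain and all rate and conductance values below are illustrative placeholders, not the parameters of the actual IKr model of [12]; the sketch only shows how a rate matrix Q, the occupancy ODE and the ohmic current fit together:

```python
import numpy as np

def rate_matrix(rates: dict, n_states: int) -> np.ndarray:
    """Build the transition-rate matrix Q: q_ij is the rate from state i to
    state j, and each diagonal entry makes its row sum to zero."""
    Q = np.zeros((n_states, n_states))
    for (i, j), q in rates.items():
        Q[i, j] = q
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def step_occupancy(p: np.ndarray, Q: np.ndarray, dt: float) -> np.ndarray:
    """One forward-Euler step of dp/dt = p . Q for the occupancy vector p."""
    p = p + dt * (p @ Q)
    return p / p.sum()           # guard against numerical drift

# Toy 3-state chain C <-> O <-> I with made-up rates (per ms):
rates = {(0, 1): 0.2, (1, 0): 0.1, (1, 2): 0.05, (2, 1): 0.02}
Q = rate_matrix(rates, 3)

p = np.array([1.0, 0.0, 0.0])    # all channels closed at t = 0
for _ in range(20000):           # relax towards the steady state
    p = step_occupancy(p, Q, dt=0.01)

g_kr, v_m, e_k = 0.153, -20.0, -86.0   # illustrative conductance and potentials
i_kr = g_kr * p[1] * (v_m - e_k)       # IKr = gKr * O * (Vm - EKr)
assert abs(p.sum() - 1.0) < 1e-9
```

Because each row of Q sums to zero, total occupancy is conserved, which is what makes the entries of p interpretable as state probabilities.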
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 50–53, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
A number of previous Markov models of IKr have been developed for various species: guinea-pig [7], rabbit [8], HERG expressed in Xenopus oocytes [9], Chinese Hamster Ovary (CHO) cells [10], Human Embryonic Kidney (HEK) cells [11], and combined data from HEK cells and human ventricular myocytes [12]. Since we are interested in investigating drug-induced changes in cardiac repolarization in humans, here we used a modified version of the human IKr model proposed by Fink et al. [12]. A recent review by Sanguinetti and Tristani-Firouzi highlighted the molecular mechanisms of drug interaction with IKr; seven residues inside the channel's intracellular cavity were identified that readily bind drug compounds [13]. In particular, Sotalol binds to the open or activated state of IKr, gaining access to the receptor from the intracellular side. The bond is then stabilized by channel inactivation, and the Sotalol molecule is trapped inside the channel cavity until the channel returns to an activated state [14]. The proposed state diagram for the HERG-Sotalol Markov model of drug inhibition of IKr is shown in Fig. 1. The parameters for the transition rates (see Appendix) between closed, open and inactivated states were taken from [12] and were, as a first approximation, assumed not to be affected by the binding of the drug. In order to model the inhibition of IKr by Sotalol, a blocked state D* was introduced, and the transition rates between the open state O and the blocked state D* were given in terms of the association (k1) and dissociation (k2) rate constants as well as the concentration of Sotalol ([L]). The IKr model was run at a near-physiological temperature (35 °C). Initial intracellular and extracellular potassium concentrations, [K+]i and [K+]o, were set to 130 mM and 4 mM in line with experimental data. [K+]o effects on gKr were modeled using a square-root dependence as previously described [12]. All simulations were carried out using MATLAB™ on an Intel Pentium 4™.
The dissociation constant (Kd) is the ratio of k2 over k1. The binding kinetics of Sotalol and HERG have been investigated in competitive binding studies using HERG expressed in HEK cells. Kd was found to be 49 ± 9 μM in CHO cells at 37 °C [15]. The association and dissociation rate constants could not be found in the literature; therefore k1 and k2 were determined by fitting experimental data using MATLAB™. The assumption was made that the kinetic rates were independent of membrane potential.

B. Stimulation protocols

Sotalol inhibition of IKr depends on the voltage step-pulse protocol used in in vitro experiments. Therefore, the voltage step protocols used in [16] were reproduced in the simulations to facilitate comparison with the experimental data for model validation. In brief, two protocols were used:
1. A step ramp consisting of a conditioning step (+20 mV amplitude, 1 s duration) followed by a repolarizing test ramp (+20 to -80 mV at -0.5 V/s), repeated at 5 s intervals from a holding potential of -80 mV.
2. A step pulse consisting of a pre-pulse to +20 mV for 2 s followed by a test pulse to -50 mV for 2 s, repeated at 10 s intervals from a holding potential of -80 mV.

C. Human ventricular action potential model

The effects of Sotalol on AP morphology were analyzed using a modified version of the ten Tusscher (2006) model of the human ventricular AP [17]. Our new Markov model of IKr, including Sotalol binding, replaced the rapid delayed rectifier potassium current in the ten Tusscher model. Endocardial, epicardial and mid-myocardial APs were simulated using the model, considering differences in the conductance of IKs (slow delayed rectifier potassium current) and of the transient outward potassium current (Ito) across the ventricular wall, as previously described [17, 18]. All simulations were carried out using MATLAB™ on an Intel Pentium 4™.

III. RESULTS

A. Model validation
Fig. 1 Schematic of the Markov model of the HERG ion channel including the effect of Sotalol binding at the open state. [L] is the Sotalol concentration. States marked with an asterisk (*) are states with Sotalol block. C refers to a closed state, O to the open state, I to the inactivated state and D to the blocked state.
Fig. 2 illustrates the change in IKr caused by 300 μM Sotalol at 35 °C using a step-ramp pulse protocol. The HERG-Sotalol Markov model was compared to experimental data from HERG expressed in HEK cells under the same conditions [16]. Using fminsearch, which implements the Nelder-Mead algorithm to minimize a cost function, the association rate constant of Sotalol was determined to be k1 = 0.005 μM-1 s-1 and the dissociation rate constant k2 = 0.00125 s-1.
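The rate-constant fit can be sketched with the same Nelder-Mead approach (scipy's `minimize` standing in for MATLAB's `fminsearch`). For illustration only, the drug-binding step is reduced here to a two-state open↔blocked approximation fitted to a synthetic block-onset trace; the paper itself fits the full Markov model to the data of [16]:

```python
import numpy as np
from scipy.optimize import minimize

def block_fraction(t, k1, k2, conc):
    """Blocked-channel fraction for a two-state open<->blocked scheme at a
    constant drug concentration: B(t) = B_inf * (1 - exp(-t / tau))."""
    rate = k1 * conc + k2
    return (k1 * conc / rate) * (1.0 - np.exp(-rate * t))

# Synthetic "experimental" block-onset trace with known constants:
t = np.linspace(0.0, 5.0, 200)        # seconds
conc = 300.0                          # uM Sotalol
k1_true, k2_true = 0.005, 0.00125     # uM^-1 s^-1 and s^-1
data = block_fraction(t, k1_true, k2_true, conc)

def cost(params):
    k1, k2 = np.abs(params)           # keep both rate constants positive
    return np.sum((block_fraction(t, k1, k2, conc) - data) ** 2)

fit = minimize(cost, x0=[0.001, 0.01], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14,
                        "maxiter": 4000, "maxfev": 4000})
k1_fit, k2_fit = np.abs(fit.x)
assert abs(k1_fit - k1_true) / k1_true < 0.1
```

The plateau of the trace constrains k1·[L]/(k1·[L] + k2) while the onset time constant constrains k1·[L] + k2, which is why both constants are identifiable from a single noiseless curve.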
Fig. 2 Sotalol inhibition of IKr simulated using a step-ramp pulse protocol and 300 μM Sotalol at 35 °C. Steady-state channel responses in control, i.e., without drug (solid lines), and in the presence of 300 μM Sotalol (dashed lines) are shown for (A) experimental data from HERG in HEK cells from [16], Fig. 6A, and (B) the model response.

In both experiments and simulations, Sotalol-induced effects on IKr result in a reduction of both steady-state and peak IKr current. Importantly, the simulated ratio between control and blocked current for steady-state and peak levels is similar to experimental observations (Fig. 2). Differences in IKr re-activation levels between the experimental data and the model for the control shown in Fig. 2 might be due to differences in phosphorylation of the channel and in ion concentrations in the intra- and extracellular medium. However, this does not affect the equivalence of the drug inhibitory effects between model and experiment.

B. Sotalol inhibition of IKr

Fig. 3 illustrates the decrease in peak normalized IKr caused by 500 μM Sotalol in the experiments from [16] (dotted) and in the simulation results (solid). The slope after 200 s could only be explained by a decrease in the driving force (Vm - EK) over time due to the prolonged depolarization protocol or to a leak current. This dynamic was incorporated for this simulation only by linearly decreasing the driving force by 0.1% per second.

C. Effects of Sotalol on Action Potential

Fig. 4 shows the time course of Vm and IKr during an AP in control (solid) and in the presence of 300 μM Sotalol (dotted), for endocardial (top), mid-myocardial (middle) and epicardial cells, obtained from the computer simulations. For the three cell types, Sotalol results in a significant prolongation of the APD, which is more pronounced in mid-myocardial cells than in the endo- and epicardial cells. This is because IKs density is lower in the mid-myocardium than in the endo- and epicardium, and IKr inhibition therefore results in slower repolarization in mid-myocardial cells than in the other two cell types. These results suggest that transmural differences in APD might be enhanced in the presence of Sotalol.

IV. CONCLUSIONS

This study investigates the electrophysiological changes in human ventricular myocytes caused by Sotalol-induced inhibition of IKr. A human ventricular Markov model of membrane kinetics was developed to include the interactions between HERG and Sotalol. The model was validated using experimental data of HERG expressed in HEK cells at near-physiological temperatures. Simulated IKr showed Sotalol-induced inhibition in line with experimental data. Simulation results also show that an increase in Sotalol concentration results in prolongation of the APD in endocardial, epicardial and mid-myocardial cells. The prolongation was larger in mid-myocardial than in epicardial and endocardial cells. Therefore, an increase in transmural dispersion of repolarization in the ventricles is expected in the presence of Sotalol, which could be pro-arrhythmic [7] and could lead to changes in the T wave morphology.
APPENDIX

Table 1: Parameters for the transition rates of the IKr Markov model

Transitions           Variable    ΔS         ΔH        z
C1→C2,  C1*→C2*       α1         -100.36    66797      2.992e-004
C2→C1,  C2*→C1*       β1         -188.89    40486     -1.611e-003
C2→C3,  C2*→C3*       α2         -113.57    66797      0.0
C3→C2,  C3*→C2*       β2         -193.00    40486      0.0
C3→O,   C3*→D*        α3         -118.96    66797      9.751e-004
O→C3,   D*→C3*        β4         -241.90    40486     -1.066e-003
O→I,    D*→I*         α5          -53.12    79619      5.958e-004
I→O,    I*→D*         β5          -52.74    85688     -8.335e-004

Fig. 3 Time course of HERG inhibition by Sotalol, given as the normalized peak IKr current amplitude when applying a voltage-clamp step-pulse (2 s) protocol and 500 μM Sotalol at 35 °C. The solid line denotes the model output; experimental data of the IKr current from HERG expressed in HEK cells, from [16] Fig. 4D, are shown as a dotted line.
Fig. 4 Simulated time course of the action potential (AP) and IKr in epi-, endo- and mid-myocardial cells in control (solid lines) and in the presence of block by 300 μM Sotalol.

REFERENCES
1. Zhang L et al. (2000), 'Spectrum of ST-T-Wave Patterns and Repolarization Parameters in Congenital Long-QT Syndrome: ECG Findings Identify Genotypes', Circulation 102(23), 2849-2855.
2. Curran ME et al. (1995), 'A molecular basis for cardiac arrhythmia: HERG mutations cause long QT syndrome', Cell 80(5), 795-803.
3. Kiehn J, Lacerda AE, Wible B & Brown AM (1996), 'Molecular physiology and pharmacology of HERG. Single-channel currents and block by dofetilide', Circulation 94(10), 2572-2579.
4. Colquhoun D & Hawkes AG (1981), 'On the stochastic properties of single ion channels', Proc R Soc Lond B Biol Sci 211(1183), 205-235.
5. Horn R & Lange K (1983), 'Estimating kinetic constants from single channel data', Biophys J 43(2), 207-223.
6. Destexhe A, Mainen ZF & Sejnowski TJ (1994), 'Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism', Journal of Computational Neuroscience 1(3), 195-230.
7. Clancy CE & Rudy Y (2001), 'Cellular consequences of HERG mutations in the long QT syndrome: precursors to sudden cardiac death', Cardiovasc Res 50(2), 301-313.
8. Oehmen CS, Giles WR & Demir SS (2002), 'Mathematical model of the rapidly activating delayed rectifier potassium current I(Kr) in rabbit sinoatrial node', J Cardiovasc Electrophysiol 13(11), 1131-1140.
9. Weerapura M et al. (2000), 'State-dependent barium block of wild-type and inactivation-deficient HERG channels in Xenopus oocytes', J Physiol 526, 265-278.
10. Lu Y et al. (2001), 'Effects of premature stimulation on HERG K(+) channels', J Physiol 537(Pt 3), 843-851.
11. Mazhari R et al. (2001), 'Molecular interactions between two long-QT syndrome gene products, HERG and KCNE2, rationalized by in vitro and in silico analysis', Circ Res 89(1), 33-38.
12. Fink M et al. (2007), 'Contributions of HERG K+ current to repolarization of the human ventricular action potential', Prog Biophys Mol Biol [in press].
13. Sanguinetti MC & Tristani-Firouzi M (2006), 'hERG potassium channels and cardiac arrhythmia', Nature 440, 463-469.
14. Mitcheson JS, Chen J & Sanguinetti MC (2000), 'Trapping of a methanesulfonanilide by closure of the HERG potassium channel activation gate', J Gen Physiol 115(3), 229-240.
15. Fiset C, Feng ZP et al. (1996), 'Dofetilide binding: biological models that manifest solely the high or the low affinity binding site', J Mol Cell Cardiol 28, 1085-1096.
16. Kirsch GE et al. (2004), 'Variability in the measurement of hERG potassium channel inhibition: effects of temperature and stimulus pattern', J Pharm Tox Meth 50, 93-101.
17. ten Tusscher KHW et al. (2004), 'A model for human ventricular tissue', Am J Physiol Heart Circ Physiol 286, H1573-H1589.
18. Gima K & Rudy Y (2002), 'Ionic current basis of electrocardiographic waveforms: a model study', Circ Res 90, 889-896.

Address of the corresponding author:
Thomas Brennan
Department of Engineering Science, University of Oxford
Parks Road, Oxford OX1 3PG
United Kingdom
Email:
[email protected]
Neural Networks Based Approach to remove Baseline drift in Biomedical Signals
J. Mateo1, C. Sanchez1, R. Alcaraz1, C. Vaya1 and J.J. Rieta2
1 Innovation in Bioengineering Research Group, Castilla La Mancha University, Cuenca, Spain
2 Biomedical Synergy, Valencia University of Technology, Gandia (Valencia), Spain
Abstract— Different approaches exist nowadays to cancel out noise and baseline drift in biomedical signals, but none of them can be considered completely satisfactory. In this work, an artificial neural network (ANN) based approach to cancel baseline drift in electrocardiogram signals is presented. The system is based on a growing ANN that optimizes both the number of nodes in the hidden layer and the coefficient matrices. These matrices are optimized following the simultaneous perturbation algorithm, offering much lower computational cost than the traditional backpropagation algorithm. The proposed methodology has been compared with traditional baseline reduction methods (FIR, Wavelet-based and adaptive LMS filtering) using cross-correlation, signal-to-interference ratio and signal-to-noise ratio indexes. The results obtained show that the ANN-based approach performs better than traditional methods with respect to both baseline drift reduction and signal distortion at the filter output. Keywords— Noise cancellation, Baseline, Neural Networks and Madaline.
I. INTRODUCTION

The electrocardiogram (ECG) is a graphical representation of the electrical activity of the heart that offers information about the state of the cardiac muscle. Filtering techniques applied to the ECG can improve the diagnosis of heart diseases and diverse pathologies [1]. The bandwidth of the acquisition system is usually 0.05 Hz to 100 Hz with an almost linear response, causing no distortion of the pulse waveform. However, distortion may arise from movement of the subject and from respiration. The frequency components of respiration are usually below 0.8 Hz, and motion artefacts are characterized by low-frequency components too. There are some approaches to handling baseline wander of the ECG, MEG and impedance cardiogram [2]. Some researchers attempt to suppress baseline wander with a high-pass filter, which introduces nonlinear phase distortion and displacement of the key points. To preserve phase information, symmetric FIR digital filters have been designed, but they do very little to attenuate low-frequency baseline wander [3]. Different subjects can have a rapid, moderate or slow pulse, and the frequency of respiration also varies; in consequence, the pulse and its baseline drift vary with
different people. Sörnmo used a time-varying filter to remove the baseline wander of the ECG [4], but the algorithm is complex, level dependent and rhythm dependent. Laguna [5] used an adaptive filter to remove ECG baseline wander. Similarly, adaptive filters and Wiener filters have been of limited value in the absence of prior knowledge of the physiological signal and its baseline drift [6], as have Wavelets [7-8]. The proposed method has not yet been applied to the cancellation of baseline noise in ECG signals. The system has several important advantages: it reduces processing time, introduces low signal distortion, reduces diverse noise types and, in addition, can be applied to a wide range of biomedical signals.

II. MATERIALS

Validation of the processed electrocardiographic signals requires a set of signals covering the pathologies, leads, etc. found in real situations. For this study, two types of signals were used: real recordings from the PhysioNet database [9], and synthetic signals.

Table 1: Signals used for the study

                    Synthetic   Real   Real+noise (ECGSyn)
No. of registers    200         565    550
Time (s)            1049        106    106
550 recordings with different pathologies and different types of QRS morphologies have been obtained from PhysioNet. These recordings were sampled at a frequency of 360 Hz and later upsampled to a frequency of 1 kHz. Synthetic signals with different noises have been generated using the ECGSyn software [9]; white noise, muscular noise and artefacts are included in these registers. The sampling frequency used is 1 kHz.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 90–93, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

III. METHODS

A. Neural Networks

The multilayer perceptron (MLP) neural network trained with the backpropagation algorithm has been applied to diverse practical problems [10]. A multilayer perceptron consists of at least three layers: an input layer, one or more hidden layers and an output layer. One way to determine the optimal number of nodes in the hidden layer is to stop the training after a certain number of iterations and to check how many signals were filtered correctly with the present number of neurons in the hidden layer. If the result of this test is not satisfactory, one or more neurons are added to the hidden layer to improve the performance of the network; in that case the network must be completely retrained [10]. An attractive alternative is the development of growing networks, in which nodes are added to the hidden layer systematically during the learning process. Diverse structures have been proposed with this idea, such as the cascade-correlation network [11] and other neural networks [12-14]. This article proposes a growing neural network that optimizes the number of nodes in the hidden layer, giving a solution to two problems of the MLP. In the proposed system, the network is optimized by adding a node to the hidden layer after a number of iterations, while the weights that connect the input layer with the hidden layer are conserved. In addition, to reduce the computational complexity, an algorithm based on the simultaneous perturbation method is used [10][15-18].

B. Proposed System

The proposed system initially consists of a simple structure similar to the ADALINE (ADAptive LINear Element) neural network, used as the initial structure because it is simple and easy to optimize with the least mean squares (LMS) algorithm [12]. Initially there is an input layer, one hidden layer (with three neurons) and an output layer, and neurons are then added to the hidden (intermediate) layer.
When the network has converged, if the performance obtained by the system is not the required one, a neuron is added to the hidden layer, as in Figure 1. In this case, the weights that connect the input layer with the nodes of the hidden layer are frozen, since they have already been trained. The weights that connect the hidden layer with the output layer are adapted, as well as the weights that connect the input layer with the neuron added in the intermediate layer, as shown in Figure 1. Once the network is trained, its performance is evaluated again. If it is not correct, a new neuron is added to the hidden layer, and the weights that connect the hidden layer with the output, as well as the weights that connect the input layer with the newly added neuron in the intermediate layer, are trained.

Fig. 1 Proposed neural network with one neuron in the hidden layer. The black coefficients are constant.

This procedure is repeated until the desired performance is obtained. This new structure has a special characteristic: it grows while it learns. The neurons added in the hidden layer adapt their weights, whereas the weights of the input layer conserve the learning already obtained by the network. Although this mechanism could sometimes produce networks with a sub-optimal number of neurons in the intermediate layer, it allows the size of the network required for a certain operation to be estimated approximately, without having to retrain the neural network completely whenever a neuron is added to the hidden layer. Thus, the neurons that have been added allow good operation of the network in general. In all stages the neural network is adapted using the simultaneous perturbation method, which has been tested and has shown good results.

C. Learning Algorithm using Simultaneous Perturbation

The simultaneous perturbation method is introduced in [16-17]; other authors [15] have also reported results of similar methods. To adapt the weights of the system it is necessary to consider the gradient of the error function, that is:
∇ ≅ ∂J(w) / ∂w    (1)

Defining the error function as:

J(w) = (1/2) (y − yd)²    (2)

where

ε = (y − yd)    (3)
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
92
J. Mateo, C. Sanchez, R. Alcaraz, C. Vaya and J. J. Rieta
Using equation (3) it is possible to measure the error between the current output and the desired output. On the other hand, the finite-difference approach is a well-known procedure to obtain the derivative of a function, and it can be used here to reduce the complexity [10]. The vector w is defined as in [16], ci is a perturbation added to its i-th component, and the network output Y is a function of the weight vector:

∂J(w)/∂wi ≈ [J(Y(wi)) − J(Y(w))] / ci    (4)

Nevertheless the above idea, although very simple, needs one evaluation of the error per weight in order to obtain the modification amount of all the weights, so a good overall behaviour for all the adapted weights cannot be expected. In order to eliminate this difficulty, simultaneous perturbation was introduced, in which all the network weights are perturbed at the same time [17], as indicated in equation (5). Convergence of the simultaneous perturbation algorithm has been demonstrated [16]:

Δwti = [Y(Wt + Ct, Vt + Dt) − Y(Wt, Vt)] / cti    (5)

where W and V are the input- and output-layer weight matrices shown in Figure 1, and Ct and Dt are their simultaneous perturbations.

IV. RESULTS

To further assess the neural network filter, a clean pulse signal was simulated and corrupted with different baselines, and the performance of the neural network approach was compared with standard filtering techniques. The Butterworth high-pass digital filter has a nonlinear phase, so the pulse waveform can be distorted; the Bessel filter has an excellent phase response but a very weak amplitude response. Thus, a traditional linear-phase FIR filter was designed using least-squares error minimization.

The aim is to find the best trade-off between information preservation and baseline wander removal, so that the remaining distortion does not influence the efficiency of pulse diagnosis and pulse pattern recognition. Since the actual pulse waveform is rich in frequency content near the baseline, the neural filter was proposed for the removal of the pulse baseline. Performance was measured as:

SNR = 20·log10(signal/noise)    (6)

error = ||clean signal − filtered signal|| / ||clean signal||    (7)

Table 2 lists the errors of the Wavelet and Neural Networks methods for different signal-to-noise ratios (SNR) when the baseline frequency is 0.1, 0.2, 0.4 and 0.6 Hz, respectively. When the baseline frequency is below 0.2 Hz, the Wavelet method is satisfactory. For the neural filter, the results are satisfactory for pulse parameter computation and analysis whenever the baseline frequency is below 0.6 Hz, and the error is 3.7% even when the SNR is -2.8 dB.

Table 2 Comparison of errors (%) of Wavelet and Neural Networks

Frequency    0.1 Hz                 0.2 Hz
SNR (dB)     0.8    -5.2   -8.7     0.8    -5.2   -8.7
Wavelet      0.01   0.03   0.3      0.1    0.03   4.9
Neural       0.1    0.3    0.5      0.2    0.3    0.5

Frequency    0.4 Hz                 0.6 Hz
SNR (dB)     0.8    -5.2   -8.7     6.8    0.8    -2.8
Wavelet      3.8    19     112      23     54     134
Neural       0.2    0.4    0.7      0.8    1.5    3.7

Measurements of the cross correlation between the output and the input are shown in Table 3. In addition, the signal-to-interference ratio (SIR) was measured, in this case only for synthetic signals. Equation (8) gives the SIR expression, where xin is the input to the system, xout the output and x the original recording without noise:

SIR = 20·log10( E{‖xin − x‖²} / E{‖xout − x‖²} )    (8)

Table 3 Obtained results of the cross correlation and SIR of the baseline removal, average values

                   Cross correlation              SIR (dB)
                   Synthetic       Real
FIR                0.92 ± 0.03     0.91 ± 0.03    13.2 ± 0.3
LMS                0.63 ± 0.32     0.60 ± 0.35    5.8 ± 2.23
Wavelet            0.94 ± 0.02     0.93 ± 0.02    16.2 ± 0.3
Neural Networks    0.97 ± 0.02     0.96 ± 0.02    19.2 ± 0.3
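The constructive procedure described above — grow the hidden layer one neuron at a time, freeze the previously trained input weights, and adapt the remaining weights with the simultaneous perturbation rule of equation (5) — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the sigmoid activation, learning rate a, perturbation size c and stopping limits are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, V):
    # Hidden layer with sigmoid activation (the Fs blocks of Fig. 1),
    # followed by a linear output layer.
    h = 1.0 / (1.0 + np.exp(-W @ x))
    return V @ h

def loss(x, yd, W, V):
    return 0.5 * np.sum((forward(x, W, V) - yd) ** 2)   # eq. (2)

def sp_step(x, yd, W, V, w_mask, a=0.01, c=0.1):
    """One simultaneous-perturbation update (cf. eq. (5)): every
    trainable weight is perturbed at the same time with a random sign."""
    Cw = c * rng.choice([-1.0, 1.0], size=W.shape) * w_mask
    Cv = c * rng.choice([-1.0, 1.0], size=V.shape)
    dJ = loss(x, yd, W + Cw, V + Cv) - loss(x, yd, W, V)
    nz = Cw != 0
    W[nz] -= a * dJ / Cw[nz]        # frozen input weights stay untouched
    V -= a * dJ / Cv

def train_growing(X, Yd, max_hidden=3, tol=1e-3, epochs=200):
    """Grow the hidden layer until the error is small enough: old input
    weights are frozen after each growth step, while the output weights V
    and the new neuron's input weights keep adapting."""
    n_in, n_out = X.shape[1], Yd.shape[1]
    W = rng.normal(scale=0.5, size=(1, n_in))    # start with one hidden neuron
    V = rng.normal(scale=0.5, size=(n_out, 1))
    w_mask = np.ones_like(W)                     # first neuron: fully trainable
    while True:
        for _ in range(epochs):
            for x, yd in zip(X, Yd):
                sp_step(x, yd, W, V, w_mask)
        err = np.mean([loss(x, yd, W, V) for x, yd in zip(X, Yd)])
        if err < tol or W.shape[0] >= max_hidden:
            return W, V, err
        # Grow: freeze all input weights learned so far, add one neuron.
        w_mask = np.vstack([np.zeros_like(W), np.ones((1, n_in))])
        W = np.vstack([W, rng.normal(scale=0.5, size=(1, n_in))])
        V = np.hstack([V, rng.normal(scale=0.5, size=(n_out, 1))])
```

Note that sp_step needs only one extra forward pass per update, regardless of the number of weights, which is what makes simultaneous perturbation cheap compared with one-perturbation-per-weight finite differences.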
Neural Networks Based Approach to remove Baseline drift in Biomedical Signals
Fig. 2 Filtering results over 0–2500 samples. A. Input signal (real ECG). B. FIR filter. C. LMS filter. D. Wavelet filter. E. Proposed filtering system
V. CONCLUSIONS

This paper introduces an approach for removing the baseline in ECG signals using neural networks. The approach can easily be combined with other wavelet-based ECG preprocessing techniques such as noise reduction and power-line interference reduction. The paper illustrates the effectiveness of the approach using examples with both simulated and measured ECG data. The current algorithm uses simultaneous perturbation, which shows better results for noise reduction. The target application of our algorithm is pre-processing of ECG signals. With the LMS methods, signal cancellation depends on the convergence of the algorithm. The results obtained with the FIR and Wavelet (biorthogonal 6.8) systems are quite good. However, the system based on neural networks is the best at eliminating the baseline noise. It is also worth emphasizing that this last method achieves high processing speed, is easier to implement, produces low signal distortion and has minimal memory requirements.
ACKNOWLEDGMENT

This work was partly funded by the project PAC-05-0081 from Consejería de Educación de la Junta de Comunidades de Castilla-La Mancha, GV06/299 from Consellería de Empresa, Universidad y Ciencia de la Generalitat Valenciana and TEC2007-64884 from the Spanish Ministry of Science and Education.
REFERENCES

1. Sörnmo L, Laguna P (2005) Bioelectrical Signal Processing in Cardiac and Neurological Applications. Elsevier Academic Press.
2. Zheng X, Wang Z, Lan X (1997) Restraining respiratory baseline drift of impedance cardiogram signals using wavelet transform. Journal of Chongqing University Natural Science Edition 20(5):58-62.
3. Lian Y, Ho P (2004) ECG noise reduction using multiplier-free FIR digital filters. Proceedings of the 2004 International Conference on Signal Processing, pp. 2198-2201.
4. Sörnmo L (1991) Time-varying filtering for removal of baseline wander in exercise ECGs. Computers in Cardiology, pp. 145-148.
5. Laguna P, Jané R, Caminal P (1992) Adaptive filtering of ECG baseline wander. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 508-509.
6. Chiu CC, Yeh SJ (1997) A tentative approach based on Wiener filter for the reduction of respiratory effect in pulse signals. Proc. 19th Int. Conf. IEEE-EMBS, pp. 1394-1397.
7. Xu L, Zhang D, Wang K (2005) Wavelet-based cascaded adaptive filter for removing baseline drift in pulse waveforms. IEEE Trans Biomed Eng 53(11):1973-1975.
8. Nibhanupudi S (2003) Signal Denoising Using Wavelets. Ph.D. thesis, University of Cincinnati.
9. Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PC, Mark RG, Mietus JE, Moody GB, Peng CK, Stanley HE (2000) PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23):e215-e220.
10. Haykin S (1994) Neural Networks: A Comprehensive Approach. IEEE Computer Society Press, Piscataway, USA.
11. Lehtokangas M (1999) Fast initialization for cascade-correlation learning. IEEE Trans. on Neural Networks 10(2):410-414.
12. Sanchez G, Toscano K, Nakano M, Perez H (2001) A growing cell neural network structure with back propagation learning algorithm. Telecommunications and Radio Engineering 56(1):37-45.
13. Hodge V (2001) Hierarchical growing cell structures: TreeGCS. IEEE Trans. on Knowledge and Data Engineering 13(2):207-218.
14. Schetinin V (2003) A learning algorithm for evolving cascade neural networks. Neural Processing Letters 17(1):21-31.
15. Maeda Y, De Figueiredo RJP (1997) Learning rules for neurocontroller via simultaneous perturbation. IEEE Trans. on Neural Networks 8(6):1119-1130.
16. Spall JC (1987) A stochastic approximation technique for generating maximum likelihood parameter estimates. Proc. of the American Control Conference, pp. 1161-1167.
17. Spall JC, Cristion JA (1994) Nonlinear adaptive control using neural networks: estimation with a smoothed form of simultaneous perturbation gradient approximation. Statistica Sinica 4:1-27.
18. Kathirvalavakumar T, Thangavel P (2003) A new learning algorithm using simultaneous perturbation with weight initialization. Neural Processing Letters 17(1):55-68.

Author: Jorge Mateo Sotos
Institute: Innovation in Bioengineering Research Group
City: Cuenca
Country: Spain
Email: [email protected]
Non-Linear Organization Analysis of the Dominant Atrial Frequency to Predict Spontaneous Termination of Atrial Fibrillation R. Alcaraz1 and J. J. Rieta2 1
Innovation in Bioengineering Research Group, University of Castilla-La Mancha, Cuenca, Spain 2 Biomedical Synergy, Valencia University of Technology, Valencia, Spain
Abstract— The prediction of spontaneous atrial fibrillation (AF) termination or maintenance could avoid unnecessary therapy and help to take the appropriate decisions in the management of the arrhythmia. The aim of this work is to predict whether an AF episode terminates spontaneously or not. The prediction was carried out making use of non-linear organization analysis applied to the surface ECG. Sample entropy was selected as organization index, proving that atrial activity (AA) organization increases prior to AF termination. Using the organization analysis of the dominant atrial frequency, that is, the frequency-selected signal produced by the main reentry wandering across the atrial tissue, 92% of the terminating and non-terminating AF episodes analyzed were correctly classified. Because noise and ventricular residues degrade the performance of AA organization estimation, selective filtering to obtain the dominant atrial frequency was necessary. The obtained outcomes lead to the conclusion that the dominant atrial frequency, and therefore the main atrial reentry, contains the most relevant information about spontaneous AF termination.

Keywords— Atrial Fibrillation, ECG, Atrial Activity, Sample Entropy, Organization.
I. INTRODUCTION

Atrial fibrillation (AF) is the most commonly diagnosed sustained arrhythmia in clinical practice and affects up to 1% of the general population. Since its prevalence increases with age, this arrhythmia affects up to 15% of the population older than 80 and has an incidence that doubles with each advancing decade [1]. There exists evidence that AF is one of the main causes of embolic events which, in 75% of the cases, develop complications associated with cerebrovascular accidents, so that a patient with AF has twice the risk of death of a healthy person [2]. AF episodes that terminate spontaneously (paroxysmal AF episodes) are the preceding stage of permanent AF episodes, which only terminate by applying pharmacological, electrical or surgical intervention. Permanent AF patients have a high risk of embolic accidents, and about 18% of paroxysmal AF episodes degenerate into permanent AF in less than 4 years [1]. Therefore, the early prediction of AF maintenance is crucial, because appropriate interventions may terminate the arrhythmia and prevent AF chronification. In contrast,
the spontaneous AF termination prediction could avoid unnecessary therapy and, therefore, reduce the associated clinical costs and improve the patient's quality of life. The aim of this work is to predict whether an AF episode terminates spontaneously, making use of surface electrocardiogram (ECG) recordings, which can be easily obtained. The decrease in the number of reentries prior to AF termination produces simpler wavefronts in the atrial tissue [3], and the f waves that characterize the ECG of an AF patient evolve to P waves, which characterize a normal ECG with sinus rhythm [2]. Therefore, the atrial activity (AA) becomes more organized before AF termination [4] and, as a consequence, this fact can be used to predict AF termination when the proper analysis tools are used. Previous groups studied non-linear complexity indexes to characterize the degree of organization of the atrial activity, but the obtained results did not reveal significant differences between terminating and non-terminating AF episodes. The low signal-to-noise ratio was believed to be the main reason for this unsuccessful result [5]. Therefore, in the present work, a selective filtering process adapted to the peak atrial frequency was used to reduce noise, ventricular residues and other nuisance signals. The organization of the obtained signal was estimated by means of sample entropy, which was able to discriminate between terminating and non-terminating AF episodes. This non-linear tool quantifies the regularity of a time series [6] and its use in this study is justified because (i) the non-linearity, as a necessary condition for chaotic behavior, is present in the heart with AF at the cellular level and (ii) the electrical remodelling in AF is a far-from-linear process [7].

II. MATERIALS

A. Database

The used database contained 50 one-minute, two-lead (II and V1) electrocardiogram (ECG) recordings, which were available in Physionet [8].
They were extracted from 24-hour Holter recordings from 50 different patients. The database included non-terminating AF episodes (group N), which were observed to continue in AF for at least one
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 94–98, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
hour following the end of the excerpt, and AF episodes terminating immediately after the end of the extracted segment (group T). Recordings were divided into a learning set and a test set: 10 labelled recordings of each group formed the learning set. An optimal threshold, which should allow to discern between terminating and non-terminating paroxysmal AF episodes, was defined making use of the proposed methodology together with the learning set. Finally, the test set was composed of the remaining 30 recordings.

B. Data Preprocessing

The ECG recordings were preprocessed in order to reduce noise and nuisance interferences and to improve later analysis. Firstly, baseline wander was removed making use of bidirectional high-pass filtering with a 0.5 Hz cut-off frequency [9]. Secondly, high-frequency noise was reduced with an eighth-order bidirectional IIR Chebyshev low-pass filter, whose cut-off frequency was 70 Hz [10]. Finally, powerline interference was removed through adaptive filtering, which preserves the ECG spectral information [11].

III. METHODS
A. Sample Entropy

Sample Entropy (SampEn) examines time series for similar epochs and assigns a non-negative number to the sequence, with larger values corresponding to more complexity or irregularity in the data [12]. Two input parameters, a run length m and a tolerance window r, must be specified for SampEn to be computed. SampEn(m,r,N) is the negative logarithm of the conditional probability that two sequences similar for m points remain similar at the next point, where self-matches are not included in calculating the probability. Thus, a lower value of SampEn indicates more self-similarity in the time series. Formally, given N data points from a time series {x(n)} = x(1), x(2), ..., x(N), SampEn can be defined as follows:

1. Form the N−m+1 vectors Xm(i), for 1 ≤ i ≤ N−m+1. These vectors represent m consecutive x values, starting at the i-th point.

2. Define the distance between vectors Xm(i) and Xm(j), d[Xm(i),Xm(j)], as the maximum absolute difference between their scalar components:

d[Xm(i), Xm(j)] = max{k=0,...,m−1} |x(i+k) − x(j+k)|    (1)

3. For a given Xm(i), count the number of j (1 ≤ j ≤ N−m, j ≠ i), denoted as Bi, such that the distance between Xm(i) and Xm(j) is less than or equal to r. Then, for 1 ≤ i ≤ N−m,

Bim(r) = Bi / (N−m−1)    (2)

4. Define Bm(r) as

Bm(r) = (1/(N−m)) Σ{i=1..N−m} Bim(r)    (3)

5. Increase the dimension to m+1 and calculate Ai as the number of Xm+1(j) within r of Xm+1(i), where j ranges from 1 to N−m (j ≠ i). We then define Aim(r) as

Aim(r) = Ai / (N−m−1)    (4)

6. Set Am(r) as

Am(r) = (1/(N−m)) Σ{i=1..N−m} Aim(r)    (5)

Thus, Bm(r) is the probability that two sequences will match for m points, whereas Am(r) is the probability that two sequences will match for m+1 points. Finally, sample entropy can be defined as

SampEn(m, r) = lim{N→∞} { −ln [ Am(r) / Bm(r) ] }    (6)

which is estimated by the statistic

SampEn(m, r, N) = −ln [ Am(r) / Bm(r) ]    (7)
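The statistic of equation (7) can be computed directly; the sketch below is a straightforward, unoptimized NumPy version in which the normalization factors of equations (2)–(5) cancel in the ratio Am(r)/Bm(r):

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """SampEn(m, r, N) of a 1-D series x; r is given as a fraction of
    the standard deviation of x. Assumes the series is long enough for
    at least one template match of length m+1 (otherwise -ln 0 = inf)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    tol = r * np.std(x)
    n = N - m                  # number of templates used for both lengths

    def match_pairs(length):
        # Template vectors of the given length (step 1).
        X = np.array([x[i:i + length] for i in range(n)])
        # Chebyshev distance of eq. (1) between all template pairs.
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        return np.sum(d <= tol) - n      # drop self-matches (i == j)

    B = match_pairs(m)      # pairs similar for m points
    A = match_pairs(m + 1)  # pairs similar for m+1 points
    return -np.log(A / B)   # eq. (7)
```

For the parameter choices discussed next (m = 2, r = 0.2 times the standard deviation), a regular signal such as a sinusoid yields a much lower value than white noise, as expected from the definition.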
Although m and r are critical in determining the outcome of SampEn, no guidelines exist for optimizing their values. The m and r values suggested by Pincus are m = 1 or m = 2 and r between 0.1 and 0.25 times the standard deviation of the original time series {x(n)} [7].

B. Main Atrial Wave Derivation and Classification

The proposed methodology to obtain the MAW is shown in Fig. 1. Firstly, cancellation of QRST waves from the ECG signals was performed, obtaining a raw approximation of the MAW. Though a variety of QRST cancellation techniques exist, the average QRST template cancellation method was used, since only two leads were available [13]. Next, the power spectral density (PSD) of the residual signal was calculated using the Welch periodogram. A Hamming window of 4096 points in length, a 50% overlap between adjacent windowed sections and an 8192-point Fast Fourier Transform (FFT) were used as computational parameters. The frequency with the largest amplitude within the 3-9 Hz range was selected as the dominant atrial frequency. Finally, the MAW was obtained applying a selective filter to the AA signal centered around the dominant atrial frequency. To prevent distortion, a linear-phase FIR filter was used [14]. The Chebyshev approximation was preferred because all the filter parameters can be suitably fitted; since minimum ripple in the pass and stop bands was needed, a high-order filter had to be used, as the Kaiser approximation indicates [15]:

M = [ −20·log10(√(δ1·δ2)) − 13 ] / (14.6·Δf) + 1    (8)

where M is the filter order, δ1 and δ2 are the pass- and stop-band ripples, respectively, and Δf is the transition bandwidth between bands. A selective filter must have δ1 and δ2 lower than 0.5% of the gain and Δf lower than 0.01 Hz, so its order must be greater than 250. The selected filter bandwidth should be lower than 6 Hz because the typical frequency range of the AA is around 3-9 Hz [16]. In our experiments, the best results were obtained with a 3 Hz bandwidth and 768 filter coefficients. The MAW organization results obtained through the application of SampEn to the learning set defined the optimum threshold (Th) that later allows the classification of the test set into terminating and non-terminating AF episodes.

IV. RESULTS

The proposed methodology was applied to the learning set and 100% sensitivity and 90% specificity were obtained, see Fig. 2(a). The Receiver Operating Characteristic (ROC) curve provided 0.089115 as the optimum SampEn discrimination threshold between terminating and non-terminating AF sets. Fig. 2(b) shows the SampEn values for the 20 learning signals together with the mean and standard deviation values for each group. Note that all non-terminating and 9 out of 10 terminating recordings (95% of the learning signals) can be correctly discriminated. Making use of the aforementioned SampEn threshold, 27 out of 30 test signals (90%) were correctly classified, see Fig. 2(c). Therefore, the AF behaviour of 46 out of 50 recordings (92%) was correctly predicted through the organization analysis of the main atrial wave. The terminating episodes present lower SampEn values than the non-terminating ones, see Table 1. Indeed, both paroxysmal AF groups are statistically distinguishable, given that the statistical significance is notably lower than 0.01.
Table 1. Mean value and standard deviation of SampEn for the N and T sets and the t-student statistical significance.

             N Group             T Group
mean ± std   0.1047 ± 0.01352    0.0747 ± 0.0156
p            0.00000000245
Finally, note that the obtained SampEn values are quite low, because the MAW is a notably regular wave.

V. DISCUSSION AND CONCLUSIONS

By analyzing with sample entropy the main atrial wave organization prior to spontaneous AF termination, 92% of the terminating and non-terminating AF episodes were correctly classified. Obtaining the MAW was necessary because direct SampEn analysis of the atrial activity did not reveal significant differences between terminating and non-terminating AF recordings. The presence of noise and ventricular residues is believed to be the main reason for this negative result [5]. In previous works, different characteristics of the atrial activity have been analyzed in order to predict AF termination. Petrutiu et al. [17] studied the AA dominant frequency, whose inverse is the wavelength of the main reentry in the atrial tissue [18], and 94% of the episodes were correctly classified. Nilsson et al. [5] studied the AA energy distribution, successfully predicting 88% of the recordings. The results obtained with the presented methodology lead to the conclusion that the MAW, and therefore the main reentry, contains important information about spontaneous AF termination. Moreover, AF terminates when the MAW disappears [19], because the mechanisms that extinguish the MAW affect the existing reentries. Hence, an in-depth analysis of the MAW should improve the understanding of AF termination mechanisms. Terminating episodes present lower SampEn values and, consequently, higher organization than non-terminating episodes. This observation corroborates the increment of organization in the atrial activity prior to AF termination obtained with invasive atrial electrograms [20] and clinically accepted [4]. In addition, the obtained results prove that this increase in signal organization is reflected on surface ECG recordings.
Therefore, it can be concluded that clinically relevant information can be extracted through sample entropy-based non-invasive analysis. This fact may lead towards the development of improved therapeutic interventions for the treatment of AF.
Fig. 1. Block diagram describing the proposed methodology. Firstly, the ventricular activity is removed from the input ECG to obtain the AA. Next, a low-bandwidth selective filter, centered around the dominant atrial frequency, is applied to the atrial signal. SampEn is computed on the resulting signal and compared with the threshold in order to decide whether the episode terminates.
Fig. 2. Obtained results. (a) Receiver Operating Characteristic (ROC) curve obtained with the SampEn values estimated on the learning set. Classification into non-terminating and terminating AF for the recordings in (b) the learning set and (c) the test set.
VI. ACKNOWLEDGMENT

This work was partly supported by the project GV06/299 from Consellería de Empresa, Universidad y Ciencia de la Generalitat Valenciana and TEC2007-64884 from the Spanish Ministry of Science and Education.
REFERENCES

1. V. Fuster, L. E. Ryden, R. W. Asinger, et al., "ACC/AHA/ESC 2006 guidelines for the management of patients with atrial fibrillation: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines and the European Society of Cardiology committee for practice guidelines developed in collaboration with the European Heart Rhythm Association and the Heart Rhythm Society," Europace, vol. 8, no. 9, pp. 651–745, 2006.
2. C. Blomstrom-Lundqvist, M. M. Scheinman, E. M. Aliot, J. S. Alpert, et al., "ACC/AHA/ESC guidelines for the management of patients with supraventricular arrhythmias," European Heart Journal, vol. 24, no. 20, pp. 1857–1897, 2003.
3. K. Konings, C. Kirchhof, J. Smeets, H. Wellens, O. Penn, and M. Allessie, "High-density mapping of electrically induced atrial fibrillation in humans," Circulation, vol. 89, pp. 1665–1680, 1994.
4. A. Bollmann and F. Lombardi, “Electrocardiology of atrial fibrillation - Current knowledge and future challenges,” IEEE Engineering in Medicine and Biology Magazine, vol. 25, no. 6, pp. 15–23, 2006. 5. F. Nilsson, M. Stridh, A. Bollmann, and L. Sornmo, “Predicting spontaneous termination of atrial fibrillation using the surface ECG,” Medical Engineering & Physics, vol. 8, pp. 802– 808, 2006. 6. S. M. Pincus, “Approximate entropy as a measure of system complexity,” in Proc. Natl. Acad. Sci. USA, vol. 88, no. 6, pp. 2297–2301, 1991. 7. A. Bollmann, “Quantification of electrical remodeling in human atrial fibrillation,” Cardiovasc Res, vol. 47, pp. 207–209, 2000. 8. A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, and et al., “Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals,” Circulation, vol. 101, no. 23, pp. e215– e220, 2000. 9. I. Dotsinsky and T. Stoyanov, “Optimization of bi-directional digital filtering for drift suppression in electrocardiogram signals,” J. Med. Eng. Technol., vol. 28, no. 4, pp. 178–180, 2004. 10. Y. Sun, K. Chan, and S. M. Krishnan, “ECG signal conditioning by morphological filtering,” Comput Biol Med, vol. 32, no. 6, pp. 465–479, 2002.
11. M. Ferdjallah and R. E. Barr, "Adaptive digital notch filter design on the unit circle for the removal of powerline noise from biomedical signals," IEEE Transactions on Biomedical Engineering, vol. 42, no. 6, pp. 529–536, 1994. 12. J. S. Richman and J. R. Moorman, "Physiological time series analysis using approximate entropy and sample entropy," Am J Physiol, vol. 278, no. 6, pp. H2039–H2049, 2000. 13. J. Slocum, A. Sahakian, and S. Swiryn, "Diagnosis of atrial fibrillation from surface electrocardiograms based on computer-detected atrial activity," Journal of Electrocardiology, vol. 25, no. 1, pp. 1–8, 1992. 14. L. Sörnmo and P. Laguna, Bioelectrical Signal Processing in Cardiac and Neurological Applications. Elsevier Academic Press, 2005. 15. L. R. Rabiner, J. H. McClellan, and T. W. Parks, "FIR digital filter design techniques using weighted Chebyshev approximation," Proc. IEEE, vol. 63, pp. 595–610, 1975. 16. M. Stridh and L. Sornmo, "Spatiotemporal QRST cancellation techniques for analysis of atrial fibrillation," IEEE Trans. Biomed. Eng, vol. 48, no. 1, pp. 105–111, 2001. 17. S. Petrutiu, A. V. Sahakian, J. Ng, and S. Swiryn, "Analysis of the surface electrocardiogram to predict termination of atrial
fibrillation: the 2004 Computers in Cardiology/PhysioNet challenge," Computers in Cardiology, pp. 105–108, 2004. 18. M. Holm, S. Pehrson, M. Ingemansson, L. Sörnmo, R. Johansson, L. Sandhall, et al., "Non-invasive assessment of the atrial cycle length during atrial fibrillation in man: introducing, validating and illustrating a new ECG method," Cardiovasc. Res., vol. 38, no. 1, pp. 69–81, 1998. 19. J. Kneller, J. Kalifa, R. Q. Zou, A. V. Zaitsev, M. Warren, O. Berenfeld, et al., "Mechanisms of atrial fibrillation termination by pure sodium channel blockade in an ionically-realistic mathematical model," Circulation Research, vol. 96, no. 5, pp. E35–E47, 2005. 20. L. Faes, G. Nollo, R. Antolini, F. Gaita, and F. Ravelli, "A method for quantifying atrial fibrillation organization based on wave-morphology similarity," IEEE Transactions on Biomedical Engineering, vol. 49, no. 12, pp. 1504–1513, 2002.

Author: Raúl Alcaraz Martínez
Institute: Innovation in Bioengineering Research Group
Street: E. U. Politécnica, Campus Universitario
City: Cuenca
Country: Spain
Email: [email protected]
Phase-Rectified Signal Averaging for the Detection of Quasi-Periodicities in Electrocardiogram R. Schneider1, A. Bauer1, J.W. Kantelhardt2, P. Barthel1 and G. Schmidt1 1
1. Medizinische Klinik und Deutsches Herzzentrum München, Technische Universität München, Munich, Germany 2 Institute of Physics, Martin-Luther-Universität Halle-Wittenberg, Halle (Saale), Germany
Abstract— The methods analyzing heart rate variability (HRV) have to deal with non-stationary and noisy signals. We present a recently developed method, called phase-rectified signal averaging (PRSA), which facilitates the processing of such signals. It provides a higher sensitivity for the detection of small oscillations in the heartbeat tachogram and it allows the separate analysis of oscillations related to heart rate decelerations and heart rate accelerations. Because of these properties, PRSA enables a better risk stratification in post-myocardial-infarction patients than the standard HRV parameters.

Keywords— Heart rate variability, autonomic nervous system, phase-rectified signal averaging.
I. INTRODUCTION

Analysis of heart rate variability (HRV) is a non-invasive technique which provides information about the function of the autonomic nervous system. Several studies have shown that assessing the status of the autonomic nervous system is a useful method for assessing the mortality risk of cardiac patients [1-4]. Heartbeat intervals are modulated by the autonomic nervous system, with higher brain centers responding to varying inputs from the heart, the lungs and the blood vessels. Therefore, periodicities in the heart rate tachogram occur on different time scales and are only stationary over a short period of time. Standard HRV parameters are usually calculated using 24-hour ECG recordings and are based either on the heartbeat intervals directly (time domain) or on the power spectrum of the heartbeat intervals (frequency domain) [2]. When the power spectrum is calculated with standard Fourier analysis, the non-stationarities in the heartbeat intervals (changes in amplitude, frequency and phase) result in a spectrum showing predominantly 1/f noise. Only distinct oscillations caused by the autonomic nervous system are visible; weak and transient oscillations are masked by the noise. In this presentation we will describe a recently developed method, termed phase-rectified signal averaging (PRSA) [5-8], which allows the extraction of periodic elements from non-stationary and noisy signals.
II. PHASE-RECTIFIED SIGNAL AVERAGING

The basic principle of PRSA is the alignment of segments of a signal relative to selected anchor points, followed by averaging of these segments. The complete PRSA procedure can be divided into five steps: (1) defining anchor points, (2) defining segments around each anchor point, (3) phase rectification, (4) signal averaging and (5) quantification of the transformed signal. Figure 1 shows these five steps applied to a 24-hour ECG. The next sub-sections explain each step.

A. Definition of Anchor Points

At the beginning, anchor points are selected according to certain properties of the signal. A very simple criterion is an increase (or alternatively a decrease) of a heartbeat interval. This means that each interval which is longer (or shorter) than the preceding RR interval is selected as an anchor point. In the heartbeat-interval graph of Figure 1, step A, the anchor points related to an interval prolongation are marked as full circles (the anchor points related to an interval shortening are marked as open circles).

B. Definition of Segments

In the next step, segments around each single anchor point are selected. All segments have the same size and may overlap. The length of the segments is defined by the lowest frequency to be visualized.

C. Phase Rectification

The defined segments are now aligned in such a way that the anchor points are on top of each other. Figure 1C shows the alignment schematically; in Figure 1D the aligned signal segments are the light grey lines.
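The anchor selection, segment definition, phase rectification and averaging steps can be sketched as follows. This is a compact NumPy illustration on an RR-interval series; the segment half-length L and everything beyond the deceleration/acceleration criterion described above are assumed values:

```python
import numpy as np

def prsa(rr, L=60, decelerations=True):
    """Phase-rectified signal averaging of an RR-interval series (ms).
    Anchors are intervals longer (decelerations) or shorter
    (accelerations) than the preceding one; windows of length 2L+1
    centred on each anchor are aligned and averaged."""
    rr = np.asarray(rr, dtype=float)
    # Step 1: anchor points, i.e. indices i with rr[i] > rr[i-1] (or <).
    mask = rr[1:] > rr[:-1] if decelerations else rr[1:] < rr[:-1]
    anchors = np.where(mask)[0] + 1
    # Step 2: keep anchors whose full window fits inside the series.
    anchors = anchors[(anchors >= L) & (anchors < rr.size - L)]
    # Steps 3-4: phase rectification (alignment at the anchor) + averaging.
    segments = np.array([rr[a - L:a + L + 1] for a in anchors])
    return segments.mean(axis=0)     # PRSA signal, offsets -L .. L
```

By construction, every aligned segment satisfies the anchor criterion at its centre, so the averaged PRSA signal shows a prolongation (or shortening) step at offset 0 regardless of how noisy the tachogram is.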
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 38–41, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Phase-Rectified Signal Averaging for the Detection of Quasi-Periodicities in Electrocardiogram
DC (AC) = [X(0) + X(1) - X(-1) - X(-2)] / 4
Fig. 1 Phase-rectified signal averaging applied to a 24-hour ECG. (A) Definition of anchor points: heartbeat intervals which are longer (shorter) than the previous interval are marked with full circles (open circles). (B) Definition of segments: around each anchor point (here interval prolongations are used) segments with the same size are selected. (C) Phase rectification: Segments are aligned at the anchors. (D) Signal averaging: Averaging of the selected segments to get the PRSA signal. (E) Quantification: measuring the central part of the PRSA signal using a Haar wavelet with s=2 and p=0. (From [6] with permission.)
R. Schneider, A. Bauer, J.W. Kantelhardt, P. Barthel and G. Schmidt
D. Signal Averaging

To get the PRSA signal X(i), the signal values within the aligned segments are averaged: X(0) is the average of the RR intervals at all anchor points, X(1) and X(-1) are the averages of the RR intervals immediately following and preceding the anchor points, and so on. The black line in figure 1D is the resulting PRSA signal.

E. Quantification

For the quantification of the PRSA signal, we suggest using wavelet analysis. This allows selecting both the time scale s and the time position p relative to the center (the anchor points). For analyzing heartbeat intervals we use a Haar wavelet with s = 2 and p = 0. With these settings, the central part of the PRSA signal (representing the contribution of all oscillations) is analyzed. The result of the wavelet analysis is called deceleration capacity (DC) when heartbeat interval prolongations are used as anchor points, and acceleration capacity (AC) when interval shortenings are used. Figure 1E shows the central part of the PRSA signal and how DC and AC are calculated (the only difference between the two lies in step A, where the anchor points are defined by different selection criteria).

III. PROPERTIES OF PRSA

Figure 2 shows, for a 24-hour ECG, the original tachogram (A), the power spectrum of the tachogram (B), the PRSA-transformed signal (C) and the power spectrum of the PRSA signal (D). Both power spectra show two peaks at the same frequencies, but in the spectrum based on the PRSA signal the frequency components are more distinct than in the tachogram-based spectrum. The PRSA transformation improves the signal-to-noise ratio while preserving the frequency content of the original signal. Furthermore, the PRSA signal is much shorter than the tachogram (about 100 samples vs. ~80,000 intervals). The PRSA signal can thus be seen as a compressed version of the tachogram containing all the relevant frequencies; what is lost is the temporal association of the oscillations. A detailed theoretical description of the PRSA method can be found in [5] and [7].

Fig. 2 Comparison of (A) a tachogram, (B) the power spectrum of the tachogram, (C) the PRSA-transformed signal of the tachogram using RR interval prolongations as anchor points and (D) the power spectrum of the PRSA signal. In both power spectra two frequency peaks can be identified (marked with arrows), but the peaks in the PRSA spectrum are more prominent; the signal-to-noise ratio is much better. (From [5] with permission.)

IV. CLINICAL APPLICATION

Figure 3 shows representative PRSA signals of three post-myocardial-infarction patients. For each patient, the PRSA signals related to heart rate decelerations and accelerations are shown on the left and right side, respectively. The top panel (figure 3 A and B) shows normal PRSA signals; the patient from whom the ECG was taken survived the follow-up period. Both PRSA signals are symmetric and the magnitudes of DC and AC are nearly identical. The middle panel (figure 3 C and D) shows the data from a patient who died three months after the index infarction. The PRSA signals are blunted and symmetric, and the magnitudes of both PRSA parameters are significantly smaller. Finally, the bottom panel (figure 3 E and F) shows the PRSA signals from a patient who died 5 months after the index infarction. In this case, the PRSA signals are asymmetric and the magnitudes of DC and AC are different: the acceleration-related parameter AC has a normal value, but the deceleration-related value DC has a pathologic value. We found this asymmetric pattern in 15% of our patients [6].
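Steps (4) and (5) of section II, i.e. averaging the aligned segments into the PRSA signal X(i) and quantifying its central part with a Haar wavelet at scale s = 2 and position p = 0, can be sketched as follows (a NumPy sketch under the paper's definitions, not the authors' code):

```python
import numpy as np

def prsa_signal(rr, L=60, decelerations=True):
    """Steps 1-4: anchor selection, segmentation, alignment and
    averaging; returns X(i) for i = -L..L with X(0) at the anchor."""
    rr = np.asarray(rr, dtype=float)
    cmp = np.greater if decelerations else np.less
    anchors = np.where(cmp(rr[1:], rr[:-1]))[0] + 1
    anchors = anchors[(anchors >= L) & (anchors < len(rr) - L)]
    segments = np.array([rr[a - L:a + L + 1] for a in anchors])
    return segments.mean(axis=0)

def capacity(x):
    """Step 5, Haar wavelet with s = 2 and p = 0:
    DC (or AC) = [X(0) + X(1) - X(-1) - X(-2)] / 4."""
    c = (len(x) - 1) // 2          # index of the central sample X(0)
    return (x[c] + x[c + 1] - x[c - 1] - x[c - 2]) / 4.0
```

With decelerations=True the result of capacity() is the deceleration capacity DC; with decelerations=False the same quantification yields the acceleration capacity AC.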
When applying PRSA to 24-hour ECGs from myocardial infarction patients, the DC parameter is highly associated with mortality, whereas AC has only a weak association with mortality. DC can differentiate patients at high risk (DC ≤ 2.5 ms), intermediate risk (2.5 ms < DC ≤ 4.5 ms) and low risk (DC > 4.5 ms) [6].
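The published cut-offs translate into a simple rule (the assignment of values between 2.5 ms and 2.6 ms to the intermediate class is an assumption of this sketch, since DC is reported at 0.1 ms resolution):

```python
def dc_risk_class(dc_ms):
    """Risk stratum from deceleration capacity, per the cut-offs in [6]:
    DC <= 2.5 ms high risk, DC <= 4.5 ms intermediate, else low risk."""
    if dc_ms <= 2.5:
        return "high"
    if dc_ms <= 4.5:
        return "intermediate"
    return "low"
```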
V. CONCLUSION

The phase-rectified signal averaging method facilitates the detection of oscillations in biological signals; it provides a much better signal-to-noise ratio than the standard Fourier transformation [5,7]. Furthermore, it allows the separate investigation of heart rate deceleration-related and acceleration-related HRV. As shown in [6], DC is a better predictor of mortality after myocardial infarction than not only the standard HRV parameters but also the left ventricular ejection fraction.

Fig. 3 Representative PRSA signals of 24-hour ECGs from post-myocardial-infarction patients. (A) and (B) are from a patient who survived the follow-up period; both DC and AC are normal (DC = 5.1 ms, AC = -5.2 ms). (C) and (D) are from a patient who died 3 months after the index infarction; both DC and AC are abnormal (DC = 2.4 ms, AC = -2.5 ms). (E) and (F) are from a patient who died 5 months after the index infarction; the PRSA signal pattern is asymmetric, DC is abnormal (2.5 ms) but AC is normal (-5.0 ms). (From [6] with permission.)

REFERENCES

1. Kleiger RE, Miller JP, Bigger JT et al. Decreased heart rate variability and its association with increased mortality after acute myocardial infarction. American Journal of Cardiology 1987; 59: 256-62.
2. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability: standards of measurement, physiological interpretation and clinical use. Circulation 1996; 93: 1043-65.
3. Schmidt G, Malik M, Barthel P et al. Heart-rate turbulence after ventricular premature beats as a predictor of mortality after acute myocardial infarction. Lancet 1999; 353: 1390-6.
4. Barthel P, Schneider R, Bauer A et al. Risk stratification after acute myocardial infarction by heart rate turbulence. Circulation 2003; 108: 1221-6.
5. Bauer A, Kantelhardt JW, Bunde A et al. Phase-rectified signal averaging detects quasi-periodicities in non-stationary data. Physica A 2006; 364: 423-34.
6. Bauer A, Kantelhardt JW, Barthel P et al. Deceleration capacity of heart rate as a predictor of mortality after myocardial infarction: cohort study. Lancet 2006; 367: 1674-81.
7. Kantelhardt JW, Bauer A, Schumann AY et al. Phase-rectified signal averaging for the detection of quasi-periodicities and the prediction of cardiovascular risk. Chaos (in press).
8. Bauer A, Deisenhofer I, Schneider R et al. Effects of circumferential or segmental pulmonary vein ablation for paroxysmal atrial fibrillation on cardiac autonomic function. Heart Rhythm 2006; 3: 1428-35.

Address of the corresponding author:
Author: Raphael Schneider
Institute: Munich University of Technology, 1st Medical Clinic
Street: Ismaninger Str. 22
City: 81675 Munich
Country: Germany
Email: [email protected]
QT Intervals Are Prolonging Simultaneously with Increasing Heart Rate during Dynamical Experiment in Healthy Horses

P. Kozelek, J. Holcik
Czech Technical University in Prague, Faculty of Biomedical Engineering, nam. Sitna 3105, 27201 Kladno
E-mail: [email protected], Phone: +420 312 608 213, Fax: +420 312 608 204

Abstract – QT interval prolongation has been reported in numerous publications and is considered a significant factor in the prediction of sudden death. The available sources deal with the analysis of static records, i.e. without transient processes. We have recorded ECG data in horses that show clear prolonging of the QT intervals during increasing heart rate in a dynamical experiment. Generally speaking, this is regarded as a great risk of sudden heart failure; paradoxically, those records were measured in well-trained horses. We suppose that such a phenomenon is caused by changes in the nervous control of the heart. We have developed several model structures of QT interval control. The simulation results indicate that the generally accepted view of QT interval control is misleading or incomplete. We have proved theoretically that the abnormalities are caused by variant sensitivity of the ventricular cells to sympathetic and parasympathetic stimulation during the repolarisation phase.

I. INTRODUCTION
Past studies on the control of the myocardium, based on the anatomy of the cardiovascular system, show that there are two mechanisms (stimulating and inhibiting) controlling the performance of the heart in the organism. The task of our project was to describe mathematically the sympathetic (stimulating) and vagal (inhibiting) activities in the vegetative part of the nervous system. Direct measurement of the electro-chemical activity of both branches would be very complicated for practical reasons (invasive measurement; difficulties in connecting the measuring electrodes to the nervous fibres outside a laboratory environment, etc.). Therefore, we decided to use data from indirect measurements through the activity of the organs which are controlled by the vegetative nervous system. The anatomy of the equine heart shows that the open ends of the vegetative nerves have a great density close to the sinus node, which is the basic source of electrical impulses in the heart muscle and determines the heart rate. Thus the sinus node also determines the length of the RR intervals in the ECG. The RR intervals can therefore serve as an indirect indicator of the common sympathetic and parasympathetic activity. In our work we assumed mutually independent activities of both branches. Two independent controlling mechanisms are fully described by two signals; therefore, we had to define another indicator of nervous activity. There are open nervous ends of both vegetative branches in the equine heart ventricles. A suitable choice for this second indicator was therefore the sequence of QT intervals (the time of spreading of the electrical excitation through the tissue of the myocardium ventricles). Based on the knowledge of the sequences of RR and QT intervals, we designed a model of myocardium control that could help to explain the causes of the control mechanisms of heart performance [4].

II. METHODS
Block diagram of the model structure is depicted in Fig. 1. As follows from our previous studies [1], [4], we used the formula

    RR(t) = RRSAU - kSR·NS(t) + kPR·NP(t)    (1)
for generating the sequences of RR intervals. RRSAU is the basic heart period of the sinus node, and NS and NP represent the sympathetic and parasympathetic activity levels. kSR and kPR are multiplicative parameters that express the level of influence of each neural branch upon the duration of the RR intervals. Similarly,
    QT(t) = QT0 - kSQ·NS(t - τSQ) + kPQ·NP(t - τPQ)    (2)

describes the equation generating the QT intervals, where τSQ and τPQ are the delays in the sympathetic and parasympathetic neural branches in the heart ventricles, and kSQ, kPQ are multiplicative parameters similar to those in eq. (1). QT0 is the basic length of the QT interval at neural ventricular blockade.
Fig. 1: Principle structure of the cardiovascular system control
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 62–65, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
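Equations (1) and (2) can be simulated directly. In the sketch below the QT gains kSQ, kPQ and the delays τSQ, τPQ default to the record-1 values of Table 1, while RRSAU, QT0 and the RR-branch gains kSR, kPR are hypothetical placeholders (the paper does not list them):

```python
import numpy as np

def simulate_rr_qt(NS, NP, dt=1.0,
                   RR_SAU=1500.0, QT0=500.0,   # hypothetical baselines (ms)
                   kSR=1.0, kPR=1.0,           # hypothetical RR gains
                   kSQ=-5.1, kPQ=-4.6,         # QT gains, Table 1, record 1
                   tauSQ=18.0, tauPQ=20.0):    # QT delays, Table 1, record 1
    """Generate RR(t) and QT(t) from sympathetic (NS) and parasympathetic
    (NP) activity according to eqs. (1) and (2); the delays are realized
    by an integer sample shift with zero padding at the start."""
    NS = np.asarray(NS, dtype=float)
    NP = np.asarray(NP, dtype=float)

    def delayed(x, tau):
        # x(t - tau) on the sampling grid, zero before the signal starts
        n = int(round(tau / dt))
        y = np.zeros_like(x)
        if n < len(x):
            y[n:] = x[:len(x) - n] if n else x
        return y

    RR = RR_SAU - kSR * NS + kPR * NP
    QT = QT0 - kSQ * delayed(NS, tauSQ) + kPQ * delayed(NP, tauPQ)
    return RR, QT
```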
Both delays are connected with the finite velocity of nervous stimulation spreading. The aim of extending the described model was to find the internal structure of the subsystems SYM and PSYM (see Fig. 1) and their mutual relationship. The models for simulating the static and dynamic properties of both types of nerve fibres have the structures depicted in Fig. 3, as published in [2] and [3]. First of all, it was necessary to describe the sensitivity of the fibres to their stimulation, represented by the input signal NE. In our model, we used the bottom-limited piecewise linear function (Fig. 2)

    NS,P = aS,P·NE + bS,P   if NE > -bS,P / aS,P
    NS,P = 0                otherwise              (3)

Fig. 2: Sympathetic and/or parasympathetic sensitivities, NS, NP
where the indexes "S" and "P" represent the sympathetic and the parasympathetic branch, respectively, and aS,P is the sensitivity coefficient. The inertia of the nerves is modelled by a first-order low-pass filter described by the frequency response

    F(jω) = k / (TX·jω + 1) · e^(-jω·τX)    (4)
where ω represents the frequency, TX is the time constant of the filter, τX is a unit delay and k is the gain of the filter. The inertia is associated with the limited delay in the response of the cells to their excitation. Finally, the "time-delay" block t represents the finite velocity of spreading of the excitation along the nervous threads and/or the heart tissue. The input signal NE represents the response of the neural feedback to impulse stimulation. It is described as

    NE(t) = A·[sin((2π/T)·(t - t1) - π/2) + 1]   if t1 ≤ t ≤ t1 + T
    NE(t) = 0                                    otherwise            (5)

where T is the duration of the input impulse and t1 represents its lag after some reference starting point. Fig. 5 shows the structure of the nervous heart control. It is based on the hypothesis that the QT intervals are not controlled by the nervous activity only; an indirect dependency of the QT intervals on the heart rate and its variability can also be identified.

Fig. 3: Basic structure of the nervous fibres' model
Fig. 4: Input signal NE
Fig. 5: Detail structure of nervous control
Fig. 6: Experimental and simulated data generated by the optimized parameters (record 1 – left, record 2 – right)
Fig. 7: Schematic explanation of different behaviour in equine heart ventricles (record 1 – left, record 2 – right)
Fig. 8: Objective functions for different values of gain factors kSQ, kPQ
III. RESULTS
The simulation results were compared with the four most representative sets of experimental data. The criterion for choosing records was a noiseless signal with considerable changes in the sequences of RR and QT intervals as responses to impulse stimulation. Since the aim of the work was to define the properties of the controlling subsystems, we identified model parameters such as the time constants of the filters (TSRp, TPRp, TSQp, TPQp and/or TSRs, TPRs, TSQs, TPQs), the time delays (τSQp, τPQp and/or τSQs, τPQs) and the gain factors (kSQp, kPQp and/or kSQs, kPQs). We used the Matlab® Optimization Toolbox as the optimization tool. The root mean square error between simulated and real experimental data was used as the objective function and its minimum was searched for by the gradient method. The identification results for two different types of records are summarized in Table 1:
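Because eq. (2) is linear in the gain factors, the RMS-error minimum over the gains alone can also be obtained in closed form. The sketch below is a NumPy stand-in for the Matlab Optimization Toolbox procedure described above, with the delays fixed at zero for brevity:

```python
import numpy as np

def fit_gains(qt_measured, NS, NP, QT0):
    """Least-squares estimate of kSQ and kPQ in eq. (2): the model
    QT = QT0 - kSQ*NS + kPQ*NP is linear in the gains, so the RMS
    error is minimized by an ordinary least-squares solve."""
    A = np.column_stack([-np.asarray(NS, float), np.asarray(NP, float)])
    k, *_ = np.linalg.lstsq(A, np.asarray(qt_measured, float) - QT0,
                            rcond=None)
    return k  # [kSQ, kPQ]
```

For the full model, including the time constants and delays, a gradient-based iterative minimization (as used by the authors) is needed instead.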
Table 1: Summary of optimized parameters for two different relationships between sequences of RR and QT intervals

            record 1 (velvet - s el 1 - 2002-04-24)   record 2 (nikita - s el - 2002-04-24)
    TSR     25 s                                      4 s
    TPR     25 s                                      4 s
    TSQ     19 s                                      18 s
    TPQ     17 s                                      18 s
    τSQ     18 s                                      4 s
    τPQ     20 s                                      1.2 s
    kSQ     -5.1                                      -4.2
    kPQ     -4.6                                      -4.6
The two types of records correspond to: a) a direct dependency of QT on RR intervals – shortening of the RR intervals is followed by shortening of the QT intervals (Figure 6-left); b) a relationship between RR and QT that has not yet been explained – shortening of the RR intervals causes an almost immediate prolonging of the QT interval sequences (Figure 6-right).

IV. CONCLUSIONS
Having identified the model parameters by means of computer simulations, we are able to explain the fundamental causes of the various behaviours of the QT intervals. We recognised an almost linear dependency with a positive slope between the optimum parameters τSQ, τPQ and TSQ, TPQ. This is due to the fact that the developed model uses linear subsystems only (the non-linear functions NS = NS(NE) and NP = NP(NE) are used in their linear parts only) and the input signal NE is used for both the sympathetic and the parasympathetic branch. We have proved experimentally that the simple shape of the QT interval sequences with one local extremum is generated by the model using very similar values of τSQ, τPQ and TSQ, TPQ in both neural branches. If the supposition about the similarity of the above-mentioned parameters is not valid, then the signals NS, NP are mutually shifted in time. The mutual shift causes a change of the QT sequence shape: the single local extremum is replaced by a biphasic waveform with a local maximum and minimum (Fig. 9). The dependency of the parameters kPQ, kSQ can be approximated well by a linear relationship with a negative slope (see the x-y projection of the optimal path in Fig. 7 and Fig. 8)

    kSQ = a·kPQ + b,    (6)

where the estimated values of the coefficients a = -2 and b = -13.5 are roughly valid for all analysed records. If we suppose the simplified criteria τSQ = τPQ and TSQ = TPQ, then the breaking point between the direct (Fig. 6-left) and inverse (Fig. 6-right) dependency of QT on RR intervals lies at kSQ = kPQ = -4.5. Then for kSQ > kPQ we observe the direct and for kSQ < kPQ the inverse dependency of QT on RR intervals.
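Under the simplifying criteria τSQ = τPQ and TSQ = TPQ, the stated rule reduces to a comparison of the two gain factors (a direct transcription of the criterion above, valid only under those assumptions):

```python
def qt_rr_dependency(kSQ, kPQ):
    """Classify the QT/RR relationship from the ventricular gain
    factors: kSQ > kPQ gives the direct dependency, kSQ < kPQ the
    inverse one; the breaking point lies at kSQ = kPQ (= -4.5)."""
    if kSQ > kPQ:
        return "direct"
    if kSQ < kPQ:
        return "inverse"
    return "breaking point"
```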
Fig. 9: Bi-phase sequences of QT intervals (heda1)
ACKNOWLEDGEMENT The research was granted by the project of Internal Grant Competition in Czech Technical University in Prague.
REFERENCES

1. Holcik, J., Kozelek, P., Hanak, J., Sedlinska, M.: Mathematical Modelling as a Tool for Recognition of Causes of Disorders in QT/RR Interval Relationship in Equine ECG. Proc. of PRIA-2004, St. Petersburg, Russia, part III, pp. 688-691.
2. Van der Voorde, B.J.: Modeling the Baroreflex – a system analysis approach, pp. 10-59, 136-178. Amsterdam, Netherlands, September 1992.
3. Holcik, J., Kozelek, P., Jirina, M., Hanak, J., Sedlinska, M.: Open-loop Model of Equine Heart Control. 13th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, vol. 9, pp. 297-298. ISSN 1680-0737.

Author: Petr Kozelek
Institute: Czech Technical University in Prague
Street: nam. Sitna 3105
City: 272 01 Kladno
Country: Czech Republic
Email: [email protected]
Relative contribution of heart regions to the precordial ECG – an inverse computational approach A.C. Linnenbank1,2, A. van Oosterom3, T.F. Oostendorp4, P.F.H.M. van Dessel5, A.C. van Rossum2,6, R. Coronel1, H.L. Tan1,5, J.M.T. de Bakker1,2,7 1
Heart Lung Center, Dept. of Exp. Cardiology, AMC, Amsterdam, Netherlands, 2 ICIN, Utrecht, Netherlands 3 Dept. of Cardiology, University of Lausanne, Switzerland, 4 Dept. of Biophysics, Radboud University, Nijmegen, Netherlands, 5 Dept. of Clinical Cardiology, AMC, Amsterdam, Netherlands, 6 Dept. of Cardiology, VUMC, Amsterdam, Netherlands, 7 Dept. of Cardiology, UMC, Utrecht, Netherlands
Abstract— With an inverse computational approach, using a multi-lead ECG recording of normal sinus rhythm in a patient, the spread of activation and repolarization was computed. Using this timing information, the contributions of selected segments of the heart to the standard 12-lead ECG were computed. The results show that in none of the precordial leads is the influence of the right ventricle larger than that of the left. The electric activity of the basal regions and the RVOT is relatively poorly represented in the standard 12-lead ECG. Keywords— body surface mapping, activation time imaging, action potential duration, inverse procedure.
I. INTRODUCTION It is generally assumed that the precordial lead V1 mainly reflects the electrical activity of the right ventricle, V2 that of the interventricular septum, and that the V3-V6 signals are dominated by the left ventricle. The relative contribution of these cardiac regions to the ECG cannot be verified experimentally. Knowledge about the contribution of the activation of various parts of the heart to the ECG is important for ascribing ECG abnormalities to certain locations in the heart. Diseases like the Brugada syndrome affect the entire heart, but the ECG characteristics suggest that the abnormalities predominantly arise in the right ventricle (RV). To study the contributions of various parts of the heart to the normal 12-lead ECG, we built a (mathematical) volume conduction model of a patient from whom multiple body surface leads had previously been recorded. Torso, lung and heart geometry were reconstructed from magnetic resonance images (MRI). A previously developed inverse method [1,2] was used to estimate the timing of depolarization and repolarization at 1737 nodes specifying the geometry of the heart. The heart was digitally segmented, with triangles added to close the cutting edges. The inversely computed
depolarization and repolarization times were used to specify the local source strength while computing the contributions stemming from these different segments. II. METHODS 65 channels of ECG data were recorded from a 55-year-old female patient. The recordings were made as part of a standard family screening for the presence of the Brugada syndrome. Prior to this test, 10 minutes of baseline ECGs were recorded. At baseline the ECG was within the normal range and the heart was shown by MRI to have no structural abnormalities.
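The core of this approach, i.e. a forward computation that multiplies a boundary-element transfer matrix by simulated action potentials and, for a segment's contribution, zeroes the sources outside that segment, can be sketched as follows (a NumPy sketch; the logistic amplitude and steepness are illustrative, not the values used in the study):

```python
import numpy as np

def logistic_ap(t, t_dep, amplitude=100.0, beta=2.0):
    """Single-logistic action potential: an upstroke centred on the
    local depolarization time t_dep (illustrative parameters)."""
    return amplitude / (1.0 + np.exp(-beta * (t - t_dep)))

def forward_ecg(A, t_dep, t, node_mask=None):
    """Surface potentials = transfer matrix A (n_leads x n_nodes)
    times the source matrix S (n_nodes x n_times). Passing node_mask
    zeroes all sources outside a segment, giving that segment's
    contribution to the ECG."""
    S = np.stack([logistic_ap(t, td) for td in t_dep])
    if node_mask is not None:
        S = S * np.asarray(node_mask, dtype=float)[:, None]
    return A @ S
```

Because the model is linear in the sources, the per-segment contributions add up exactly to the ECG of the entire heart, which is the kind of decomposition shown later in Fig. 3 and Fig. 4.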
Fig 1. Left: anterior view of torso, lungs and heart. Right: posterior view. Positions of recording electrodes are indicated by blue dots. Electrodes on other side are visible as gray dots. Lungs are gray. Myocardium is brown. On the right blood volumes connected to the left and right cavities are visible in red and blue respectively
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 42–45, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
A. MRI and triangulation. A standard cardiac MRI protocol was used to build a 3D numerical model of the heart, lungs and thorax of this patient (Fig. 1), as required for the inverse procedure. From the MRI data, slices in the horizontal, transversal and sagittal direction, short-axis views, and a long-axis view were selected. The short-axis views were used to create a triangulation of the heart. The same set of slices was used to define the boundaries of the lungs close to the heart to avoid intersection of the organs. The other slices were used to define the lung boundaries further away from the heart. Special software to mark points on the slices, convert those to 3D vertices and combine all these vertices into a consistent triangulation was custom written. The thorax was created by deforming an existing mesh (Judy from Poser 5, efrontier.com, Santa Cruz CA) to fit the boundaries in the MRI slices. The electrodes were placed on the thorax at the documented recording sites. B. ECG recordings During a 40-minute procedure, a 65-lead ECG was continuously recorded at the locations of the Amsterdam lead system [3], at 2 kHz sampling rate, a bit step of 1/8 µV and 24 bits resolution (modified Active2, BioSemi, Amsterdam). Post-processing of the signals included the following elements: averaging of 1-minute episodes including rejection of artefacts, reference to zero mean, appropriate baseline correction and selection of the signal elements. For the part of the study reported here only the 10 minutes of baseline ECGs were used.
C. Inverse procedure The method used for estimating the timing of the activation at the surface of the ventricular mass was based on the equivalent source model, as used previously [1,2,4]. In short, a transfer matrix from every source location on the heart to every point on the surface is computed by using the boundary element method. Forward computation of the surface potential is performed by multiplying this transfer matrix by simulated action potentials (AP) whose timing and shape depend on the activation and repolarization times. A single logistic function was used as a model for the action potential when estimating activation times only. When estimating both activation and repolarization, this function was replaced by a model of the action potential that also depends on the repolarization time, specified by a combination of logistic functions [5]. The inverse procedure estimates the activation times at the ventricular surface from the measured potentials at the electrodes by minimizing the difference between measured and model-based potentials. As the relation between activation time and source strength is non-linear, the problem constitutes a non-linear parameter estimation problem. Such problems demand the specification of an initial estimate of the solution. Next, the minimization is performed iteratively, for which, in this study, the Levenberg-Marquardt method was used. The quality of the inverse procedure relies heavily on the initial estimate. Several methods to obtain an initial estimate have been described in the literature [2,6]. In the current study we have used a method based on the intra-myocardial distance function as previously described [6,7]. In short, with this method the initial estimate is selected from an exhaustive search of all activation patterns resulting from activations in which, successively, each node is taken as a focus. For each focus the timing of the other nodes was computed from the known distance to the focus and an assumed propagation velocity. The inverse problem is mathematically ill-posed. This is overcome by including the surface Laplacian to constrain the spread of activation to physiologically realistic values [2].

Fig 2. Top left: estimated depolarization sequence. Top right: the estimated repolarization times. Below: two views of the heart, showing the difference between the timing of local repolarization and depolarization (APD). The arrows mark an area, which extends into the septum, that has unrealistically short APDs. Times on the color bars below the hearts are in ms

III. RESULTS The inversely computed activation and repolarization times are shown in Figure 2. The earliest activation was found in the septum, the earliest epicardial activation at the RV free wall close to the septum. The latest activation was at the basal parts of the heart. Figure 2 shows an area in the apex, septum and posterior apical wall where the repolarization is unrealistically early (arrows). Apart from this area, the inversely computed timing is compatible with measured activation patterns [8]. The correspondence between the computed ECG and the measured ECG was excellent (not shown). The RMS difference between measured and simulated potentials was 0.04 mV during the QRS and even less during the T wave. Figure 3 shows the ECG broken down into its constituent contributions from the LV, RV and septum. Figure 4 shows on the left an enlarged version of V1, V2 and V6 for the same data plus, for the same leads, an investigation into the contribution of various basal parts of the heart.

Fig 3. Contributions to the ECG from the right ventricle (blue), the septum (green) and the left ventricle (red). In black the ECG that was generated by the entire heart. The division of the heart is indicated in the top right

Fig 4. Enlarged versions of V1, V2 and V6. On the left the contributing segments studied are: left ventricle (red), septum (green) and right ventricle (blue). On the right the RVOT (light blue), the basal part of the right ventricle (blue) and the basal part of the left ventricle (red). The black line indicates the contribution of the entire heart

IV. DISCUSSION The results demonstrate that the electric activity of the LV free wall provides the main contribution to all precordial leads. Even V1 is not dominated by the RV, but receives a contribution from the septum that is just as large, while having an opposite sign. The contribution of the left ventricular free wall (red line in Fig. 3 and the left part of Fig. 4) matches in this case closely the recorded V1 lead. In all leads of the 12-lead ECG, the contributions from the basal parts of the ventricles and of the RVOT are very small. The method presented here for studying the contributions of selected regions to the ECG differs in two major respects from looking at contribution maps [9] like the ones available in e.g. ECGSIM [10]. First, we used a realistic activation pattern during sinus rhythm and computed the ECG generated by an entire part of the heart. Second, we used closed surfaces to prevent border artefacts.

V. LIMITATIONS A parameterized model of an AP was used that gives a reasonably good shape for action potential durations (APD) over 200 ms. For shorter APDs the plateau has a downward slope. This not only influences the ST segment and T wave but may already be noticeable at the end of the QRS. Although the solutions for the repolarization of the baseline ECGs were acceptable when they converged, there were also initial conditions for which they diverged into non-physiological solutions, possibly as a consequence of an interaction of activation and repolarization times. In the solution shown above there is an area at the apex that has much shorter APDs than the surrounding area. This area yields a contribution to the surface ECG that is close to zero. This may be due to the more gradual character of the repolarization, both in time and in space. As a result, the inverse estimation of the repolarization may have to be regularized over larger areas than the activation.

VI. CONCLUSIONS
•
•
No single lead of the standard 12-lead ECG predominantly represents the electric activity of the RV. Even V1 receives equal or greater contributions from both the septum and the LV. The RVOT, and the basal parts of the LV and RV hardly contribute to the standard leads, at least not during sinus rhythm reported on in this study. To estimate the delay in, e.g., the RV multiple simultaneously recorded signals are required, supported by a dedicated inverse procedure. Estimation of activation (and repolarization) times by an inverse computation and a subsequent forward computation of a part of the heart, can give insight into the way that part contributes to the recorded ECG, information that can not be obtained in any other way.
ACKNOWLEDGMENT This study was supported by the Netherlands Heart Foundation, grants 2002B087 and 2005B92.
REFERENCES
1. Cuppen JJM, van Oosterom A. Model studies with the inversely calculated isochrones of ventricular depolarization. IEEE Trans Biomed Eng 1984; BME-31: 652-659.
2. Huiskamp G, van Oosterom A. The depolarization sequence of the human heart surface computed from measured body surface potentials. IEEE Trans Biomed Eng 1989; 35(12): 1047-1058.
3. SippensGroenewegen A, et al. A radio-transparent carbon electrode array for body surface mapping during catheterization. Proc 9th Ann Conf IEEE-EMBS 1987: 178-181.
4. van Oosterom A. Genesis of the T wave as based on an equivalent surface source model. J Electrocardiol 2001; 34 Suppl: 217-227.
5. van Oosterom A, Jacquemet V. A parameterized description of transmembrane potentials used in forward and inverse procedures. Int Conf Electrocardiol 2005, Gdansk, Poland: Folia Cardiologica.
6. van Oosterom A, van Dam P. The intra-myocardial distance function as used in the inverse computation of the timing of depolarization and repolarization. Computers in Cardiology 2005.
7. Linnenbank AC, et al. Non-invasive imaging of activation times during drug-induced conduction changes. Proc World Congress on Medical Physics and Biomedical Engineering, Seoul, 2006.
8. Durrer D, et al. Total excitation of the isolated human heart. Circulation 1970; 41(6): 899-912.
9. van Oosterom A, Huiskamp GJ. The effect of torso inhomogeneities on body surface potentials quantified using "tailored" geometry. J Electrocardiol 1989; 22: 53-72.
10. van Oosterom A, Oostendorp TF. ECGSIM: an interactive tool for simulating QRST waveforms. Heart 2004; 90(2): 165-8.
Address of the corresponding author:
Author: A.C. Linnenbank
Institute: Dept. of Experimental and Clinical Cardiology, AMC
Street: Meibergdreef 15
City: Amsterdam
Country: Netherlands
Email: [email protected]
Sample Entropy Analysis of Electrocardiograms to Characterize Recurrent Atrial Fibrillation
R. Cervigon¹, C. Sanchez¹, J.M. Blas¹, R. Alcaraz¹, J. Mateo¹ and J. Millet²
¹ Universidad de Castilla-La Mancha, Innovation in Bioengineering Research Group (GIBI), DIEEAC, Cuenca, Spain
² Universidad Politecnica de Valencia, Bioengineering Electronics and Telemedicine, Valencia, Spain
Abstract— In a substantial number of patients atrial fibrillation (AF) recurs after successful electrical cardioversion, but at present there are no reliable clinical markers for confidently identifying the patients in whom recurrence will occur within a short period of time. This study evaluates the predictive classification performance of Sample Entropy (SampEn) in the discrimination between recurrent and non-recurrent AF episodes. A validated database of 35 ECG recordings acquired from AF subjects undergoing cardioversion was used throughout the study, together with their known recurrence status at one month. SampEn was applied to the QRST-reduced electrocardiograms, i.e. to the atrial activity (AA), and also to the heart rate (R-R intervals). The sample entropy of the R-R intervals was significantly reduced (p=0.043) in the recurrent AF episodes compared with the maintained sinus rhythm episodes. SampEn applied to the AA signal showed the opposite result: it was significantly reduced in the maintained sinus rhythm episodes (p=0.017). Well-defined studies with larger patient groups are needed in order to assess the entropy changes further and to look for possible changes that might predict AF recurrence. Keywords— Atrial Fibrillation, Sample Entropy, Electrical Cardioversion.
I. INTRODUCTION Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia in humans. It affects 2% of the unselected adult population and between 6% and 8% of the population over 65 years of age [1,2]. It is the most common cardiac cause of stroke [3]. In addition, the rapid heart rate resulting from AF can cause a number of adverse outcomes, including congestive heart failure and tachycardia-related cardiomyopathy [4], and the risk of mortality is double that of patients in sinus rhythm [5]. AF results from multiple, rapidly changing and spatially disorganized activation wavelets that circulate more or less randomly across the atrial myocardium [6,7]. In the surface electrocardiogram, this uncoordinated atrial activity (AA) is reflected as a randomly variable baseline in place of a well-defined P-wave, though in general the morphology of the subsequent ventricular electrical activity is unaffected. The progression from normal sinus rhythm (NSR) to AF is not completely understood, though this transition is
often associated with changes in autonomic tone, the presence of very early Premature Atrial Contractions (PAC), or atrial tachycardia. Medications are only marginally effective in treating this arrhythmia and have the potential for serious side effects, including life-threatening pro-arrhythmia. Most of the drugs used to control heart rate in AF do not improve effort tolerance [8], whereas the restoration of sinus rhythm has been shown to improve it significantly [9]. Furthermore, structural cardiac changes associated with AF have been shown to be reversible when sinus rhythm is restored [10]. However, following NSR restoration after successful electrical cardioversion (ECV), AF recurs within a year in up to 60-75% of patients [11]. As a consequence, a reliable predictor of NSR maintenance is required. A number of clinical studies have already suggested predictors of long-term maintenance of sinus rhythm after cardioversion, among them left atrial size [12], age, functional class, energy requirements [15], AF duration, and antiarrhythmic drugs [11], though their role in the prediction of the outcome following cardioversion is controversial. Recent reports have investigated the length of the atrial refractory period as an index of effective ECV, but its efficacy is also unclear [12,13,14]. In addition, among the factors contributing to the genesis or maintenance of the circulating wavelets, the Autonomic Nervous System (ANS) seems to play a significant pro-arrhythmic role [16].

Table 1 Patients' clinical characteristics

Parameter                  NSR Maintenance   Recurrent AF
Patients                   15 (42.86%)       20 (57.14%)
Men                        9 (60%)           16 (80%)
Underlying heart disease   8 (53%)           18 (90%)
Antiarrhythmic treatment:
  Amiodarone               12 (80%)          16 (85%)
  Flecainide               3 (20%)           4 (15%)
The present study was conducted to analyze ECG signals from patients with persistent AF in order to extract reliable parameters to predict early AF recurrence after successful ECV. The technique employed for the ECG analysis was based on non-linear analysis, specifically entropy, which has been successfully employed to solve other physiological
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 54–57, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
problems, such as the analysis of heart rate variability, where it has already shown its potential in the prediction of cardiac risk [17,18,19].

II. MATERIALS AND METHODS

A. Materials

This study was carried out with a signal database including standard 12-lead ECG recordings from 35 patients diagnosed with persistent AF. These recordings were obtained in the Electrophysiological Laboratory, Hospital Clínico Universitario de Valencia, during the ECV protocol. The signals consisted of an AF segment before cardioversion, digitized at a sampling rate of 1 kHz with 16-bit resolution. For processing, a 1-minute AF segment preceding the ECV was extracted for each patient. All patients were monitored for 4 weeks after cardioversion: 15 of the 35 patients (42.86%) remained in NSR, whereas the other 20 patients (57.14%) reverted to AF.

B. Preprocessing

The analysis was applied to lead V1, the lead that shows the highest amplitude of the atrial fibrillatory signal. Before applying entropy, all signals were preprocessed with a 50-Hz notch filter to cancel mains interference, followed by a band-pass filter with cut-off frequencies of 0.5 and 60 Hz to remove baseline wander and reduce thermal noise. Since the atrial and ventricular activities overlap spectrally, linear filtering techniques are not suitable for extracting the fibrillatory signal from the surface ECG. Instead, subtraction of averaged QRST complexes has to be performed, producing a remaining atrial fibrillatory signal for further analysis. Although different techniques exist, a fixed averaged QRST complex was used for cancellation in the individual leads [14,15].

C. Sample Entropy

In this study we have used Sample Entropy (SampEn) as a measure of regularity.
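The averaged-beat QRST cancellation described in the preprocessing step (section B) can be sketched as follows. This is our own minimal illustration, not the authors' implementation; the window lengths (in samples at 1 kHz) and the fixed-template assumption are illustrative, and R-peak positions are assumed to be known:

```python
def cancel_qrst(ecg, r_peaks, pre=50, post=150):
    """Fixed-template QRST cancellation: average a fixed window around
    each detected R peak and subtract that single template from every
    beat.  The residual approximates the atrial fibrillatory signal.
    `pre`/`post` are illustrative window sizes, not the paper's values."""
    spans = [(r - pre, r + post) for r in r_peaks
             if r - pre >= 0 and r + post <= len(ecg)]
    beats = [ecg[a:b] for a, b in spans]
    # Average the beats column-wise to build one QRST template.
    template = [sum(col) / len(beats) for col in zip(*beats)]
    residual = list(ecg)
    for a, b in spans:
        for k in range(pre + post):
            residual[a + k] -= template[k]
    return residual
```

With identical ventricular complexes the template matches each beat exactly and the residual within the windows is zero; with real data the residual retains the beat-to-beat atrial variation.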
SampEn is a similar, but less biased, measure than the approximate entropy (ApEn) family of parameters [21] introduced by Pincus to quantify the regularity of finite-length time series. Consider the distance between two vectors to be the maximum of the absolute differences between their components, and fix a threshold value r for determining when two vectors are close to each other. ApEn reflects the likelihood that sequences that are close to each other (i.e. within r) for m consecutive data points remain close when one more data point is known. Let X = x1, ..., xi, ..., xN represent a time series of length N, and consider the m-length vectors um(i) = (xi, xi+1, ..., xi+m-1). Only the first N-m vectors of length m are considered, ensuring that for 1 ≤ i ≤ N-m the vector um+1(i) of length m+1 is also defined; self-matches are excluded. Let Um(r) denote the resulting probability that two m-length vectors are within r of each other. SampEn is then defined as

SampEn(m, r, N) = -ln [ Um+1(r) / Um(r) ]

i.e. the negative natural logarithm of the conditional probability that sequences close to each other for m consecutive data points will also be close to each other when one more point is added to each sequence. Compared with ApEn, SampEn has the advantage of being less dependent on the time-series length, showing relative consistency over a broader range of possible r, m and N values. Larger SampEn values indicate greater independence and less predictability, hence greater complexity in the data. Note, however, that decreased complexity or greater regularity in a time series is not always associated with disease: SampEn assigns higher entropy values to certain pathologic time series than to time series derived from free-running physiologic systems under healthy conditions [18,19,20]. For the study discussed in this paper, SampEn was estimated using the widely established parameter values m = 2 and r = 0.25σ, where σ represents the standard deviation of the original data sequence, as suggested by Pincus [21].
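The definition above translates directly into code. The following is a minimal reference implementation of SampEn (our own sketch, using the paper's defaults m = 2 and r = 0.25σ; a brute-force O(N²) pair count, adequate for short segments):

```python
import math

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r, N) = -ln(A/B): B counts pairs of m-length templates
    within tolerance r (Chebyshev distance, self-matches excluded) and
    A counts the same pairs extended to length m+1.
    Default tolerance r = 0.25 * SD(x), as used in the paper."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.25 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def matches(length):
        # Only the first n - m start indices are used, so every m-length
        # template also has a defined (m+1)-length extension.
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b = matches(m)
    a = matches(m + 1)
    if a == 0 or b == 0:
        return float('inf')  # no matches: SampEn is undefined (infinite)
    return -math.log(a / b)
```

A perfectly periodic series yields SampEn = 0 (every m-match extends to an (m+1)-match), while irregular series yield strictly non-negative, typically larger, values.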
III. RESULTS The SampEn values of the R-R intervals (SampEn_RR) were lower in the patients with recurrence, 1.980±0.012, compared to those that maintained sinus rhythm, 1.993±0.013 (p = 0.043, Mann-Whitney test) (Fig. 1). The opposite tendency was obtained for the SampEn of the AA (SampEn_AA): 1.794±0.135 in the recurrent group vs. 1.554±0.319 in the non-recurrent group (p = 0.017, Mann-Whitney test) (Fig. 2).
The area under the curve measures the ability of the test to correctly classify subjects with and without the condition; higher areas indicate higher performance. Using these features on a separate training set covering all subjects, an accuracy of 71.70% was obtained with SampEn_AA and 73.20% with SampEn_RR.
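The area under the ROC curve is equivalent to the normalized Mann-Whitney U statistic used elsewhere in the paper: the probability that a randomly chosen case from one group scores higher than one from the other. A minimal sketch of that computation (our own illustration, not the authors' software):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs in which the positive case
    scores higher; ties count one half.  Sweeping a threshold over
    the pooled scores traces the same curve point by point."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation of the two groups.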
Fig. 1 Sample entropy of the R-R intervals (SampEn_RR) in the non-recurrent and recurrent groups

Fig. 2 Sample entropy of the atrial activity (SampEn_AA) in the non-recurrent and recurrent groups

Table 2 ROC curve values obtained from SampEn

Parameter    Threshold   Se (%)   1-Sp (%)   Area
SampEn_AA    1.755       73.30    70.00      0.738
SampEn_RR    1.985       85.50    62.00      0.775

The discrimination was assessed with the area under the Receiver Operating Characteristic (ROC) curve in order to evaluate the correlation between the parameters and AF recurrence. The ROC curve represents sensitivity (Se) versus 1-specificity (1-Sp), which makes possible the evaluation of the detection performance at different operating points.

To increase the predictive capacity, these variables were introduced into a forward stepwise logistic regression model. With SampEn_RR > 1.985 (OR 0.080, p=0.008) and SampEn_AA < 1.755 (OR 0.206, p=0.007), the model reached a diagnostic capacity of 74.39%; the modest gain is due to the high correlation between the variables, despite their being conceived in different environments, ventricular (SampEn_RR) vs. atrial (SampEn_AA) (Fig. 3).

Fig. 3 ROC curve of the logistic regression model

IV. CONCLUSIONS

The identification of patients at risk of developing AF recurrence has been studied in numerous publications, yet only variables such as left atrial size and AF duration have been validated and are used in clinical practice. The proposed method was applied to evaluate whether the non-linear qualities of the signals could predict the evolution of patients after a successful ECV. The results suggest that ECG signals contain information which provides clues as to the potential recurrence of AF. The AA extracted from the ECG signals is more regular in patients that maintain sinus rhythm, as shown by their lower SampEn values. This finding is consistent with a higher organization of the arrhythmia in those episodes in which reversion is easier. On the other hand, the decrease in the complexity of the ventricular rate, demonstrated by the lower SampEn values of the R-R intervals in the recurrent patients, coincides with the results reported by Vikman et al., where a change in the behaviour of the sinus rhythm plays an important role in AF recurrence [17]. All these results suggest that the non-linear study of ECG signals involves much more information than is known at present. The results and conclusions presented in this contribution should be regarded as a first step that must be extended with further studies, including new experiments with a larger database.
ACKNOWLEDGMENT The authors would like to acknowledge the helpful support received from the Servicio de Hemodinámica of the Hospital Clínico Universitario de Valencia, and especially from R. Ruiz, S. Morell and R. García Civera, for providing the signals and for the high quality of their clinical advice.
REFERENCES
1. Levy S, Breithardt G, Campbell RW, Camm AJ, Daubert JC, Allessie M, et al. Atrial fibrillation: current knowledge and recommendations for management. Working Group on Arrhythmias of the European Society of Cardiology. Eur Heart J 1998; 19(9): 1294-320.
2. Benjamin EJ, Levy D, Vaziri SM, D'Agostino RB, Belanger AJ, Wolf PA. Independent risk factors for atrial fibrillation in a population-based cohort. The Framingham Heart Study. JAMA 1994; 271(11): 840-4.
3. Wolf PA, Abbot RD, Kannel WB. Atrial fibrillation as an independent risk factor for stroke: the Framingham Study. Stroke 1991; 22: 983-988.
4. Crijns HJ, Tjeerdsma G, de Kam PJ, Boomsma F, Gelder IC, Berg MP, et al. Prognostic value of the presence and development of atrial fibrillation in patients with advanced chronic heart failure. Eur Heart J 2000; 21(15): 1238-45.
5. Wattigney WA, Mensah GA, Croft JB. Increased atrial fibrillation mortality: United States, 1980-1998. Am J Epidemiol 2002; 155(9): 819-26.
6. Moe GK, Abildskov JA. Atrial fibrillation as a self-sustaining arrhythmia independent of focal discharge. Am Heart J 1959; 58: 59-70.
7. Olsson SB, Allessie MA, Campbell RW, eds. Atrial Fibrillation: mechanisms and therapeutic strategies. Armonk, NY: Futura Pub, 1994: 37-49.
8. Schumacher B, Luderitz B. Rate issues in atrial fibrillation: consequences of tachycardia and therapy for rate control. Am J Cardiol 1998; 82: 29N-36N.
9. Lok NS, Lau CP. Presentation and management of patients admitted with atrial fibrillation: a review of 291 cases in a regional hospital. Int J Cardiol 1995; 48(3): 271-8.
10. Gosselink ATM, Crijns HJGM, Berg MP, Broek SAJ, Hillege H, Landsman MLJ, et al. Functional capacity before and after cardioversion of atrial fibrillation: a controlled study. Br Heart J 1994; 72: 161-6.
11. Lip GYH, Watson RDS, Singh SP. ABC of atrial fibrillation: cardioversion of atrial fibrillation. Br Med J 1996; 312: 1125.
12. Shimizu A, Centurion OA. Electrophysiological properties of the human atrium in atrial fibrillation. Cardiovascular Research 2002; 54: 302-314.
13. Stridh M, Sornmo L, Meurling CJ, Olsson SB. Characterization of atrial fibrillation using the surface ECG: time-dependent spectral properties. IEEE Trans Biomed Eng 2001; 48(1).
14. Bollman T, Mende M, Neugebauer A, Pfeiffer D. Atrial fibrillatory frequency predicts atrial defibrillation threshold and early arrhythmia recurrence in patients undergoing internal cardioversion of persistent atrial fibrillation. PACE 2002; 25: 1179-1184.
15. Shkurovich S, Sahakian AV, Swiryn S. Detection of atrial activity from high-voltage leads of implantable ventricular defibrillators using a cancellation technique. IEEE Trans Biomed Eng 1998; 45: 229-234.
16. Coumel P. Paroxysmal atrial fibrillation: a disorder of autonomic tone? Eur Heart J 1994; 15(Suppl A): 9-16.
17. Vikman S, Makikallio TH, Yli-Mayry S, Pikkujamsa S, Koivisto AM, Reinikainen P, Airaksinen KE, Huikuri HV. Altered complexity and correlation properties of R-R interval dynamics before the spontaneous onset of paroxysmal atrial fibrillation. Circulation 1999; 100: 2079-84.
18. Hogue CW Jr, Domitrovich PP, Stein PK, Despotis GD, Re L, Schuessler RB, Kleiger RE, Rottman JN. R-R interval dynamics before atrial fibrillation in patients after coronary artery bypass graft surgery. Circulation 1998; 98: 429-34.
19. Lake DE, Richman JS, Griffin MP, Moorman JR. Sample entropy analysis of neonatal heart rate variability. Am J Physiol Regul Integr Comp Physiol 2002; 283: R789-97.
20. Richman J, Moorman J. Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol 2000; 278: 2039-2049.
21. Pincus S. Approximate entropy (ApEn) as a complexity measure. Ann N Y Acad Sci 2001; 954: 245.

Address of the corresponding author:
Author: Raquel Cervigon Abad
Institute: Universidad de Castilla-La Mancha, E. U. Politécnica
Street: Campus Universitario
City: 16071 Cuenca
Country: Spain
Email: [email protected]
USB Based ECG Acquisition System
J. Mihel, R. Magjarevic
University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia

Abstract— Acquisition of ECG data from patients in the Intensive Care Unit (ICU) following surgery is a vital part of cardiac disease detection and rehabilitation monitoring, and is therefore a standard function of ICU physiological monitors and monitoring systems. Recently, some authors have introduced continuous ECG monitoring and processing into cardiac disease prediction research. Such studies have to be performed on as many subjects as possible in order to acquire a large enough amount of data, which imposes a necessity for simultaneous data acquisition from multiple patients. The equipment found in a typical ICU is not designed to fulfill such demands and usually cannot be easily interfaced to a standard computer intended for signal recording. The aim of this project was to design and develop a simple ECG acquisition system composed of a single recording and storage device (a personal computer) and a sufficient number of pickup ECG amplifiers simultaneously connected to the recording device on a single bus. We chose USB for interconnecting the system because of its speed, reliability, power-supply and plug-and-play capabilities. USB is widespread, and every modern computer has multiple USB ports used for connecting peripheral devices; as such it fulfilled the demands of our project. In order to allow complete control of the recording-device-to-amplifier interface we developed custom USB drivers for the Microsoft Windows XP operating system. The system was evaluated with eight amplifiers working simultaneously and no data loss was detected. Keywords— ECG acquisition, Intensive care unit (ICU), USB, Continuous monitoring
I. INTRODUCTION In a standard Intensive Care Unit (ICU), patients are connected to ICU physiological monitors that measure the physiological parameters especially important for a particular patient. In addition, there is always the possibility to set alarm thresholds, so that an alarm is raised if any of the parameters crosses its preset value. Usually, continuous recording of the measured physiological parameters begins only in case of an alarm, and the recording lasts only for a limited time. ICU monitors are not intended for long-term, continuous recording of physiological parameters, which may be a drawback in research and some clinical studies. In large hospitals, different surgery departments have their own ICUs. Though such units are more suitable for
collecting patient data (both anamnesis and measured physiological parameters), it is still sometimes very difficult to collect enough data for clinical studies of a particular disease. We have started a research project aiming to find parameters in the ECG that could predict atrial fibrillation after coronary artery bypass grafting (CABG). In a number of studies, different authors have tried to find reliable predictors in the patients' ECG, particularly by studying the P-wave and its relation to other ECG segments [1]. In these studies the authors usually compared the parameters before and after the surgery, using rather sophisticated medical instrumentation (12-lead ECG, vector cardiograph) to record short periods of ECG. One of the most frequently used (measured or calculated) parameters in these studies was the P-wave duration. Since the P-wave duration was measured using different methodologies and at different times, the criteria for it differed significantly from study to study (e.g. from 114 ms [2] to 140 ms [3]). Even though some of the studies had relatively high statistical sensitivity (73% in [4], 77% in [3], 83% in [2]), they did not result in accepted clinical procedures for atrial fibrillation prediction. We have decided to take another approach to the research of AF after CABG. Instead of recording samples (short segments) of ECG at different times with sophisticated instrumentation that is not suitable for continuous recording, we have decided to continuously record the ECG from the standard lead II for the first 48 hours after the surgery and to search for predictors within that time. Those 48 hours were segmented into shorter time slots (one hour), and we have measured or calculated more than 80 ECG parameters relevant for the study [5]. These parameters were statistically analyzed and, after identifying the statistically significant parameters, we have used them for building prediction models [6].
In this paper we describe the design of the USB based ECG acquisition system we have developed for the purpose of this study. Our system is intended to be used in one ICU and enables acquisition of a single ECG channel from several patients simultaneously; all data from these patients' ECGs are recorded on a PC. The device is simple and small: it connects the patient and the ICU monitor using standard cables, while a USB cable connects the device to a laptop or a desktop PC.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 58–61, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. MATERIALS AND METHODS

Our acquisition system consists of three main units:

• Pickup ECG amplifier – used for analog signal acquisition and A/D conversion. An arbitrary number of amplifiers can work in the system at the same time.
• PC Host – used for recording the data received from the pickup amplifiers. Intended to be only one in the system, i.e. one per ICU.
• USB – used to connect the PC Host and the pickup amplifiers and to transfer data from the amplifiers to the PC Host.
The system is intended to connect several patients in a standard ICU to a single PC Host located in the ICU itself. The standard working order of the system is one pickup amplifier per patient, each connected to a USB hub, which in turn is connected to the PC Host used as the data recording device. All amplifiers are powered from the USB alone, so no additional power source is necessary (this reduces the need for additional wiring in the ICU). A system overview is given in Fig. 1.
Fig. 1 Acquisition system overview

The pickup amplifier is intended for signal acquisition as a single-channel ECG amplifier using one of the three standard ECG leads as input. The input signal is sampled at a 500 Hz sampling frequency with 16-bit resolution. We have provided for four different gain values (200, 500, 1000 and 2000) to enable a better effective A/D conversion resolution. The programmable gain is adjusted automatically depending on the input signal level: when a large number of artifacts is superimposed on the ECG signal, the gain switches to a smaller value to prevent amplifier saturation. Following A/D conversion, the sampled data is transferred to a microcontroller over a serial bus (SPI bus) and is further sent over the USB to the PC Host. The USB communication protocol was implemented on a Microchip PIC18F/LF4550 microcontroller, which we chose for its USB V2.0 compliance, Full Speed USB transfer support, embedded USB Serial Interface Engine (SIE) that enables easy USB communication, and an on-chip USB transceiver and 3.3 V voltage regulator.

The pickup amplifier was designed to receive the ECG signal from a standard n-pin ECG connector. The input ECG signal is passed through to an m-pin standard ECG connector (m ≥ n) to be connected to a standard ICU monitor used for patient monitoring. One of the three standard ECG leads is connected to the input of the acquisition and A/D conversion hardware, where it is amplified, sampled and transferred to the PC Host over the USB. The design of a pickup ECG amplifier can be depicted with the simplified block diagram shown in Fig. 2.

Data is transferred from the microcontroller to the PC Host using one USB input interrupt endpoint (Endpoint 1) and is formed in multiples of 3-byte packets. Each 3-byte packet contains a 2-byte (16-bit) sampled signal value and 1 byte representing the PGA gain at the moment of sampling. Interrupt data transfer was chosen for its reliable and timely delivery of data. The interrupt interval was chosen to be 2 ms to ensure that the sampled data would be transferred as soon as possible without noticeable delays (the sampling frequency being 500 Hz). Along with the implementation of the required USB control transfer over Endpoint 0 (response to standard USB setup requests), one custom request was implemented for initiating the amplifier's calibration process. The Universal Serial Bus firmware implemented on the PIC18F/LF4550 microcontroller was developed to be compliant with the USB V2.0 specification requirements [7] (Chapter 9, "USB Device Framework"). USB data transfer operates at Full Speed (12 Mbit/s), and given that a pickup amplifier consumes up to 70 mA, each one is fully powered from the USB alone and does not require an additional power source.

Fig. 2 Pickup ECG amplifier block diagram
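For illustration, the 3-byte packet stream described above can be decoded on the host side as follows. This is our own sketch: the paper specifies the packet layout (16-bit sample plus a PGA gain byte) but not the byte order, sample signedness, or the gain-code encoding, so those are assumptions here:

```python
import struct

# Assumed mapping of gain codes to the four PGA gains listed in the
# paper; the actual byte encoding is not specified there.
GAIN_CODES = {0: 200, 1: 500, 2: 1000, 3: 2000}

def decode_packets(buf):
    """Yield (sample, gain) pairs from a buffer of 3-byte packets:
    a 16-bit sample followed by a 1-byte PGA gain code.  Little-endian
    signed samples are an assumption, not stated in the paper."""
    for off in range(0, len(buf) - len(buf) % 3, 3):
        sample, code = struct.unpack_from('<hB', buf, off)
        yield sample, GAIN_CODES.get(code, code)
```

At the stated 500 Hz sampling rate this stream amounts to only 1500 bytes per second per amplifier, well within the Full Speed interrupt-transfer budget.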
Custom Windows drivers were developed using the Microsoft Kernel-Mode Driver Framework (KMDF); they conform to Microsoft's plug-and-play and power management requirements. The drivers are compatible with and were evaluated on the Windows 2000 and Windows XP operating systems, and will be further evaluated for compatibility with the Windows Vista operating system. Using simple Win32 API calls, a Windows application can enumerate all amplifiers plugged into the system and retrieve data from each one of them. Double buffering, i.e. buffering on both the microcontroller and in the Windows USB driver, ensures that a sufficient number of pickup ECG amplifiers can be connected to the system at the same time without any data loss on the bus. Double buffering was necessary to ensure that even a slower Windows application on the PC Host could properly and in a timely fashion acquire data from a desired amplifier connected to the system. The data buffer on a pickup amplifier was set to 63 bytes (the maximum USB packet size for Full Speed interrupt transfers is 64 bytes), which enables buffering of 21 three-byte data packets, and the buffer in the device driver was set to 4095 bytes (1365 three-byte data packets). The PC Host to pickup amplifier communication can be described with the simplified block diagram shown in Fig. 3.
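The driver-side half of this double-buffering scheme can be modeled, for illustration, as a bounded FIFO that the USB completion path fills and the application drains. The class and method names below are ours, not from the driver source; the capacity of 1365 packets corresponds to the 4095-byte buffer divided by the 3-byte packet size:

```python
from collections import deque

class DriverBuffer:
    """Illustrative model of the driver-side packet buffer: the USB
    completion path pushes packets, a (possibly slower) application
    drains them later.  Overruns are counted rather than silently
    dropped, making the no-data-loss condition observable."""
    def __init__(self, capacity=1365):
        self.capacity = capacity
        self._q = deque()
        self.overruns = 0  # packets that arrived while the buffer was full

    def push(self, packet):
        """Called once per received packet; returns False on overflow."""
        if len(self._q) >= self.capacity:
            self.overruns += 1
            return False
        self._q.append(packet)
        return True

    def drain(self, max_packets=None):
        """Called by the application; returns buffered packets in order."""
        out = []
        while self._q and (max_packets is None or len(out) < max_packets):
            out.append(self._q.popleft())
        return out
```

Sizing the buffer so that it outlasts the application's worst-case scheduling delay (here, 1365 packets at 500 Hz is over 2.7 seconds of slack per amplifier) is what keeps `overruns` at zero in practice.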
Fig. 3 Simplified block diagram of USB communication

III. RESULTS

The system was evaluated by simultaneously connecting eight amplifiers to a single PC Host. For the purposes of this evaluation a simple Win32 application was developed, responsible for the enumeration and identification of all pickup amplifiers connected to a USB hub, and for recording the data received from each amplifier in a separate file. All amplifiers connected to the PC Host were properly identified via their serial numbers and accessed from the application. The evaluation lasted 48 hours, in which no data loss was detected from any of the eight amplifiers.

For the purposes of the CABG project we developed several pickup ECG amplifiers to be used in the ICU of the Department of Cardiac Surgery of the Clinical Hospital Center in Zagreb. The amplifiers were constructed for acquisition from the standard lead II but can easily be modified to accept either of the two other leads as input. The use of standard ECG connectors on our device facilitates uninterrupted work and standard procedures in the ICU, and high acceptance by the medical staff. Each pickup amplifier is embedded in a robust housing suitable for use in the ICU. Fig. 4 shows a pickup ECG amplifier intended for use in the Clinical Hospital Center in Zagreb.

Fig. 4 Pickup ECG amplifier – a) top view, b) front panel view, c) back panel view

For the purposes of evaluating the quality of signal acquisition and processing on a pickup amplifier we designed a simple Windows application called ECGView that enables monitoring of the received data on the screen of the PC Host. The application was designed using the Microsoft .NET Framework 2.0, the C# 2.0 programming language and the DirectX Software Development Kit. We evaluated the received signal and found it satisfactory for the purposes of monitoring and data acquisition. Further noise filtering (if needed) is to be done in an application that processes the received data and extracts information from the recorded signal.

Fig. 5 ECGView test application
IV. CONCLUSIONS

We have developed a simple ECG acquisition system intended for continuous recording of patients' ECGs in ICUs. The system is implemented at the Department of Cardiac Surgery of the Clinical Hospital Center in Zagreb in research on the prediction of atrial fibrillation after coronary artery bypass grafting. The system can be used in any other research application that needs long-term continuous single-channel ECG recording and its on-line processing on a personal computer. Our system is easily implemented in an ICU and does not disrupt its usual work flow. The use of USB and custom device drivers enables easy connection of pickup amplifiers to a PC Host running the Windows operating system.

ACKNOWLEDGMENT

This study was supported by the Ministry of Science, Education and Sport of the Republic of Croatia under grant no. 036-0362979-1554.

REFERENCES

1. Clavier L, Boucher JM, Lepage R, Blanc JJ, Cornily JC (2001) Automatic P-wave analysis of patients prone to atrial fibrillation. Medical & Biological Engineering and Computing 40:63-71
2. Buxton AE, Josephson ME (1981) The role of P-wave duration as a predictor of postoperative atrial arrhythmias. Chest 80:68-73
3. Steinberg JS, Zelenkofske S, Wong SC, Gelernt M, Sciacca R, Menchavez E (2000) Value of the P-wave signal-averaged ECG for predicting atrial fibrillation after cardiac surgery. Europace 2(1):32-41
4. Stafford PJ, Kolvekar S, Cooper J, Fothergill J, Schlindwein F, deBono DP, Spyt TJ, Garratt CJ (1997) Signal averaged P-wave compared with standard electrocardiography or echocardiography for prediction of atrial fibrillation after coronary bypass grafting. Heart 77:417-422
5. Sovilj S, Rajsman G, Magjarevic R (2005) Continuous Multiparameter Monitoring of P Wave Parameters after CABG Using Wavelet Detector. Proc. Computers in Cardiology 32:945-948
6. Sovilj S, Rajsman G, Magjarevic R (2006) Multiparameter Prediction Model for Atrial Fibrillation after CABG. Proc. Computers in Cardiology 33:489-492
7. Universal Serial Bus Specification, Revision 2.0 (2000)

Address of the corresponding author:
Author: Ratko Magjarevic
Institute: University of Zagreb, Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Using Supervised Fuzzy Clustering and CWT for Ventricular Late Potentials (VLP) Detection in High-Resolution ECG Signal
Ayyoub Jafari, M.H. Morradi
Biomedical Engineering Department, Tehran Polytechnic University, Iran
[email protected],
[email protected] Abstract— Ventricular Late Potentials (VLPs) are lowamplitude, high-frequency signals that appear at the end of the QRS complex of a High-Resolution ECG (HRECG) records. VLPs are clinically useful for identifying post-MI (Myocardial Infarction) patients prone to Ventricular Tachycardia (VT) and Sudden Cardiac Death (SCD). In this paper, the Continuous Wavelet Transform (CWT) and a supervised fuzzy clustering algorithm are used together to detect VLPs. The terminal part of the QRS complex in the Vector Magnitude (VM) waveform is processed with the CWT to extract a feature vector. Resulting time-scale representation is subdivided into several sub bands, and the sum of the squared decomposition coefficients is computed in each region. Finally, a supervised Fuzzy clustering method, trained by an appropriate set of these feature vectors, is applied to this data in order to identify VLP. Keywords— VLP-Fuzzy Clustering-HRECG
I. INTRODUCTION

Most Sudden Cardiac Deaths (SCDs) due to cardiac diseases are thought to be initiated by Ventricular Tachycardia (VT), one of the most serious types of cardiac arrhythmia [11]. As the appearance of Ventricular Late Potentials (VLPs) is associated with VT, there is clinical interest in the detection of these signals as a noninvasive diagnostic for post-MI (Myocardial Infarction) patients prone to VT. VLPs are low-amplitude, high-frequency signals which appear at the end of the QRS complex and arise as a result of the late depolarization of damaged myocardium. Because of their very low amplitudes and the overlying noise, VLPs are obscured in a standard electrocardiogram (ECG), but they can be detected in a High-Resolution ECG (HRECG) record acquired using three orthogonal XYZ leads with a minimum sampling frequency of 1000 Hz and a resolution of 12 bits [1, 9, 11]. There are various techniques to improve the signal-to-noise ratio (SNR) in VLP analysis; typically, several heart beats (200-300) are averaged to suppress the background noise and form the Signal-Averaged ECG (SAECG). In this paper, a new fuzzy model structure is proposed for the VLP classification problem in which each rule can represent more than one class with
different probabilities. Typical fuzzy classifiers consist of interpretable if-then rules with fuzzy antecedents and class labels in the consequent part. The antecedents (if-parts) of the rules partition the input space into a number of fuzzy regions by means of fuzzy sets, while the consequents (then-parts) describe the output of the classifier in these regions. Fuzzy logic improves rule-based classifiers by allowing the use of overlapping class definitions, and improves the interpretability of the results by providing more insight into the decision-making process. The automatic determination of compact fuzzy classifier rules from data has been approached by several different techniques: neuro-fuzzy methods, genetic-algorithm (GA) based rule selection, and fuzzy clustering in combination with GA optimization. Generally, the bottleneck of the data-driven identification of fuzzy systems is the structure identification, which requires nonlinear optimization. Thus, for high-dimensional problems, the initialization of the fuzzy model becomes very significant. Common initialization methods, such as grid-type partitioning [26] and rule generation on extreme initialization, result in complex and non-interpretable initial models, and the rule-base simplification and reduction steps become computationally demanding. To avoid these problems, fuzzy clustering algorithms [37] were put forward. However, the obtained membership values have to be projected onto the input variables and approximated by parameterized membership functions, which deteriorates the performance of the classifier. This decomposition error can be reduced by using eigenvector projection [28], but the resulting linearly transformed input variables do not allow the interpretation of the model.
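The rule structure described here can be made concrete with a small sketch. The rule base below is hypothetical (two inputs, two classes, hand-picked parameters, none of it from the paper); it only illustrates how Gaussian antecedents and per-class consequent probabilities combine:

```python
import math

# Each rule: Gaussian antecedent sets (center, sigma per input) and a
# consequent probability for every class, so a single rule can represent
# more than one class with different probabilities.
rules = [
    # (centers, sigmas, {class: probability}, rule weight)
    ([0.0, 0.0], [1.0, 1.0], {0: 0.9, 1: 0.1}, 1.0),
    ([3.0, 3.0], [1.0, 1.0], {0: 0.2, 1: 0.8}, 1.0),
]

def classify(x):
    """Weighted vote: degree of firing of each rule times class probability."""
    scores = {0: 0.0, 1: 0.0}
    for centers, sigmas, probs, w in rules:
        beta = w  # degree of fulfilment: product of memberships (the if-part)
        for xj, vj, sj in zip(x, centers, sigmas):
            beta *= math.exp(-0.5 * ((xj - vj) / sj) ** 2)
        for c, p in probs.items():  # the then-part
            scores[c] += beta * p
    return max(scores, key=scores.get)

print(classify([0.2, -0.1]))  # near the first rule's center: class 0
print(classify([2.8, 3.1]))   # near the second rule's center: class 1
```

Note that both rules contribute to both classes; overlapping class definitions of this kind are exactly what the clustering algorithm in Section IV identifies from data.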
To avoid the projection error and maintain the interpretability of the model, the proposed approach is based on the Gath-Geva (GG) clustering algorithm [22] instead of the widely used Gustafson-Kessel (GK) algorithm [38], because the simplified version of GG clustering allows the direct identification of fuzzy models with exponential membership functions [25]. Neither the GG nor the GK algorithm utilizes the class labels; hence, they give suboptimal results if the obtained clusters are directly used to formulate a classical fuzzy classifier.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 99–104, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Hence, there is a need for fine-tuning of the model. This GA- or gradient-based fine-tuning, however, can result in overfitting and thus poor generalization of the identified model, and the severe computational requirements of these approaches limit their applicability as a rapid model-development tool. This paper focuses on the design of interpretable fuzzy rule-based classifiers from data with low human intervention and low computational complexity. Hence, a new modeling scheme is introduced based only on fuzzy clustering. The proposed algorithm is similar to the multi-prototype classifier technique [18, 33]; the main difference is that there each cluster represents a different class and the number of clusters used to approximate a given class has to be determined manually, while the proposed approach does not suffer from such problems. Generally, there is a very large set of possible features from which to compose the feature vectors of classifiers. Since the required size of the training dataset grows exponentially with the size of the feature vector, it is desirable to choose a minimal subset. Generic guidelines for choosing a good feature set are that the features should discriminate the pattern classes as much as possible and should not be correlated or redundant. The paper is organized as follows. Section II describes the Simson method. Section III describes the feature extraction process. Section IV considers the developed clustering algorithm that allows the direct identification of fuzzy classifiers. A method for selecting the most important features of the fuzzy system, based on the Fisher interclass separability criterion, is presented in Section V. Section VI presents the results of applying this method to a HRECG database. Finally, conclusions and discussion are given in Section VII.
Fig. 1 A typical filtered QRS complex and the definition of three conventional time-domain features: QRST, D40 and V40.
II. SIMSON METHOD

The conventional time-domain method of VLP detection, developed by Simson [8], is based on feature extraction from the filtered SAECG [8, 9, 13]. Simson's method employs a high-pass filter (cutoff frequency of 25 or 40 Hz) to attenuate the low-frequency components of the averaged XYZ signals (SAECG). To avoid the filter ringing effect in the terminal part of the QRS complex, Simson proposed a bidirectional four-pole Butterworth high-pass filter [8]. After high-pass filtering, the averaged XYZ signals are combined into a Vector Magnitude (VM) waveform defined by

VM = \sqrt{X^2 + Y^2 + Z^2}   (1)

After estimating the onset and offset of the filtered QRS complex (the QRS complex in the VM signal), three conventional time-domain features can be measured to detect VLPs [1, 11, 12, 13]:

- QRST: duration of the filtered QRS complex (from the onset to the offset)
- D40: low-amplitude signal duration (from the offset backward to the point where the VM reaches 40 μV)
- V40: root-mean-square value of the last 40 ms of the filtered QRS (shown in Fig. 1)

The criteria that define a VLP-positive test are QRST > 114 ms, D40 > 38 ms and V40 < 20 μV [11]. Fig. 1 shows a plot of a typical filtered QRS complex and the definitions of the conventional time-domain features introduced above.

III. FEATURE EXTRACTION
The adopted feature extraction method for VLP consists of five stages:

- Averaging the XYZ leads to improve the SNR
- Bidirectional Butterworth filtering of the averaged XYZ signals, consisting of a 4th-order high-pass and a 5th-order low-pass filter with cutoff frequencies of 40 Hz and 250 Hz, respectively
- Combination of the filtered averaged signals into a VM waveform using equation (1)
- Application of the CWT to the terminal part of the QRS complex in the VM signal
- Feature extraction from the resulting time-scale plot
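The last stages of this pipeline can be sketched as follows. This is our own minimal NumPy illustration, not the authors' MATLAB implementation: averaging and Butterworth filtering are assumed already done, the sampling frequency is assumed to be 1000 Hz (so the 48 ms window is 48 samples), a real-valued Morlet stand-in is used for the mother wavelet, and the 36-point scale grid over 1-8 is our choice:

```python
import numpy as np

def vector_magnitude(x, y, z):
    """Equation (1): VM = sqrt(X^2 + Y^2 + Z^2), sample by sample."""
    return np.sqrt(x**2 + y**2 + z**2)

def morlet(t):
    # Real-valued Morlet mother wavelet, a common stand-in for MATLAB's 'morl'.
    return np.cos(5.0 * t) * np.exp(-0.5 * t**2)

def cwt_features(segment, scales, n_scale_bins=9, n_time_bins=9):
    """Naive CWT of a short segment, then the sum of squared coefficients in
    a (scale-bin x time-bin) grid -> n_scale_bins * n_time_bins features."""
    rows = []
    for a in scales:
        # Truncate the wavelet support so it never exceeds the segment length.
        half = int(min(4 * a, (len(segment) - 1) // 2))
        t = np.arange(-half, half + 1) / a
        psi = morlet(t) / np.sqrt(a)
        rows.append(np.convolve(segment, psi, mode="same"))
    sq = np.array(rows) ** 2                      # squared coefficients
    feats = []
    for srows in np.array_split(sq, n_scale_bins, axis=0):
        for block in np.array_split(srows, n_time_bins, axis=1):
            feats.append(block.sum())
    return np.array(feats)

# 48 ms of the terminal QRS in the VM signal at an assumed 1000 Hz sampling.
n = 48
rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, n))                 # toy filtered XYZ segment
vm = vector_magnitude(x, y, z)
scales = np.linspace(1.0, 8.0, 36)                # 9 scale bins of 4 scales
features = cwt_features(vm, scales)
print(features.shape)  # (81,)
```

`np.array_split` tolerates the fact that 48 samples do not divide evenly into nine time bins; the exact binning used by the authors is not specified beyond the 9 x 9 grid.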
In recent years, wavelet analysis has been used widely in biomedical research [2, 5, 7, 14]. The wavelet transform is a linear time-scale transform which is based on
decomposition of a signal using a set of basis functions. These basis functions are scaled and shifted versions of a prototype mother wavelet [4, 11]. The CWT produces a time-scale representation of the signal as a function of time τ and scale a, CWT(τ, a), whose values are called the wavelet decomposition coefficients. The scale can be considered as the inverse of the frequency [9]. Smaller scales give a higher resolution in time, which is useful for detecting VLPs as high-frequency, short-duration signals. In this study, the CWT is applied, using the MATLAB wavelet toolbox, to the last 40 ms of the QRS complex in the VM signal after estimating the offset point. To improve the robustness of the method against errors in QRS offset detection, an 8 ms right shift of the estimated offset is used [13], so that the CWT is applied to a 48 ms interval of the VM waveform, with a scale range of 1-8 and the Morlet wavelet as the mother wavelet [6]. The resulting time-scale representation is then subdivided into 81 regions [6] (nine subdivisions on both the time and scale axes). Finally, the sum of the squared wavelet decomposition coefficients is computed in each region to form a feature vector of 81 elements.

IV. SUPERVISED FUZZY CLUSTERING

The objective of clustering is to partition the identification data Z into R clusters. Each observation consists of input and output variables grouped into a row vector z_k = [x_k^T y_k], where the subscript k = 1, ..., N denotes the k-th row of the pattern matrix Z. The fuzzy partition is represented by the R \times N matrix U = [\mu_{i,k}], where the element \mu_{i,k} represents the degree of membership of the observation z_k in the cluster i = 1, ..., R. The clustering is based on the minimization of the sum of the weighted squared distances D^2_{i,k} between the data points and the cluster prototypes \eta_i that contain the parameters of the clusters:

J(Z, U, \eta) = \sum_{i=1}^{R} \sum_{k=1}^{N} (\mu_{i,k})^m D^2(z_k, r_i)   (2)

in which m is the fuzzy weighting exponent that determines the fuzziness of the resulting clusters; usually m = 2 is chosen. Classical fuzzy clustering algorithms are used to estimate the distribution of the data; hence, they do not utilize the class label of each data point available for the identification, and the obtained clusters cannot be directly used to build the classifier. In the following, a new cluster prototype and the related distance measure are introduced that allow the direct supervised identification of fuzzy classifiers. As the clusters are used to obtain the parameters of the fuzzy classifier, the distance measure is defined similarly to the distance measure of the Bayes classifier:

\frac{1}{D^2_{i,k}(z_k, r_i)} = P(r_i) \prod_{j=1}^{n} \exp\left( -\frac{1}{2} \frac{(x_{j,k} - v_{i,j})^2}{\sigma^2_{i,j}} \right) P(c_j = y_k \mid r_i)   (3)
This distance measure consists of two terms: the first term is based on the geometrical distance between the cluster centers v_i and the observation vector x_k, while the second is based on the probability P(c_j = y_k \mid r_i) that the r_i-th cluster describes the density of the class of the k-th data point. It is interesting to note that this distance measure differs only slightly from that of the unsupervised Gath-Geva clustering algorithm, which can also be interpreted in a probabilistic framework [21]. However, the novelty of the proposed approach is the second term, which allows the use of class labels. To obtain a fuzzy partitioning space, the membership values have to satisfy the following conditions:

U \in \mathbb{R}^{R \times N}; \quad \mu_{i,k} \in [0, 1], \ \forall i, k; \quad \sum_{i=1}^{R} \mu_{i,k} = 1, \ \forall k; \quad 0 < \sum_{k=1}^{N} \mu_{i,k} < N, \ \forall i   (4)
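As a quick numeric illustration (a toy example of ours, not from the paper), the conditions in (4) can be satisfied by column-normalizing any strictly positive random matrix:

```python
import numpy as np

# Toy partition matrix: R clusters (rows) by N data points (columns).
rng = np.random.default_rng(1)
R, N = 5, 60
U = rng.random((R, N)) + 1e-12      # strictly positive memberships
U /= U.sum(axis=0, keepdims=True)   # each column (one datum) now sums to 1

assert np.all((U >= 0) & (U <= 1))                        # mu in [0, 1]
assert np.allclose(U.sum(axis=0), 1.0)                    # sum_i mu_ik = 1
assert np.all((U.sum(axis=1) > 0) & (U.sum(axis=1) < N))  # no empty cluster
```

This is also how the partition matrix is typically initialized before the iteration described below.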
The minimization of the functional (2) represents a nonlinear optimization problem that is subject to the constraints defined by (4) and can be solved using a variety of available methods. The most popular method is alternating optimization (AO), which consists of the application of Picard iteration through the first-order conditions for the stationary points of (2); these can be found by adjoining the constraints (4) to J by means of Lagrange multipliers [24]:

J(Z, U, \eta, \lambda) = \sum_{i=1}^{R} \sum_{k=1}^{N} (\mu_{i,k})^m D^2(z_k, r_i) + \sum_{k=1}^{N} \lambda_k \left( \sum_{i=1}^{R} \mu_{i,k} - 1 \right)   (5)
and by setting the gradients of J with respect to U, \eta and \lambda to zero. Hence, similarly to the update equations of the Gath-Geva clustering algorithm, the following equations result in a solution that satisfies the constraints (4).

Initialization: Given a set of data Z, specify R and choose a termination tolerance \epsilon > 0. Initialize the R \times N partition matrix U = [\mu_{i,k}] randomly, where \mu_{i,k} denotes the membership that the datum z_k is generated by the i-th cluster.

Repeat for l = 1, 2, ...

Step 1: Calculate the parameters of the clusters: the centers and the standard deviations of the Gaussian membership functions (the diagonal elements of the F_i covariance matrices),

v_i^{(l)} = \frac{\sum_{k=1}^{N} (\mu_{i,k}^{(l-1)})^m x_k}{\sum_{k=1}^{N} (\mu_{i,k}^{(l-1)})^m}, \qquad \sigma_{i,j}^{2(l)} = \frac{\sum_{k=1}^{N} (\mu_{i,k}^{(l-1)})^m (x_{j,k} - v_{i,j})^2}{\sum_{k=1}^{N} (\mu_{i,k}^{(l-1)})^m}   (6)
Estimate the consequent probability parameters,

P(c_i \mid r_j) = \frac{\sum_{k \mid y_k = c_i} (\mu_{j,k}^{(l-1)})^m}{\sum_{k=1}^{N} (\mu_{j,k}^{(l-1)})^m}, \quad 1 \le i \le C, \ 1 \le j \le R   (7)

and the a priori probability of each cluster and the weight (impact) of each rule,

P(r_i) = \frac{1}{N} \sum_{k=1}^{N} (\mu_{i,k}^{(l-1)})^m, \qquad w_i = P(r_i) \prod_{j=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2_{i,j}}}   (8)
Step 2: Compute the distance measures D^2_{i,k} by (3).

Step 3: Update the partition matrix,

\mu^{(l)}_{i,k} = \frac{1}{\sum_{j=1}^{R} \left( D_{i,k}(z_k, r_i) / D_{j,k}(z_k, r_j) \right)^{2/(m-1)}}, \quad 1 \le i \le R, \ 1 \le k \le N   (9)

until \| U^{(l)} - U^{(l-1)} \| < \epsilon   (10)
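The whole alternating-optimization loop, as we read equations (2)-(10), can be sketched compactly in NumPy. Everything below (the synthetic two-class data, R = 2, the numerical safeguards) is our own illustration, not the authors' implementation:

```python
import numpy as np

def supervised_fuzzy_clustering(X, y, R, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """AO loop: cluster parameters (6), class probabilities (7), priors (8),
    distances (3), partition update (9), termination test (10)."""
    N, n = X.shape
    classes = np.unique(y)
    rng = np.random.default_rng(seed)
    U = rng.random((R, N))
    U /= U.sum(axis=0, keepdims=True)      # random partition satisfying (4)
    for _ in range(max_iter):
        Um = U ** m
        w = Um / Um.sum(axis=1, keepdims=True)
        v = w @ X                                             # centers, (6)
        sig2 = np.einsum("rk,rkj->rj", w,
                         (X[None, :, :] - v[:, None, :]) ** 2) + 1e-9
        Pc = np.array([[Um[r, y == c].sum() / Um[r].sum()     # (7)
                        for c in classes] for r in range(R)])
        Pr = Um.sum(axis=1) / N                               # (8)
        gauss = np.exp(-0.5 * (X[None, :, :] - v[:, None, :]) ** 2
                       / sig2[:, None, :]).prod(axis=2)
        cls = np.searchsorted(classes, y)
        inv_D2 = Pr[:, None] * gauss * Pc[:, cls] + 1e-30     # (3)
        D2 = 1.0 / inv_D2
        # (9): mu_ik = 1 / sum_j (D2_ik / D2_jk)^(1/(m-1))
        p = D2 ** (1.0 / (m - 1.0))
        U_new = 1.0 / (p * (1.0 / p).sum(axis=0))
        converged = np.abs(U_new - U).max() < eps             # (10)
        U = U_new
        if converged:
            break
    return U, v, sig2, Pc, Pr

# Two labelled, well-separated blobs, clustered into R = 2 rules.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)), rng.normal(3.0, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
U, v, sig2, Pc, Pr = supervised_fuzzy_clustering(X, y, R=2)
```

Each resulting cluster directly supplies one fuzzy rule: v and sig2 define the Gaussian antecedents, Pc the consequent class probabilities, and Pr the rule priors, which is exactly what makes the identification "direct" in the sense used above.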
V. FEATURE SELECTION

Using too many input variables may result in difficulties in the interpretability of the obtained classifier. Hence, selection of the relevant features is usually necessary. In this paper, the modified Fisher interclass separability method is used, which is based on statistical properties of the data. The interclass separability criterion is based on the between-class covariance matrix F_B and the within-class covariance matrix F_W, which sum up to the total covariance of the data, F_T = F_B + F_W, where

F_W = \sum_{l=1}^{R} P(r_l) F_l, \qquad F_B = \sum_{l=1}^{R} P(r_l) (v_l - v_0)^T (v_l - v_0), \qquad v_0 = \sum_{l=1}^{R} P(r_l) v_l   (11)

and the criterion is

J = \frac{\det(F_B)}{\det(F_W)}   (12)
The feature interclass separability selection criterion (12) is a trade-off between F_W and F_B. The importance of a feature is measured by leaving out the feature of interest and calculating J for the reduced covariance matrices. Feature selection is a step-wise procedure in which, at every step, the least needed feature is deleted from the model [29].

VI. RESULTS

To evaluate the method used in this study, a HRECG database consisting of two groups of signals was selected. The first group contained the HRECG records of 50 healthy volunteers, acquired by a digital data acquisition system,
ML785 PowerLab/8SP, with a sampling frequency of 2000 Hz and a 16-bit analog-to-digital converter (ADC). The second group consisted of semi-simulated HRECG signals with VLPs. To simulate each of these signals, three basic simulated waveforms resembling the VLP characteristics were added to the XYZ leads of a basic HRECG record, i.e. a HRECG record without VLPs. VLPs are low-amplitude signals (~1-20 μV) with short duration (~5-50 ms) and a broadband spectrum (~40-250 Hz) [11]. According to these characteristics, the VLPs were simulated as colored Gaussian processes, which better resemble real-world signals. The basic VLP waveforms were added to the end part of the QRS complex of every heart beat of the XYZ leads belonging to the basic HRECG records. The position of the VLPs was varied randomly from beat to beat with respect to the fiducial mark, the QRS peak [10, 11]. This HRECG database was divided into a training set, including thirty HRECG signals with VLPs and thirty without, and a test set consisting of 20 records without VLPs and 20 with. For better training of the classifier and to preserve its generalization, the training set was expanded: five sets containing 300 heart beats were randomly selected from every HRECG record of the training set. Because every HRECG record had at least 350 heart beats, the beat selection was done without replacement within each set. Therefore, an expanded training set consisting of 300 patterns was obtained. The performance of the VLP detection method was measured using the conventional criteria, i.e. the accuracy ACC, sensitivity SE, and specificity SP, defined by

ACC = 100 × (TP + TN) / N, \quad SE = 100 × TP / (TP + FN), \quad SP = 100 × TN / (TN + FP)   (13)
where N, TP, TN, FP, and FN are, respectively, the total number of patterns and the numbers of true positives, true negatives, false positives, and false negatives [6]. Using the expanded training set and the test set, the method based on the CWT and the supervised fuzzy clustering system was evaluated and showed good results for the test set. To investigate the performance of the VLP detection method proposed in this work, the conventional time-domain method (Simson's method) was applied to the test set; in addition, a method based on applying our system to the conventional time-domain features [4, 13] was used to detect VLPs. Table 1 presents the results of the proposed method for the test set in comparison with Simson's method and with applying the supervised fuzzy clustering system to other feature sets. In Simson's method, the balance between SE and SP can be controlled by choosing one, two, or three of the positive VLP criteria (QRST > 114 ms, D40 > 38 ms, and V40 < 20 μV) at the same time. For example, if higher SE is desired, only one of the three criteria is chosen; in contrast, all of the criteria must be satisfied at the same time to achieve higher SP. This can be seen in Table 1. In most cases, two of the three criteria are used to obtain a balance between SE and SP [13]. The fuzzy clustering results were obtained with a fuzzy system with 5 rules.

Table 1: VLP detection results. Comparison of the proposed method with Simson's method and with applying the supervised fuzzy clustering system to other feature sets (test set).

Method                                                         | ACC (%) | SE (%) | SP (%)
Simson's method, one criterion                                 | 80      | 100    | 60
Simson's method, two criteria                                  | 75      | 80     | 70
Simson's method, three criteria                                | 72      | 60     | 85
Supervised fuzzy clustering, conventional time-domain features | 82      | 81.26  | 83.66
Supervised fuzzy clustering, VM_QRS40 features                 | 84      | 73.76  | 94.13
Supervised fuzzy clustering, mVM_QRS40 features                | 90      | 89.88  | 91.16
Supervised fuzzy clustering, mVM_CWT1 features                 | 92      | 88.83  | 95.16
Neural network, mVM_CWT1 features                              | 92.5    | 91     | 94

VII. CONCLUSION

The aim of this work was to investigate the capability of a method based on the continuous wavelet transform and supervised fuzzy clustering to extract features and classify in the VLP detection problem in HRECG. The results show good improvements in sensitivity and discrimination compared with Simson's method and with supervised fuzzy clustering applied to the conventional time-domain features (see Table 1). Another possible advantage of the proposed method is the ability to detect VLPs in patients with bundle branch block, whereas Simson's method cannot be used with these patients because bundle branch block extends the QRS duration and consequently increases QRST. Simson's method is also very sensitive to the QRS offset, and even small errors in estimating this point can result in a wrong diagnosis, whereas the method used in this study is robust to errors in QRS offset detection. The results in Table 1 show that the CWT-based features used in this study are more capable than the conventional time-domain features of detecting VLPs with supervised fuzzy clustering systems. Given these advantages, the proposed method may be an appropriate alternative for the clinical detection of VLPs once it has been evaluated completely. Due to the lack of real HRECG records with VLPs in this research, the proposed method must be applied to a larger HRECG database consisting of real signals with and without VLPs to complete the evaluation. From Table 1, the neural network method is slightly better than our supervised clustering method, but our system is based on only five fuzzy rules, so its training is much faster than that of an MLP neural network and it can be used in practical applications.

REFERENCES
[1] I.C.Baykal and A.Yilmaz, Detection of Late Potentials in Electrocardiogram Signals in both Time and Frequency Domains Using Artificial Neural Networks, 44th IEEE Midwest Symposium on Circuits & Systems, 2001, pp.576-579. [2] S.W.Chen, A Wavelet-Based Heart Rate Variability Analysis for the Study of Nonsustained Ventricular Tachycardia, IEEE Transactions on Biomedical Engineering, Vol.49, No.7, 2002, pp.736-742. [3] A.Cohen , Biomedical Signal Processing, CRC Press, VolumeII, 1986. [4] A.Mousa and A.Yilmaz, Neural Network Detection of Ventricular Late Potentials in ECG Signals Using Wavelet Transform Extracted Parameters, IEEE Proceedings of the 23rd Annual EMBS International Conference, Turkey, 2001, pp.1668-1671. [5] A.Mousa and A.Yilmaz, A Method Based on Wavelet Analysis for the Detection of Ventricular Late Potentials in ECG Signals, 44th IEEE Midwest Symposium on Circuits & Systems, 2001, pp.497-500 . [6] A.Rakotomamonjy, B.Migeon, and P.Marche, Automated Neural Network Detection of Wavelet preprocessed Electrocardiogram Late Potentials, Med.Biol.Eng.Comput.,Vol.36, 1998, pp.346 -350. [7] M.B.Simson, Use of Signals in The Terminal QRS Complex to Identify Patients with Ventricular Tachycardia after Myocardial Infarction, Circulation, Vol.64, No.2, 1981, pp. 235-242. [8] A.Spaargaren and M.J.English, Detecting Ventricular Late Potentials Using the Continuous Wavelet Transform, Proceedings of Computers in Cardiology,IEEE Comput.Soc.,Vol.26, 1999, pp.5-8. [9] A.Taboada-Crispi, J.V.Lorenzo-Ginori, and D.F.Lovely, Adaptive Line Enhancing Plus Modified Signal Averaging for
Ventricular Late Potential Detection, Electronics Letters, Vol. 35, No. 16, 1999, pp. 1293-1295. [10] A. Taboada-Crispi, Improving Ventricular Late Potentials Detection Effectiveness, Doctoral Thesis, University of New Brunswick, 2002. [11] S. Wu, Y. Qian, Z. Gao, and J. Lin, A Novel Method for Beat-to-Beat Detection of Ventricular Late Potentials, IEEE Transactions on Biomedical Engineering, Vol. 48, No. 8, 2001, pp. 931-935. [12] Q. Xue and B.R.S. Reddy, Late Potential Recognition by Artificial Neural Networks, IEEE Transactions on Biomedical Engineering, Vol. 44, No. 2, 1997, pp. 132-143. [13] Janos Abonyi and Ferenc Szeifert, Supervised Fuzzy Clustering for the Identification of Fuzzy Classifiers, University of Veszprem, Dep. of Process Engineering. [14] Ali Shahidi Zandi and Mohammad Hassan Moradi, Detection of Ventricular Late Potentials in High-Resolution ECG Signals by a Method Based on the Neural Network and Continuous Wavelet Transform. [15] Baraldi A. and Blonda P. (1999) A survey of fuzzy clustering algorithms for pattern recognition - Part I, IEEE Transactions on Systems, Man and Cybernetics, Part B 29(6): 778-785. [16] Bezdek J.C., Hathaway R.J., Howard R.E., Wilson C.A. and Windham M.P. (1987) Local Convergence Analysis of a Grouped Variable Version of Coordinate Descent. Journal of Optimization Theory and Applications, 71: 471-477. [17] Biem A., Katagiri S., McDermott E., Juang B.H. (2001) An application of discriminative feature extraction to filter-bank-based speech recognition. IEEE Transactions on Speech and Audio Processing 9(2): 96-110. [18] Campos T.E., Bloch I., Cesar R.M. Jr. (2001) Feature Selection Based on Fuzzy Distances Between Clusters: First Results on Simulated Data. Lecture Notes in Computer Science, Springer-Verlag, ICAPR 2001 - International Conference on Advances in Pattern Recognition, Rio de Janeiro, Brazil, May. [19] Cios K.J., Pedrycz W., Swiniarski R.W. (1998) Data Mining Methods for Knowledge Discovery.
Kluwer Academic Press, Boston. [20] Corcoran A.L., Sen S. (1994) Using real-valued genetic algorithms to evolve rule sets for classification. In IEEE-CEC, June 27-29, 120-124, Orlando, USA. [20] Gath I. and Geva A.B. (1989) Unsupervised Optimal Fuzzy Clustering, IEEE Transactions on Pattern Analysis and Machine Intelligence 7: 773-781. [21] Gustafson D.E., Kessel W.C. (1979) Fuzzy Clustering With a Fuzzy Covariance Matrix, In Proc. IEEE CDC, San Diego, USA. [22] Hathaway R.J. and Bezdek J.C. (1993) Switching Regression Models and Fuzzy Clustering. IEEE Transactions on Fuzzy Systems, 1: 195-204.
[23] Höppner F., Klawonn F., Kruse R. and Runkler T. (1999) Fuzzy Cluster Analysis - Methods for Classification, Data Analysis and Image Recognition, John Wiley and Sons. [24] Ishibuchi H., Nakashima T., Murata T. (1999) Performance evaluation of fuzzy classifier systems for multidimensional pattern classification problems. IEEE Trans. SMC-B 29, 601-618. [25] Kambhatala N. (1996) Local Models and Gaussian Mixture Models for Statistical Data Processing, Ph.D. Thesis, Oregon Graduate Institute of Science and Technology. [26] Kim E., Park M., Kim S. and Park M. (1998) A Transformed Input-Domain Approach to Fuzzy Modeling. IEEE Transactions on Fuzzy Systems, 6: 596-604. [27] Loog L.C.M., Duin R.P.W., Haeb-Umbach R. (2001) Multiclass linear dimension reduction by weighted pairwise Fisher criteria, IEEE Trans. on PAMI, vol. 23, no. 7, pp. 762-766. [28] Nauck D. and Kruse R. (1999) Obtaining interpretable fuzzy classification rules from medical data. Artificial Intelligence in Medicine, 16: 149-169. [29] Peña-Reyes C.A. and Sipper M. (2000) A fuzzy genetic approach to breast cancer diagnosis. Artificial Intelligence in Medicine, 17: 131-155. [30] Quinlan J.R. (1996) Improved Use of Continuous Attributes in C4.5, Journal of Artificial Intelligence Research, 4: 77-90. [31] Rahman A.F.R. and Fairhurst M.C. (1997) Multi-prototype classification: improved modelling of the variability of handwritten data using statistical clustering algorithms. Electron. Lett. 33(14), pp. 1208-1209. [32] Roubos J.A., Setnes M. (2000) Compact fuzzy models through complexity reduction and evolutionary optimization. In FUZZ-IEEE, pp. 762-767, May 7-10, San Antonio, USA. [33] Roubos J.A., Setnes M. and Abonyi J. (2001) Learning fuzzy classification rules from data. Developments in Soft Computing, eds. John R. and Birkenhead R., Springer Verlag, Berlin/Heidelberg, 108-115. [34] Setiono R.
(2000) Generating concise and accurate classification rules for breast cancer diagnosis. Artificial Intelligence in Medicine, 18: 205-219. [35] Setnes M., Babuška R. (1999) Fuzzy Relational Classifier Trained by Fuzzy Clustering, IEEE Trans. SMC-B, 29, 619-625. [36] Setnes M., Babuška R., Kaymak U., van Nauta Lemke H.R. (1998) Similarity measures in fuzzy rule base simplification. IEEE Trans. SMC-B 28, 376-386. [37] Takagi T., Sugeno M. (1985) Fuzzy identification of systems and its application to modeling and control. IEEE Trans. SMC 15, 116-132. [38] Valente de Oliveira J. (1999) Semantic constraints for membership function optimization. IEEE Trans. FS 19, 128-138.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
An Approach to the Real-Time Surface Electromyogram Decomposition

V. Glaser, A. Holobar and D. Zazula
University of Maribor, Slovenia

Abstract— This paper studies a sequential decomposition method suitable for real-time separation of linear mixtures of finite-length signals. The signals are modelled as channel responses in a multiple-input multiple-output (MIMO) model, with positive pulse trains as channel inputs. Our decomposition method compensates the channel responses and aims to reconstruct the input pulse trains in real time. Tests on synthetic surface electromyograms (SEMG) show how well the proposed method performs in comparison to its batch version, and how robust it is.

Keywords— Compound signal decomposition, Surface electromyogram, Real-time signal processing, Sherman-Morrison matrix inversion, Sequential convolution kernel compensation
I. INTRODUCTION

Linear modelling of multiple observations excited by multiple input sources has proved to be the right answer for signal decomposition in many practical cases. We meet this approach in the fields of biomedicine, radar, sonar, image processing, etc. Several established methods for monitoring and diagnosing the human muscle and nerve system make use of electromyography (EMG). Special electrodes detect the electrical muscle or nerve responses called action potentials (APs). These responses accompany muscle contractions and are caused by the potentials travelling along the muscle fibres. A group of muscle fibres is bound into the so-called motor unit (MU). Every motor unit is innervated and excited by a train of electric pulses that arrive through the nerves from the spinal cord and brain [4]. For practical reasons, EMG signals are nowadays more and more frequently measured by surface electrodes (surface EMG). This yields highly superimposed signals in which the contributions of individual motor-unit action potentials (MUAPs) sum up into a compound signal [4]. The real diagnostic value of the EMG, whose application has been paved by the tradition of needle EMG measurements, lies in the knowledge about its constituent components observed through individual MUAPs. Therefore, a strong need for reliable, robust and fast methods for SEMG decomposition has emerged recently. One promising new approach, developed in the System Software Laboratory at the University of Maribor, is called Convolution Kernel Compensation (CKC) [1]. Its original derivation utilises the correlation matrix built out of several simultaneous measurements that are processed in rather long segments. The underlying processing is hence batch, which does not meet the requirement for a fast decomposition solution. In this paper, we explain how the batch CKC version can be upgraded into a real-time processing algorithm, derived as a sequential CKC. Section II provides a brief summary of the batch version of CKC, followed by the derivation details of its sequential version in Section III. Both the batch and the sequential CKC are quite noise resistant, which is shown by simulations in Section IV. Section V concludes the paper.

II. DATA MODEL AND THE IDEA OF CONVOLUTION KERNEL COMPENSATION
Let us briefly resume the CKC basics from [1]. A MIMO model of the SEMG assumes M different observations x_i = \{x_i(n); n = 0, 1, 2, \dots\}, i = 1, \dots, M, each N samples long:

x_i(n) = \sum_{j=1}^{N} \sum_{l=0}^{L-1} h_{ij}(l) \, s_j(n-l); \quad i = 1, \dots, M, \qquad (1)
where h_{ij}(l) is the L-samples-long response of the j-th source, appearing in the i-th observation, while s_j(n-l) stands for a positive pulse train from the j-th source. When noisy observations are considered, (1) extends to:

y_i(n) = x_i(n) + \omega_i(n); \quad i = 1, \dots, M, \qquad (2)
where the additive noise \omega_i is considered a stationary, zero-mean Gaussian random process. To assure that the EMG model is over-determined, it is extended with K-1 delayed repetitions of each observation:

y(n) = [y_1(n), y_1(n-1), \dots, y_1(n-K+1), \dots, y_M(n), \dots, y_M(n-K+1)]^T. \qquad (3)
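For illustration, the extended vector of Eq. (3) can be assembled as follows; the helper name and the row-per-observation storage are our own choices:

```python
import numpy as np

def extend(Y, n, K):
    """Build the extended observation vector of Eq. (3): for each of the M
    observations (rows of Y), take the sample at time n followed by its
    K-1 delayed repetitions. Requires n >= K - 1."""
    return np.concatenate([Y[i, n - K + 1:n + 1][::-1] for i in range(Y.shape[0])])
```

For M = 2 observations and K = 3, `extend(Y, n, 3)` returns the vector [y_1(n), y_1(n-1), y_1(n-2), y_2(n), y_2(n-1), y_2(n-2)].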
Adopting the model from (2) and the notation from Eq. (3), the so-called activity index can be calculated as follows:

\gamma(n) = y^T(n) \, C_{yy}^{\#} \, y(n) = s^T(n) \, C_{ss}^{-1} \, s(n), \qquad (4)

where \#, ^T and ^{-1} stand for the pseudoinverse, matrix transpose and matrix inverse, respectively, C_{yy} is the correlation matrix of the noisy observations y_i, and C_{ss} the correlation matrix of the noise-free source signals s_i. The activity index \gamma(n)
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 105–108, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
actually indicates global pulse-train activity. Suppose now that we fix the premultiplying vector in (4) to the position n_0, y(n_0), and write it in the following form:

\nu_{n_0}(n) = y^T(n_0) \, C_{yy}^{\#} \, y(n). \qquad (5)
It was proved in [1] that when n_0 is selected such that

s_j(n_0) = 1, \qquad (6)

and the j-th source is the only one active at position n_0, \nu_{n_0}(n) from Eq. (5) equals

\nu_{n_0}(n) = s_j(n). \qquad (7)
Eq. (5) thus provides a recipe for decomposing a linear, convolutive mixture of signals. Hints for approximating condition (6) are explained in detail in [1] and are also taken into account in the sequential form of CKC derived in the following section.
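To make Eqs. (4)-(7) concrete, the following toy sketch builds a noise-free, instantaneous mixture (L = 1, so no delayed extension is needed) of two never-coactive pulse trains; the pulse positions, the mixing matrix and all variable names are our own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200
s = np.zeros((2, N))
s[0, ::20] = 1.0                     # source 1 fires at n = 0, 20, 40, ...
s[1, 7::20] = 1.0                    # source 2 fires at n = 7, 27, 47, ...
H = rng.standard_normal((4, 2))      # 4x2 mixing matrix (full column rank)
Y = H @ s                            # M = 4 observations, one column per n

Cpinv = np.linalg.pinv(Y @ Y.T / N)  # pseudoinverse of the correlation matrix

# Eq. (4): the activity index is nonzero exactly where some source fires.
gamma = np.einsum('in,ij,jn->n', Y, Cpinv, Y)

# Eq. (5): fixing the premultiplying vector at n0 = 0, where only source 1
# fires (condition (6)), recovers a scaled copy of s_1, cf. Eq. (7).
nu = Y[:, 0] @ Cpinv @ Y
```

Here each train has 10 pulses, so C_ss = diag(10/200, 10/200); consequently γ(n) equals 20 at every firing and 0 elsewhere, and ν(n) = 20·s_1(n).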
III. SEQUENTIAL CONVOLUTION KERNEL COMPENSATION

The sequential CKC builds on the batch CKC by changing the correlation matrix computation from segment-based to an iterative, sample-by-sample approximation, and by constructing an improved version of the premultiplying vector y^T(n_0) in Eq. (5) in the same iterative manner. Subsections III.A and III.B reveal these two processes in more detail.

A. Sequential inverse of the correlation matrix

In creating a sequential CKC, the biggest problem is posed by the pseudoinverse of the correlation matrix, C_{yy}^{\#}, in (5). The most obvious way to calculate the correlation matrix is based on longer signal segments. The matrix is square and, owing to the additive noise defined in Eq. (2), which is always present in real situations, it has full column rank. This means that instead of the pseudoinverse C_{yy}^{\#}, the direct matrix inverse C_{yy}^{-1} can, in general, be implemented. With such a starting point, a few different methods for sequential matrix inverse calculation are available. It was shown in [2] that the fastest one is the so-called Sherman-Morrison formula, published in [3] as:

(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}, \qquad (8)

where A is a square matrix and u and v are column vectors. Now, assume that the correlation matrix is known after the k-th observation sample. When the next sample vector is observed, the correlation matrix in step k+1 is calculated as follows:

C_{yy,k+1} = C_{yy,k} + y(k+1) \, y^T(k+1), \qquad (9)

where C_{yy,k} is the correlation matrix after the k-th sample vector of observations. The right-hand side of Eq. (9) clearly corresponds to the left-hand side of (8), which suggests how to compute the inverse of the correlation matrix iteratively. Computed this way, however, the correlation matrix coefficients grow with the iteration steps, and the values of the matrix inverse decrease correspondingly. The sequentially computed inverse of the correlation matrix can be used in Eq. (5), which makes the source pulse trains \nu_{n_0}(n) computable sequentially as well, but the pulse amplitudes decrease in time because of the decreasing inverse matrix values, as shown in Fig. 1.

Fig. 1: Estimated pulse sequence obtained by sequential CKC

To avoid the problem of decreasing values, the correlation matrix is averaged across the number of samples involved, which changes the Sherman-Morrison formula to:

\left( \frac{k A + u v^T}{k+1} \right)^{-1} = (k+1) \left( \frac{A^{-1}}{k} - \frac{\frac{A^{-1}}{k} u v^T \frac{A^{-1}}{k}}{1 + v^T \frac{A^{-1}}{k} u} \right), \qquad (10)

where k is the iteration step number.

B. Improvement of pulse trains

The premultiplying vector y(n_0) in Eq. (5), together with the inverse of the correlation matrix, constitutes a linear filter. The quality of the decomposed trains depends on the correctness of this sample vector, i.e. on whether condition (6) is fulfilled. Now assume that k different sources trigger at the same time
instant n_0. The filter placed at n_0 extracts a superimposition of all k pulse trains. Even worse, in practice these pulse trains are corrupted by noise. To improve the decomposition results, the filters must be updated along with the progress of the sequential decomposition. Assume n_0 and n_1 are two different time moments, both satisfying condition (6) for the same source. The element-wise product

\nu_{n_0 n_1}(n) = \nu_{n_0}(n) \cdot \nu_{n_1}(n) \qquad (11)

generates a superimposition of only those pulse trains that trigger in both positions n_0 and n_1. This principle can be extended to additional "filtering" positions. It was estimated in [1] that already four proper positions combine into a pulse train which most probably belongs to a single source, e.g. the j-th one. Denoting this train by \nu_{n_0 n_1 n_2 n_3}(n), suppose the pulses in this train appear at t_0, \dots, t_{g-1}. Then, by averaging all the observation vectors at these instants, an optimal filter is constructed as follows:

f_j = \frac{1}{g} \sum_{i=0}^{g-1} y(t_i), \qquad (12)

where f_j stands for the j-th source filter. It is updated throughout the sequential decomposition process, and so is the verification (11). The sequential decomposition begins by selecting the initial filter positions n_0 for a certain number of foreseen trains. These initial positions are chosen according to the activity index values. Afterwards, the filters f_j are updated according to (11) and (12) for all the estimated pulse trains in parallel. The proposed sequential decomposition algorithm is depicted in Fig. 2.

0: Initialization: calculate the starting matrix inverse (SVD) and select f_j.
1: A new observation vector y(k+1) is obtained.
2: Using (10), the new inverse of the correlation matrix is calculated.
3: Calculate new pulse train samples \nu_{n_0}(k+1) = f_j^T C_{yy}^{\#} y(k+1) for all foreseen sources j; if any of these samples exceeds a preselected threshold, go to step 4, otherwise go back to step 1.
4: Update f_j according to (12), checking (11) first; go back to step 1.

Fig. 2: Pseudocode of the sequential CKC decomposition approach.

IV. SIMULATION RESULTS

The sequential CKC was tested in two experiments. The first evaluated the influence of noise, while the second studied the influence of different numbers of observed sources. The decomposed pulse trains were assessed by three different statistics: the rate of properly placed pulses, the rate of missed pulses, and the rate of misplaced pulses. The number of properly decomposed sources was also counted. A pulse train was considered properly decomposed if more than 95% of its pulses were placed correctly.

A. Influence of noise

The first experiment was conducted in 20 Monte-Carlo runs of the sequential CKC per signal-to-noise ratio (SNR). Three different ratios were tested: 20 dB, 15 dB and 10 dB. Each run used 10 simulated SEMG signals, 10000 samples long, at a maximum voluntary contraction (MVC) level of 10%. The CKC extension factor K was set to 5. The initial observation window for the initial correlation matrix was 250 samples long. The number of filters was limited to 10. A sample in the reconstructed train was considered a triggering pulse if it exceeded 80% of the average amplitude of the three highest pulses in every subsequent analysing window. Fig. 3 shows that noise does not affect the accuracy of the estimated pulse trains. The recognition rate depends more on the selection of good starting filters than on a small SNR. Although the average rate of properly placed pulses is about 75%, we still detect more than a half of the trains with 95% correctly placed pulses, as shown in Fig. 4. This fact is also confirmed by Fig. 6. As can be seen in Fig. 5, there were about 25% misplaced pulses and no missed pulses, indicating that the wrong pulses belong to superimpositions of several trains.

Fig. 3: Rate of properly placed pulses versus different SNRs
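One iteration of the Fig. 2 loop, with the averaged Sherman-Morrison update of Eq. (10) and the filter update of Eq. (12), can be sketched as follows; the function names, the fixed threshold and the spike bookkeeping are our own simplifications of the pseudocode:

```python
import numpy as np

def sm_update(P, y, k):
    """Averaged Sherman-Morrison step, Eq. (10): P is the inverse of the
    correlation matrix averaged over k samples; returns it for k+1 samples."""
    Py = P @ y
    return (k + 1) / k * (P - np.outer(Py, Py) / (k + y @ Py))

def decompose_step(filters, P, Y_hist, spikes, k, threshold=0.5):
    """One pass of steps 1-4 in Fig. 2 for the new sample vector Y_hist[k]."""
    y = Y_hist[k]
    P = sm_update(P, y, k)                 # step 2: update the averaged inverse
    nu = filters @ (P @ y)                 # step 3: nu_j(k) = f_j^T C# y(k)
    for j in np.flatnonzero(nu > threshold):
        spikes[j].append(k)                # step 4: pulse detected for source j
        filters[j] = Y_hist[spikes[j]].mean(axis=0)   # Eq. (12)
    return filters, P, nu
```

Keeping the averaged inverse P directly avoids ever forming or inverting the growing correlation matrix; each update costs O((KM)^2), in line with the complexity noted in the conclusion.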
Fig. 4: Triggering pulse train (light) and estimated pulse train (dark)

Fig. 5: Rate of misplaced and missed pulses versus different SNRs

Fig. 6: Number of different recognized pulse trains versus different pulse triggerings.

B. Influence of different numbers of sources

The second experiment was also conducted with 20 runs of the sequential CKC per number of sources. The simulated signal sets were generated in the same way as in the first experiment; the only difference was the number of sources, at 10% MVC and 30% MVC. The SNR was set to 10 dB in all test runs. More sources imply a higher probability of superimpositions of different source triggerings, which causes a decrease in the number of properly recognized pulse trains. Fig. 6 depicts the number of different recognized pulse trains versus the MVC level, i.e. the number of sources, where the maximum number of sources followed in parallel by our method was limited to 10.

V. CONCLUSION

We derived an iterative, sequential version of the CKC. It mimics the steps of the batch CKC, but all the elements are calculated sequentially and updated with every new input sample vector. The correlation matrix inversion, which is the most computationally complex part, was implemented by the Sherman-Morrison formula. It proves to be O((KM)^2) and almost five times faster than the updating of 10 filters. Extensive statistical testing with synthetic SEMG signals proved that all the advantages and the performance of the CKC method are also retained by its sequential version.

REFERENCES

1. A. Holobar, D. Zazula: Multichannel Blind Source Separation Using Convolution Kernel Compensation, IEEE Trans. on Sig. Proc., in press.
2. V. Glaser: Sequential Convolution Kernel Compensation for Composite Signal Decomposition, Diploma Thesis, FERI, University of Maribor, 2006.
3. C. D. Meyer: Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2001.
4. A. Holobar: Blind decomposition of convolutive mixtures of close-to-orthogonal pulse sources applied to surface electromyogram, PhD Thesis, FERI, University of Maribor, 2004.

Author: Vojko Glaser
Institute: Faculty of EE and CS, University of Maribor
Street: Smetanova 17, 2000 Maribor
Country: Slovenia
Email: [email protected]
EMG Based Muscle Force Estimation using Motor Unit Twitch Model and Convolution Kernel Compensation

R. Istenic^1, A. Holobar^{1,2}, R. Merletti^2 and D. Zazula^1

^1 Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia
^2 LISiN, Department of Electronics, Politecnico di Torino, Torino, Italy
Abstract— In this paper we introduce a new method for muscle force estimation from multi-channel surface electromyograms. The method combines a motor unit twitch model with the motor unit innervation pulse trains estimated from multi-channel surface electromyograms. The motor unit twitches are aligned to the innervation pulse trains and summed up to obtain the total muscle force. The method was tested on real surface EMG signals acquired during force ramp contractions of the abductor pollicis brevis muscle in 8 male subjects. With 22 ± 5 (mean ± std. dev.) motor units identified per subject, the force estimation error of our method was 16 ± 4 % RMS. These results were compared to a method which uses EMG amplitude processing to estimate muscle force. The results of our new concept proved to be fully comparable to those of EMG amplitude processing.

Keywords— muscle force estimation, EMG force relation, twitch, convolution kernel compensation.
I. INTRODUCTION

Force production in a muscle is regulated by two main mechanisms: the recruitment of motor units and the modulation of their discharge rates. The greater the number of recruited motor units and the higher their discharge frequency, the greater the force. The same two mechanisms also determine the electric activity in a muscle, so a direct relationship between the electromyogram (EMG) and the exerted muscle force might be expected [1]. In order to estimate muscle force from surface electromyograms (SEMG), we have to know the relationship between the electrical and mechanical behavior of muscles. When the electric response of muscles is measured by SEMG, only part of the muscle and of the active motor units (MUs) is detected. Another problem is that superficial MUs contribute more to the observed SEMG than deep MUs. Since the force produced by a muscle is the resultant of all MU forces, we also have to take into account that the MUs trigger with different frequencies, that they may be recruited at different times, and that the amount of force they exert at every excitation (the force twitch) depends on the MU type [2]. Reference muscle forces are measured externally via the moments they cause in the observed joint or extremities.
This means that we can hardly measure the force of just one muscle; usually, several muscles contribute to the detected force level. Knowing all these facts, we try to verify and validate two approaches that predict muscle forces from SEMG observations. We decided to use isometric ramp contractions, because this kind of exercise gives a higher probability that we really observe the force level produced by just one selected muscle. With constant-force contractions, the tested subject may produce force by also activating other muscles which are not under the recording electrodes. In the sequel, we reveal our new force estimation approach, which sums up the MU twitches aligned by the decomposed innervation pulse trains. We briefly summarize the muscle force generation model introduced in [2] and two methods for muscle force estimation: SEMG amplitude processing and the convolution kernel compensation (CKC) decomposition. Section III explains the experiments and compares the model-based forces obtained on real SEMGs with the measured forces. Sections IV and V discuss the results and conclude the paper.

II. METHODS

A. Muscle force model

We adopted the muscle force model proposed in [2]. The complete pool consisted of 120 motor units. The distribution of twitch forces over the motor units was represented as an exponential function [2]: a large number of motor units produced small forces, while relatively few units generated large forces. The twitch force was modeled as the impulse response of a critically damped, second-order system. Fuglevand [2] used Eq. 1 to represent a motor unit twitch:
f(t) = \frac{P \cdot t}{T} \, e^{1 - t/T}, \qquad (1)
where T is the contraction time to peak force of the twitch and P is its peak amplitude. Twitch amplitudes were assigned according to rank in the recruitment order, and twitch contraction times were inversely related to twitch amplitudes [2]. The range of twitch forces used in the model was 100-fold: one unit of force was equivalent to the twitch force of the first recruited unit, and the last recruited unit had a twitch force of 100 units. The range of twitch contraction times was 3-fold, with the twitch of the first recruited unit having a time-to-peak duration of 90 ms, and that of the last recruited unit 30 ms. All MUs followed the widely reported sigmoidal relationship between MU force and firing rate. The total force of the muscle was determined as a linear summation of all the individual MU forces.

B. Muscle force estimation using SEMG amplitude processing

The majority of today's methods use the SEMG amplitude to estimate force. They rely on conventional SEMG amplitude processing, such as rectification followed by low-pass filtering, to preprocess the SEMG before relating it to torque. Clancy et al. [3] found that advanced SEMG processing incorporating signal whitening and multiple-channel combination significantly improves force estimation: both whitening and multiple-channel combination reduced the EMG-torque errors, and their combination provided an additive benefit. Potvin et al. [4] found that high-pass filtering of SEMG signals improves SEMG-based muscle force estimates. An iterative approach was used to process the EMG from the biceps brachii, using progressively greater high-pass cutoff frequencies (20-440 Hz in steps of 30 Hz) with first- and sixth-order filters, to determine the effects on the accuracy of the force estimates. The results indicate that removing up to 99% of the raw SEMG signal power results in significant and substantial improvements in the force estimates. For the purpose of force prediction, it appears that a small high band of SEMG frequencies may be associated with force, while the remainder of the spectrum has little relevance.

C. Our method

For the muscle force modeling described in Subsection II.A, the innervation pulse trains of individual MUs are needed. They were obtained from the recorded SEMG signals with the Convolution Kernel Compensation (CKC) method described in [5] and [6]. The method extracts MU discharge patterns in low-level (0-10% MVC) force-varying isometric contractions, is not sensitive to superimpositions of action potentials, and detects on average a larger number of MUs than is usually possible with single-channel intramuscular EMG [5]. However, it is able to recognize only superficial MUs, which have the greatest impact on the SEMG. As a result, we deal with a limited MU pool. In our experiment, the method recognized 22 ± 5 MUs per subject (mean ± std. dev.). This number is substantially smaller than the one proposed in [2] (120 MUs),

Fig. 1: MU twitch forces assigned to the recognized MUs for a contraction up to 10% MVC. Only low-threshold motor units were assumed to be recruited, with twitch forces from 1 to 1.6 units and contraction times from 90 ms to 80 ms.

hence, the twitch force and twitch contraction time ranges from [2] had to be modified. Firstly, with the recorded contractions ranging from 0 to 10% of the maximum voluntary contraction (MVC), we assumed that only low-threshold units are recruited. To correlate the MUs with twitch forces correctly, the recognized MU innervation pulse trains were sorted according to the recruitment order. The first recruited MU was assigned a twitch force of 1 unit with a contraction time of 90 ms, while the last recruited MU had a twitch force of 1.6 units with a contraction time of 80 ms. Such values would be assigned to the first twenty of all 120 MUs in the model [2] (see Fig. 1). The model [2] also allows the gain in MU twitch force to vary as a function of the firing rate; the maximum gain is obtained when the contraction time of the twitch equals the interstimulus interval (the interval between two firings). This gain factor was used to amplify the MU twitch force at each discharge. The force produced by a single MU is equivalent to the sum of the individual amplified twitches. The mechanical actions of the MUs were assumed to be independent of one another, thus the total force in the muscle was determined as the sum of the individual MU forces. Both the measured and the estimated forces were filtered with a first-order Butterworth low-pass filter with a cutoff of 1 Hz.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 114–117, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

III. EXPERIMENTAL RESULTS

A. Data acquisition

SEMG recording took place at LISiN, Department of Electronics, Politecnico di Torino. Eight healthy male subjects (age 27.0 ± 2.3 years, height 181.1 ± 6.7 cm and weight 75.5 ± 9.0 kg) participated in the experiment. SEMG signals
were acquired by a matrix of 61 electrodes arranged in 5 columns and 13 lines (with the four corner electrodes missing). The inter-electrode distance was 3.5 mm. The electrode matrix was located with its columns in the direction of the muscle fibres and covered the entire distal semifibre length (from the innervation zone to the distal tendon) and part of the proximal semifibre of the abductor pollicis brevis muscle. Before electrode placement, the skin was abraded with abrasive paste. The matrix was fixed on the skin by adhesive tape and a reference electrode was placed at the wrist. A custom-designed brace was used to measure the abduction force. The subject's wrist was fixed in a padded wood support with the head of the thumb phalanx in touch with a load cell. The force signal was amplified, provided as feedback to the subject on an oscilloscope, and recorded in parallel with the SEMG signals. The subjects performed three maximal voluntary contractions separated by 2-min rests, after which the electrode grid was located over the abductor pollicis brevis. The subject was then asked to linearly increase the force from 0% to 10% MVC in 6 s and then decrease it from 10% to 0% MVC in another 6 s, using the visual feedback on force. Altogether, 7 consecutive force ramps were recorded from each subject. The SEMG signals were amplified, band-pass filtered (3 dB bandwidth, 10-500 Hz) and sampled at 1650 Hz by a 12-bit A/D converter.

B. Results

To compare the estimated and measured forces and to calculate the estimation error, both forces must first be normalized. In our case, linear normalization w.r.t. the maximum force amplitude was used. The estimation error was computed as the root mean square (RMS) percent error:

Error_{RMS\%} = \frac{RMS(f_{estimated} - f_{measured})}{RMS(f_{measured})} \cdot 100. \qquad (2)

In addition, the correlation coefficient between the signals was computed. The results of our method were compared to Potvin's method described in [4], which builds on a single SEMG channel and uses high-pass filtering to enhance the force estimation. The high-pass cutoff frequency was set to 180 Hz and the low-pass cutoff frequency to 1 Hz, while the non-linear normalization constant was 23 [4]. With 60 SEMG channels available, only the channel with the smallest RMS error was taken into consideration. The measured and predicted muscle forces for subject A (7 consecutive ramp contractions) are depicted in Fig. 2. Our method yields a 9.9% error (Eq. 2) and a correlation coefficient of 0.98, while Potvin's method yields a 12.2% error and a correlation coefficient of 0.96. Fig. 3 depicts the results for the 6th force ramp only.

Fig. 2: Comparison of the measured force (thick solid line), the force estimated with our method (dashed line) and the force estimated with Potvin's method (thin solid line). The forces were estimated from real SEMG signals (subject A).

Fig. 3: Comparison of the measured force (thick solid line), the force estimated with our method (dashed line) and the force estimated with Potvin's method (thin solid line). The forces were estimated from real SEMG signals (subject A, 6th force ramp) and normalized w.r.t. their peak value.

Both methods were tested on the signals of all 8 subjects. Errors and correlation coefficients were calculated for each subject. The average error was 15.8% ± 4.2% for our method and 16.1% ± 3.5% for Potvin's method [4]. The average correlation coefficient was 0.96 ± 0.02 for our method and 0.93 ± 0.03 for the compared one. As found by Potvin [4], high-pass filtering of the SEMG signals improved the force estimation for our signals as well. The high-pass cutoff frequencies were varied from 20 Hz to 420 Hz in steps of 80 Hz. The optimal sixth-order high-pass cutoff frequency for our signals was found to be 180 Hz; at that frequency the minimal error was obtained (Table 1).
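Under the model assumptions above, the twitch of Eq. (1), the summation of twitches aligned to the discharge instants, and the RMS percent error of Eq. (2) can be sketched as follows; the function names, the sampling rate and the omission of the firing-rate gain factor are our own simplifications:

```python
import numpy as np

def twitch(t, P, T):
    """Eq. (1): critically damped second-order impulse response, peak P at t = T."""
    return P * t / T * np.exp(1.0 - t / T)

def mu_force(discharges, P, T, fs=1000, duration=1.0):
    """Force of one MU: twitches aligned to its discharge instants and summed
    (the firing-rate gain factor of the model is omitted in this sketch)."""
    t = np.arange(int(duration * fs)) / fs
    f = np.zeros_like(t)
    for td in discharges:
        m = t >= td
        f[m] += twitch(t[m] - td, P, T)
    return f   # total muscle force = sum of mu_force over all recognized MUs

def rms_percent_error(f_est, f_meas):
    """Eq. (2): RMS percent error between (already normalized) force traces."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return rms(f_est - f_meas) / rms(f_meas) * 100.0
```

Note that `twitch(t, P, T)` peaks at exactly P when t = T, so the first recruited MU (P = 1 unit, T = 90 ms) contributes a unit twitch 90 ms after each discharge.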
Table 1: RMS error versus high-pass cutoff frequency

  High pass cutoff frequency (Hz)   RMS error
  20                                16.8 %
  100                               16.3 %
  180                               16.1 %
  260                               17.3 %
  340                               16.7 %
  420                               18.4 %

IV. DISCUSSION

Our method performed well in comparison to Potvin's method. However, there are some open issues to be discussed when interpreting its results. The first one is the number of MUs that can be detected by the pick-up electrodes and recognized by the decomposition algorithm. With surface electrodes, only part of all the active MUs is detected. Moreover, not all the detected MUs are recognized by the CKC decomposition technique. This implies that our method operates on a limited set of MUs, and the question arises whether this set is representative enough for the purpose of force estimation. Typically, the number of MUs contributing to the muscle force is much larger than the number of recognized MUs. Nevertheless, the results of this study demonstrate that, at least in the case of the abductor pollicis brevis muscle, the recognized motor units form a good basis for force estimation. Finally, the co-activation of antagonist and agonist muscles must be taken into account. As stated in the introduction, the force at a joint is normally produced by several concurrently active muscles. For the best possible force estimates, all muscles that contribute to the joint force must be included in the SEMG recordings and in the force estimation process.

V. CONCLUSION

Several simulation studies of the MU force model were proposed in the past [2, 7]. Our study went a step further, as we investigated how this model performs on real SEMG signals. Despite some open issues discussed in Section IV, our method yields results comparable to Potvin's method [4], which is based on SEMG amplitude processing.

ACKNOWLEDGEMENT

This work was supported by the Slovenian Ministry of Higher Education, Science and Technology (Contract No. 1000-05-310083 and Programme Funding P2-0041), the European Commission within the Sixth Framework (Project Cybermans) and the Marie Curie Intra-European Fellowships Action (DE MUSE, Contract No. 023537).

REFERENCES

1. Merletti R, Parker P A (2004) Electromyography: Physiology, engineering and noninvasive applications. John Wiley & Sons, New Jersey
2. Fuglevand A J, Winter D A, Patla A E (1993) Models of recruitment and rate coding organization in motor-unit pools. Journal of Neurophysiology 70:2470-2488
3. Clancy E A, Bida O, Rancourt D (2007) Influence of advanced electromyogram (EMG) amplitude processors on EMG-to-torque estimation during constant-posture, force-varying contractions. Journal of Biomechanics 39(14):2690-2698
4. Potvin J R, Brown S H M (2004) Less is more: high pass filtering, to remove up to 99% of the surface EMG signal power, improves EMG-based biceps brachii muscle force estimates. JEK 14:389-399
5. Holobar A, Zazula D, Gazzoni M, Merletti R, Farina D (2006) Noninvasive analysis of motor unit discharge patterns in isometric force-varying contractions. ISEK Proc., XVI congress of ISEK, Torino, Italy, 2006, pp 12
6. Holobar A, Zazula D (2004) Correlation-based decomposition of surface EMG signals at low contraction forces. Med. Biol. Eng. Comput 42:487-495
7. Zhou P, Rymer W Z (2004) Factors governing the form of the relation between muscle force and the EMG: A simulation study. Journal of Neurophysiology 92:2878-2886

Address of the corresponding author:
Author: Rok Istenic
Institute: Faculty of Electrical Engineering and Computer Science, University of Maribor
Street: Smetanova 17
City: 2000 Maribor
Country: Slovenia
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Fast-Slow Phase Separation of Near InfraRed Spectroscopy to Study Oxygenation vs. sEMG Changes

Gian Carlo Filligoi1,2

1 Dpt. INFOCOM, Faculty of Engineering, Università degli Studi "La Sapienza", Italy
2 Biomedical Systems Research Center (CISB), Università degli Studi "La Sapienza", Italy
Abstract – The possibility of non-invasively studying local muscle oxidative metabolism during exercise has recently been enhanced by the use of Near-InfraRed Spectroscopy (NIRS). Moreover, the myoelectric manifestations of muscle fatigue occurring during sustained isometric contractions have been extensively studied with quantitative surface electromyography (sEMG) and are described by means of global sEMG parameters extracted in the time and/or frequency domain. With the aim of analyzing together the NIRS and sEMG data recorded on the biceps brachii, an experimental protocol was applied to seven subjects. During the experimental session, each subject had to perform two different kinds of physical trials: a constant-force and a cyclically varying isometric contraction. While examining the whole data set with the aim of investigating the relationship between modifications of the sEMG and the underlying metabolic status, we faced the problem of objectively separating the two main phases of the NIRS data. The results clearly indicated the presence of an initial fast phase of muscle desaturation followed by a slow phase, regardless of the kind of exercise. This behavior was paralleled by an analogous rate of change of the sEMG parameters, suggesting a strong link between the two phenomena. As an objective criterion for separating the fast phase from the following slow phase, an ad-hoc algorithm was implemented. The NIRS and sEMG data were analysed by considering only those data rigorously pertaining to one of the two phases, whereas data collected during the initial, transition and final phases were discarded. In this paper we present some details of the algorithm for the automatic separation of the two phases, together with the most important results on the statistical significance of the relationship between the parameters extracted from the sEMG in the fast phase and muscle oxygenation. Keywords – Myoelectric signal, sEMG, Near Infrared Spectroscopy, Muscle oxygenation, NIRS slow-fast phase.
I. INTRODUCTION

Near InfraRed Spectroscopy (NIRS) is a new technique able to offer reliable information on local muscle oxidative metabolism during physical exercise [1, 2]. The assessment of O2 saturation in muscles is possible with two different NIRS methods [3]; the most widely quoted in the literature, near infrared spatially resolved spectroscopy (NIRSRS), was also adopted in our experiments. By means of this technique we measured several metabolic parameters, such as the changes in oxyhemoglobin (O2Hb) and deoxyhemoglobin (HHb) concentration, the total hemoglobin volume (simply the sum of the two previous parameters, tHb = O2Hb + HHb) and, finally, the average muscle tissue O2 saturation connected to the neighbouring blood circulation within small vessels (arteriolar, venular and capillary beds). The latter value provides a good estimate of the dynamic balance between O2 supply and O2 consumption in the investigated muscle area. Concerning the surface ElectroMyoGram (sEMG), myoelectric signal analysis has been widely applied to investigate early changes occurring in the course of sustained isometric contractions (see [4] for a review). In our approach, typical time- and frequency-domain parameters were extracted: Root Mean Square (RMS) and average muscle fiber conduction velocity (CV) in the time domain, and median frequency (MDF) in the spectral domain. Several studies have combined sEMG data with muscle oxygenation data (obtained by NIRS) during exercise [5-10], and it is generally accepted that a linear relationship connects the level of isometric effort with sEMG amplitude and muscle oxygen uptake. Though those studies demonstrate the feasibility and importance of coupled sEMG-NIRS measurements, the heterogeneity of the tested muscles and exercise protocols, the somewhat limited sEMG analysis techniques adopted, and the intrinsic methodological limitations of the first-generation NIRS devices utilized all suggest further investigation.
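For concreteness, the two global sEMG parameters just mentioned can be computed from a single-channel epoch as sketched below. This is our own illustration on a toy narrow-band signal, not the authors' code; CV is omitted because it requires multi-channel recordings.

```python
import numpy as np

def semg_parameters(x, fs):
    """Return the RMS and the median frequency (MDF) of one sEMG epoch.
    MDF is the frequency splitting the power spectrum into two halves
    of equal power."""
    rms = np.sqrt(np.mean(x ** 2))
    power = np.abs(np.fft.rfft(x)) ** 2           # one-sided power spectrum
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    cumulative = np.cumsum(power)
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    return rms, mdf

# Toy epoch: a pure 80 Hz tone sampled at 1024 Hz for 1 s
fs = 1024.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 80.0 * t)
rms, mdf = semg_parameters(x, fs)
print(rms, mdf)   # RMS of a unit sine is 1/sqrt(2); MDF sits at the 80 Hz bin
```

In a fatigue study these two values would be computed on consecutive epochs and their slopes over time taken as the fatigue indices.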
On the basis of the above considerations, the present study was designed to investigate the relationship between the myoelectric and metabolic (oxidative) changes occurring in the biceps brachii muscle (BB) of a group of seven healthy subjects (age: 34±6 years, body mass: 74±6 kg, height: 173±6 cm) during sub-maximal isometric contractions. The sEMG and NIRSRS measurements were performed during 30-s submaximal constant isometric force contractions (at efforts equal to 20, 40, 60 and 80% of MVC, the Maximal Voluntary Contraction,
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 124–127, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Method used to determine the linear portion of the fast and slow phases of ΔSRS-O2. The plot refers to an ST at 80% MVC contraction.

measured before the experiment (called ST20, ST40, ST60 and ST80, respectively, ST being the acronym for STeady-state contraction) and cyclically varying isometric force contractions (40±20% MVC and 60±20% MVC, called SIN40 and SIN60, respectively, SIN being short for SINusoidally varying isometric contraction). We hypothesize that different constant isometric force levels, as well as cyclically varying isometric muscle activation, might have a different impact on local blood flow, and that this might be reflected by different myoelectric and metabolic responses. The deeper reasons for adopting an experimental protocol comprising two different isometric exercises, as well as the details of the experimental setup and instrumentation, are presented elsewhere [11].
II. FAST AND SLOW PHASE AUTOMATIC SEPARATION

While approaching the problem of looking for relations among the various parameters extracted from both sEMG and NIRS recordings, we observed that the data pertaining to the initial fast phase of muscle desaturation had to be objectively separated from those collected during the following slow phase, regardless of the kind of exercise. In fact, if we observe the average tissue O2 saturation index measured by means of the spatially resolved spectroscopy (SRS) technique (in particular, the index ΔSRS-O2, given by the percentage of O2 saturation with respect to its value at rest, evaluated just before the experiment [11]), the plot shows two evidently distinct phases: after an initial brisk decrease (fast phase), ΔSRS-O2 presents an evidently lower decay rate (slow phase). Fig. 1 shows an example of NIRS data collected during ST80, i.e., a constant-force isometric contraction at 80% MVC.

The problem to be solved was to find an objective procedure able to separate the two different behaviours. After several attempts, the following strategy was adopted for selecting the two fiducial points marking the start and the end of the fast phase:
1) the NIRS data were first smoothed by a 7-degree polynomial interpolation procedure;
2) the algorithm then looked for a long, weakly oscillating negative plateau in the first derivative of the smoothed ΔSRS-O2 time course; this plateau is preceded and followed by an easily detectable decreasing and increasing phase, respectively;
3) the fast ΔSRS-O2 decreasing phase was quantified by the slope of its least-squares regression line;
4) the exact positions of the two fiducial points marking the initial and final instants were obtained by finely choosing a suitable threshold, tuned on the basis of the distance of those points from the regression line.

The whole set of parameters extracted from the NIRS data and the sEMG recordings was evaluated uniquely within the fast-phase interval (indicated in Fig. 1 as the Regression Interval), together with the statistical significance of their eventual relationship (see the RESULTS section). This is because the slow phase showed a markedly constant ΔSRS-O2 behaviour in all subjects and in all experiments, i.e. a very low negative slope of the regression line, ranging from -0.0013%, which corresponds practically to no variation with time, to -1.2%, which represents, in a single case and a unique subject, the maximal rate of decrease of ΔSRS-O2 during the slow phase. The various parameters extracted from sEMG and NIRS during the slow phase, evaluated one versus the others, did not show any statistically significant difference.
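As an illustration, the four-step separation strategy can be sketched as follows; the synthetic ΔSRS-O2 trace, the plateau threshold and all parameter values are our own assumptions, not those used in the study.

```python
import numpy as np

def separate_fast_phase(t, sat, poly_deg=7, thr=0.15):
    """Sketch of the four-step fast/slow separation: polynomial smoothing,
    derivative-plateau detection, and a least-squares regression over the
    fast phase. `thr` is an illustrative threshold, not the paper's value."""
    # 1) smooth with a 7-degree polynomial
    smooth = np.polyval(np.polyfit(t, sat, poly_deg), t)
    # 2) first derivative: the slow phase appears as a long, weakly
    #    oscillating negative plateau; the fast phase is much steeper
    deriv = np.gradient(smooth, t)
    idx = np.flatnonzero(deriv < thr * deriv.min())
    gaps = np.flatnonzero(np.diff(idx) > 1)
    i0 = idx[0]                                   # onset fiducial point
    i1 = idx[gaps[0]] if gaps.size else idx[-1]   # offset fiducial point
    # 3) quantify the fast decrease by the regression-line slope
    slope = np.polyfit(t[i0:i1 + 1], sat[i0:i1 + 1], 1)[0]
    return i0, i1, slope

# Synthetic ΔSRS-O2 trace: brisk desaturation followed by a slow decay
t = np.linspace(0.0, 30.0, 600)
sat = -15.0 * (1.0 - np.exp(-t / 5.0)) - 0.05 * t
i0, i1, slope = separate_fast_phase(t, sat)
```

On such a trace the onset lands near t = 0 and the fast-phase slope is strongly negative, while the samples after the offset fiducial point belong to the near-constant slow phase.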
III. RESULTS

As a main result, we may say that in all cases during the slow phase, the MDF slope, CV slope and related NIRS parameters did not show any statistical difference when evaluated one against the others among the various ST and SIN isometric contractions. On the contrary, while the automatic procedure for fast-phase separation worked fairly well in all cases of ST contraction (see Fig. 2), some difficulties arose for the SIN contractions due to the intrinsic oscillations of the parameters extracted during the cyclically varying exercises (see Fig. 3). Nevertheless, the application of the previously described procedure for automatic definition of the onset and offset ΔSRS-O2 fiducial points to the slow phase gave satisfactory results. The test for statistically significant differences gave the results shown in Table 1. The time-course behaviour of the parameters extracted from sEMG and NIRS is reported elsewhere [11] and is beyond the scope of this paper, which is mainly devoted to presenting our technique for separating the two phases of muscle tissue oxygenation.
Fig. 2 ΔSRS-O2 during ST exercises at the different force levels
Table 1 Each column reports the result of the statistical comparison (two-tailed t-test for paired data; p<0.05) between the variables indicated in the upper part of the column during the two different physical tasks indicated on the left. The X symbol denotes the presence of statistical significance (after Bonferroni correction when appropriate). The variables compared are ΔSRS-O2 min, ΔSRS-O2 slope (fast phase), MDF slope (fast phase) and CV slope (fast phase); the task pairs compared are ST20-ST40, ST20-ST60, ST20-ST80, ST40-ST60, ST40-ST80, ST60-ST80, SIN40-SIN60, SIN40-ST20, SIN40-ST40, SIN40-ST60, SIN60-ST40, SIN60-ST60 and SIN60-ST80.

[Table 1: the individual cell entries could not be recovered from the source layout]
Fig. 3 ΔSRS-O2 during SIN exercises at 40±20% and 60±20% force levels
IV. DISCUSSION

In substance, the results of the present paper can be summarized as follows:
• the initial rate of BB desaturation is related to the level of effort and does not depend on the kind of isometric exercise;
• the same robust relationship exists between force and the rate of change of the myoelectric parameters.
It thus emerges that the rate of change of the sEMG parameters is strongly linked to the initial rate of muscle desaturation. Moreover, during the slow phase of both ST and SIN contractions, even at the highest force levels, the tHb data suggest that muscle blood flow is not completely restricted. Concerning the fast phase, biceps brachii desaturation (ΔSRS-O2) shows an evident delay at the onset of isometric exercise.
REFERENCES
1. Boushel R, Langberg H, Green S, Skovgaard D, Bulow J, Kjaer M (2000) Blood flow and oxygenation in peritendinous tissue and calf muscle during dynamic exercise in humans. J. Physiol. 524(Pt 1):305-313
2. Quaresima V, Komiyama T, Ferrari M (2002) Differences in oxygen re-saturation of thigh and calf muscles after two treadmill stress tests. Comp. Biochem. Physiol. 132:67-73
3. Delpy DT, Cope M (1997) Quantification in tissue near-infrared spectroscopy. Philos. Trans. R. Soc. Lond. B Biol. Sci. 352:649-659
4. De Luca CJ (1997) The use of electromyography in biomechanics. J. Appl. Biomech. 13:135-163
5. Burnley M, Doust JH, Ball D, Jones AM (2002) Effects of prior exercise on VO2 kinetics during heavy exercise are related to changes in muscle activity. J. Appl. Physiol. 93:167-174
6. Kouzaki M, Shinohara M, Masani K, Tachi M, Kanehisa H, Fukunaga T (2003) Local blood circulation among knee extensor synergists in relation to alternate muscle activity during low-level sustained contraction. J. Appl. Physiol. 95:49-56
7. Miura H, Araki H, Matoba H, Kitagawa K (2000) Relationship among oxygenation, myoelectric activity, and lactic acid accumulation in vastus lateralis muscle during exercise with constant work rate. Int. J. Sports Med. 21:180-184
8. Praagman M, Veeger HEJ, Chadwick EKJ, Colier WNJM, Van Der Helm FCT (2003) Muscle oxygen consumption, determined by NIRS, in relation to external force and EMG. J. Biomech. 36:905-912
9. Takaishi T, Sugiura T, Katayama K, Sato Y, Shima N, Yamamoto T, Moritani T (2002) Changes in blood volume and oxygenation level in a working muscle during a crank cycle. Med. Sci. Sports Exerc. 33:520-528
10. Yoshitake Y, Ue H, Miyazaki M, Moritani T (2001) Assessment of lower-back muscle fatigue using electromyography, mechanomyography, and near-infrared spectroscopy. Eur. J. Appl. Physiol. 84:174-179
11. Felici F, Quaresima V, Fattorini L, Sbriccoli P, Filligoi G, Ferrari M (2007, in press) Biceps brachii myoelectric and oxygenation changes during static and cyclic isometric exercises. J. Electromyogr. Kinesiol.
Address of the corresponding author:
Author: Gian Carlo Filligoi
Institute: Dpt. INFOCOM, Faculty of Engineering, Università degli Studi "La Sapienza"
Street: Via Eudossiana 18
City: 00185 Roma
Country: Italy
Email:
[email protected]
Model Based Decomposition of MUAPs Into Their Constituent SFEAPs

M.G. Xyda1,2, C.S. Pattichis1,2, P. Kaplanis1,2, C. Christodoulou1,2 and D. Zazula3

1 Department of Computer Science, University of Cyprus, Cyprus
2 The Cyprus Institute of Neurology and Genetics, Nicosia, Cyprus
3 Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia
Abstract— The motor unit action potential (MUAP) represents the spatial and temporal summation of the single fibre extracellular action potentials (SFEAPs) generated from the same motor unit. In this study, a model based decomposition of MUAPs into their constituent SFEAPs is investigated. The aim of this study has been to develop a system that gives the neurophysiologist a visualization of an "estimated" structural organisation of the motor unit, including information about the number of fibres, the fibre distribution and positioning, and the fibre diameter. The mathematical model developed in two dimensions by Dimitrova and Dimitrov [1], [2] was used to generate SFEAPs; in addition, this model was extended to three dimensions. Two-dimensional and three-dimensional models of MUAPs consisting of 1 to 10, and 50, fibres were developed and decomposed for a small recording radius of up to 5 mm. The non-linear least squares optimization procedure based on the Levenberg-Marquardt algorithm was used to obtain a solution to the MUAP decomposition problem, i.e. the fibre distribution, positioning and diameter. Using this method, a satisfactory solution to the decomposition problem was obtained. Future work will investigate the usefulness of the proposed analysis on MUAPs recorded from normal subjects and from subjects suffering from neuromuscular disorders. Keywords— EMG, MUAPs, SFEAPs, Modeling, Decomposition.
I. INTRODUCTION

Electromyography (EMG) studies the electrical activity of the muscle and forms a valuable aid in the diagnosis of neuromuscular disorders. EMG findings are used to detect and describe the different disease processes affecting the motor unit, the smallest functional unit of the muscle. During slight voluntary contraction, individual units, known as Motor Unit Action Potentials (MUAPs), are recorded. The MUAP represents the spatial and temporal summation of the single fibre extracellular action potentials (SFEAPs) belonging to the same motor unit. MUAP morphology is affected by the structural reorganisation of the motor unit that takes place due to disorders affecting peripheral nerve and muscle. MUAP features extracted in the time domain, like duration, amplitude and phases, are extensively used by the neurophysiologist for the assessment of neuromuscular disorders. In this study, a model based decomposition of MUAPs into their constituent SFEAPs is investigated. Similar work was carried out for decomposing surface EMG into SFEAPs based on neural networks [3]. In addition, several studies successfully investigated the decomposition of needle EMG into its constituent MUAPs [4]-[6]. Moreover, a decomposition of the compound action potential in EMG was proposed [7]. The aim of this study was to develop a system that gives the neurophysiologist a visualization of an "estimated" structural organisation of the motor unit, including information about the number of fibres, the fibre distribution and positioning, and the fibre diameter. This constitutes the inverse problem in EMG.

II. METHODOLOGY

A. Simulation of SFEAPs

The model developed by Dimitrova and Dimitrov [1], [2], in which the muscle fibre is considered as a dipole, was used to generate SFEAPs. The model is implemented as a convolution within a specific time interval as follows:

Φ(x0k, y0k, dk, t) = −Ce · [∂φi/∂x * ∂(1/ran)/∂x]    (1)
where
φi(x) = A1·x^A2·e^(A3·x) is the intracellular action potential, with constants A1 = 72234, A2 = 5 and A3 = −11;
x = u·t, where u = 4 m/s is the propagation velocity, considered constant;
L1, L2 are the distances between the endplate and the two edges of the fibre;
Ce = d²·Kan·Vm·σi / (16·σan), with σan = √(σx·σy);
d is the fibre diameter;
σi = 1.01 is the conductivity inside the fibre;
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 118–123, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
σx = 0.33 is the conductivity along the x axis;
σy = 0.63 is the conductivity along the y axis;
Kan = 5 is the medium anisotropy;
x0, y0 are the horizontal and vertical distances of the electrode from the endplate, both considered 0;
Vm = 1 is the intracellular action potential amplitude, considered unity;
ran = √((x − x0)² + Kan·(y − y0)²) is the distance between the electrode (x0, y0) and the fibre endplate, taking the medium anisotropy into account.
B. Simulation of MUAPs

The structural organisation of normal motor units in the human biceps brachii muscle was used as the basis of the simulations. A similar structure was also used by Nandedkar and co-workers [8]. Motor units were simulated by randomly distributing 50 muscle fibres within a circular territory of 5.64 mm diameter. The mean and standard deviation of the diameter of the muscle fibres in the motor unit were 50 μm and 28 μm, respectively [9]. The endplates were assumed to be distributed within a 5 mm wide zone midway along the length of the muscle fibres [10]. It was assumed that the recording electrode was positioned at the centre of the motor unit circular territory, at the centre of the endplate zone. The MUAP was computed by summing the SFEAPs within a predetermined semicircular recording-testing area facing the electrode as follows:

Φ(ti) = Σ_{k=1}^{M} Φ(xk, yk, dk, tik),  i = 1, …, N    (2)
where M is the number of fibres positioned within the testing area and N is the maximum number of sample points. In the first case, a MUAP was simulated as a sum of SFEAPs generated for 1, 4, 5 and 10 fibres enclosed within semicircular areas of radii 0.6, 0.9, 1.0 and 1.2 mm, respectively. The experiments start with 1 fibre in order to determine the behaviour of a single fibre before extending the model. In the second case, a MUAP was simulated as a sum of SFEAPs generated for 50 fibres enclosed within a semicircular area of radius 5 mm. In this case a third dimension was added, and the model was also applied to myopathic cases.
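The dipole model of equation (1) and the summation of equation (2) can be sketched as below; the axial grid, the discrete convolution and the use of numpy.gradient are our own choices and simplifications, not details taken from the paper.

```python
import numpy as np

# Constants from the model description above (units simplified)
A1, A2, A3 = 72234.0, 5.0, -11.0     # intracellular AP: phi(x) = A1*x^A2*exp(A3*x)
KAN, VM, SIG_I = 5.0, 1.0, 1.01      # anisotropy, AP amplitude, internal conductivity
SIG_AN = np.sqrt(0.33 * 0.63)        # sqrt(sigma_x*sigma_y), an assumed combination
DX = 0.01
X = np.arange(DX, 20.0, DX)          # axial coordinate along the fibre

def sfeap(y_k, d_k, x0=0.0, y0=0.0):
    """Single-fibre EAP in the spirit of eq. (1): discrete convolution of
    d(phi_i)/dx with d(1/r_an)/dx, scaled by -Ce."""
    ce = d_k ** 2 * KAN * VM * SIG_I / (16.0 * SIG_AN)
    phi = A1 * X ** A2 * np.exp(A3 * X)
    r_an = np.sqrt((X - x0) ** 2 + KAN * (y_k - y0) ** 2)
    return -ce * np.convolve(np.gradient(phi, DX),
                             np.gradient(1.0 / r_an, DX),
                             mode="same") * DX

def muap(fibres):
    """Eq. (2): the MUAP is the sum of the SFEAPs of all M fibres
    inside the recording-testing area."""
    return sum(sfeap(y_k, d_k) for y_k, d_k in fibres)

# Example: 10 fibres at random radial distances with ~0.05 mm diameters
rng = np.random.default_rng(0)
fibres = [(rng.uniform(0.1, 1.2), rng.normal(0.05, 0.005)) for _ in range(10)]
wave = muap(fibres)
```

Because the summation in equation (2) is linear, doubling a fibre's contribution simply doubles its SFEAP in the resulting MUAP, which is what makes the later decomposition step well posed.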
C. Decomposition

The decomposition of the simulated MUAPs into their constituent SFEAPs was addressed as an unconstrained optimization problem. The non-linear least squares optimization procedure based on the Levenberg-Marquardt algorithm [11] was used to obtain a solution to the MUAP decomposition problem, i.e. the fibre distribution and positioning, and the fibre diameter. The problem to be solved was formulated as follows:

minimize over p ∈ Rⁿ:  F(p) = Σ_{i=1}^{N} (R(p, ti) − Φ(ti))²    (3)

where p = [x01, y01, d1, …, x0M, y0M, dM] is the n-dimensional unknown vector and R is the reconstructed MUAP, defined as:

R(p, t) = −Ce ∫_{−L2}^{L1} (∂φi/∂x) · (∂(1/ran)/∂x) dx    (4)

Equation (3) also gives the error function. The n-dimensional vector has dimension lM, where l = 3, representing the variables x0k, y0k and dk of each SFEAP.
D. Simulation and decomposition for a 2-D model

The simulation and decomposition for the 2-D models was carried out as follows:

Step 1. Generate the random numbers to create the components of each fibre: the x horizontal distance, y vertical distance and d diameter. The same generator was used to produce values for 1, 4, 5 and 10 fibres in test radii of 0.6 mm, 0.9 mm, 1 mm and 1.2 mm, respectively. The generator was used to produce both simulated and initial values.

Step 2. Develop the model using equation (1) to simulate a SFEAP.

Step 3. Sum the SFEAPs using equation (2) to produce the MUAP in all cases.

Step 4. Decompose the MUAP into its constituent SFEAPs using equation (3). The non-linear least-squares optimization procedure based on the Levenberg-Marquardt algorithm was used (Matlab function leastsq()). The simulated and forecasted signals were then compared; the forecasted signals were produced by applying the above model to the initial values.
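The optimization set-up of Steps 1-4 can be sketched in Python using SciPy's Levenberg-Marquardt driver in place of Matlab's leastsq(). The toy two-parameter-per-fibre forward model below merely stands in for the SFEAP model of equation (1) and is entirely our own assumption.

```python
import numpy as np
from scipy.optimize import least_squares

T = np.linspace(0.0, 2.0, 200)       # sample instants t_i

def forward(params):
    """Toy stand-in for R(p, t): each fibre (y distance, diameter d)
    contributes a Gaussian-shaped potential."""
    out = np.zeros_like(T)
    for y, d in params.reshape(-1, 2):
        out += d ** 2 / (y + 0.1) * np.exp(-((T - 0.5 - y) / 0.2) ** 2)
    return out

true_p = np.array([0.3, 0.05, 0.7, 0.06])    # two simulated fibres
target = forward(true_p)                     # the "recorded" MUAP, Phi(t_i)

# Minimize F(p) = sum_i (R(p, t_i) - Phi(t_i))^2 as in eq. (3);
# method="lm" selects the Levenberg-Marquardt algorithm. As the paper
# notes, the initial values must lie close to the target values.
fit = least_squares(lambda p: forward(p) - target, x0=1.1 * true_p,
                    method="lm")
```

The same structure carries over to the real problem: only `forward` changes, while the residual definition and the optimizer call stay as above.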
E. Simulation and decomposition for a 3-D model

The simulation and decomposition for the 3-D model was carried out as follows:
Step 1. Generate random numbers to create the components of each fibre: the x horizontal distance, y vertical distance, z distance, d diameter and t delay. The generator was used to produce simulated and initial values for 50 fibres in a test radius of 5 mm.

Step 2. Apply equation (1) to develop the model, adding a third dimension z. The distance between the electrode (x0, y0, z0) and the fibre endplate then becomes ran = √((x − x0)² + Kan·(y − y0)² + Kan·(z − z0)²).

Step 3. Same as for the 2-D case.

Step 4. Same as for the 2-D case.

F. Simulation and decomposition for a 3-D model for myopathy

Three different cases of myopathy were examined:
• Variability in diameter, where the mean diameter of the fibres changes.
• Loss of half of the fibres.
• Reinnervation of fibres. Twenty five fibres were added: a. in the whole MU territory and b. in the right plane of the MU territory.

Extending the model to three dimensions and adding more fibres increases the computational complexity and affects the performance of the algorithm.

III. RESULTS

MUAP simulations with different numbers of fibres using the 2-D and 3-D models were carried out. For each experiment, the simulated values, initial values and final results (forecasted values) are given for the x horizontal distance, y vertical distance and d diameter. Plots of the positions of the coordinates of the simulated and final values, as well as of the simulated and final waveforms, are given.

A. MUAP simulation for a 2-D model with 10 fibres

Figure 1 illustrates the simulation and decomposition for a 2-D model with 10 fibres for a test radius of 1.2 mm. The solution was achieved within 97 iterations with an infinitely small error. Figure 1 clearly illustrates the overlap of the coordinates between the simulated and final values, as well as the overlap between the simulated and final waveforms.

Fig. 1 Simulation and decomposition of 10 fibres for a test radius of 1.2 mm (simulated blue '.', forecasted red '-'). The top graph shows the positioning of the fibres (simulated blue 'o', forecasted red '+').
B. MUAP simulation for a 3-D model with 50 fibres

MUAP simulations with 50 fibres using a 3-D model were carried out. Plots of the simulated and final waveforms are given in Fig. 2. The model converged within 5065 iterations with an infinitely small error (2.7x10-8). Figure 2 illustrates the closeness of the coordinates between the simulated and final values, as well as the overlap between the simulated and final waveforms.

C. MUAP simulation for 3-D, loss of half of the fibres

A MUAP simulation and decomposition for a 3-D model with a loss of half of the fibres (from 50 to 25) for a test radius of 5 mm, representing the case of a corresponding MUAP recorded in myopathy, was carried out. A solution was obtained within 2500 iterations, with an almost infinitely small error (3.8x10-7).
Fig. 2 Simulation and decomposition of 50 fibres for a test radius of 5 mm. No. of iterations = 5065 (simulated blue '.', forecasted red '-'). The top graph shows the positioning of the fibres (simulated blue 'o', forecasted red '+').

Figure 3 illustrates the closeness of the coordinates between the simulated and final values, as well as the overlap between the simulated and final waveforms.

D. MUAP simulation for a 3-D model, reinnervation of 25 fibres in the right plane of the MU territory

A MUAP simulation and decomposition for a 3-D model with a reinnervation of 25 fibres (from 50 to 75) for a test radius of 5 mm, representing the case of a MUAP recorded in a corresponding case of myopathy, was carried out. A solution was obtained within 5295 iterations with an error of 7.88x10-9. Figure 4 illustrates the closeness between the simulated and final values, as well as the overlap between the simulated and final waveforms.
Fig. 3 Loss of half of the fibres. Simulation and decomposition of 25 fibres for a test radius of 5 mm. No. of iterations = 2500 (simulated blue '.', forecasted red '-'). The top graph shows the positioning of the fibres (simulated blue 'o', forecasted red '+').

IV. CONCLUSIONS

The MUAP represents the spatial and temporal summation of the SFEAPs generated from the same motor unit. MUAP morphology is affected by the structural reorganisation of the motor unit that takes place due to disorders affecting peripheral nerve and muscle. MUAP features extracted in the time domain, like duration, amplitude and phases, are extensively used by the neurophysiologist for the assessment of neuromuscular disorders. In this study, a model based decomposition of MUAPs into their constituent SFEAPs has been investigated. Initially, the two-dimensional mathematical model developed by Dimitrova and Dimitrov [1], [2] was used to generate SFEAPs. The two-dimensional model was then extended to a three-dimensional model. Typical MUAPs
recorded from normal and myopathic muscle have been simulated. The non-linear least squares optimization procedure based on the Levenberg-Marquardt algorithm has been used to obtain a solution to the MUAP decomposition problem, i.e. the fibre distribution and positioning, and the fibre diameter. Two-dimensional and three-dimensional models for normal MUAPs consisting of 1, 4, 5, 10 and 50 fibres, respectively, were developed and decomposed for a small recording radius of up to 5 mm. Three-dimensional models have also been developed and decomposed for the myopathic cases (loss of fibres, variability in the diameter and reinnervation). In all the models investigated, satisfactory results have been obtained within a rather limited number of iterations.

These findings suggest that an "estimated" visualization of the structural organisation of the motor unit can be derived. This visualization gives the neurophysiologist information about the number of fibres, the fibre distribution and positioning, and the fibre diameter for the simulated cases investigated. A main limitation of the proposed method is that the initial values given should be very close to the target values for convergence to be achieved. Future work will investigate the usefulness of the proposed new method on MUAPs recorded from normal subjects and from subjects suffering from neuromuscular disorders.

Fig. 4 Reinnervation of fibres. Simulation and decomposition of 75 fibres for a test radius of 5 mm. Twenty five fibres were reinnervated in the right plane of the MU territory. No. of iterations = 5295 (simulated blue '.', forecasted red '-'). The top graph shows the positioning of the fibres (simulated blue 'o', forecasted red '+').

ACKNOWLEDGMENT

This study was partly supported via the Cyprus-Slovenia CY-SLOV/0605 APASHM project, funded by the Research Promotion Foundation of Cyprus and the corresponding Slovenian research authority. The authors would also like to thank Prof. Dimitrova and Prof. Dimitrov from the Bulgarian Academy of Sciences for their help at the early stages of her work in selecting and implementing the SFEAP model.

REFERENCES
1. Dimitrova NA (1974) Model of the extracellular potential field of a single striated muscle fibre. Electromyogr Clin Neurophysiol 14:53-66
2. Dimitrov GV (1987) Changes in the extracellular potentials produced by unmyelinated nerve fibre resulting from alterations in the propagation velocity or the duration of the action potential. Electromyogr Clin Neurophysiol 27:243-249
3. Graupe D, Vern B, Gruener G, et al. (1988) Decomposition of surface EMG signals into single fiber action potentials by means of neural networks. IEEE Intern Symposium on Circuits and Systems 10:1008-1011
4. LeFever RS, Xenakis AP, De Luca CJ (1982) A procedure for decomposing the myoelectric signal into its constituent action potentials – Part II: Execution and test for accuracy. IEEE Trans Biomed Eng BME-29(3):158-164
5. LeFever RS, De Luca CJ (1982) A procedure for decomposing the myoelectric signal into its constituent action potentials – Part I: Technique, theory, and implementation. IEEE Trans Biomed Eng BME-29(3):149-157
6. McGill KC, Dorfman LJ (1985) Automatic decomposition electromyography (ADEMG): Validation and normative data in brachial biceps. Electroencephalogr Clin Neurophysiol 61:453-461
7. Schoonhoven R, Stegeman DF, van Oosterom A, et al. (1988) The inverse problem in electromyography – I: Conceptual basis and mathematical formulation. IEEE Trans Biomed Eng 35:769-776
8. Nandedkar SD, Sanders DB, Stalberg EV (1988) EMG of reinnervated motor units: a simulation study. Electroencephalogr Clin Neurophysiol 70:177-184
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Model Based Decomposition of MUAPs into Their Constituent SFEAPs

9. Dubowitz V, Brooke M (1973) Muscle Biopsy: A Modern Approach. W.B. Saunders, Philadelphia
10. Aquilonius S, Askmark H, Gilberg P, et al. (1984) Topographical localization of motor endplates in cryosections of whole human muscles. Muscle Nerve 7:287-293
11. Gill PE, Murray W, Wright MH (1981) The Levenberg-Marquardt Method. §4.7.3 in Practical Optimization. Academic Press, London, pp 136-137

Address of the corresponding author:

Author: Constantinos Pattichis
Institute: Department of Computer Science, University of Cyprus
Street: Kallipoleos 75, P.O. Box 20537
City: CY-1683 Nicosia
Country: Cyprus
Email: [email protected]
Non-invasive estimation of the degree of motor unit synchronization in the biceps brachii muscle

A. Holobar1, M. Gazzoni1, D. Farina2, D. Zazula3, and R. Merletti1

1 LISiN, Politecnico di Torino/Department of Electronics, Torino, Italy
2 Center for Sensory-Motor Interaction, Aalborg University/Department of Health Science and Technology, Aalborg, Denmark
3 University of Maribor/Faculty of Electrical Engineering and Computer Science, Maribor, Slovenia
Abstract— This paper addresses estimation of motor unit (MU) synchronization by means of surface electromyogram decomposition. First, the so-called convolution kernel compensation (CKC) method for identifying the discharge patterns of individual motor units is briefly described. The method builds on the independent component analysis framework and is hence highly robust to superimpositions of MU action potentials (APs). In this study, the method was tested with synthetic signals in the presence of MU synchronization, when the MU discharges are no longer strictly independent. The level of MU synchronization was measured by the cross-interval histograms of the reconstructed MU discharge patterns. Two synchronization indices were computed: the average number of synchronized MU pairs per contraction and the average percentage of synchronized APs per synchronized MU pair. Results revealed that both indices vary substantially with the properties of the reconstructed MUs, but can nevertheless be used as reliable estimators of MU synchronization. The method was then applied to experimental electromyographic signals acquired during low-force contractions of the dominant biceps brachii muscle. Up to 14 concurrently active MUs were identified. MU synchronization was observed in approximately 50 % of MU pairs. It is thus concluded that MU synchronization can be assessed by decomposing the surface electromyogram.

Keywords— surface electromyogram, motor unit synchronization, convolution kernel compensation, cross-interval histogram, biceps brachii muscle.
I. INTRODUCTION

The ability of motor units (MUs) to discharge asynchronously has been commonly recognized as crucial for producing smooth muscle forces. Moderate MU synchronization, however, has been experimentally confirmed by numerous studies, mainly based on intramuscular EMG [5, 6]. It has also been demonstrated by indirect measures on the surface EMG, but these measures are often affected by the volume conductor properties [8, 9]. Recently, high-density (HD) surface EMG electrode grids and multi-channel amplifiers have become available. This technology allows detection of hundreds of surface EMG signals from closely spaced electrodes over a single muscle [1]. Different decomposition techniques for identification of individual motor units with these recording systems have also been proposed. Holobar and Zazula [2] presented the Convolution Kernel Compensation (CKC) decomposition approach. This technique has been shown to extract almost complete MU discharge patterns of relatively large samples of active MUs from high-density surface EMG. Its performance, however, was measured only in the case of isometric, low-force contractions with uncorrelated MU discharge patterns [2], as assumed in the derivation of the CKC. Therefore, the aim of this study was to assess the capability of the CKC method to estimate the level of MU synchronization from the decomposition of the surface EMG.

II. MATERIALS AND METHODS

A. Synthetic surface EMG signals

The simulations were based on a model of recruitment of a population of motor units [3] and a volume conductor model [4]. The volume conductor was cylindrical, with bone, muscle, subcutaneous, and skin tissues. The simulations comprised four main steps: 1) determining the recruitment and discharge times of a population of motor neurons, 2) synchronizing the generated discharge patterns, 3) generating MU action potentials (MUAPs), and 4) generating MUAP trains and summing them into surface EMG signals.

Motor unit action potentials. The cylindrical volume conductor model consisted of bone, muscle, fat, and skin layers. The simulation of the intracellular action potential was based on the analytical description of Rosenfalck [7]. A set of 200 MUs was simulated in a region of the muscle with a 200 mm2 cross-section. The distribution of the MU locations was random, and the fibers of a MU were randomly scattered in the MU territory, which was circular, with a density of 20 fibers/mm2, and intermingled with fibers belonging to other MUs. Innervation numbers ranged from 25 to 2500 with an exponential distribution. The surface-recorded MUAP comprised the sum of the action potentials of the muscle fibers belonging to the MU. The MUs had muscle fiber conduction velocities of 4 ± 0.3 m/s.
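Step 4 above, generating MUAP trains and summing them into surface EMG, amounts to convolving each MU's discharge impulse train with its MUAP waveform. A minimal sketch follows; the waveform, rates, and sampling rate are toy values, not the model's:

```python
import numpy as np

fs = 2048                # sampling rate (Hz); illustrative, not the paper's
T = 2.0                  # seconds of signal
n = int(fs * T)
rng = np.random.default_rng(0)

def muap_shape(fs, dur_ms=8.0):
    """Toy biphasic MUAP waveform (a stand-in for the volume-conductor model)."""
    t = np.linspace(-1, 1, int(fs * dur_ms / 1000))
    return t * np.exp(-4 * t**2)

emg = np.zeros(n)
for mu in range(5):                                   # 5 toy motor units
    rate = 8 + 2 * mu                                 # discharge rate (pps)
    isi = rng.normal(fs / rate, 0.2 * fs / rate, 50)  # 20 % ISI variability, as in the text
    times = np.cumsum(isi).astype(int)
    train = np.zeros(n)
    train[times[times < n]] = 1.0                     # impulse train of discharges
    emg += np.convolve(train, muap_shape(fs), mode='same')

emg += rng.normal(0, 0.01, n)                         # additive zero-mean Gaussian noise
print(emg.shape)
```

This sum-of-convolutions structure is exactly the data model that convolutive decomposition methods such as CKC invert.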
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 109–113, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The recording system was a grid of 13 × 5 electrodes of circular shape (1-mm radius) with 5-mm inter-electrode distance in both directions. The center of the grid was located 15 mm from the innervation zone. A bipolar recording was simulated for each longitudinal pair of consecutive electrodes, leading to 60 simulated signals. The signals were additionally corrupted by additive zero-mean Gaussian noise with 20-dB SNR.

Discharge pattern. A contraction at 5 % excitation level (constant over time) was simulated. The distribution of recruitment thresholds for the motor neurons was represented as an exponential function with many low-threshold neurons and progressively fewer high-threshold neurons. With this choice, the number of active MUs at 5 % excitation was 73 out of 200. The discharge rate at recruitment and the peak discharge rate were 8 and 35 pulses per second (pps), respectively. The last MU was recruited at 80 % of maximal excitation. Variability in discharge rate was modelled as a Gaussian random process with a coefficient of variation of 20 %. Independently generated discharges of each MU were pair-wise synchronized with the nearest discharges of other MUs. Synchronization was performed in such a way that every MU served as a reference MU to which the discharges of SMU remaining MUs were adjusted. First, for each reference MU, SMU MUs were randomly selected to be synchronized. Special care was taken with already assigned synchronization pairs (if MU n was chosen to be synchronized with MU m, then MU m was also chosen to be synchronized with MU n). For each selected MU, first-order recurrence times (i.e., the time distances of the reference MU discharges to the nearest discharges of the selected MU) were measured. Finally, the SAP discharges with the smallest recurrence times were chosen for each selected MU and aligned to the nearest discharges of the reference MU.
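The alignment procedure just described can be sketched as follows. The `synchronize` helper and all numbers are illustrative, not code or parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def synchronize(ref, other, s_ap=0.10, jitter_ms=1.5):
    """Align the fraction s_ap of `other`'s discharges with the smallest
    first-order recurrence times to the nearest `ref` discharges, adding a
    zero-mean Gaussian delay (std 1.5 ms) as in the simulation above."""
    other = np.array(other, dtype=float)
    # nearest reference discharge for every discharge of the other MU
    nearest = ref[np.argmin(np.abs(ref[:, None] - other[None, :]), axis=0)]
    rec = np.abs(other - nearest)              # first-order recurrence times
    k = int(round(s_ap * len(other)))          # number of discharges to align
    pick = np.argsort(rec)[:k]                 # the k smallest recurrence times
    other[pick] = nearest[pick] + rng.normal(0.0, jitter_ms / 1000, size=k)
    return np.sort(other)

ref = np.cumsum(rng.normal(0.125, 0.025, 100))    # ~8 pps reference train (s)
other = np.cumsum(rng.normal(0.125, 0.025, 100))  # train to be synchronized
sync = synchronize(ref, other, s_ap=0.20)         # SAP = 20 %
print(len(sync))
```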
In order to simulate physiological variability of MU synchrony, the synchronized MU discharges were not perfectly aligned. Instead, a normally distributed time delay with a mean of 0 ms and a standard deviation of 1.5 ms was introduced [5]. The portion of synchronized MU discharges SAP ranged from 0 % to 20 % (in steps of 5 %), while SMU was set to 25 %, 50 % and 75 % of all the active MUs, respectively. Ten simulation runs were performed for each SAP and SMU pair.

B. Experimental protocol

Five young healthy male subjects (age: 27.0 ± 2.1 yr; stature: 1.79 ± 0.09 m; body mass: 73 ± 8 kg) participated in the experiment. The local ethics committee approved the study and all subjects signed an informed consent form. Surface EMG signals were detected with a matrix of 61 electrodes arranged in 5 columns and 13 lines (without the four corner electrodes). The inter-electrode distance was 5 mm. The electrode pins (diameter 1.27 mm; RS 261-5070, Milan, Italy) were telescopic to adapt to the skin surface. The matrix was connected to four 16-channel EMG amplifiers (LISiN; Ottino Bioengineering, Rivarolo (TO), Italy). Recordings were performed in single differential configuration during 5-min-long isometric constant-force contractions (at 5 % and 10 % of the maximal voluntary contraction force, MVC) of the dominant biceps brachii muscle (the dominant arm of the subject was placed in an isometric brace at 120°). The EMG signals were amplified, band-pass filtered (3-dB bandwidth, 10 Hz–500 Hz), sampled at 2500 Hz, and converted to digital form by a 12-bit A/D converter. The force signal was measured by a torque sensor (mod. TR11, CCT Transducers, Torino, Italy), provided as feedback to the subject on an oscilloscope, and recorded for further analysis. For technical reasons, the EMG recordings were divided into 30-s-long epochs.

C. Signal decomposition and analysis

The CKC method [2] was applied for identification of the MU discharge patterns. The method fully automates the reconstruction of MU discharge sequences and is highly resistant to noise and MUAP superimpositions. The level of synchronization between the different pairs of identified MUs was analyzed by the cross-interval histogram [5], which measures the distribution of the first-order forward and backward recurrence times (i.e., the time distances of the reference MU discharge to the nearest forward and backward discharge of the other MU). In the case of no interdependence between the two MUs, the cross-interval histogram should appear as a flat line, reflecting a uniform probability density function. This would imply that the identified recurrence times have an equal probability of falling into any of the histogram bins:

p = BW / HW,   (1)
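The first-order forward and backward recurrence times underlying the cross-interval histogram can be computed directly from two discharge-time sequences. A sketch, with illustrative discharge times in seconds (not data from the study):

```python
import numpy as np

def cross_interval_histogram(ref, other, bw=0.002):
    """First-order forward/backward recurrence times of `other`'s discharges
    relative to each reference discharge, binned with width bw (2 ms)."""
    ref, other = np.asarray(ref), np.asarray(other)
    idx = np.searchsorted(other, ref)  # insertion points of ref into other
    # distance to the nearest later / earlier discharge of the other MU
    fwd = np.where(idx < len(other), other[np.minimum(idx, len(other) - 1)] - ref, np.inf)
    bwd = np.where(idx > 0, ref - other[np.maximum(idx - 1, 0)], np.inf)
    rts = np.concatenate([fwd[np.isfinite(fwd)], -bwd[np.isfinite(bwd)]])
    m = np.abs(rts).max()
    counts, edges = np.histogram(rts, bins=np.arange(-m - bw, m + 2 * bw, bw))
    return counts, edges

ref = np.array([0.10, 0.22, 0.35, 0.47])
other = np.array([0.11, 0.20, 0.36, 0.49])
counts, edges = cross_interval_histogram(ref, other)
print(counts.sum())  # 7: four forward and three backward recurrence times
```

A flat histogram then corresponds to the uniform bin probability p = BW / HW of Eq. (1).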
where p is the probability of occurrence in a given histogram bin, BW is the width of a histogram bin (2 ms in our case), and HW is the histogram width (i.e., the time period spanned by the histogram). In our case, the histogram was limited to the interval [-mRT, +mRT], where mRT is the maximum recurrence time of the reference MU. By modelling the occurrence of a recurrence time in a selected histogram bin as a Bernoulli random event, the probability of having n or more occurrences in a bin may be expressed as follows [5]:

P(n) = Σ_{i=n..Nr} C(Nr, i) p^i (1 − p)^(Nr−i),   (2)

where Nr stands for the total number of reference MU discharges. Testing for the existence of synchronisation can hence be performed by calculating the confidence level with which it can be established that the detected peak in the cross-interval histogram is not a random event. In this study the confidence level was set at 99 %. For each statistically significant peak, the following parameters were calculated [5]:

• peak width, defined as Np · BW, where Np is the number of neighbouring bins exceeding the 99 % confidence level;
• peak location: the time location of the median of all the occurrences in the peak;
• number of consecutive synchronous MU discharges;
• MU synchronisation index ŜMU, defined as the ratio between the number of MU pairs exhibiting synchronisation and the total number of tested MU pairs; and
• pulse synchronisation index, defined as

ŜAP = [2 · (peak area − mean area) / (total area)] · 100 %,   (3)

where peak area stands for the total number of occurrences in the peak, mean area is the number of occurrences in the peak that are under the mean histogram level, and total area is the number of all occurrences in the histogram. A typical cross-interval histogram demonstrating MU synchronisation is exemplified in the right plot of Fig. 3.

III. RESULTS

A. Synthetic signals

On average, 5 ± 2 MUs were reconstructed from each set of simulated signals. The sensitivity and specificity of the reconstructed MU discharge patterns are depicted in Fig. 1. The level of synchronization was quantified by the cross-interval histogram analysis. The observed synchronization peaks were centred on the zero-delay position, with the average peak width ranging from 2 ms at SAP = 5 % to 6 ms at SAP = 20 %. The measured MU synchronisation index ŜMU and AP synchronisation index ŜAP are depicted in Fig. 2.

Fig. 1 Number of MUs identified from synthetic SEMG signals (top panel), their sensitivity (center panel) and specificity (bottom panel) vs. the simulated levels of MU synchronization SMU and AP synchronization SAP.

B. Experimental signals

On average, 8.8 ± 4.4 (5 % MVC contraction) and 9.6 ± 3.8 (10 % MVC contraction) concurrently active MUs were identified from the acquired signals (Table 1). This allowed for the comparison of 45 concurrently active MU pairs, on average. Statistically significant synchronization peaks were detected in 50 % of the investigated MU pairs. No statistically significant differences in the measured synchronization parameters were observed among different 30-s-long signal epochs or among different contraction levels (Kruskal-Wallis ANOVA test, p > 0.05). Peak width ranged from 1 ms to 5 ms, with a mean value of 1.5 ms. The majority of the peaks were within 6 ms of the central, zero-delay position. Although significant, the number of synchronized MU discharges was generally small, varying between 3 % and 9 % of all MU discharges (Fig. 4, left bottom panel).

Fig. 2 Estimated MU synchronization ŜMU (top panel) and estimated AP synchronization ŜAP (bottom panel) vs. the simulated levels of MU synchronization SMU and AP synchronization SAP.
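The peak-significance computation in Eq. (2) is a binomial tail probability, which SciPy's survival function gives directly. A sketch with illustrative numbers (only the 2-ms bin width and 99 % confidence level come from the text):

```python
from scipy.stats import binom

Nr = 500            # total number of reference MU discharges (illustrative)
p = 0.002 / 0.100   # BW / HW: 2-ms bins over an assumed 100-ms histogram span
n = 25              # occurrences observed in one bin (illustrative)

# Eq. (2): P(n) = sum_{i=n}^{Nr} C(Nr, i) p^i (1-p)^(Nr-i)
P = binom.sf(n - 1, Nr, p)   # survival function: P(X >= n)
significant = P < 0.01       # 99 % confidence level, as in the text
print(P, significant)
```

Here the expected count per bin is Nr·p = 10, so 25 occurrences in a single bin is a highly significant peak.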
IV. DISCUSSION

The results on synthetic signals demonstrated that the CKC method is capable of decomposing surface EMG signals in the presence of MU synchronisation. No significant impact of the number of synchronized MUs or of the portion of synchronized APs on the decomposition was noticed (Fig. 1). On the other hand, high inter-trial variance of both synchronisation indices ŜMU and ŜAP was observed. This is due to the fact that only up to 10 superficial and large MUs were identified, while all other MUs were treated as background noise. Both performance indices
measure the level of synchronisation in the reconstructed subset of MUs only and do not always yield representative results. In addition, the measured level of MU synchronization ŜMU also depends on the level of simulated AP synchronization. Nevertheless, both the ŜMU and ŜAP indices can still be considered reliable indicators of MU synchronisation, as they both consistently disclose the cases with no MU synchronisation present.
Table 1 Number of MUs (mean ± std. dev.), their MU synchronization index ŜMU and AP synchronization index ŜAP, obtained from experimental SEMG recordings and averaged over all epochs.

Contraction level [% MVC]            A        B        C        D        E
5 %     No. MUs                      7        14       13       6        4
        ŜMU [%]                      53 ± 3   46 ± 5   44 ± 7   47 ± 6   51 ± 8
        ŜAP [%]                      5 ± 2    4 ± 1    4 ± 2    5 ± 2    4 ± 2
10 %    No. MUs                      8        14       13       8        5
        ŜMU [%]                      46 ± 6   54 ± 7   41 ± 3   48 ± 7   46 ± 4
        ŜAP [%]                      5 ± 3    5 ± 3    4 ± 2    5 ± 2    5 ± 3

In the case of experimental signals, moderate MU synchronisation was observed (Table 1). The confidence level was set to 99 %, implying that the probability of observing a histogram peak due to random chance was less than 1 %. Synchronisation peaks were detected in 50 % of all the tested MU pairs. Therefore, the identified synchronisation peaks were considered reliable indicators of true MU synchronisation. Another relatively strong piece of evidence of MU synchronisation is presented in Fig. 4, where the typical distribution of peak locations in the histogram is depicted. If the detected peaks were due to random chance, their locations would be uniformly distributed across the histogram. This is clearly not the case.

Fig. 3 Cross-interval histogram without (left panel) and with (right panel) a statistically significant synchronization peak. Light and dark gray dashed lines represent the 95 % and 99 % confidence levels, respectively. Nr stands for the number of MU discharges identified from real surface EMG signals (Subject B, 5 % MVC contraction, epoch 1).

V. CONCLUSIONS

The results of this study demonstrated that direct assessment of MU synchronization from surface EMG signals is possible. By identifying up to 14 concurrently active MUs per contraction in experimental conditions, synchronization in up to 91 MU pairs was studied. To our knowledge, no other study exists which non-invasively compares such a large number of concurrently active MUs.

ACKNOWLEDGMENT
This work was supported by the European Commission within the Sixth Framework (Project Cybermans) and Marie Curie Intra-European Fellowships Action (DE MUSE, Contract No. 023537).
Fig. 4 Distribution of synchronization peak widths (upper left panel), locations (upper right panel), AP synchronization index (lower left panel) and number of consecutive synchronized MU discharges (lower right panel), averaged over all signal epochs (Subject B, 5 % MVC contraction).

REFERENCES

1. Merletti R, Parker PA (2004) Electromyography: Physiology, Engineering, and Non-invasive Applications. IEEE Press and John Wiley & Sons
2. Holobar A, Zazula D (2004) Correlation-based decomposition of surface EMG signals at low contraction forces. Med Biol Eng Comput 42:487-496
3. Fuglevand AJ, Winter DA, Patla AE (1993) Models of recruitment and rate coding organization in motor unit pools. J Neurophysiol 70:2470-2488
4. Farina D, Merletti R (2001) A novel approach for precise simulation of the EMG signals detected by surface electrodes. IEEE Trans Biomed Eng 48:637-646
5. De Luca CJ, Roy AM, Erim Z (1993) Synchronization of motor-unit firings in several human muscles. J Neurophysiol 70:2010-2023
6. Nordstrom MA, Fuglevand AJ, Enoka RM (1992) Estimating the strength of common input to human motoneurons from the cross-correlogram. J Physiol 453:547-574
7. Rosenfalck P (1969) Intra- and extracellular potential fields of active nerve and muscle fibers. Acta Physiol Scand Suppl 47:239-246
8. Keenan KG, Farina D, Merletti R, Enoka RM (2006) Amplitude cancellation reduces the size of motor unit potentials averaged from the surface EMG. J Appl Physiol 100:1928-1937
9. Keenan KG, Farina D, Meyer F, Merletti R, Enoka RM (in press) Sensitivity of the cross-correlation between simulated surface EMGs for two muscles to detect motor unit synchronization. J Appl Physiol

Address of the corresponding author:

Author: Ales Holobar
Institute: LISiN, Politecnico di Torino
Street: Via Cavalli 22/H
City: Torino
Country: Italy
Email: [email protected]
Abdominal EHG on a 4 by 4 grid: mapping and presenting the propagation of uterine contractions

B. Karlsson1, J. Terrien1, V. Gudmundsson2, T. Steingrimsdottir2 and C. Marque3

1 Reykjavik University/Department of Biomedical Engineering, Reykjavik, Iceland
2 Landspitali University Hospital/Dept. Ob-Gyn, Reykjavik, Iceland
3 UTC/Department of Biomechanics and Biomedical Engineering, Compiegne, France

Abstract— Numerous studies have observed and analyzed the external electrical activity of the uterus associated with contractions and labor. Most of these studies have involved the use of only 3 to 5 electrodes, and little effort has been made to investigate the electrical activity concurrently at different locations. In this paper we present the results from measurements of contractions in labor using a 16-electrode grid. We tried out various methods of presenting and analyzing these data and found this to be a non-trivial task. Here we present both an animation of the evolution of the electric potential and a temporal correlation presentation. The results from a limited sample are in many ways surprising and may provide new insight into possible mechanisms underlying uterine contractions.

Keywords— EHG, labor, uterus, mapping, propagation
I. INTRODUCTION

Premature labor is one of the most important public health problems in Europe and other developed countries, as it represents nearly 7 % of all births. It is the main cause of morbidity and mortality of newborns. Early detection of preterm labor is important for its prevention, for example in ensuring tocolytic drug efficacy. Continuous efforts are made to find new biochemical or biophysical markers of preterm labor threat [1]. One of the most promising is the analysis of the electrical activity of the uterus. The uterine electromyogram recorded externally in women, the so-called electrohysterogram (EHG), has been shown to be representative of uterine contractility. The analysis of such signals may allow the prediction of a preterm labor threat as early as 28 weeks of gestation (WG) [2-4]. However, the physiological phenomena underlying preterm labor remain poorly understood. It is well known that uterine contractility depends on the excitability of uterine myocytes but also on the capability of local electrical activity to propagate to the whole uterus [5]. These two aspects of the uterine contraction mechanism, excitability and propagation, both influence the spectral content of the EHG. The EHG is mainly composed of two frequency components, traditionally referred to as FWL (Fast Wave Low) and FWH (Fast Wave High) [5]. These frequency components may be
related to the propagation and the excitability of the uterus, respectively. Recent studies on the early prediction of preterm labor have focused on the analysis of FWH, or simply the high frequencies of the EHG. If the above hypothesis of FWH being primarily related to the local excitability of the uterus is correct, the mechanisms of coordination and organization of the uterus as a whole have still not been fully understood or exploited in predicting labor. The propagation of the electrical activity of the uterus has been studied both at the cellular level and on the organ as a whole. Propagation at the cellular level shows complex activation pathways with the possible presence of multiple fronts or re-entry, as in the heart [6]. Investigation of propagation at the organ level has only been done using 3 to 5 recording electrodes. The propagation speed observed is very dependent on the species, but the average estimated speed was typically greater than 2 cm/s, as reported by [7, 8] for example. Planque describes a speed, calculated on abdominal EHG recorded in women, of 2.18 cm/s, with propagation in a descending direction in 87 % of cases [9]. Duchêne et al. also noticed a constant chronogram of the activation pattern during labor in monkeys [10]. The main method of EHG propagation analysis has so far been linear intercorrelation. In reported work, the intercorrelation coefficients calculated on the EHG envelope are usually good (~80 %), but the coefficients calculated on the temporal signals are much lower. This could indicate good group propagation (sequential activations of several uterine regions) but not a strict linear propagation between recording regions, as frequently observed in striated muscles. Other, more sophisticated analysis tools have been used but were unable to properly demonstrate linear propagation [9-11]. The aim of this paper is to study the propagation of the uterine electrical activity recorded in women during labor.
We present a specific recording methodology using 16 monopolar electrodes and the results of preliminary measurements. We show example recordings and group propagation analysis tools.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 139–143, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. MATERIALS AND METHODS

A. Instrumentation and experimental protocol

The measurements were performed using a 16-channel multi-purpose physiological signal recorder most commonly used for investigating sleep disorders (Embla A10). Reusable Ag/AgCl electrodes were used. The measurements were performed at the Landspitali University Hospital in Iceland using a protocol approved by the relevant ethics committee (VSN 02-0006-V2). The subjects were healthy women in the first stages of labor with uneventful singleton pregnancies. After obtaining informed consent, the skin was carefully prepared using an abrasive paste and an alcoholic solution. The sixteen electrodes were then placed on the abdominal wall according to Fig. 1 (inter-electrode distance: 2.1 cm). The third electrode column was always placed on the uterine median axis and the 10th-11th electrode pair on the middle of the uterus (fundus to symphysis). Reference electrodes were placed on each hip of the woman. The signal sampling rate was 200 Hz. The recording device has an anti-aliasing filter with a high cut-off frequency of 100 Hz. The tocodynamometer paper trace was digitized in order to facilitate the segmentation of the contractions. In this preliminary study, two women in spontaneous labor were enrolled, at 37 and 39 WG. The recording duration was approximately 1 hour in both cases; 23 and 21 contractions, respectively, were clearly identified in these recordings. In our study, we considered vertical bipolar signals (BPi) in order to increase the signal-to-noise ratio. Our signals thus form a rectangular matrix of size 3 × 4. All the EHG bursts presented a good signal-to-noise ratio on all of the bipolar channels. All the EHG bursts were segmented manually with the help of the tocodynamometer trace.
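Forming the 3 × 4 matrix of vertical bipolar signals from the 4 × 4 monopolar grid is a simple difference along the columns. A sketch, with random data standing in for the recordings:

```python
import numpy as np

fs = 200                                  # sampling rate used in the study (Hz)
n = fs * 60                               # one minute of (here: random) data
rng = np.random.default_rng(0)

mono = rng.normal(size=(4, 4, n))         # monopolar grid: 4 rows x 4 columns

# vertical bipolar derivation: difference of vertically adjacent electrodes
bp = mono[:-1, :, :] - mono[1:, :, :]     # -> 3 x 4 bipolar signals (BPi)
print(bp.shape)
```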
B. Signal processing and animation tools

After a manual segmentation of each contraction, the envelope of each bipolar EHG signal was calculated. The envelope was defined as the modulus of the analytic signal obtained by the Hilbert transform. The different envelopes were then filtered by a moving average filter.

Propagation animation: This involved representing the evolution of the envelope amplitude of each bipolar signal as a function of time. For each time instant n, the envelope amplitude was displayed on an arbitrary color scale. All amplitudes were normalized by the maximal amplitude obtained for each of the different channels. In order to obtain a smoother representation, the 3 × 4 amplitude matrix was interpolated to 9 × 13. To facilitate the interpretation of this representation, the gradient information (amplitude and angle) was superimposed.

Correlation analysis: The most common way to analyze the delay between two signals, x(n) and y(n), is the intercorrelation function Ø(k). This function presents a maximum at a value K0 corresponding to the delay T0 between the two signals. The position of the maximum gives information on the delay and also on the propagation direction. Three distinct situations can occur:

• if 0 < K0 < (N/2 + 1), then x(n) is in advance of y(n) by T0 = K0/Fs seconds;
• if (N/2) < K0 < N, then y(n) is in advance of x(n) by T0 = (N − K0)/Fs seconds;
• if K0 = 0, then x(n) and y(n) appear simultaneously;

where Fs is the sampling frequency and N the length of each signal. The intercorrelation function can be obtained via the Fourier transform (FT) of each signal. The FT of the intercorrelation function is:

Φ(f) = X(f) Y*(f)

where X(f) and Y(f) are the FTs of x(n) and y(n), respectively, and * indicates the complex conjugate. The intercorrelation function is then obtained by the inverse Fourier transform. Normalization of Ø(k) by the energy of the two signals guarantees that this function is bounded between 0 and 1. All delays were calculated with the envelope of BP8 as the reference signal x(n). The delay matrices were interpolated linearly in order to obtain a smoother representation.

Fig. 1: Electrode configuration on the woman's abdominal wall and position of the bipolar signals BPi.
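The envelope computation and the map interpolation described above can be sketched with SciPy. The toy signals below merely stand in for the 3 × 4 bipolar EHG recordings; only the 200-Hz sampling rate and the 3 × 4 → 9 × 13 interpolation come from the text:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import uniform_filter1d, zoom

fs = 200                                   # sampling rate (Hz), as in the study
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# toy stand-ins for the 3 x 4 bipolar EHG signals
ehg = rng.normal(size=(3, 4, t.size)) * np.sin(2 * np.pi * 0.3 * t)

# envelope = modulus of the analytic signal, then moving-average smoothing
env = np.abs(hilbert(ehg, axis=-1))
env = uniform_filter1d(env, size=fs, axis=-1)   # ~1-s moving average

# amplitude map at one instant, interpolated from 3 x 4 to 9 x 13 for display
amp = env[:, :, 1000]
amp_map = zoom(amp, (9 / 3, 13 / 4))
print(amp_map.shape)
```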
III. RESULTS

A. Envelope amplitude animation

The temporal animation of the envelope amplitude is observed through the interface presented in Fig. 2. This interface shows the tocodynamometer trace, the mean EHG envelope, and the envelope amplitude for each channel spatially. These animated maps of uterine electrical activity show a surprising amount of structure. At each location (electrode pair), several bursts of activity can be observed at different times in each contraction as the potential sweeps across the uterus. The animations are very complex to analyze. We have, however, been able to notice some particular situations. It is possible at times to observe ascendant activation patterns, while for the majority of the contractions the activation pattern is descendant. In this situation, the uterine activity begins at the lower electrodes, or those situated on one side, and then propagates to the other electrodes. Several origins of the activity could often be observed (Fig. 3). Some of the individual envelopes do not present their maximal value at the maximal amplitude of the mean EHG envelope (Fig. 4). The maxima of the EHG envelopes are usually observed close to the maximum of the tocodynamometric trace but are not synchronized with each other. This indicates that, even in labor, a delay between channels is observed. Moreover, rotating activation patterns have been noticed during the same contraction. For one contraction, the presence of a pacemaker-like node that apparently initiated the whole contraction was observed, in that the electrical activity began on one EHG channel and then propagated to the other channels. Several animations can be found at http://www.ru.is/brynjar/ehg.

Fig. 3: Example of the presence of several origins of the activity and an ascendant activation pattern. The gradient information is indicated by arrows.

Fig. 4: Envelope amplitudes obtained at the maximal value of the mean envelope. The gradient information is indicated by arrows.

B. Correlation analysis
Fig. 2: Visualization interface of the evolution of the envelope amplitude of each EHG channel (BP1 to BP12). The upper trace represents the normalized tocodynamometer trace (Toco.) and the normalized mean envelope (MEnv.) calculated on each channel (x axis in seconds).

The correlation analysis gives information on the global delay between all EHG channels during a contraction. The quality of the correlation between channels was also evaluated by calculating the correlation coefficients. This analysis allows the presence of pacemaker-like activity to be revealed more clearly. A point of origin can easily be seen in Fig. 5, where an isolated area with a negative delay is observed. The corresponding correlation coefficients are relatively high (> 75 %), indicating a true physiological phenomenon.
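The global delay between a pair of EHG envelopes can be estimated as the lag that maximizes their cross-correlation, with the normalized correlation coefficient at that lag serving as a quality check. A minimal sketch of this idea on synthetic bursts (our illustration only; the paper's exact implementation is not shown):

```python
import numpy as np

def xcorr_delay(ref, sig, fs):
    """Return (delay in seconds of `sig` relative to `ref`, normalized
    correlation coefficient at that lag)."""
    ref = ref - ref.mean()
    sig = sig - sig.mean()
    corr = np.correlate(sig, ref, mode="full")
    lags = np.arange(-len(ref) + 1, len(sig))   # lag of sig w.r.t. ref
    k = int(np.argmax(corr))
    coef = corr[k] / (np.linalg.norm(ref) * np.linalg.norm(sig))
    return lags[k] / fs, coef

# Two synthetic envelope bursts: the second lags the first by 2 s.
fs = 16.0
t = np.arange(0.0, 30.0, 1.0 / fs)
env_ref = np.exp(-((t - 10.0) ** 2) / 4.0)
env_lag = np.exp(-((t - 12.0) ** 2) / 4.0)
delay, coef = xcorr_delay(env_ref, env_lag, fs)
```

A positive delay means the channel lags the reference; repeating this for every channel against a reference channel yields a delay map of the kind shown in Fig. 5.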
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
B. Karlsson, J. Terrien, V. Gudmundsson, T. Steingrimsdottir and C. Marque
IV. DISCUSSION

Our analysis of the evolution of the envelope amplitude during contractions showed a non-linear propagation with complex activation patterns. Ascendant or rotating activation patterns were observed. It has been proposed that pacemaker cells are present preferentially near the uterine fundus [7]. It is, however, well known that any uterine cell can become a pacemaker cell; the electrical activity can thus be initiated anywhere on the uterus. The rotating pattern can be explained by a re-entry phenomenon, in which the electrical activity returns to its initiation site after a certain time. This interpretation is unfortunately only qualitative as yet. A more detailed characterization of the activation patterns has to be carried out in order to observe how they evolve as the uterus organizes itself from the quiet state that prevails during most of the pregnancy to the very active, coordinated state of the final stages of labor.

Intercorrelation analysis of the EHG envelopes is an example of how quantitative information can be extracted from this type of data. It indicates the global delays between the electrical bursts present on each recording channel. The quality of the parameter can also be checked through the correlation coefficients, providing a new way of identifying effective contractions. The study of the different delays obtained for one contraction clearly reveals the presence of pacemaker-like initiator cells under the electrodes. We have not observed a constant chronogram or activation pattern over successive contractions, as reported by [10]. However, in our study the inter-electrode distance is shorter and could thus reflect a more local propagation pattern. Other correlation analysis tools have to be evaluated in order to extend these preliminary results and possibly increase the accuracy of our analysis. For example, the linear correlation coefficients calculated on temporal EHG were very low [9]; non-linear correlation techniques may be more suitable for this signal. Another important perspective of this work is the confirmation or rejection of the hypothesis, stated by Devedeux et al. [5], of a link between FWL and propagation on the one hand, and between FWH and excitability on the other. Analysis of the correlation of the energy of each frequency component separately could answer this open question.

Fig. 5: Estimated delays (in seconds) obtained, for one contraction, by intercorrelation. A higher interpolation was used to increase readability.

V. CONCLUSIONS

In this paper, we present a recording device with 16 monopolar electrodes. The possibility of mapping the electrical activity of the uterus is likely to give new information on the propagation of activity in the organ. The animation of the spatial and temporal evolution of the energy of the 12 bipolar EHG channels permitted us to observe complex activation patterns. These animations and the use of EHG intercorrelation mapping may also show the presence of pacemaker activity or re-entry like those observed in the heart. Our methodology may eventually help to characterize various obstetrical situations and even provide tools useful in managing preterm labor.

REFERENCES
1. J. Terrien, C. Marque, and G. Germain, "What is the future of tocolysis?," Eur J Obstet Gynecol Reprod Biol, vol. 117 Suppl 1, pp. S10-4, 2004.
2. H. Leman, C. Marque, and J. Gondry, "Use of the electrohysterogram signal for characterization of contractions during pregnancy," IEEE Trans Biomed Eng, vol. 46, pp. 1222-9, 1999.
3. R. E. Garfield, H. Maul, L. Shi, W. Maner, C. Fittkow, G. Olsen, and G. R. Saade, "Methods and devices for the management of term and preterm labor," Ann N Y Acad Sci, vol. 943, pp. 203-24, 2001.
4. R. E. Garfield, H. Maul, W. Maner, C. Fittkow, G. Olson, L. Shi, and G. R. Saade, "Uterine electromyography and light-induced fluorescence in the management of term and preterm labor," J Soc Gynecol Investig, vol. 9, pp. 265-75, 2002.
5. D. Devedeux, C. Marque, S. Mansour, G. Germain, and J. Duchene, "Uterine electromyography: a critical review," Am J Obstet Gynecol, vol. 169, pp. 1636-53, 1993.
6. W. J. Lammers, "Circulating excitations and re-entry in the pregnant uterus," Pflugers Arch, vol. 433, pp. 287-93, 1997.
7. R. Caldeyro-Barcia and J. J. Poseiro, "Physiology of the uterine contraction," Clin. Obstet. Gynecol., vol. 3, pp. 386-408, 1960.
8. E. E. Daniel and S. A. Renner, "Effect of the placenta on the electrical activity of the cat uterus in vivo and in vitro," Am J Obstet Gynecol, vol. 80, pp. 229-44, 1960.
9. S. Planque, "Contribution a l'etude de la propagation des signaux electrohysterographiques," in Genie biomedical. Compiegne: Universite de technologie de Compiegne, 1990.
10. J. Duchene, C. Marque, and S. Planque, "Uterine EMG signal: Propagation analysis," presented at the Annual International Conference of the IEEE EMBS, 1990.
11. S. Mansour, "Etude de l'electromyographie uterine : caracterisation, propagation, modelisation des transferts," in Genie biomedical. Compiegne: Universite de technologie de Compiegne, 1993.
Author: Jeremy Terrien
Institute: School of Science and Engineering, R.U.
Street: Kringlan 1
City: 103 Reykjavik
Country: Iceland
Email: [email protected]
Detection of contractions during labour using the uterine electromyogram

D. Novak1, A. Macek-Lebar1, D. Rudel2 and T. Jarm1

1 University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Biocybernetics, Ljubljana, Slovenia
2 MKS Electronic Systems, Rozna dolina C. XVII/22b, 1000 Ljubljana
Abstract — This paper describes two simple algorithms for detection of uterine contractions during labour using the uterine electromyogram recorded from the abdominal surface. The location of a contraction is extracted from the signal’s energy using either an amplitude- or derivative-based algorithm. For our recordings, these algorithms managed to correctly locate the majority of contractions, with an average success rate of 87.4% for the derivative-based algorithm and 85.6% for the amplitude-based algorithm. Keywords — Uterus, contractions, electromyogram, signal processing.
I. INTRODUCTION Several methods for obtaining information concerning uterine contractions are in use today, but many of them have serious disadvantages. Intrauterine pressure catheters are invasive and can increase the risk of infection. Tocodynamometers are noninvasive, but fairly inaccurate [1]. The uterine electromyogram, on the other hand, is measured noninvasively and contains information about uterine activity. Not only does electrical activity increase as pregnancy progresses, changes also occur in the frequency domain. It has been proposed that observation of the EMG during labour may even allow prediction of preterm labour and abnormal birth [1,2]. We focused on a simpler task: creating an algorithm that can detect the contractions in a recording automatically. II. SIGNAL RECORDING AND PROCESSING The study involved 12 women undergoing normal labour. Intrauterine pressure (IUP) and the uterine electromyogram obtained from the abdominal surface in transverse and longitudinal directions were recorded during labour using the method described in [4]. The sampling frequency was 18.2 Hz. Five of the recordings were discarded due to major measurement errors. The initial parts of the recording (very early labour) and the parts close to birth were not analyzed at this stage. Two types of electromyographic activities for abdominal recording are observed: slow wave and fast wave. The slow wave's frequency content ranges from 0.014 Hz to 0.033
Hz, but it is generally obscured by many artifact components. The fast wave is easier to discern and is located between 0.1 Hz and 3 to 5 Hz [2,3], so the signal was filtered with a fourth-order Butterworth band-pass filter. We found that the filter's lower cutoff frequency strongly affected contraction detection, so several different values were tested. Setting the lower cutoff frequency to 0.1 Hz left too much of the low-frequency component in the signal, so the tested values were between 0.4 Hz and 2.5 Hz. The upper cutoff frequency of the band-pass filter was arbitrarily set at 4.5 Hz. It did not have a major effect as long as it was not set too low; lowering it from 4.5 Hz to 3.5 Hz did not cause a noticeable difference.

III. DETECTION OF CONTRACTIONS

We chose two methods of detecting contractions: detection of peaks in the signal's energy and detection of changes in the signal's mean or median frequency.

The signal's energy was computed on a certain interval as the sum of the squares of all signal values on that interval. The length of this interval strongly affected detection, so intervals of 0.2, 1, 2, 4 and 8 seconds were tested, and 1 second was eventually selected as the most suitable interval length. The centers of two consecutive intervals were separated by one second. We found that two contractions can vary in peak energy by a factor of as much as 10, so the signal energy was examined on a logarithmic scale for easier detection. The logarithm of the signal energy (log(EEMG)) was then smoothed using a fourth-order Butterworth low-pass filter (with different tested cutoff frequencies) for easier detection. All these steps are shown in Fig. 1.

The signal's mean and median frequencies were calculated on a certain interval using the power spectral density (PSD), which was estimated with the periodogram method.
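The steps just described can be sketched as follows; the cutoff values shown are among those reported in the text, while the function names and the use of the natural logarithm are our own choices:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 18.2  # sampling frequency of the recordings (Hz)

def bandpass(x, low=0.9, high=4.5, fs=FS):
    """Butterworth band-pass; the cutoffs shown are among the tested values."""
    sos = butter(4, [low / (fs / 2), high / (fs / 2)],
                 btype="band", output="sos")
    return sosfiltfilt(sos, x)

def log_energy(x, fs=FS, win_s=1.0):
    """Natural log of the signal energy (sum of squared samples) over
    consecutive 1 s intervals whose centres are one second apart."""
    w = int(win_s * fs)
    n = len(x) // w
    energy = np.array([np.sum(x[i * w:(i + 1) * w] ** 2) for i in range(n)])
    return np.log(energy + 1e-12)   # small offset guards against log(0)

# Synthetic example: a 1 Hz "contraction burst" between 20 s and 40 s
# riding on a low-amplitude background.
t = np.arange(0.0, 60.0, 1.0 / FS)
amp = np.where((t > 20.0) & (t < 40.0), 1.0, 0.1)
le = log_energy(bandpass(amp * np.sin(2.0 * np.pi * t)))
```

On this synthetic signal, the log-energy values inside the burst stand well above those of the background, which is exactly the contrast the detection algorithms exploit.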
The interval widths and the tested cutoff frequencies of the band-pass filter were the same as those used in calculation of signal energy. Subjective evaluation of both log(EEMG) and mean/median frequency showed that both could be used for detection of contractions. However, contractions seemed to
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 148–151, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
stand out more prominently in log(EEMG), so we decided to create a detection algorithm based on signal energy in the time domain. Since contractions appeared as peaks in the signal energy, two different criteria could be used for detection: the amplitude and the derivative of log(EEMG). Thus, we built two different algorithms, one based on the amplitude and the other on the derivative.

Fig. 1 Two uterine contractions. a – intrauterine pressure, b – raw EMG, c – energy of EMG, d – log(EEMG) before smoothing, e – log(EEMG) after smoothing

A. Derivative-based detection

The first derivative of log(EEMG) is expected to be strongly positive at the beginning of a contraction and strongly negative toward its end. It was calculated as the difference of two consecutive values of log(EEMG); we also calculated its absolute value. The threshold for the beginning of the contraction was set as the average value of the absolute derivative (over the entire signal) multiplied by a constant. Once the first derivative became larger than this value, the beginning of a contraction was registered. The algorithm then searched for the time when the value of the derivative became lower than the negative value of the threshold. This was the moment when the contraction began to »wear out«. Afterwards, the algorithm searched for the time when the absolute value of the derivative was once again smaller than the threshold. This was the moment when the contraction ended. For an example of a detected contraction, see Fig. 2.

Fig. 2 log(EEMG) as a function of time. The derivative-based algorithm locates the beginning of the contraction, the moment when the contraction begins to »wear out« and the end of the contraction (the 1st, 2nd and 3rd markings, respectively).

B. Amplitude-based detection

In log(EEMG), the peaks that occur during contractions are not all the same size; some are significantly higher than others. These differences can be greatly reduced by subtracting the moving average from log(EEMG). Intervals of several different lengths were tested for the calculation of the moving average. After subtracting the moving average, the algorithm calculated a threshold as a percentage of the maximum value of log(EEMG) over the entire signal. The beginning of a contraction was registered when log(EEMG) became larger than this threshold, and the contraction ended once log(EEMG) became lower than it again.

C. Error checking

After the algorithm extracted the locations of the contractions from the recording, it checked its own results for errors. The algorithm discarded contractions that were too short, joined two contraction intervals together if they occurred immediately one after another, and (for the derivative-based algorithm) discarded intervals on which the difference between the maximum and minimum value of log(EEMG) was too small.

D. Evaluation of the algorithm's performance

To determine the optimal settings for detection of contractions, the algorithm's results need to be evaluated. As contractions are easy to extract from the recording of intrauterine pressure, the best way to evaluate the algorithm's effectiveness is to compare the contractions found in the EMG with the contractions found in the IUP. To extract the locations of the contractions from the IUP, the signal was first smoothed with a Butterworth low-pass filter, and then a simple amplitude-based algorithm was used for detection. We considered a contraction to have been successfully detected if either 75% of the contraction interval from the IUP lay inside the contraction interval from the EMG or 75% of the interval from the EMG lay inside the interval from the IUP.
Additionally, if the interval from the EMG was more than twice as long as the interval from the IUP, the contraction was regarded as incorrectly detected.
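The 75% overlap criterion and the length check above amount to simple interval arithmetic; a sketch (representing contraction intervals as (start, end) pairs in seconds is our choice):

```python
def correctly_detected(iup, emg):
    """Apply the criteria above: a contraction counts as detected if at
    least 75% of the IUP interval lies inside the EMG interval, or vice
    versa, unless the EMG interval is more than twice as long as the IUP
    interval. Intervals are (start, end) pairs in seconds."""
    def overlap(a, b):
        # length of the intersection of two intervals (0 if disjoint)
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    if (emg[1] - emg[0]) > 2.0 * (iup[1] - iup[0]):
        return False
    ov = overlap(iup, emg)
    return (ov >= 0.75 * (iup[1] - iup[0])) or (ov >= 0.75 * (emg[1] - emg[0]))
```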
When comparing the results of our algorithm with the IUP, five parameters were defined:

• The actual number of contractions based on the IUP (N)
• True Positives (TP): correctly detected contractions
• False Negatives (FN): missed contractions
• False Positives (FP): incorrectly detected nonexistent contractions
• Double Positives (DP): two contractions detected as one

Examples of detection errors are shown in Fig. 3. Double positives were counted separately since they may be a natural phenomenon: a period of time when the uterus's electrical activity remained high even between contractions, blurring two contractions into one in the EMG. However, any triple positives were most likely the result of an ineffective detection algorithm and were counted as three false negatives. For evaluation of the algorithm's performance we defined an additional value, the relative error rate, as:

ERR = (FN + FP + DP) / N    (1)

In our experiments, when we searched for the optimal settings of the detection algorithm, we chose those that resulted in the lowest error rate for a particular recording. First, we experimented with each parameter to find a range of values for that parameter that resulted in an acceptable error rate. Then we selected a number of possible values from this range. The algorithms were tested with every possible combination of these values in order to determine the optimal settings. The possible values used for both algorithms were as follows:

• Lower cutoff frequency of the filter for the raw EMG: approximately [0.45, 0.675, 0.9, 1.125, 1.35, 1.575, 1.8, 2.025, 2.25] Hz
• Cutoff frequency of the low-pass filter used to smooth log(EEMG): approximately [0.025, 0.035] Hz

The derivative-based algorithm also used the following parameters:

• Threshold for the detection of the beginning of the contraction: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8] times the mean value of the absolute derivative
• Minimum difference between the maximum and minimum value of log(EEMG) on the interval of the contraction: [0.025, 0.05, 0.1, 0.15] times the maximum value of log(EEMG) in the entire recording

The amplitude-based algorithm also used the following parameters:

• Length of the interval used for calculation of the moving average of log(EEMG): [25, 50, 100, 200] seconds
• Threshold for the detection of the beginning of the contraction: [0.3, 0.34, 0.38, 0.42, 0.46, 0.50, 0.54, 0.58] times the maximum value of log(EEMG) in the entire recording
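Testing every combination of the listed values, scored with the relative error rate of Eq. (1), amounts to an exhaustive grid search. A sketch, where `evaluate` is a placeholder standing in for running a detector on one recording and counting its errors (the dummy score below is purely illustrative):

```python
import itertools

def err_rate(n, fn, fp, dp):
    """Relative error rate of Eq. (1): ERR = (FN + FP + DP) / N."""
    return (fn + fp + dp) / n

# Parameter values shared by both algorithms, as listed above.
low_cutoffs = [0.45, 0.675, 0.9, 1.125, 1.35, 1.575, 1.8, 2.025, 2.25]
smooth_cutoffs = [0.025, 0.035]
deriv_thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]

def evaluate(params):
    # Placeholder: a real implementation would run the detector on one
    # recording with these settings and return its ERR.  The dummy score
    # below is minimised at low = 0.9, thr = 0.4, purely for illustration.
    low, smooth, thr = params
    return abs(low - 0.9) + abs(thr - 0.4)

# Exhaustive search over all combinations, as done in the paper.
best = min(itertools.product(low_cutoffs, smooth_cutoffs, deriv_thresholds),
           key=evaluate)
```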
Since the characteristics of the uterine EMG change with time, the seven recordings available were split into shorter ones (approximately 2.3 hours each), giving us ten recordings from seven different people. The algorithms were tested separately on each of these recordings.

IV. RESULTS

When using the optimal settings for each recording, both algorithms successfully detected a majority of the contractions (see Table 1).

Fig. 3 Examples of detection errors when using the derivative-based algorithm. a – IUP, b – raw EMG, c – log(EEMG) with true positives and detection errors marked.

Table 1. Results using the best parameters for each recording (minimal ERR)

person-recording   derivative-based   amplitude-based
1-1                0.21               0.31
1-2                0.19               0.19
2-1                0.08               0.10
2-2                0.12               0.18
3-1                0.10               0.05
4-1                0.19               0.20
5-1                0.08               0.13
6-1                0.04               0.05
7-1                0.11               0.07
7-2                0.14               0.16
average            0.126              0.144
The optimal settings varied greatly from recording to recording. We attempted to determine a combination of values that would be effective for a majority of recordings; the most effective combination would be the one resulting in the lowest average error rate over all of the recordings. However, we were unable to find a combination of values that would provide an average ERR lower than 22.6% with either of the two algorithms.

V. DISCUSSION

A. Factors that affect the algorithm's effectiveness

Measurement errors: The effectiveness of an automated contraction detection algorithm depends strongly on the quality of the recordings. Although any signals with obvious measurement errors were discarded prior to testing of the algorithm, we did not check the recordings for minor errors, and even small measurement errors can cause a noticeable change in the EMG energy.

Physiological differences: Due to physiological differences, it is probably impossible to find fixed parameter values that would work effectively for all subjects. A possible solution would be to introduce a learning phase in which the algorithm determines the most effective settings for each subject.

B. Comparison of the amplitude-based and derivative-based algorithms

At this moment it is difficult to say whether one algorithm is superior, due to the relatively small number of recordings used in the evaluation. For the derivative-based algorithm, the detected contraction intervals were often very long, with one beginning almost at the same time as the previous one ended (see Fig. 4). The amplitude-based algorithm produced narrower intervals, but had a larger number of false negatives. A combination of the derivative- and amplitude-based approaches may be the best solution.

C. Conclusion

Our algorithm can successfully detect a large proportion of uterine contractions from the electromyogram using a relatively simple approach. Modifications could likely increase its usefulness even further. For instance, an
Fig. 4 A comparison of the derivative- and amplitude-based algorithm's results. a – IUP, b – raw EMG, c - log(EEMG) with the contractions detected by the derivative-based algorithm, d - log(EEMG) with the contractions detected by the amplitude-based algorithm.
adaptive algorithm could detect contractions »on-line« during labour. In situations where invasive methods cannot be used to detect contractions, the uterine EMG can be a useful tool.
REFERENCES
1. Garfield R.E. et al. (2002) Uterine Electromyography and Light-induced Fluorescence in the Management of Term and Preterm Labor. J Soc Gynecol Investig, vol. 9, no. 5, pp. 265-275.
2. Maner W., Garfield R.E. (2007) Identification of Human Term and Preterm Labor using Artificial Neural Networks on Uterine Electromyography Data. Ann Biomed Eng, vol. 35, no. 3, pp. 465-473.
3. Khalil M., Duchene J. (2000) Uterine EMG Analysis: A Dynamic Approach for Change Detection and Classification. IEEE Trans Biomed Eng, vol. 47, no. 6, pp. 748-756.
4. Pajntar M. et al. (1987) Electromyographic observations on the human cervix during labor. Am J Obstet Gynecol, vol. 156, no. 3, pp. 691-697.
Author: Alenka Macek-Lebar
Institute: Fakulteta za elektrotehniko
Street: Trzaska cesta 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Evaluating Uterine Electrohysterogram with Entropy

J. Vrhovec1,2, A. Macek Lebar2, D. Rudel1

1 MKS Electronic Systems, Rozna dolina C. XVII/22b, 1000 Ljubljana
2 University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia
Abstract— In this report, we evaluate the complexity of the uterine EHG during labor by estimating its entropy. Two methods for entropy evaluation were chosen: approximate entropy and sample entropy. Our observations, based on three labors randomly chosen from our database, show that sample entropy indicates the course of labor. The values of sample entropy are higher during the latent phase than during the active phase of labor; the complexity of the uterine EHG signal is reduced in the active phase. Sample entropy values are also reduced after an oxytocin dose.

Keywords— uterine electrohysterogram, approximate entropy, sample entropy.
I. INTRODUCTION
Physiological signals have a wide variety of forms. To describe them, traditional feature measures typically extract amplitude and frequency information, which makes the comparison of signals with different bandwidths difficult. When visually inspecting signals, one of the first impressions they give to the observer is their complexity. Some signals seem to vary more than others; some appear extremely random, while others seem to demonstrate a reappearance of certain patterns at various intervals. In medical research, signal variability or complexity has been correlated with physiological conditions. Direct assessment of signal complexity/variability thus offers certain advantages in clinical research.

The electrohysterogram (EHG) is a noninvasive method for recording the electrical activity of the uterine muscles from the abdominal surface. Over the years, different research groups have processed uterine EHG signals in different ways. The amplitude distribution, the power spectrum and its mean and/or median frequency of the uterine activity during the contraction have been calculated [1]. Analyses of contraction segments or bursts were made. In one analysis, the power density spectrum and the peak frequency of the power density spectrum were used [3]. The possibly nonlinear nature of the contraction segments or bursts was tested with different methods [2]. In this report, we evaluate the complexity of the uterine EHG during labor by estimating its entropy.

II. METHODS

Approximate entropy (ApEn) and sample entropy (SampEn) were chosen because they have been successfully applied to the analysis of biological signals such as heart rate, blood pressure, electrocardiography, electroencephalography and electromyography [5, 6]. It was shown that ApEn can predict the sudden infant death syndrome (SIDS) [5]. SampEn analysis successfully revealed episodes of neonatal sepsis [6].

A. Approximate entropy

ApEn was introduced as a quantification of complexity in sequences and time series data, initially motivated by applications to relatively short, noisy data sets [7, 8, 9]. Regular signals are expected to have low ApEn values [13], while complex ones take on higher ApEn values.

Given a signal {x(n)} = {x(1), x(2), …, x(N)}, where N is the total number of data points, the ApEn algorithm can be summarized as follows:

I. Form the N−m+1 vectors X(1) to X(N−m+1) defined by X(i) = [x(i), x(i+1), …, x(i+m−1)], i = 1, 2, …, N−m+1. The vector length m is known as the embedding dimension. These vectors represent m consecutive values of the signal, commencing with the i-th point.

II. Calculate the distance between X(i) and X(j), d[X(i), X(j)], as the maximum absolute difference between their respective scalar components:

d[X(i), X(j)] = max_{k=0,…,m−1} |x(i+k) − x(j+k)|    (2)

III. The probability of finding a vector X(j) within the distance r·SD of the template vector X(i) is estimated by:

C_r^m(i) = (1 / (N−m+1)) · Σ_{j=1}^{N−m+1} Θ(r·SD − d[X(i), X(j)])    (3)

where Θ is the Heaviside function (Θ(z) = 1 for z ≥ 0, Θ(z) = 0 for z < 0), SD is the standard deviation of the given signal x(n), and r is a tolerance window. Recommended values of r are between 0.1 and 0.25 [5, 6, 7].
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 144–147, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
IV. Take the natural logarithm of the conditional probabilities of finding a vector X(j) within the distance r·SD of the template vector X(i), and average over i:

φ^m(r) = (1 / (N−m+1)) · Σ_{i=1}^{N−m+1} ln(C_r^m(i))    (4)

V. Finally, the approximate entropy is defined as:

ApEn(m, r, N) = φ^m(r) − φ^{m+1}(r)    (5)

This expression can be interpreted as the average of the natural logarithms of the probabilities that vectors which are close in m points remain close when they are extended to m+1 points.

B. Sample entropy

SampEn is the negative natural logarithm of the probability that two sequences similar for m points remain similar at the next point, where self-matches are not included in calculating the probability [10, 11, 12]. Thus, a lower value of SampEn also indicates more self-similarity in the time series. SampEn is largely independent of record length and displays relative consistency under circumstances where ApEn does not [10]. In addition to eliminating self-matches, the SampEn algorithm is simpler than the ApEn algorithm, requiring about half as much time to calculate. Formally, given N data points from a time series {x(n)} = {x(1), x(2), …, x(N)}, SampEn is defined through the following steps:

1. Form the N−m+1 vectors X(1), …, X(N−m+1) defined by X(i) = [x(i), x(i+1), …, x(i+m−1)], for 1 ≤ i ≤ N−m+1. These vectors represent m consecutive values of the signal, commencing with the i-th point. Calculate the distance between X(i) and X(j), d[X(i), X(j)], as the maximum absolute difference between their respective scalar components:

d[X(i), X(j)] = max_{k=0,…,m−1} |x(i+k) − x(j+k)|    (6)

VI. For a given X(i), count the number of j (1 ≤ j ≤ N−m, j ≠ i) such that the distance between X(i) and X(j) is less than or equal to r·SD:

B_r^m(i) = (1 / (N−m−1)) · Σ_{j=1, j≠i}^{N−m} Θ(r·SD − d[X(i), X(j)])    (7)

VII. Calculate B_r^m as:

B_r^m = (1 / (N−m)) · Σ_{i=1}^{N−m} B_r^m(i)    (8)

VIII. Increase the dimension to m+1 and calculate A_r^m(i) in the same way, with the distances now taken between vectors of length m+1:

A_r^m(i) = (1 / (N−m−1)) · Σ_{j=1, j≠i}^{N−m} Θ(r·SD − d[X(i), X(j)])    (9)

IX. Calculate A_r^m as:

A_r^m = (1 / (N−m)) · Σ_{i=1}^{N−m} A_r^m(i)    (10)

X. Finally, the sample entropy is defined as:

SampEn(m, r, N) = −ln(A_r^m / B_r^m)    (11)
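Steps 1 and VI–X translate almost directly into code. A sketch (the vectorization details are ours; the common normalizations of Eqs. (7)–(10) cancel in the ratio of Eq. (11), so raw match counts suffice):

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy with the Chebyshev distance of Eq. (6), tolerance
    r*SD, and self-matches excluded, following Eqs. (6)-(11)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def match_count(dim):
        # Use n - m templates for both dimensions, so the same vectors
        # are compared at length m and m+1; the identical normalizations
        # of Eqs. (7) and (9) then cancel in Eq. (11).
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(d <= tol)) - 1   # "-1" drops the self-match
        return count

    b = match_count(m)        # pairs similar over m points
    a = match_count(m + 1)    # pairs still similar over m+1 points
    return -np.log(a / b)

# A regular signal should give low SampEn, white noise a higher value.
rng = np.random.default_rng(0)
s_regular = sampen(np.sin(np.linspace(0.0, 20.0 * np.pi, 600)))
s_noise = sampen(rng.standard_normal(600))
```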
C. Uterine EHG

The uterine EHG measuring protocol is described in detail in the article "Electromyographic observations on the human cervix during labor" [14], so only its basic features are given here. In this report we studied three different labors, randomly chosen from the database [14]. The EHG activity of the uterus was detected using three surface skin Ag-AgCl disc electrodes. It was measured in two directions, longitudinally and transversely. The sampling frequency was 18.2 Hz. The majority of uterine EHG activity lies between 0.1 Hz and 3 Hz [16]; therefore, digital (Butterworth) filtering was applied, and every frequency outside this window was treated as noise. To calculate ApEn and SampEn on an appropriate number of points, we decreased the sampling rate by keeping every second sample, starting with the first. ApEn and SampEn were calculated on 4500 data points, so their values are available every 8.2 minutes. All data processing was done in Matlab.

III. RESULTS

The SampEn values calculated from the uterine EHG during the three different labors were in general lower than the values obtained with ApEn. The values obtained with ApEn were also much more spread out than those obtained with SampEn. We came to the conclusion, as already reported in the literature [10, 12], that SampEn gives better results. The results described below were obtained with SampEn.
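The preprocessing described above can be sketched as follows (the filter order is our assumption; since the pass band ends at 3 Hz, below the new Nyquist frequency of 9.1 Hz, keeping every second sample does not alias):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 18.2  # sampling frequency of the recordings (Hz)

def preprocess(x, fs=fs, low=0.1, high=3.0, order=4):
    """Butterworth band-pass to the 0.1-3 Hz band carrying most uterine
    EHG activity, then keep every second sample starting with the first."""
    sos = butter(order, [low / (fs / 2), high / (fs / 2)],
                 btype="band", output="sos")
    filtered = sosfiltfilt(sos, x)
    return filtered[::2], fs / 2.0

# A 1 Hz test tone lies inside the pass band and should survive.
t = np.arange(0.0, 30.0, 1.0 / fs)
y, fs_out = preprocess(np.sin(2.0 * np.pi * t))
```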
Fig. 1 Values of SampEn during the labor. The zone in the middle represents the border between the latent phase (left of the zone) and the active phase (right of the zone)
Fig. 2 Values of SampEn during the active phase of the labor
A. Case 1; normal labor: latent and active phase

Figure 1 shows the values of SampEn during the labor. The border between the latent and the active phase of the labor was determined by an obstetrician as 4 centimeters of cervical dilatation; it is marked with the zone in the middle of the figure. The x-axis shows time in minutes, the y-axis the SampEn values. The complexity of the EHG signal changes with time: in the latent phase the values of SampEn are higher and more spread out than later in the active phase. The lower the SampEn values, the less complex the signal. The SampEn values in the active phase are visibly reduced in the course of time; the signal in this phase becomes more predictable.

B. Case 2; normal labor: active phase

Figure 2 shows SampEn calculated from the uterine EHG of the active phase of a normal labor. The x-axis shows time in minutes, the y-axis the values of sample entropy. The complexity of the EHG signal is visibly reduced in the course of time.

C. Case 3; labor: influence of oxytocin on entropy

Figure 3 shows the values of SampEn during a labor that ended with a Cesarean section. The x-axis shows time in minutes, the y-axis the values of SampEn. The border between the latent and the active phase of the labor was determined by an obstetrician as 4 centimeters of cervical dilatation. The border is marked with the zone in the middle of
Fig. 3 Values of SampEn during the labor. The zone in the middle separates the latent (left of the zone) and the active (right of the zone) phase. The lines mark the times when the specific doses of oxytocin began to be given to the patient; the given dose is written on each line

Figure 3. Oxytocin is the most frequently used ecbolic agent; it activates processes that result in contraction of the uterine muscles. The dose and the time at which oxytocin was given to the patient are marked in Figure 3 with a line, and the dose given at that time is written on the line. Oxytocin caused more predictable contractions of the uterine muscles, and consequently the values of SampEn are visibly reduced in the course of time after oxytocin was given.
CONCLUSIONS

Entropy, as a measure of the complexity/variability of uterine EHG signals, indicates the course of labor. Lower values of sample entropy correspond to reduced complexity. The complexity of uterine EHG activity during normal labor decreases in the course of time. The results of our study show that in the latent phase the values of SampEn are usually above 0.1 and more spread out. When the active phase is reached, the trend of the SampEn values becomes consistent and the values drop below 0.1. Oxytocin influences the activities that affect the values of SampEn; the SampEn values are reduced after an oxytocin dose.
ACKNOWLEDGMENT

The study was supported by the Slovenian Research Agency and the Ministry of Higher Education, Science and Technology.
REFERENCES

1. Iams J.D. et al. (2002) Frequency of uterine contractions and the risk of spontaneous preterm delivery. N Engl J Med 346:250-255
2. Radhakrishnan N. et al. (2000) Testing for nonlinearity of the contraction segments in uterine electromyography. Int J Bifurcat Chaos 10:2785-2790
3. Doret M. et al. (2005) Uterine electromyography characteristics for early diagnosis of mifepristone-induced preterm labor. Am J Obstet Gynecol 105:822-830
4. Garfield R.E. et al. (2002) Uterine electromyography and light-induced fluorescence in the management of term and preterm labor. J Soc Gynecol Investig 9:265-275
5. Pincus S.M. (1993) Heart rate control in normal and aborted-SIDS infants. Am J Physiol 264:R638-R646
6. Lake D.E. et al. (2002) Sample entropy analysis of neonatal heart rate variability. Am J Physiol 283:R789-R797
7. Pincus S.M. (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci USA 88:2297-2301
8. Rezek I.A. et al. (1998) Stochastic complexity measures for physiological signal analysis. IEEE Trans Biomed Eng 45(9):1186-1191
9. Hornero R. et al. (2005) Interpretation of approximate entropy: analysis of intracranial pressure approximate entropy during acute intracranial hypertension. IEEE Trans Biomed Eng 52:1671-1680
10. Richman J.S. et al. (2000) Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol 278:H2039-H2049
11. Abasolo D. et al. (2006) Entropy analysis of the EEG background activity in Alzheimer's disease patients. Physiol Meas 27:241-253
12. Govindan R.G. et al. (2006) Revisiting sample entropy analysis. Available at www.elsevier.com/locate/physa
13. Rezek I.A. et al. (1998) Stochastic complexity measures for physiological signal analysis. IEEE Trans Biomed Eng 45
14. Pajntar M. et al. (1987) Electromyographic observations on the human cervix during labor. Am J Obstet Gynecol 156:691-697
15. Leman H. et al. (1999) Use of the electrohysterogram signal for characterization of contractions during pregnancy. IEEE Trans Biomed Eng 46:1222-1229
16. Jezewski J. et al. (2005) Quantitative analysis of contraction patterns in electrical activity signal of pregnant uterus as an alternative to mechanical approach. Physiol Meas 26:753-767

Author: Jerneja Vrhovec
Institute: Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Evaluation of adaptive filtering methods on a 16 electrode electrohysterogram recorded externally in labor

J. Terrien1, C. Marque2, T. Steingrimsdottir3 and B. Karlsson1

1 Reykjavik University/Department of Biomedical Engineering, Reykjavik, Iceland
2 UTC/Department of Biomechanics and Biomedical Engineering, Compiegne, France
3 Landspitali University Hospital/Dept. Ob-Gyn, Reykjavik, Iceland
Abstract— Mapping of uterine contractions by recording the electrohysterogram (EHG) at many places on the abdominal wall is a new way of investigating the electrical activity of the uterus. Good spatial resolution is an important issue if this technique is to provide new information. The use of monopolar recordings is an obvious way of increasing resolution, in spite of their having a lower signal-to-noise ratio (SNR) than bipolar measurements. We explored the use of the LMS and RLS adaptive filters, as well as Laplacian filtering, to increase the monopolar EHG SNR. The best results for monopolar signals were obtained using the RLS algorithm, but the SNR is still lower than that obtained on bipolar signals. The resulting EHG signals are nevertheless sufficiently improved to clearly identify EHG bursts during contractions. A precise selection of the different methods' parameters, as well as an increase in the number of studied contractions, is needed to confirm these preliminary results.

Keywords— EHG, labor, uterus, monopolar recording, adaptive filtering
I. INTRODUCTION

The electrical activity of the uterus, or electrohysterogram (EHG), has been shown to have predictive value for labor [1, 2]. This work has almost exclusively used localized measurements of the EHG and focused on the higher-frequency content of the signal, often thought to be associated with the excitability of uterine cells. The propagation of the EHG, and thus the synchronization of the whole organ, is also an important factor in contraction efficiency, but little effort has been put into investigating this aspect of uterine contractions. To investigate the propagation of contractions it is natural to measure the EHG on a grid of electrodes placed on the expectant mother's abdomen. In order to obtain a precise mapping of the uterine electrical field during contractions, a high spatial resolution is needed; the total number of electrodes is, however, limited by the abdominal surface. We have made measurements using a 4x4 electrode grid, and in this paper we describe how we propose to improve the spatial resolution of the measurements in the presence of considerable noise.
The analysis and characterization of electrophysiological signals is often difficult due to their usually low signal-to-noise ratio (SNR), all the more so when dealing with low-amplitude phenomena and/or deep potential sources. As a result, electrophysiological signals are usually recorded in a bipolar manner, meaning that two electrodes are placed fairly close together and the potential difference between them is recorded. The resulting signal contains information on the variation in potential at both locations. In our application the spatial resolution is important and the number of electrodes is limited. The use of monopolar recordings (the potential measured against a remote and/or electrically inert patient ground) improves spatial resolution at the cost of a much reduced SNR. It is then necessary to filter these signals correctly without loss of pertinent information. The type of filter or filtering method that is appropriate depends strongly on the noise and signal characteristics. The worst situations are encountered with overlapping spectra or non-stationary noise; in these situations classical linear filters cannot be used. Specific filtering methods, such as wavelet and adaptive filtering, have been developed for these cases. The EHG is a noisy signal even when recorded with bipolar electrodes [3]. The typical noises in the EHG are the maternal and fetal electrocardiograms (ECG), abdominal muscle electromyogram (EMG), maternal and fetal movement artifacts, and electronic noise from surrounding electronic devices. Wavelet filtering has been used successfully on bipolar EHG to remove maternal and fetal ECG as well as stationary electric noise [3]. However, wavelet filtering assumes that the noise is of low amplitude compared to the signal of interest. The noise in monopolar EHG is non-stationary and usually of high amplitude, and thus cannot be rejected by classical filters or wavelet filtering.
This has led us to the conclusion that using adaptive filters on monopolar EHG may be appropriate and advantageous. In this paper, we explore the possibility of using adaptive filters to obtain a high EHG SNR in signals recorded externally on women during labor. We used two well-known adaptive filter algorithms (LMS and RLS) and calculated their performance for various filter orders. The
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 135–138, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
SNRs obtained for the two filters were then compared with those obtained by bipolar recording and with a Laplacian filter.

II. MATERIALS AND METHODS

A. Instrumentation and experimental protocol

The measurements were performed using a 16-channel multi-purpose physiological signal recorder most commonly used for investigating sleep disorders (Embla A10). Reusable Ag/AgCl electrodes were used (inter-electrode distance: 2.1 cm). The measurements were performed at the Landspitali University Hospital in Iceland using a protocol approved by the relevant ethics committee (VSN 02-0006-V2). The subjects were healthy women in the first stage of labor with uneventful singleton pregnancies. After informed consent was obtained, the skin was carefully prepared using an abrasive paste and an alcoholic solution. The sixteen electrodes were then placed on the abdominal wall according to Fig. 1. The third electrode column was always placed on the uterine median axis and the 10-11th electrode pair on the middle of the uterus (fundus to symphysis). Reference electrodes were placed on each hip of the woman. The signal sampling rate was 200 Hz; the recording device has an anti-aliasing filter with a high cutoff frequency of 100 Hz. The tocodynamometer paper trace was digitized in order to facilitate the segmentation of the contractions. In this preliminary study, two women in spontaneous labor were enrolled at 37 and 39 weeks of gestation. The recording duration was approximately 1 hour in both cases. We clearly identified 11 and 6 contractions, respectively, on these recordings. All EHG bursts were segmented manually with the help of the tocodynamometer trace.

Fig. 1: Electrode configuration (4x4 grid, electrodes 1-16) on the woman's abdominal wall and position of the bipolar signals BP1-BP12.

B. Theory

Adaptive filter: By definition, an adaptive filter is a numerical system whose coefficients are modified as a function of the external system inputs. It consists of two distinct parts:

• an adjustable-coefficient numerical filter;
• an algorithm for coefficient modification based on an optimization criterion.

The reference signal n(t), the filter output y(t), and the desired signal d(t) are used in the algorithm that modifies the filter coefficients. The coefficient adaptation is performed so as to make the output of the filter converge toward the desired signal. The output of the filter is given by y(t) = n(t) x W(t), where W(t) are the filter coefficients at time t. The optimization minimizes the error e(t) = d(t) − y(t) according to the chosen adaptation algorithm. Note that a certain number of iterations is necessary for the filter to converge to the optimal solution, or steady state. Several adaptation algorithms exist; the most popular are the LMS (Least Mean Squares) and RLS (Recursive Least Squares) algorithms, both stochastic-gradient algorithms. A precise description of the mathematical formulation and numerical implementation of these filters is given in [4]. The final recursive formulas of the LMS and RLS algorithms are:

LMS:
  W_i = W_{i−1} + μ d_i* [n(i) − d_i W_{i−1}],  i ≥ 0,  W_{−1} = initial guess

RLS:
  P_i = λ^{−1} [ P_{i−1} − (λ^{−1} P_{i−1} d_i* d_i P_{i−1}) / (1 + λ^{−1} d_i P_{i−1} d_i*) ]
  W_i = W_{i−1} + P_i d_i* [n(i) − d_i W_{i−1}],  i ≥ 0,  P_{−1} = ε^{−1} I

where ε is a regularization factor, λ a forgetting factor (0 << λ ≤ 1) and μ a step size.

Laplacian filter: If we consider the signal matrix S_{i,j}, the Laplacian filtering is performed as follows: S'_{i,j} = S_{i,j} − ¼ (S_{i+1,j} + S_{i−1,j} + S_{i,j+1} + S_{i,j−1}). On the border of the signal matrix, we used only the available signals usually used in the Laplacian; for monopolar signal #2, for example, we used only signals 1, 3 and 6. The Laplacian filter was used in order to test whether the noise present on a monopolar signal is identical on each channel surrounding the channel of interest.
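The LMS and RLS recursions, and the Laplacian filter, can be turned into code directly. The sketch below is our own NumPy illustration, not the authors' implementation: it applies the two updates to adaptive noise cancellation with a single reference channel, where the regressor holds the last `order` reference samples and the error signal is the cleaned output. The paper's parameter values were eps = 0.001, lam = 1 and mu = 0.1.

```python
# Minimal noise-cancellation sketch of the LMS and RLS recursions above,
# plus the Laplacian filter over a grid of channels. Illustration only.
import numpy as np

def lms(d, ref, order=4, mu=0.1):
    """LMS canceller: d = noisy signal, ref = noise reference.
    Returns the error e = d - y, i.e. the cleaned signal."""
    w = np.zeros(order)                       # W_-1 = 0 (initial guess)
    e = np.zeros(len(d))
    for i in range(order - 1, len(d)):
        u = ref[i - order + 1:i + 1][::-1]    # last `order` reference samples
        e[i] = d[i] - u @ w
        w = w + mu * u * e[i]                 # LMS coefficient update
    return e

def rls(d, ref, order=4, lam=1.0, eps=1e-3):
    """RLS canceller with forgetting factor lam and P_-1 = eps^-1 I."""
    w = np.zeros(order)
    P = np.eye(order) / eps
    e = np.zeros(len(d))
    for i in range(order - 1, len(d)):
        u = ref[i - order + 1:i + 1][::-1]
        Pu = P @ u
        k = Pu / (lam + u @ Pu)               # gain vector
        e[i] = d[i] - u @ w
        w = w + k * e[i]                      # RLS coefficient update
        P = (P - np.outer(k, Pu)) / lam
    return e

def laplacian(S):
    """Laplacian filtering of a (rows, cols, samples) grid of monopolar
    signals: each channel minus the mean of its available neighbours
    (the 1/4-weighted sum of the paper, restricted on the borders)."""
    rows, cols = S.shape[:2]
    out = np.empty_like(S)
    for i in range(rows):
        for j in range(cols):
            nb = [S[x, y]
                  for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                  if 0 <= x < rows and 0 <= y < cols]
            out[i, j] = S[i, j] - np.mean(nb, axis=0)
    return out
```

With a noise that is a short FIR-filtered version of the reference, both cancellers converge and the residual error approaches the underlying signal; the RLS update reaches the least-squares solution much faster, matching the stability observed in the results.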
Under the hypothesis that EHG is a stochastic zero-mean signal, the mean of the surrounding channels corresponds only to non-propagating common noise. If that is the case, the results of the Laplacian (subtracting this common noise from the signal) and of adaptive filtering should be the same. The computation of the Laplacian filter is, however, less time consuming and could be of interest for routine processing.

Comparison of methods: In this work we tried to determine the best filter order. The parameters ε, λ and μ were held constant: ε = 0.001, λ = 1 and μ = 0.1. The initial guess of the filter coefficients was always W−1 = 0. The reference signal of the adaptive filters was defined as the mean of the four vertical monopolar EMG channels surrounding the current channel, i.e., the signals used for the Laplacian filtering. To determine the best filter order, we evaluated the resulting median SNR, calculated on each contraction, for the filter lengths {4; 8; 12; 16; 20; 24; 28; 32; 36; 40; 48; 56}. For each contraction and each channel, the SNR was estimated by computing the energy of the baselines present before and after the EHG burst, and the energy of the burst. The obtained SNRs were compared with those obtained with the bipolar signals (vertical differentiation, Fig. 1) and after Laplacian filtering. Moreover, the adaptive filter can take into account transformations (delay, attenuation, ...) of the non-propagating noise from one channel to another, which is not possible with the Laplacian. For statistical comparison, we used the sign test with a minimal significance level of 0.05.

III. RESULTS

The monopolar signals present a poor SNR (median SNR = −3.98 dB). The main noises are high-frequency electronic noise and probably electronic impulses from the injection pump (Fig. 2). The vertical differentiation, or bipolar signals, presents a higher SNR (median SNR = 7.78 dB); the signals obtained by horizontal differentiation present practically the same SNR (7.5 dB). The Laplacian filter gives a positive SNR, but lower than that of the bipolar signals (median SNR = 2.97 dB). The median SNRs of the bipolar signals and those obtained after Laplacian filtering are presented in Fig. 4; the difference between the two methods is significant at the 1% level. The results of the adaptive filters as a function of the filter order are presented in Fig. 3. A significant difference was obtained for all filter lengths. The RLS filter gives a higher SNR than the LMS method (p ≤ 0.01) and is less sensitive to the filter order. The highest SNR was obtained for an RLS filter of order 4. A decrease in the median SNR with increasing LMS filter order was noticed; the analysis of the results showed that we obtained more and more divergences (SNR lower than the initial SNR) of the LMS filter as the filter order increased. The RLS filter thus appears to be the better algorithm, with better stability with respect to the filter order.

Fig. 2: Raw monopolar EHG (1), bipolar signal (2), Laplacian result (3), RLS result (4) and LMS result (5).

Fig. 3: Median SNR and associated quartiles obtained for the LMS and RLS adaptive filters as a function of the filter order. All comparisons were significant at 5%; ** indicates a significant difference at 1%.

Fig. 4: Median SNR and associated quartiles obtained with bipolar signals (1), after Laplacian filtering (2) and after RLS filtering (3). ** indicates a significant difference at 1%.
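The SNR estimate used in this comparison (burst energy against baseline energy, in dB) can be sketched as follows. This is our reading of the method; the burst boundaries are assumed to come from the manual segmentation against the tocodynamometer trace.

```python
# Sketch of the contraction SNR estimate: mean power of the EHG burst
# over the mean power of the baselines before and after it, in dB.
# Burst boundaries are assumed given by the manual segmentation.
import numpy as np

def burst_snr_db(x, burst_start, burst_stop):
    """SNR (dB) of x[burst_start:burst_stop] against the surrounding baseline."""
    burst = x[burst_start:burst_stop]
    baseline = np.concatenate([x[:burst_start], x[burst_stop:]])
    return 10.0 * np.log10(np.mean(burst ** 2) / np.mean(baseline ** 2))
```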
The comparison of the median SNR obtained by RLS filtering, bipolar differentiation and Laplacian filtering is presented in Fig. 4. The RLS SNRs are significantly lower than those obtained by bipolar differentiation (p ≤ 0.01). There is no statistical difference between the RLS filtering results and the Laplacian filtering ones (p = 0.14), although the RLS algorithm seems to give qualitatively higher SNRs than the Laplacian filter. An example of the results of the different methods is presented in Fig. 2.

IV. DISCUSSION

The EHG is a noisy electrophysiological signal even when recorded in a bipolar configuration. Monopolar EHG recordings are usually used internally, where the SNR is much better, as described for example in [5, 6]. We have explored the possibility of using adaptive filters to obtain a high SNR for monopolar EHG. The use of adaptive filters has the advantage of dealing with non-stationary noise of possibly high amplitude. First, we compared the well-known LMS and RLS adaptive filters. The RLS algorithm seems to be the best method when applied to monopolar EHG: it gives a higher SNR and is less sensitive to the filter order. Several occurrences of divergence were observed with LMS filters of high order. The RLS SNRs are not significantly different from those obtained with Laplacian filtering. The SNRs obtained with the RLS and Laplacian filters are, however, significantly lower than the bipolar ones. The difference between adaptive and Laplacian filtering could be explained by the fact that the Laplacian filter always removes the average signal of the surrounding channels, whereas adaptive filters remove only the parts of the original signal that correlate with the reference signal. The noise present on each channel is therefore not always the same, but has a locally similar characteristic form; an adaptive filter can take such local transformations of a global (spatially averaged) noise into account.
Adaptive filtering by the RLS algorithm is therefore, in our case, the best of the tested methods when applied to monopolar EHG. Part of the noise may also be stochastic and thus better cancelled by the averaging procedure of the Laplacian filter, as well as by the definition of the reference signal of the adaptive filter, than by vertical differentiation (bipolar signals). A poor estimation of the common-mode noise, and non-zero-mean noise, can also explain the lower SNR of the Laplacian and adaptive filters. Our results were obtained with constant filter parameters, except for the filter order; the selection of the best filter parameters remains to be done, in a manner similar to the one used to select the filter order. We have not studied the filter convergence to a steady state, but the segmentation of the EHG burst, with a long baseline before it, ensures convergence of the filter during the electrical burst. We saw that the results vary greatly with the contraction considered, imposing the use of a non-parametric statistical test due to the non-Gaussian distribution of the SNR values. An increase in the number of analyzed segments could give more relevant results. The evaluation of more sophisticated adaptive filters could also give a higher SNR, and a better definition of the reference signal could be a key to improving the quality of the results.

V. CONCLUSIONS

In this paper, we explored the use of adaptive filters for the pre-processing of monopolar EHG recorded externally on women. We investigated the performance of the LMS and RLS algorithms. The RLS filtering algorithm gives a higher SNR than the LMS or Laplacian ones. In spite of the significantly lower median SNR obtained with the RLS filter compared to bipolar differentiation, the results are good enough to clearly identify EHG bursts on each recorded monopolar channel. An increase in the number of studied contractions is needed to confirm these preliminary results. The use of other adaptive filtering algorithms, as well as an optimization of all the filter parameters used, could further increase the obtained EHG SNR.
REFERENCES

1. Garfield R.E. et al. (2002) Uterine electromyography and light-induced fluorescence in the management of term and preterm labor. J Soc Gynecol Investig 9(5):265-275
2. Maner W.L. et al. (2003) Predicting term and preterm delivery with transabdominal uterine electromyography. Obstet Gynecol 101(6):1254-1260
3. Leman H., Marque C. (2000) Rejection of the maternal electrocardiogram in the electrohysterogram signal. IEEE Trans Biomed Eng 47(8):1010-1017
4. Sayed A.H. (2003) Fundamentals of Adaptive Filtering. John Wiley & Sons, Hoboken, New Jersey
5. Duchêne J., Marque C., Planque S. (1990) Uterine EMG signal: propagation analysis. Annual International Conference of the IEEE EMBS
6. Lammers W.J. (1997) Circulating excitations and re-entry in the pregnant uterus. Pflugers Arch 433(3):287-293

Author: Jeremy Terrien
Institute: School of Science and Engineering, R.U.
Street: Kringlan 1
City: 103 Reykjavik
Country: Iceland
Email: [email protected]
Predictive value of EMG basal activity in the cervix at initiation of delivery in humans

D. Rudel1, G. Vidmar2, B. Leskosek2 and I. Verdenik3

1 MKS Electronic Systems Ltd., Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Medicine, Institute of Biomedical Informatics, Slovenia
3 University Medical Centre, Department of Obstetrics and Gynecology, Ljubljana, Slovenia
Abstract— We present efforts to objectively assess cervical ripeness in humans. The hypothesis was that cervical EMG basal activity might reflect the readiness of the cervix for delivery. 47 women at the initiation of delivery were involved in the study. EMG parameters – amplitude (URMSA) and frequency content (MFA) – were related to the Cumulative Bishop Score (CBS) assessed by an obstetrician in each woman at labor onset. The results show that the parameters are predictive of the CBS, both correlating negatively with the CBS value. Hence, EMG parameters have the potential to become objective indicators in the assessment of cervical ripeness in humans, which would support the obstetrician's decisions on how to further conduct the labor.

Keywords— EMG, cervix, labor, Bishop Score, ripeness
I. INTRODUCTION

Recent research on the human cervix [1,2,3,4,5,6] and the animal cervix [7,8,9,10] has recognized that the cervix is an active organ that plays an important role in the process of pregnancy and labor. The cervix prepares itself for labor during the process of ripening [11,12]. Failure of the cervix to ripen at term may be followed by failure to progress in labor. At labor, before a regimen for conducting the labor is selected, the obstetrician assesses cervical ripening, or preparedness for induction. In the absence of a ripe (favorable) cervix, steps are taken to prepare it. The Bishop scoring system [13] is used in many delivery rooms for quantifying and scoring cervical ripeness. It is based on the assessment of the physical properties of the cervix. Attributes that indicate cervical ripeness are an effaced, dilated, favorable cervix with its canal os directed as far forward as possible. Each of the attributes is scored, and the sum of the scores ranges from 0 to 13: scores for unripe cervices range from 3 to 6, and for ripe cervices from 7 to 12. For decades, methods to quantitatively evaluate ripeness throughout labor have been sought. Different non-invasive and invasive technical solutions have been proposed, among them a combination of EMG methods, dilatation and IUP in
humans [3] and cows [9]. In this paper, we describe an attempt to predict cervical ripeness from EMG alone. It has been shown that, in humans [1,2,3,4,5,14,15,16] and in animals [7,8,9,10], EMG signals derived from the cervix reflect the electrical activity of smooth muscle cells in the cervix. The activity differs among women at different stages of cervical ripeness at the onset of labor [17]. As labor progresses and the cervix ripens, the EMG activity changes its pattern, and the EMG content thus probably (at least partly) reflects changes in the status of cervical ripeness [1,3,4,5,17]. The EMG basal activity registered in the cervix at the onset of labor was therefore hypothesized to reflect the level of cervical readiness for successful labor and thus the level of its ripeness. The EMG basal activity of the smooth muscle tissue in the cervix is defined as the EMG activity registered in the periods when there are no locally produced bursts in EMG activity [4,5] and the contribution of uterine corpus EMG activity is minimized (i.e., there are no uterine contractions). This paper aims to relate the Bishop Score, as a measure of cervical ripeness, to EMG parameters calculated from the EMG basal activity signal, in order to see whether the EMG parameters have any predictive value for cervical ripeness.

II. METHODS

A. Study population

Forty-seven primiparous healthy women at term undergoing induction of labor with amniotomy and subsequent oxytocin infusion were included in the study of cervical electromyographic (EMG) activity. EMG and intrauterine pressure were registered electronically throughout the whole latent and active phase of labor without major artifacts. Clinical data for each labor (Bishop Score, duration of the latent phase) were collected from the patient's labor documentation. The National Medical Ethics Committee approved the study, and informed consent was obtained from each woman before enrollment.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 131–134, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
B. EMG and IUP measurement procedure

Patient preparation and the EMG and IUP measurement procedures are described in detail elsewhere [4]. To record the EMG activity of the smooth muscle tissue at the exterior wall of the cervix, two fetal spiral steel electrodes (Hewlett Packard 15130A) were inserted directly into the cervical tissue from the vaginal side in the outer aspect of the cervix, circumferential to the cervical canal [1]. A reference electrode was attached to the woman's thigh. The EMG signals were amplified (differential amplifier, A = 2000) and filtered (0.03 Hz – 5 Hz). Intrauterine pressure (IUP) was measured to identify uterine contractions. The EMG and IUP signals were registered on the monitor chart recorder and digitally stored (12-bit A/D conversion, 20 Hz sampling rate). The registration of EMG and IUP began approximately 10 minutes after amniotomy.

C. EMG signal processing and analysis

Each EMG record of the selected labors was peer reviewed for the quality of the recording. One 20-minute measurement interval, containing no major EMG artifacts, was selected from each labor record; the interval was to start as close to the onset of labor as possible. Within the selected 20-minute intervals, periods with no bursts in EMG activity were visually selected for further processing. The selected periods classified as basal EMG activity ranged from 3.3 to 13.0 minutes in total duration. The selected EMG recordings were filtered digitally (Butterworth band-pass filter, 0.3 Hz – 3.0 Hz) and processed in the time and frequency domains. The EMG signals were visually assessed to identify characteristic EMG patterns at different stages of cervical ripeness. The root mean square of the EMG signal voltage (URMS) and the median frequency (MF) of the EMG signal were calculated for each selected interval, their average values (URMSA, MFA) were determined, and the Power Spectrum Density (PSD) was plotted.
For the statistical analyses, the following parameters were used: the cumulative Bishop Score (CBS), the values of the Bishop Score components (see Section III.B), the results of the EMG signal processing (URMSA and MFA), time to delivery (in minutes) and the number of contractions (during the selected 20-minute interval). The ability of the EMG characteristics to predict the CBS, time to delivery and number of contractions was tested using weighted multiple linear regression (WLS), with URMSA and MFA as the independent variables and cases weighted by the total duration of the selected EMG intervals without bursts. The statistically significant association among CBS, URMSA and MFA was visualized using a 3D scatterplot with a local regression smoother (Epanechnikov kernel).
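The two EMG parameters defined above, URMS and median frequency, can be sketched in Python as follows. The paper does not specify the spectral estimator, so Welch's method is our assumption here; fs = 20 Hz matches the stated sampling rate.

```python
# Sketch of the two EMG parameters used in the analysis: the RMS voltage
# of the signal (URMS) and the median frequency (MF) of its power
# spectrum. Welch's method is our choice of PSD estimator (not specified
# in the paper).
import numpy as np
from scipy.signal import welch

def urms(x):
    """Root mean square of the EMG signal voltage (URMS)."""
    return np.sqrt(np.mean(np.square(x)))

def median_frequency(x, fs=20.0):
    """Median frequency (MF): frequency below which half of the
    total spectral power of the EMG signal lies."""
    f, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))
    cum = np.cumsum(psd)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]
```

Averaging these two quantities over the artifact-free basal-activity periods of a record yields the URMSA and MFA values used in the regression.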
Exact binomial logistic regression was used for testing the association of EMG characteristics with individual components of the Bishop score, which were dichotomized for the purpose. Statistical analyses were performed using SPSS for Windows 14.0.2 (SPSS Inc., Chicago, IL, 2004) and Cytel Studio 7.0.0 (Cytel Software Corp., MA, 2005). III. RESULTS A. Results of visual assessment EMG signals of four selected labors are presented (Fig. 1) as exhibiting typical patterns of EMG activity during ripening process. The EMG record of an unripe cervix (Fig. 1, top row) is dense (MFA > 1 Hz) with hindered EMG bursts recorded at uterine corpus mechanical contractions. With progression of ripening, EMG pattern changes in the periods between EMG bursts. The EMG signal amplitude in those periods becomes lower and less dense and the EMG bursts at contractions more outstanding (Fig. 1, middle two rows). The process of reduction of EMG activity between uterine corpus mechanical contractions progresses until there is almost no EMG activity between EMG bursts between consecutive uterine contractions (Fig. 1, bottom row). Changes in EMG signal frequency contents are adequately reflected in corresponding PSD as presented in Figure 2. The EMG of an unripe cervix (Figure 2a) has three frequency groups: one around 2.4 Hz, the other around 1.2 HZ and the third below 1 Hz. With ripening of the cer-
Fig. 1: EMG as derived from the cervix at an initiation of a delivery in 4 women at different stage of the cervical ripeness: unripe cervix (phase 1), partially ripe cervix (phase 2 and 3) and ripe cervix (phase 4). The EMG amplitude scale differs between phases.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Predictive value of EMG basal activity in the cervix at initiation of delivery in humans
Fig. 2: Power spectral density (PSD) charts corresponding to the EMG signals from Figure 1: a) unripe cervix; b) ripening cervix; c) ripening cervix; d) ripe cervix. The solid line represents the PSD during EMG bursts, the dashed line the PSD between bursts. Note that a) has a different horizontal scale, and d) has a different vertical scale.
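The two parameters used throughout the paper can be computed as sketched below: the RMS amplitude of a burst-free interval (URMSA) and the median frequency of its power spectrum (MFA), i.e. the frequency that splits the spectral power into equal halves. The test signal, sampling rate and names here are illustrative, not the study's recording settings.

```python
# Sketch of the two EMG basal-activity parameters: RMS amplitude (URMSA)
# and median frequency of the power spectrum (MFA). Synthetic 2 Hz signal.
import numpy as np

def rms_amplitude(x):
    return np.sqrt(np.mean(x ** 2))

def median_frequency(x, fs):
    psd = np.abs(np.fft.rfft(x)) ** 2            # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(psd)
    return freqs[np.searchsorted(cum, 0.5 * cum[-1])]

fs = 20.0                                        # sampling rate, Hz (illustrative)
t = np.arange(0, 10, 1 / fs)                     # a 10 s burst-free interval
emg = 50.0 * np.sin(2 * np.pi * 2.0 * t)         # 50 uV tone at 2 Hz

urmsa = rms_amplitude(emg)                       # 50/sqrt(2) ~ 35.4 uV
mfa = median_frequency(emg, fs)                  # 2.0 Hz
```

For a pure tone the median frequency lands exactly on the tone's bin; for a real EMG interval it summarizes where the spectral mass (e.g. the 2.4 Hz, 1.2 Hz and sub-1 Hz groups above) is concentrated.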
vix, the first group diminishes, the second is still well expressed, and the third increases in its power content (Figures 2b and 2c). In a ripe cervix, only the third group, with frequencies below 1 Hz, is present (Figure 2d).

B. Results of statistical analysis

Statistical analysis showed that the average EMG amplitude URMSA and the average median frequency MFA, as the characteristic parameters of the selected intervals of EMG basal activity, are predictive of the cumulative Bishop Score CBS (p=0.017 for the model; adjusted R2=0.131). As illustrated in Fig. 3, both URMSA and MFA are negatively associated with the Bishop Score (β=-0.364, p=0.013 for URMSA; β=-0.045, p=0.045 for MFA). The cumulative Bishop Score is high when both URMSA and MFA have low values and decreases with increasing URMSA and/or MFA. No statistically significant association could be found between URMSA and MFA on one hand and the individual Bishop Score components on the other (p-values are for the model from the LR test): cervical channel dilatation DILA, dichotomized as 2-3 vs. 0-1: p=0.104; cervical effacement EFFA, 2-3 vs. 0-1: p=0.105; and cervical consistency CONS, 2 vs. 0-1: p=0.311. Similarly, no statistically significant association was found with time to delivery (p=0.816 for the model) or with the number of contractions (p=0.475 for the model).

IV. DISCUSSION

The focus of our study was on the cervical EMG basal activity. The statistical analysis confirms the observations resulting from Fig. 1 and Fig. 2. As seen from Fig. 3, the
Fig. 3: Association of the EMG characteristic parameters URMSA and MFA with the cumulative Bishop Score. A local regression smoother is superimposed on the point cloud.

cervical EMG basal activity parameters (URMSA, MFA) relate to the cumulative Bishop Score. At the onset of an induced labor, the average cervical EMG signal amplitude (URMSA) and the average median frequency (MFA) are negatively associated with the cumulative Bishop Score. Consequently, an obstetrician may expect a low cumulative Bishop Score for the labor when the EMG signal has a high amplitude (e.g., URMSA > 50 µV) and/or a high median frequency (e.g., MFA >> 1 Hz). When presented on a monitor with a standardized scale, such an EMG signal would be of high amplitude and have a dense trace. Conversely, a high cumulative Bishop Score is expected when both the EMG amplitude and the EMG median frequency have low values (e.g., URMSA < 25 µV and MFA < 0.5 Hz); in that case, the EMG signal would be of low amplitude and its polarity would change slowly. These results are in line with our previous findings [4,5,6,17].

V. CONCLUSIONS

At the onset of an induced labor, the EMG activity registered in the periods when there are no uterine contractions and no bursts in the cervical EMG signal is considered the EMG basal activity of the cervix. Its average amplitude (URMSA) and average median frequency (MFA) are negatively associated with the cumulative Bishop Score as assessed by an obstetrician at a digital check of cervical (un)ripeness at the beginning of labor. High URMSA and high MFA advo-
cate for low Bishop Score values, while low URMSA and low MFA advocate for high Bishop Score values. It may be concluded that, to a certain extent, the EMG signal parameters URMSA and MFA reflect the stage of cervical ripeness and thus the readiness of the cervix for successful labor. Hence, an adequately processed cervical EMG signal, visually presented to an obstetrician in the delivery room, could help him/her better assess cervical ripeness at the onset of labor, thus supporting the decision on how to proceed in conducting the labor.
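The rule of thumb from the Discussion can be written as a tiny illustrative function. The cut-offs (50 µV, 25 µV, 1 Hz, 0.5 Hz) are the example values quoted in the text, not validated clinical thresholds, and the function is a sketch of the qualitative expectation only.

```python
# Illustrative restatement of the Discussion's rule of thumb. The thresholds
# are the example values from the text, not validated clinical cut-offs.
def expected_ripeness(urmsa_uv, mfa_hz):
    """Qualitative expectation for the cumulative Bishop Score (CBS)."""
    if urmsa_uv > 50.0 or mfa_hz > 1.0:
        return "low CBS (unripe cervix)"
    if urmsa_uv < 25.0 and mfa_hz < 0.5:
        return "high CBS (ripe cervix)"
    return "indeterminate"
```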
ACKNOWLEDGMENT

The study was supported by the Ministry of Science and Technology of the Republic of Slovenia (grants L3-7365, J3-8759, J3-5342, J3-2361). The authors express their gratitude to Mr. Darko Oberzan, MKS Ltd. Ljubljana, for his help in EMG signal processing.

REFERENCES

1. Pajntar M, Roskar E, Rudel D (1987) Electromyographic observations on the human cervix during labor. Am J Obstet Gynecol 156(3): 691-697
2. Pajntar M (1994) The smooth muscles of the cervix in labor. Eur J Obstet Gynecol Reprod Biol 55: 9-12
3. Olah KS (1994) Changes in cervical electromyographic activity and their correlation with the cervical response to myometrial activity during labor. Eur J Obstet Gynecol Reprod Biol 57: 157-9
4. Rudel D, Pajntar M (1999) Active contractions of the cervix in the latent phase of labor. Br J Obstet Gynaecol 106: 446-52
5. Rudel D, Pajntar M (1999) Contractions of the cervix in the latent phase of labor. Contemp Reviews in Obstet Gynaecol 11(4): 271-9
6. Pajntar M, Leskosek B, Rudel D, Verdenik I (2001) Contribution of cervical smooth muscle activity to the duration of latent and active phases of labor. Br J Obstet Gynaecol 108: 1-6
7. Toutain PL, Garcia-Villar R, Hanzen C, Ruckebusch Y (1983) Electrical and mechanical activity of the cervix in the ewe during pregnancy and parturition. J Reprod Fertil 68: 195-204
8. Garcia-Villar R, Toutain PL, Ruckebusch Y (1984) Pattern of electrical activity of the ovine uterus and cervix from mating to parturition. J Reprod Fertil 72: 143-52
9. Breeveld-Dwarkasing VN, Struijk PC, Lotgering FK, Eijskoot F, Kindahl H, van der Weijden GC, Taverne MA (2003) Cervical dilatation related to uterine electromyographic activity and endocrinological changes during prostaglandin F(2alpha)-induced parturition in cows. Biol Reprod 68(2): 536-42
10. Cavaco-Goncalves S, Marques CC, Horta AE, Figueroa JP (2006) Increased cervical electrical activity during oestrus in progestagen-treated ewes: possible role in sperm transport. Anim Reprod Sci 93(3-4): 360-5
11. Uldbjerg N, Ulmsten U, Ekman G (1983) The ripening of the human uterine cervix in terms of connective tissue biochemistry. In: Pitkin RM, Scott JR, Ulmsten U, Ueland K (eds) Clin Obstet Gynecol No. 1, Vol. 26. Harper & Row, Philadelphia, pp 14-26
12. Garfield RE, Saade G, Buhimschi C, Buhimschi I, Shi L, Shi SQ, Chwalisz K (1998) Control and assessment of the uterus and cervix during pregnancy and labor. Hum Reprod Update 4(5): 673-95
13. Bishop EH (1964) Pelvic scoring for elective induction. Obstet Gynecol 24: 226
14. Serr DM, Porath-Furedi A, Rabau E, Zakunt H, Mannor S (1968) Recording of electrical activity from the human cervix. J Obstet Gynaecol Br Cmwlth 75: 360-3
15. Hofmeister JF, Slocumb JC, Kottmann LM, Picciottino JB, Ellis DG (1994) A noninvasive method for recording the electrical activity of the human uterus in vivo. Biomed Instrum Technol 28: 391-404
16. Leskosek B, Pajntar M, Rudel D (1998) Time/frequency analysis of the uterine EMG in pregnancy and parturition in sheep. In: Magjarević R (ed) Biomedical measurement and instrumentation – BMI'98, Proc Vol 3, 8th Int IMEKO TC-13 Conf Measurement in Clinical Medicine & 12th Int Symp Biomed Eng, Dubrovnik. KoREMA, Zagreb, 2003, pp 106-9
17. Pajntar M, Verdenik I (1995) Electromyographic activity in cervices with very low Bishop score during labor. Int J Gynecol Obstet 49: 277-81
Drago Rudel MKS Electronic Systems Rozna dol. C.XVII/22b SI-1000 LJUBLJANA, SLOVENIA
[email protected]
Uterine Electromyography in Humans – Contractions, Labor, and Delivery

R. E. Garfield and W. L. Maner
University of Texas Medical Branch/Reproductive Science, Galveston, Texas, USA

Abstract— Today’s maternal/fetal monitoring lacks the capability to diagnose labor and predict delivery. The objective of this work was to demonstrate that uterine electromyography (EMG) is a viable alternative to current monitoring techniques. Uterine EMG was monitored non-invasively and trans-abdominally from pregnant patients using surface electrodes. Several aspects of uterine EMG were investigated: contraction plotting, diagnosing labor, and predicting delivery. EMG-plotted contractions were seen to correspond well with tocodynamometer- (TOCO-) plotted contractions. Increases in electrical activity were also indicative of labor and imminent delivery. Uterine EMG could be a valuable tool for obstetricians if implemented on a routine basis in the clinic.

Keywords— Uterus, electromyography, EMG, labor, diagnosis, prediction.
I. INTRODUCTION

Labor is the physiologic process by which a fetus is expelled from the uterus, and is defined loosely as regular uterine contractions accompanied by cervical effacement and dilation [1]. Preterm labor, defined as labor before 37 weeks' gestation, is the most common obstetric complication and occurs in about 20% of pregnant women. In the United States alone, 10% of the 4 million infants born each year are premature [2, 3]. At $1500 a day for neonatal intensive care, this constitutes a national health care expenditure well over $5 billion [4]. In addition, preterm labor accounts for 85% of infant mortality and 50% of infant neurologic disorders. Current tocolytic therapy has not decreased the rate of preterm delivery. It is argued that the failure of current strategies to decrease the rate of preterm labor may be because, once preterm labor is finally diagnosed, any therapeutic benefit is lost or temporary. Therefore, one of the keys to treating preterm labor would be early detection or prediction. What is called for is a better method of monitoring uterine contraction activity. Previous studies have established that the electrical activity of the myometrium is responsible for myometrial contractions [5, 6]. Extensive studies have also been conducted over the last 60 years to monitor uterine contractility using the electrical activity measured from electrodes placed on the uterus [7-9], and more recent studies indicate that uterine EMG activity can be monitored accurately from the abdominal surface [10-12]. Although the tocodynamometer (TOCO) has routinely been used in the clinic to measure contractions [13], it has been shown to have limited predictive capability [14]. Root-mean-square (RMS) signal processing has been established as a standard method for plotting signal amplitude changes [15]. Spectral-temporal mapping (STM) has been used successfully to identify spectral changes that occur in biological electrical signals [16, 17].

II. OBJECTIVES

• To determine whether uterine contraction events plotted using uterine electromyography (EMG) data correlate with TOCO-plotted contraction events.
• To compare the uterine electromyography of labor patients to that of ante partum patients.
• To determine whether delivery can be predicted using transabdominal uterine electromyography.

III. MATERIAL AND METHODS

• 323 contraction vs. no-contraction events were observed in ten term-pregnant women, all of whom ultimately delivered spontaneously. Uterine EMG was measured non-invasively from the abdominal surface of each patient for 30 minutes. TOCO was used simultaneously to measure uterine contractions. The STM and RMS methods were applied to the uterine EMG data to generate contraction curves similar to TOCO "bell-shaped" curves. Correspondence between the raw uterine EMG bursts and the uterine contractions plotted by the various methods was established by looking for temporal overlap of the events.
• Fifty patients (group 1: labor, n = 24; group 2: ante partum, n = 26) were monitored using transabdominal electrodes. Group 2 was recorded at several gestations. Uterine electrical "bursts" were analyzed by power spectrum from 0.34 to 1.00 Hz. The average power density spectrum (PDS) peak frequency for each patient was plotted against gestational age and compared between group 1 and group 2.
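The RMS method mentioned above can be sketched directly: a moving-window RMS of the EMG turns a burst of electrical activity into a smooth, TOCO-like "bell-shaped" contraction curve. The sampling rate, window length and burst timing below are illustrative choices, not the study's settings.

```python
# Sketch of RMS contraction plotting: a sliding-window RMS converts an EMG
# burst into a smooth envelope resembling a TOCO contraction curve.
# Synthetic signal; all parameter choices here are illustrative.
import numpy as np

def moving_rms(x, win):
    """RMS of x over a centered sliding window of `win` samples."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x ** 2, kernel, mode="same"))

fs = 20                                   # Hz, illustrative sampling rate
t = np.arange(0, 60, 1 / fs)              # one minute of "EMG"
rng = np.random.default_rng(1)
burst = (t > 20) & (t < 40)               # a 20 s burst mid-record
emg = rng.normal(0, 1, t.size) * np.where(burst, 10.0, 0.5)

envelope = moving_rms(emg, win=4 * fs)    # 4 s RMS window -> bell-shaped curve
peak_time = t[np.argmax(envelope)]        # envelope peaks inside the burst
```

Temporal overlap between such an envelope peak and a TOCO-plotted contraction is the kind of correspondence the first study arm tested.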
A total of 99 patients were grouped as either term (37 weeks or more) or preterm (less than 37 weeks). Uterine
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 128–130, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
electrical activity was recorded for 30 minutes in the clinic. EMG "bursts" were evaluated to determine the PDS. Measurement-to-delivery time was compared with the average power density spectrum peak frequency. Receiver operating characteristic (ROC) curve analysis was performed for 48, 24, 12, and 8 hours from term delivery, and for 6, 4, 2, and 1 day(s) from preterm delivery.

IV. RESULTS

Kappa inter-rater agreement was excellent (0.823) between EMG, TOCO, RMS and STM. Significant correlation was found between all plots. There was no significant difference in the percentage of burst/contraction events plotted by EMG, RMS, and STM compared to TOCO (Fig. 1 - EMG: 114.32 ± 18.86 %; TOCO: 100.00 ± 0.00 %; RMS: 109.18 ± 17.05 %; STM: 102.73 ± 8.31 %). Group 1 was significantly higher than group 2 in gestational age (39.87±1.08 vs. 32.96±4.26 weeks) and average PDS peak frequency (Fig. 2 - 0.51±0.10 vs. 0.40±0.03 Hz). The power density spectrum peak frequency increased as the measurement-to-delivery interval decreased. ROC curve analysis gave high positive and negative predictive values for both term and preterm delivery (Table 1).
Fig. 2

At term, the average PDS peak frequency was significantly higher for the 24-or-fewer-hours-to-delivery group than for the more-than-24-hours-to-delivery group, whereas at preterm, the average PDS peak frequency was significantly higher in the 4-or-fewer-days-to-delivery group than in the more-than-4-days-to-delivery group (Fig. 3).
Table 1

Labor     PPV    NPV    SENS   SPEC   GS       P
Term      .854   .889   .918   .625   1 day    < 0.01
Preterm   .857   .886   .600   .969   4 days   < 0.01
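The columns of Table 1 follow the standard confusion-matrix definitions of predictive value, sensitivity and specificity. A sketch with made-up counts (the paper reports only the derived values, not the underlying counts):

```python
# Standard confusion-matrix metrics, as used for the columns of Table 1.
# The counts below are invented for illustration.
def predictive_values(tp, fp, tn, fn):
    return {
        "PPV":  tp / (tp + fp),   # positive predictive value
        "NPV":  tn / (tn + fn),   # negative predictive value
        "SENS": tp / (tp + fn),   # sensitivity
        "SPEC": tn / (tn + fp),   # specificity
    }

m = predictive_values(tp=45, fp=5, tn=40, fn=10)
# e.g. PPV = 45/50 = 0.9, NPV = 40/50 = 0.8
```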
Fig. 1
Fig. 3
V. CONCLUSIONS

Uterine EMG bursts correspond strongly to TOCO contraction plots. EMG-generated contraction plots (using RMS or STM) are statistically indistinguishable from TOCO contraction plots, so for pregnant patients exhibiting myometrial activity, uterine EMG could be used in place of TOCO in the clinic for plotting contractions. Uterine EMG activity in ante partum patients is significantly lower than in laboring patients delivering within 24 hours of measurement, giving uterine EMG the capability to diagnose labor in the clinic. Moreover, trans-abdominal uterine EMG predicts delivery within 24 hours at term and within 4 days preterm. This methodology offers obstetricians many clinical advantages and benefits that presently used uterine monitoring systems do not provide. Supported by NIH R01-HD037480.
REFERENCES

1. Norwitz ER, Robinson J, Challis JRG (1999) The control of labor. N Engl J Med 341: 660-666
2. U.S. Preventive Services Task Force (1989) Guide to clinical preventive services: an assessment of the effectiveness of 169 interventions. Williams & Wilkins, Baltimore
3. Ventura SJ, Martin JA, Curtin SC, Matthews TH (1997) Report of final natality statistics. Monthly Vital Stat Rep 45: 12
4. Brown ER, Epstein M (1984) Immediate consequences of preterm birth. In: Fuchs F, Stubblefield PG (eds) Preterm birth: causes, prevention, and management. Macmillan Publishing, New York, pp 323-354
5. Marshall JM (1962) Regulation of the activity in uterine muscle. Physiol Rev 42: 213-27
6. Kuriyama H, Csapo A (1967) A study of the parturient uterus with the microelectrode technique. Endocrinology 80: 748-53
7. Devedeux D, Marque C, Mansour S, Germain G, Duchene J (1993) Uterine electromyography: a critical review. Am J Obstet Gynecol 169: 1636-53
8. Wolfs GMJA, Van Leeuwen (1979) Electromyographic observations on the human uterus during labor. Acta Obstet Gynecol Scand Suppl 90: 1-61
9. Figueroa JP, Honnebier MB, Jenkins S, Nathanielsz PW (1990) Alteration of 24-hour rhythms in the myometrial activity in the chronically catheterized pregnant rhesus monkey after a 6-hour shift in the light-dark cycle. Am J Obstet Gynecol 163: 648-54
10. Garfield RE, Buhimschi C (1998) Control and assessment of the uterus and cervix during pregnancy and labour. Hum Reprod Update 4(5): 673-95
11. Buhimschi C, Garfield RE (1998) Uterine activity during pregnancy and labor assessed by simultaneous recordings from the myometrium and abdominal surface in the rat. Am J Obstet Gynecol 178: 811-22
12. Garfield RE et al (1998) Instrumentation for the diagnosis of term and preterm labour. J Perinat Med 26: 413-436
13. Newman RB (2005) Uterine contraction assessment. Obstet Gynecol Clin North Am 32(3): 341-67
14. Maul H, Maner WL, Olson G, Saade GR, Garfield RE. Non-invasive transabdominal uterine electromyography correlates with the strength of intrauterine pressure and is predictive of labor and delivery.
15. Garrison LA, Lamson TC, Deutsch S, Geselowitz DB, Gaumond RP, Tarbell JM (1994) An in-vitro investigation of prosthetic heart valve cavitation in blood. J Heart Valve Dis 3 Suppl 1: S8-22
16. Macfarlane PW (1994) A comparison of different processing techniques for measuring late potentials. Proceedings of the international symposium on high-resolution ECG, Yokohama, Japan, July 1994, p 136
17. (1992) Time domain analysis of the signal-averaged electrocardiogram: reproducibility of results. Eur Heart J 13 (Abstract suppl): 646
Analyzing Distributed Medical Databases on DataMiningGrid©

Vlado Stankovski1, Martin Swain2, Matevz Stimec3 and Natasa Fidler Mis3

1 Faculty of Civil and Geodetic Engineering, University of Ljubljana, Ljubljana, Slovenia
2 University of Ulster, Coleraine, Northern Ireland, United Kingdom
3 University Children’s Hospital, University Medical Centre, Ljubljana, Slovenia
Abstract— Hospitals throughout Europe hold vast amounts of data in the form of patient records. Performing on-the-fly analyses of these data, and their actual transformation into information and knowledge, may help improve medical procedures and treatments or prevent illnesses. Grid technology has recently emerged to address the need for efficient and effective exploitation of heterogeneous and geographically distributed resources, such as large and distributed data sets, open source or proprietary programs for data analysis, massive storage devices and high-performance computers. A de facto standard framework for building grid environments is the Open Grid Services Architecture (OGSA) and the corresponding Web Services Resource Framework (WSRF). The Globus Toolkit version 4 is a fully WSRF-compliant grid middleware which addresses the need for secure, flexible, interoperable and seamless use of grid resources. The DataMiningGrid© system (www.datamininggrid.org) was recently built on top of existing Globus technology, inter alia to address the requirements of a community of medical users and to enable them to perform on-the-fly analysis of geographically distributed medical databases. DataMiningGrid© is a set of grid services and user-friendly workflow editing and managing tools which facilitate the manipulation of distributed data; the registration, discovery and use of grid-enabled statistical and data mining programs; their execution in the grid environment; and provenance tracking. The software is now freely available at SourceForge.net under the Apache License V2. The present work illustrates the use of the DataMiningGrid© system to analyze nine regional medical databases in Slovenia.

Keywords— grid, distributed, database, data mining, medicine
I. INTRODUCTION Many medical studies are more relevant if conducted for a greater geographic area. Facilitating on-the-fly analyses of distributed medical data and their actual transformation into information and knowledge may help improve medical procedures, treatments or prevent illnesses. Statistical and data mining programs have become the de facto technology to address the arising medical data analysis and interpretation tasks. Because of the complexity and rising demands for distributed and heterogeneous resources, modern applications are increasingly operating in distrib-
uted computing environments over widely dispersed geographic locations. Facilitating secure and privacy-preserving data analysis in such environments requires novel techniques and systems. Grid technology [1] has recently emerged to address the need for efficient and effective exploitation of heterogeneous and geographically distributed resources, such as large and distributed data sets, open source or proprietary programs for data analysis, massive storage devices and high-performance computers. A grid computing architecture facilitates the distribution of process execution and resource sharing across computational resources that are geographically widely dispersed. A de facto standard framework for building grid environments is the Open Grid Services Architecture (OGSA) [1] and the corresponding Web Services Resource Framework (WSRF). The Globus Toolkit version 4 (GT4) [2] is a fully WSRF-compliant grid middleware which addresses the need for secure, flexible, interoperable and seamless use of grid resources. The DataMiningGrid© system has recently been designed [3] and developed in order to meet the requirements of modern and distributed data mining scenarios. Based on the Globus Toolkit and other open technology and standards, the DataMiningGrid system provides tools and services facilitating the grid-enabling of various applications (including data mining and statistical applications) without major intervention on the application side. In fact, DataMiningGrid© is a set of grid services and workflow editing and managing tools which facilitate the manipulation of distributed data; the registration, discovery and use of statistical and data mining programs; their execution in the grid environment; and provenance tracking.
The goal of this particular study was to address the requirements of a community of nutritionists in Slovenia by helping them perform on-the-fly analysis of geographically distributed medical databases while paying special attention to security and privacy issues. The purpose of the nutritionist study was to investigate the incidence of endemic goitre and supply with iodine in Slovenian children entering high school.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 166–169, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Analyzing Distributed Medical Databases on DataMiningGrid©
II. METHODOLOGY

The community of nutritionists was led by the Pediatric Clinic of the University Medical Centre in Ljubljana. They defined the following scientific objectives of the study:
1. to study the prevalence of insufficient iodine supply, which is manifested as goitre, among children entering high school in 9 regions of Slovenia,
2. to find out in which regions of Slovenia the prevalence of insufficient iodine supply, manifested as goitre, is especially high,
3. to determine whether there is a significant difference in nutritional habits between the children with deficient iodine supply and their peers with adequate iodine supply from the control group from the same region of Slovenia, and
4. to compare the nutritional habits of Slovenian children entering high school with recommended Central European reference values.

Fig. 1: Data collection methodology (flowchart: regular systematic medical examination before entering high school, conducted in nine regions of Slovenia; thyroid ultrasound, urinary iodine, TSH, thyroid hormones, thyroid autoantibodies; anthropometrical measurements, nutritional diary and nutritional questionnaire; evaluation of nutrition and comparison to nutritional recommendations; nutritional counselling; goitre with lowered urinary iodine concentration treated with potassium iodide)

Approximately four thousand children were recruited in nine Slovenian regions and the data were collected in regional (relational) databases. See Fig. 1 for an illustration of the procedure. The data contained, for example, anthropometric parameters, age, sex, height, weight, residential area and residential environment, and clinical estimation of the thyroid gland size according to the criteria of the World Health Organization (WHO, UNICEF, ICCIDD). Since the data reside in several regional hospitals and the analysis of the data is to be conducted at the University Children’s Hospital in Ljubljana (University Medical Centre
in Ljubljana), it was decided to use the DataMiningGrid© system in parallel to their existing system. The collected data were to be analyzed by statistical and data mining approaches. In this respect, the study was not concerned with massive data and a high-performance computing approach was not needed; nevertheless, the databases existed in several schemas and were geographically distributed. According to the original procedure, data were collected in nine regions of Slovenia in parallel. The data were then physically transferred on a weekly basis to the University Medical Centre in Ljubljana for analysis: the regional researchers brought their data on a CD or a memory stick. Technologically speaking, it would have been much easier to collect the information via the Internet, but this was considered insecure. Development of Web applications for secure upload of data would involve the use of security standards; nevertheless, in many medical scenarios the data should actually stay where they were collected, i.e. there is an important requirement that the data should not move within the network. It is only acceptable to move summarized data or information. Key concerns of the Ethical Committee of the Medical Faculty at the University of Ljubljana for the study "Endemic goitre and supply with iodine in Slovenian children entering high school" were the following: (a) personal data has to be de-identified for analysis, (b) identification data has to be stored in a separate database, independently accessible and secure, and (c) national and international law on ethical issues has to be followed, wherever applicable. Based on this study, we developed use cases which are typical for the medical domain and derived technical and non-technical requirements that were then used in the design and development process for the overall DataMiningGrid© system. We also designed and developed nine regional relational databases based on MySQL technology. These databases were gradually filled with data, while the DataMiningGrid system was used to dynamically access and query these databases and further perform data analysis.
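The Ethical Committee's requirements (a) and (b) amount to a two-store design: de-identified analysis records keyed by a study pseudonym, with identifying data held in a separate, independently secured database. A minimal sketch follows, with sqlite3 standing in for the regional MySQL databases; all table and column names are hypothetical, not the study's actual schema.

```python
# Sketch of requirements (a)+(b): the research store holds only de-identified
# exam records; identifying data lives in a separate store, linked by a study
# pseudonym. sqlite3 stands in for MySQL; the schema is hypothetical.
import sqlite3
import uuid

identity_db = sqlite3.connect(":memory:")   # independently secured in practice
research_db = sqlite3.connect(":memory:")

identity_db.execute("CREATE TABLE identity (pseudonym TEXT PRIMARY KEY, name TEXT)")
research_db.execute("""CREATE TABLE exam (
    pseudonym TEXT PRIMARY KEY,   -- no direct identifiers in this store
    age INTEGER, sex TEXT, height_cm REAL, weight_kg REAL,
    thyroid_grade INTEGER         -- WHO goitre grading
)""")

def record_exam(name, age, sex, height_cm, weight_kg, grade):
    pid = uuid.uuid4().hex        # study pseudonym links the two stores
    identity_db.execute("INSERT INTO identity VALUES (?, ?)", (pid, name))
    research_db.execute("INSERT INTO exam VALUES (?, ?, ?, ?, ?, ?)",
                        (pid, age, sex, height_cm, weight_kg, grade))
    return pid

pid = record_exam("A. Novak", 15, "F", 164.0, 52.5, 1)
row = research_db.execute("SELECT * FROM exam WHERE pseudonym=?", (pid,)).fetchone()
```

Analyses run only against the research store; re-identification requires access to the separately protected identity store.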
These databases were gradually filled in with data, while the DataMiningGrid system was used to dynamically access and query these databases and further perform data analysis. III. THE DATAMININGGRID© SYSTEM The DataMiningGrid© system was designed and developed to accommodate data mining and statistical applications from a wide range of platforms, technologies, application domains and sectors in contrast to other grid-based systems, such as Weka4WS [4] for example, which is restricted to Weka-only [5] data mining applications. It is designed around the principles of Service Oriented Architectures [3] and implements many independent grid services, each capable of carrying out a set of predefined tasks. The integration of all these services into one system with a
Fig. 2: DataMiningGrid workflow for the medical study, based on the Triana Workflow Editor and Manager

variety of end-user clients, results in a highly modular, reusable, interoperable, scalable and maintainable system. The DataMiningGrid system provides a flexible user interface based on the Triana Workflow Editor and Manager (see Fig. 2) [6]. The workflow client is used to manipulate data resources, select applications and orchestrate all the grid services needed to execute these applications. To use the DataMiningGrid system, an end-user must have the Triana workflow editor installed and a valid DataMiningGrid certificate to establish secure communication. The implemented data grid services facilitate the integration of distributed databases, 'on-the-fly' data formatting to support data mining operations, performing data assays or summaries of accessed data sets, and the extension of Data Services to provide additional data preparation and transformation operations. In addition, remote file browsing and transfer utilities provide easy user access to file-based applications and are implemented using the Java CoG Kit. An important feature of the system is that it allows users to analyze their data without the need to install grid middleware or any software at the location where their data resides. Many users will have data sets that they wish to analyze which are not already exposed as data resources on the DataMiningGrid. The DataMiningGrid is based on ex-
isting open technology such as Globus Toolkit 4 [1,2], Triana [6], OGSA-DAI [7,8], and GridBus [9].

IV. DATA SIDE CLIENT – JOD-INTEGRATION CLIENT

A special JodIntegration client was constructed using the OGSA-DAI WSRF 2.2 Application Programming Interface [7,8]. It accesses data from multiple (two) Jod medical databases, prepares the data for cross-validation and converts it into the Weka Arff format. OGSA-DAI clients such as this one have been described in more detail elsewhere [7,8]. Here we give a brief overview of the developed client:
1. The user defines an SQL query to be executed against multiple Jod medical databases that reside in Slovenia.
2. This query is executed against one Jod database in order to retrieve the JDBC metadata describing the data returned by the query.
3. A temporary database table is created in a MySQL database residing at the University of Ulster. The columns of this table correspond to the columns defined by the metadata returned in the previous step.
4. The Jod medical databases in Slovenia are queried in turn. The results of these queries are each delivered to a
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Analyzing Distributed Medical Databases on DataMiningGrid©
169
separate data stream, which will be received by the data resource in Ulster. 5. The data resource in Ulster receives the data streams from Slovenia, and loads this data into the temporary database table that was previously created for this purpose. 6. The data in the temporary table is divided into test and training data sets for cross validation and converted to Weka Arff format. 7. The temporary table is dropped from the database in Ulster.
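The seven steps above can be mimicked in miniature, with sqlite3 standing in for the distributed MySQL databases and the OGSA-DAI streaming machinery. The database contents, attribute names and the naive train/test split below are all invented for illustration.

```python
# Miniature of the JodIntegration client's steps: query two "regional"
# databases, stream the rows into one temporary table, split them for
# cross-validation and emit Weka ARFF text. All data/names are invented.
import sqlite3

regions = []
for vals in ([(15, 1), (16, 0)], [(15, 0), (14, 2)]):    # two regional DBs
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE exam (age INTEGER, thyroid_grade INTEGER)")
    db.executemany("INSERT INTO exam VALUES (?, ?)", vals)
    regions.append(db)

merged = sqlite3.connect(":memory:")                     # step 3: temp table
merged.execute("CREATE TABLE tmp (age INTEGER, thyroid_grade INTEGER)")
for db in regions:                                       # steps 4-5: stream rows in
    merged.executemany("INSERT INTO tmp VALUES (?, ?)",
                       db.execute("SELECT age, thyroid_grade FROM exam"))

rows = merged.execute("SELECT age, thyroid_grade FROM tmp").fetchall()
train, test = rows[::2], rows[1::2]                      # step 6: naive split

def to_arff(rows):                                       # minimal ARFF writer
    head = ("@relation jod\n@attribute age numeric\n"
            "@attribute thyroid_grade numeric\n@data\n")
    return head + "\n".join(f"{a},{g}" for a, g in rows)

arff_train = to_arff(train)
merged.execute("DROP TABLE tmp")                         # step 7: drop temp table
```

In the real system the temporary table lives in a remote MySQL database and the ARFF files are written on the user's machine for subsequent GridFTP transfer.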
As a result of executing this client, a set of training and test data sets is written to the user's machine, i.e. the machine where the OGSA-DAI client or the corresponding Triana unit was executed, ready for data mining. The files need to be transferred to a grid node with a functioning GridFTP [2] server before additional DataMiningGrid resources can process them. From this point on, end-users continue by executing data mining applications on the generated data and analysing the results.

V. CONCLUSIONS

The DataMiningGrid© system and its use in the medical domain were presented. Key features of DataMiningGrid© include flexibility, extensibility, scalability, efficiency, conceptual simplicity and ease of use. The system has been developed and evaluated on the basis of a diverse set of use cases from different sectors of science and technology. The presented grid-based approach may be expected to open new ways of performing on-the-fly analysis of medical data from hospitals throughout Europe. In this study, the DataMiningGrid was used to integrate data from geographically distributed relational databases and then to analyse the collected data using powerful grid computing facilities. Other scientific challenges tackled by the DataMiningGrid project include research into fast distributed text classification, distributed content retrieval based on similarity analysis, automatic construction of ontologies, and intelligent data-mining-based grid monitoring. Ongoing research also addresses the synergy between bioinformatics and medical informatics; this critical new development has been highlighted in a recent white paper by the European Commission entitled "Towards the Synergy between Research in Medical Informatics and Bioinformatics". The DataMiningGrid software is freely available under the Apache License 2.0 at SourceForge.net. It is proposed that grid technology be used as the basis for the development of Medical Data Mining Grids (MEDIGRIDs).

ACKNOWLEDGMENT

We acknowledge the cooperation of all partners and collaborators in the DataMiningGrid project. This work was supported by the European Commission FP6 grant DataMiningGrid, Contract No. 004475.

REFERENCES
1. Open Grid Services Architecture (OGSA), http://www.globus.org/ogsa/
2. Foster I (2005) Globus Toolkit Version 4: Software for Service-Oriented Systems. In: Jin H, Reed D, Jiang W (eds) NPC 2005, LNCS 3779, pp 2-13
3. Stankovski V, May M, Franke J, Schuster A, McCourt D, Dubitzky W (2004) A Service-Centric Perspective for Data Mining in Complex Problem Solving Environments. In: Arabnia HR, Ni J (eds) Proc of Int'l Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'04), II, pp 780-787
4. GridLab, Weka4WS at http://grid.deis.unical.it/weka4ws/
5. Witten IH, Frank E (2005) Data Mining: Practical machine learning tools and techniques, 2nd edn. Morgan Kaufmann, San Francisco
6. Triana at http://www.trianacode.org/
7. Antonioletti M, Atkinson M, Baxter R, Borley A, Chue Hong NP, Collins B et al. (2005) The design and implementation of Grid database services in OGSA-DAI. Concurrency and Computation: Practice and Experience 17(2-4):357-376
8. OGSA-DAI project Web site at www.ogsadai.org.uk/ under documentation
9. GridBus Service Broker, a grid scheduler for computational and data grids, www.gridbus.org/broker/

Author: Vlado Stankovski
Institute: University of Ljubljana, FGG-KGI
Street: Jamova cesta 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Availability Humanization - The Semantic Model in Occupational Health

M. Molan1 and G. Molan2

1 University Clinical Centre, Institute for Occupational Health, Poljanski nasip 58, SI-1000 Ljubljana, Slovenia
2 HERMES SoftLab Research Group, Litijska 51, SI-1000 Ljubljana, Slovenia
Abstract— To connect the workplace with the worker's perception of work, a special availability humanization expert model (AH-model) was developed. The basis for the model development was expert knowledge from occupational health, organizational psychology, human factors and ergonomics. The AH-model has been developed to support estimation of workload, overload, fatigue and burnout at workplaces with dominant psychical loads. With implementations of the AH-model, evaluations of implemented humanization measures become possible. The heart of the model is self-perception of actual availability; for this purpose a tool for self-estimation of actual availability has been developed. The tool and its application in the model have been validated in different working environments in Slovenia on a sample of 3000 workers. The model development started 14 years ago, and the model has since been improved with new knowledge and experience from its implementation. The model is formalized with graph theory, and the database is constructed using ORM technology. The AH-model, together with the QAA tool for actual availability estimation, is a comprehensive method for estimating the implementation of humanization measures.
For this purpose, particular expert models describing the relation between work and worker were defined, both to determine the effects of work on the worker and to support the evaluation of humanization interventions at the workplace. The elements of the relation "work" to "worker" are varied and scattered, and form a huge mass of unorganised data. The process underlying the relation "work" to "worker" is a very complex one. An expert model reflecting the activities in this relation should therefore be a useful tool to reduce the burden on human factors specialists, ergonomists and occupational health personnel when evaluating their interventions on the relation "work" to "worker".

A. The research goal

The main goal of our work was the development of an expert model as a tool to support the evaluation process in the relation "work" to "worker". The expert model should describe the process, facilitate the evaluation of implemented humanization measures, and support decision making in the evaluation of humanization interventions.
Keywords— human factors, AH-graph, model semantics, AH-semantics, AH-model
I. INTRODUCTION

In occupational health, the mass of unorganised, scattered, non-homogeneous data makes it difficult to estimate the results of interventions in the relation "work" to "worker". This relation is the focus of interest in occupational health. The most important activity of occupational health experts is the reduction of negative impacts of the working environment on the worker; this process of reduction is called humanization. In working environments with objectively measurable workloads and negative impacts on the worker, measuring the results of humanization interventions is trivial. In modern working environments dominated by psychical workloads and service work, measuring humanization interventions demands an alternative approach.
II. METHOD

To achieve the goal, i.e. the development of an expert model for application in occupational health that describes the relation "work" to "worker", the development process passed through the following steps.

A. 1st step: description of the relation "work" to "worker"

According to expert knowledge in occupational health, ergonomics, work psychology and human factors, the relation "work" to "worker" should be described with the following axioms.

B. AH-model axioms

Axiom 1: Work, as the activity performed by the worker, is a composite of the influences on the worker of the working environment, of technology, of the organisational relations at the workplace, and of the available human resources (the worker's abilities).

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 162–165, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Axiom 2: All four groups of impact factors determine the composite workload. The composite workload describes the influences of the impact factors on the worker, manifested in the worker's perception of them.
Axiom 3: Workload is the influence on the worker in the relation "work" to "worker". This relation is described by the worker's perception of work in the form of workload; according to this axiom, workload is the worker's perception of the influence of the impact factors.
Axiom 4: Workload determines the worker's availability. As workload increases, the worker's availability decreases; a higher level of availability reflects a lower level of workload. The worker's perception of his/her availability is his/her self-estimation of the availability to deal with the workload and perform the task.
Axiom 5: The worker's perception of his/her availability determines his/her performance. According to this axiom, worker performance is directly determined by worker availability.
Axiom 6: The worker's perception of his/her availability determines his/her perception of health. Health, defined according to the WHO definition, is according to this axiom directly determined by worker availability.
Axiom 7: Worker performance is manifested in the results of work, which have economic value. According to this axiom, it is possible to evaluate worker performance economically; a decrease in performance means lower economic results and thus costs.
Axiom 8: A decrease in worker health should be economically evaluated. According to this axiom, a decrease in worker health means costs.
Axiom 9: The main focus of all activities is to keep the workload in the relation "work" to "worker" within limits that ensure an adequate level of worker availability.
Axiom 10: An adequate level of worker availability should, according to this axiom, be maintained by shaping the workload through interventions in the working environment, in technology, in organization and in human resources, in the form of humanization interventions.
Axiom 12: Humanization interventions should be economically evaluated; humanization interventions cause costs.
C. Description of the axioms

According to the axioms, all costs should be compared: the costs of decreased performance and of decreased health are compared with the costs of humanization interventions. The balance of these compared costs is the basis of all activities in the relation "work" to "worker".

D. 2nd step: formalisation of the relation "work" to "worker"

The elements of the relation "work" to "worker" are presented as a graph: elements of the space "work to worker" are represented by vertices. Our graph is 4-partite, with the following vertices.

E. Description of graph partitions

The graph partitions are:
1st partition: work (E, T, O, HR)
2nd partition: worker (PS, PF, GF, MO, VI, MD, ST)
3rd partition: output (P, H)
4th partition: humanization (HM)

W - work vertices:
• E - ecology
• T - technology
• O - organization
• HR - human resources

AA - worker vertices (actual availability, the worker's self-perception):
• PS - psychical fatigue
• PF - physical fatigue
• GF - general fatigue
• MO - motivation
• VI - vigilance
• MD - mood
• ST - perceived stress

Output vertices:
• P - performance
• H - health

Humanization vertex:
• HM - humanization measures

Fig. 1 AH-Graph, a 4-partite graph
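The 4-partite structure of Fig. 1 can be written down as plain data. The partitions and vertex labels below follow the paper; the edge set is an illustrative subset of the relations stated in the axioms (work influences availability, availability determines the outputs, humanization acts on work), not the complete published graph.

```python
# The AH-graph as plain data: four partitions and cross-partition edges.
PARTITIONS = {
    "work":         ["E", "T", "O", "HR"],
    "worker":       ["PS", "PF", "GF", "MO", "VI", "MD", "ST"],
    "output":       ["P", "H"],
    "humanization": ["HM"],
}

# Directed edges between partitions (illustrative subset of the axioms)
EDGES = (
    [(w, a) for w in PARTITIONS["work"] for a in PARTITIONS["worker"]] +
    [(a, o) for a in PARTITIONS["worker"] for o in PARTITIONS["output"]] +
    [("HM", w) for w in PARTITIONS["work"]]
)

def partition_of(vertex):
    # look up which partition a vertex belongs to
    return next(name for name, vs in PARTITIONS.items() if vertex in vs)

# k-partite property: no edge joins two vertices of the same partition
assert all(partition_of(u) != partition_of(v) for u, v in EDGES)
```
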
F. 3rd step: determination of the syntax

All elements of the space "work" to "worker" are integrated in the model on the basis of the graph. According to its key elements, we called the model the AH (availability humanization) model. On the basis of the AH-graph, a conceptual data model was developed using ORM technology, and the concept was realised in a relational database. For research purposes, a prototype SQL database was developed, and its functioning was verified on real data. For AA data collection, a special tool, the QAA (Questionnaire for Actual Availability), has been developed. The questionnaire has been validated on 3000 Slovenian workers in different working environments, and the database function was validated on the collected data.

G. 4th step: introduction of the model semantics

All elements and connections of the AH-model have interpretations, which determine its semantics. A special interpretation is focussed on the semantics of the humanization interventions. According to the expert knowledge of human factors experts, the 4 groups of elements integrated in the AH-model (presented by the 4 partitions of the AH-graph) are related.

H. AH-model rules

All these relations and impacts are interpreted in the AH-semantics. On the basis of the functioning database for the AH-model, analyses of real working environments are possible.
Rule 1: The work components (E, T, O, HR) influence the worker and his/her level of actual availability (PS, PF, GF, MO, VI, MD, ST).
Rule 2: Worker performance and worker health depend on the worker's actual availability. A decrease in worker performance or in worker health increases the costs of the system.
Rule 3: The goal of the management of each system is reduction of costs and increase of profit.
Rule 4: Of the four groups of elements integrated in the AH-model, those presented in the 4th partition are humanization measures.
Rule 5: Humanization measures are interventions at work; they are investments and thus costs for the system.
Rule 6: The costs of humanization interventions are compared with the costs of decreased worker performance and decreased worker health.
Rule 7: The goal of all activities is the humanization of work, to reduce workload and to increase the perceived level of actual availability.

I. AH-model interventions

The AH-semantics offers possibilities to describe interventions at the workplace.
Intervention 1: Investment in human resources in the form of individual therapeutic support.
Intervention 2: Organization of work with a precise definition of the organizational structure.
Intervention 3: Individualized education and training for the particular job.
Intervention 4: Adaptation of the working environment by introducing good ergonomic practice.
Intervention 5: Stimulation of team work and social support in the working groups.
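Rules 3, 5 and 6 above reduce to a cost comparison that can be encoded directly. The sketch below is illustrative only: the intervention list mirrors the five interventions, but every cost figure and the function names are invented, not values from the AH-model database.

```python
# Rules 5-6: each humanization intervention is an investment, justified
# when it costs less than the performance and health decreases (Rule 2
# costs) it is expected to avoid. All figures are invented.
interventions = [
    # (name, cost, avoided performance-decrease cost, avoided health cost)
    ("individual therapeutic support",  9_000, 6_000, 7_000),
    ("organizational restructuring",   15_000, 9_000, 2_000),
    ("individualized training",         4_000, 7_000, 1_000),
    ("ergonomic workplace adaptation",  6_000, 3_000, 8_000),
    ("team work and social support",    3_000, 2_000, 2_000),
]

def net_benefit(cost, perf, health):
    # Rule 6: balance of the compared costs
    return perf + health - cost

# Rule 3: management prefers the interventions with the best cost balance
justified = sorted(
    ((name, net_benefit(c, p, h)) for name, c, p, h in interventions
     if net_benefit(c, p, h) > 0),
    key=lambda x: -x[1])
print(justified)
```
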
III. CONCLUSION

To estimate interventions at work, in human resource activities and in the working environment, a model describing the relation "work" to "worker" was determined. The AH-model development process passed through several steps. The key elements on which the development process was grounded were:

• human factors expert knowledge
• introduction of graph theory
• formal determination of the AH-model syntax
• interpretation of the relations "work" to "worker" with semantics

The output is the expert AH-model, usable in human resource management activities and in occupational health to support the evaluation of humanization interventions at the workplace. The developed AH-model is an expert model for implementation in occupational health and in human factors activities. Together with the Questionnaire of Actual Availability (QAA) tool, which was developed as part of the AH-model, it is an expert model for application in occupational health. In preventive medicine, to which occupational health belongs, there is a huge mass of unorganized, scattered data with great variability. The worker, given his/her role in occupational health activities, should be a healthy person, and a healthy person has less interest than an ill one in participating in data collection. Due to these facts, the development of expert models to support the evaluation process and decision making in human factors and occupational health is of great interest.
The AH-model focuses on the key elements of the relation "work" to "worker". It focuses on the worker's self-perception of his/her actual availability, on which the whole relation "work" to "worker", and humanization itself, depend. The most important mission of humanization interventions is to increase human actual availability at work. Availability and humanization have to be tightly integrated and connected; the main interest of all interventions at work is described by the relation between availability and humanization. The output is humanized, worker-friendly work. The developed AH-model presents an integration of human factors expert knowledge and the knowledge of information experts.

REFERENCES
1. Molan M, Molan G (1999) Expert model for prediction of human performance. In: Axelsson J, Bergman B, Eklund J (eds) Proceedings of the international conference on TQM and human factors - towards successful integration, Vol 2, 15-17 June 1999, Linkoeping
2. Microsoft SQL Server 2005 Express Edition, http://www.microsoft.com/sql/editions/express/default.mspx
3. Skof M, Molan G (1991) Human potential capacity, real capacity, and performance: models and connections. In: Use of probabilistic safety assessment for operational safety. IAEA, Vienna, pp 766-770
4. Molan M, Molan G (2004) Availability assessment for performance prediction in the real working situation. In: Proceedings of the IASTED International Conference on Applied Simulation and Modelling, 28-30 June 2004, Rhodes, Greece. IASTED Press, Anaheim/Calgary/Zurich, pp 560-565
5. ORM, Object Role Modeling, the official site for conceptual data modeling, http://www.orm.net/
6. Halpin TA (2001) Information Modeling and Relational Databases. Morgan Kaufmann, San Francisco
7. Skof M, Molan G (1992) Human availability and system's performance in systems of high technologies. In: Kaltneker Z (ed) Organization and information systems 2. Moderna organizacija, Kranj, pp 237-248
8. Molan M, Molan G (2001) Model povezave razpolozljivosti in delovnega okolja (Model for the connection of actual availability and workload). Sanitas et Labor 2(1):27-41

Author: Dr. Marija Molan
Institute: University Clinical Centre, Institute for Occupational Health
Street: Poljanski nasip 58
City: Ljubljana
Country: Slovenia
Email: [email protected]
GIFT: a tool for generating free text reports from encoded data

Silvia Panzarasa1, Silvana Quaglini2, Mauro Pessina3, Anna Cavallini4, Giuseppe Micieli5

1 Consorzio di Bioingegneria e Informatica Medica, Pavia, 2 Dept. of Computer Science and Systems, University of Pavia, 3 TSD Project, Milan, 4 IRCCS Foundation "C. Mondino", Stroke Unit, Pavia, 5 IRCCS "Humanitas", Rozzano (MI), Italy
Abstract— The benefits of the electronic patient record over the traditional paper-based clinical chart have been widely illustrated in the medical informatics literature. However, hospital information systems are often used only for administrative purposes (admission/discharge), and most clinical information is still paper-based. Healthcare operators are reluctant to shift from paper to computer, for many reasons that will be discussed in this paper. One of these reasons, which is also the focus of our work, is that when formulating the data model, physicians are often pushed (by computer scientists) to reduce free text data. In other words, computerisation of the medical record is often offered as an opportunity to encode information as much as possible, with the promise that this will facilitate later statistics and data sharing. In some cases, however, physicians perceive this encoding as a constraint and a limitation. In this paper we discuss this issue and illustrate a solution devised for a Stroke Unit.

Keywords— Electronic Patient Record, Clinical Practice Guideline, Encoding, Natural Language Generation.
I. INTRODUCTION

The Electronic Patient Record (EPR) is expected not only to store patients' care history, but also to support clinical and administrative work and communication with multiple sources of medical information and knowledge. Examples of these sources are laboratory and radiology information systems (inside the same hospital), other EPRs (from different hospitals), and knowledge-based systems, i.e. software tools supporting physicians in the daily management of the patients' careflow. Moreover, the EPR is the basis for exchanging opinions with colleagues and for transferring data to general practitioners (GPs) once the patient is discharged. The need to share data among such heterogeneous frameworks raises several challenges for the design of the EPR data model and user interface. For clinical routine and communication with "human agents", users naturally prefer systems based on free text data entry, this being the solution closest to the usual paper-based documents and to face-to-face communication. Such systems give the user the flexibility and freedom to represent the patient's condition in the desired order and granularity.

Conversely, their narrative structure is hardly accessible to "software agents", e.g. computer systems for decision making and statistical analysis, for which structured information is preferred. A great opportunity to bridge this gap is offered by emerging techniques of natural language processing [1]. Moreover, combining progress in speech recognition with the availability of portable, mobile, and wireless devices will offer the opportunity of entering information anywhere and at any moment [2,3,4]. However, according to Baud et al. [5], "automatic encoding of procedures and diagnoses by computer directly from free text is not yet a solved problem, without human intervention or validation". In particular, these techniques are not powerful enough to extract standardized and structured data from free text in situations in which knowledge about the presence or absence of a certain finding or disease is critical to the patient's life. Opposite to the natural language processing approach, there is the possibility of encoding all the information entered in the EPR. Proposed standards are SNOMED (www.snomed.org) and LOINC (www.regenstrief.org/medinformatics/loinc) for terminologies, and HL7 (www.hl7.org) and OpenEHR (www.openehr.org) for data exchange; in recent years an intention has emerged to harmonise these proposed standards. Despite clear benefits for data sharing with software agents, the encoding approach is still viewed as time-consuming and restrictive in daily clinical activity. There are three main problems. First, the richness and variety of medical concepts are currently a major barrier to formulating a widely accepted, standardized clinical vocabulary suitable for encoding patient-specific information. Second, even where standards exist, it is not easy to find the correct term within them: for example, encoding pathologies through the International Classification of Diseases (ICD) requires navigating a complex hierarchy.

Third, physicians need to produce textual information, mainly to be printed for communication purposes, for example the discharge letter. Thus, if encoding is adopted, Natural Language Generation (NLG) techniques will probably also be necessary [6,7,8,9]. The goal of our work is to reach a compromise between the need for encoding and the need for expressiveness, i.e. to obtain an EPR system which is able both to dialogue with software agents, such as decision support systems, and to produce free text reports for communication, administrative
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 152–156, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
and legal purposes. We illustrate our solution for a Stroke Unit, where a computerised version of the SPREAD clinical practice guideline (CPG) for stroke management [10] was implemented. This of course requires integration with the EPR, in order both to generate suggestions according to the available patient data and to verify the physicians' adherence to them.

II. THE CLINICAL APPLICATION: STAGE

STAGE (Stroke Active Guideline Evaluation) is an Italian project evaluating the impact, in terms of physicians' compliance and patient outcomes, of the application of the clinical guideline edited in March 2005 for stroke prevention and management [10]. The project involves twenty Neurological Units in Italy which are using the same EPR, for six months without and for a further six months with an integrated decision-support module. The idea is to evaluate physicians' compliance with the guideline before and after providing the EPR with functionalities such as automatic generation of guideline suggestions, reminders and patient-status alerts. To this end, the clinical chart has been integrated with a Careflow Management System (CfMS) developed on the basis of the SPREAD guideline [11,12]. The aim of data entry is consequently twofold: to store patient data and to provide enough information both to evaluate adherence to the guideline (during the whole project period) and to generate decision support (during the second project phase). Before starting the project we analysed the existing EPR and its user interface, with the aim of verifying the presence of the data necessary to generate guideline suggestions. The data model has thus been updated both by adding information and by changing the nature of the existing information, mainly shifting from free text to encoding. The next paragraphs focus on this update, and on the efforts made to meet the users' requirements. The stroke unit EPR was WINCARE®, implemented by TSD-Projects.

III. WORKING ON THE EXISTING EPR

In the previously implemented EPR, data entry was mainly limited to free-text forms, as shown in the upper part of Figure 1, and encoding concerned only a few specific items. The EPR's purpose was therefore just to limit paper-based communication of data and to improve retrieval of clinical charts. The advantage was that information was ready to be printed as-it-was for summaries, discharge letters, etc.
Fig. 1 - Shifting from free-text to encoded data entry: the picture refers to past and recent clinical histories
It is clear that no integration with the guideline management system was possible, because of the almost complete lack of structured data. The information stored in this EPR did not allow a proper interpretation of the "rules" embedded in the SPREAD guideline: for example, still referring to the upper part of Figure 1, automatic evaluation of a patient's eligibility for thrombolytic treatment was impossible, because it requires very precise information about the clinical history.

A. Encoding information

To add the decision-support functionalities, re-engineering of both the structure and the interface of the EPR was performed. A thorough examination of the guideline allowed us to determine the minimum data set required to implement all the recommendations. Some of them are crucial, for example the administration of r-tPA, the above-mentioned thrombolytic drug. This treatment can save the patient's life if it is administered within a few hours of the onset of ischemic stroke symptoms, but its management requires both the neurologists' skill and very accurate analysis and interpretation of the patient's history, because it has several contra-indications, the major risk being fatal bleeding. Consequently, we must be able to capture, or to exclude with certainty, these contra-indications from the EPR. For this reason (there are many similar situations in the guideline), and to avoid misinterpretation, most of the data concerning the patient's history have been encoded. When possible, existing standard classifications, such as ICD9-CM for pathologies, have been used. As shown in the lower part of Figure 1, the result is a strongly structured EPR. Associated with each encoded item, the user interface allows entering free text notes, in order to
input details that could hardly be encoded but are not important for decision support purposes. With the new EPR version, the interpretation of clinical guideline rules is easier and safer: for example, contra-indications to thrombolysis are retrieved using specific ICD9 codes, and an automatic calculation can establish whether the therapeutic time window is still open. On the other hand, some problems were highlighted during the pilot utilisation phase: 1) it was difficult to navigate ICD9 efficiently, despite the usual facilitations for keyword- and code-based search, and 2) information was no longer immediately available for producing nicely printed reports. Concerning the first problem, about 50 past clinical charts were analysed to find the most common pathologies reported in the history of patients and their relatives. In this way an initial list of "frequent items" was created. Then, as new data are entered, this list is continuously updated by an automatic algorithm and diversified according to sex and age. The list is shown as the first choice, and ICD9 is accessed only if the required pathology is not in the list. The next paragraph illustrates the solution to the second problem.

IV. AUTOMATIC GENERATION OF REPORTS

To meet physicians' need to quickly produce printed reports in natural language, we developed a module for the Generation of Inferable Free Text (GIFT). This module has been integrated within Wincare® and is activated whenever the user completes a form and clicks on the button "Generate Report". The idea was to create a generic module able to produce textual reports starting from a set of encoded data in any Wincare® form (and it should be easy to integrate GIFT into other EPR software).
Code descriptions are combined with any associated notes, according to the underlying Italian grammatical rules, taking into account the patient's and relatives' gender to generate the correct words and suitable introduction and conjunction sentences. Once generated, the report may be edited for adjustments. Instead of creating an ad-hoc report for every form, we classified Wincare forms into two groups, linear and grid forms, because their different structures require different management of report creation. An example of a linear form is the Recent History, which contains a fixed-length sequence of encoded data items. An example of a grid form is the Past History, in which the number of encoded items differs from patient to patient. We modeled all the information needed for text generation within a set of relational tables (the GIFT DB). The most important one is Coded-Fields, containing:
• ID: a unique id for the record;
• FORM-NAME: the name of the Wincare form (for example RECENT_HISTORY);
• TAG: the name of the tag associated with the Wincare field, needed for retrieving the field value;
• LABEL: the label of the field;
• TEXT: field-related text, which will be part of the generated report;
• ORDER: the concatenation order of the field in the report;
• TYPE: a pointer to the table TIPOLOGY, which manages punctuation marks and/or units of measure (ANTE and POST);
• INPUT: refers to the table TEXT_M_F, which manages word generation according to the patient's gender (remember that Italian is very sensitive to gender, and gender-related endings are many and various);
• SENTENCE: refers to the table SENTENCES. The presence of the character * at the beginning of the sentence in TEXT indicates that the sentence must be itemized in the report.

According to the data shown in Figure 2, and taking into account the data inserted by the user in the recent history form, the report generated by GIFT will be "…The systolic blood pressure measured is: 140 mmHg. Patient's medical history revealed motor deficit and language disturbances. She has pace maker....". For the tag SYS_PRESSURE, using the information stored in TYPE, GIFT simply concatenates the TEXT with the value inserted by the user (e.g. 140), taking into consideration the punctuation mark (ANTE field ":") and the unit of measure (POST field "mmHg"). As said before, for the tags MOTOR and LANGUAGE, GIFT replaces the initial character * with "Patient's medical history revealed", taken from Sentences, and then attaches the TEXT. Finally, for PACE_MAKER, GIFT uses the information stored in Text_M_F to put the correct subject in the sentence according to the patient's gender, and inserts only the text "has pace maker", because the value selected by the user was YES. In this case the table Sensible_Values has been
Fig. 2 – Attribute values in some rows of the GIFT DB.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
GIFT: a tool for generating free text reports from encoded data
used. This table was created to meet physicians' need for a simple and smooth report. For each TAG in Coded_Fields, GIFT checks whether the value selected by the user belongs to the VALUE column; in this case: a) if the value is ABSENT or UNKNOWN, the corresponding TEXT will not be inserted into the final report (ANTE and POST have null values); b) if the value is YES or PRESENT, the character "+" in the ANTE field means that GIFT will insert only the TEXT, without adding the "yes" value, since the text itself is in affirmative form. GIFT has been integrated into Wincare and the interaction is as follows: a) Wincare calls GIFT passing six parameters (the name of the form; the connection string to the GIFT DB; the patient's gender, "M" or "F"; a reference to the form, in order to retrieve the values inserted by the users; the actions to be done, e.g. "ORDER BY" and/or "GROUP BY"; and the fields on which the actions must be executed); b) GIFT's output parameter is a string containing the produced report, which can be visualized by Wincare through its interface. The behaviour of the software module is the same for linear and grid forms, but for the latter GIFT has to perform additional operations specified by the parameters actions and fields. Table 1 shows the performance of the system for a particular patient's history. Grey font in the "recent patient history" indicates the portion of text that has not been reported in the computer-generated report. Post-hoc analysis showed that in most cases these portions are simply redundancies or unnecessary details. In contrast, the "past patient history" generated by the computer is in general more extensive, mainly because it reports the full ICD9 definitions of pathologies, and not only abbreviations or suggestive symptoms, as physicians often do. This has been considered an improvement, because abbreviations are not standard and are often misunderstood.
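The concatenation rules just described can be sketched in a few lines. The rows, helper tables and function below are hypothetical simplifications (not GIFT's actual code or schema), loosely mirroring Coded-Fields, Sentences, Text_M_F and Sensible_Values for the Figure 2 example.

```python
# Illustrative sketch of GIFT's concatenation logic. Table contents and
# field layout are invented to match the Figure 2 walk-through.

CODED_FIELDS = [  # (TAG, TEXT, ANTE, POST, ORDER)
    ("SYS_PRESSURE", "The systolic blood pressure measured is", ":", "mmHg", 1),
    ("MOTOR", "*motor deficit", "", "", 2),
    ("LANGUAGE", "*language disturbances", "", "", 3),
    ("PACE_MAKER", "has pace maker", "+", "", 4),
]
SENTENCES = {"*": "Patient's medical history revealed"}
TEXT_M_F = {"M": "He", "F": "She"}          # gender-dependent subject
SENSIBLE_VALUES = {"ABSENT", "UNKNOWN"}     # values that suppress the TEXT

def generate_report(values, gender):
    parts, itemized = [], []
    for tag, text, ante, post, _ in sorted(CODED_FIELDS, key=lambda r: r[4]):
        value = values.get(tag)
        if value is None or str(value).upper() in SENSIBLE_VALUES:
            continue                        # rule (a): nothing is reported
        if text.startswith("*"):            # itemized fields share one intro
            itemized.append(text[1:])
        elif ante == "+":                   # rule (b): affirmative text only
            parts.append(f"{TEXT_M_F[gender]} {text}.")
        else:                               # plain concatenation with units
            parts.append(f"{text}{ante} {value} {post}.")
    if itemized:
        parts.insert(1, f"{SENTENCES['*']} " + " and ".join(itemized) + ".")
    return " ".join(parts)

print(generate_report(
    {"SYS_PRESSURE": 140, "MOTOR": "PRESENT", "LANGUAGE": "PRESENT",
     "PACE_MAKER": "YES"}, "F"))
```

With the Figure 2 values the sketch reproduces the example sentence quoted above.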
A preliminary evaluation on 10 patients' reports has been done in a similar way and used as feedback for further refinement. The most appreciated features were the lack of ambiguity and the chronological ordering of patient-related events. Moreover, the GIFT DB ensures great flexibility in report production: in principle, every user could decide on his own report content and format.

V. CONCLUSIONS

This work shows that it is possible to mediate between the needs for encoding and the needs for textual reporting from the EPR. We are aware that NLP is an alternative to our approach, but we want to stress the absolute need, in our application, of retrieving correct and complete information about the patient history and objective examination. In fact, this information is used by the critical suggestions generated by
the decision support system. The project is in its first phase: we plan to collect information about the proportion of "missing" text that needs to be added by hand-editing the automatic report and, in general, about users' satisfaction.

Table 1 Comparison between hand-written and computer-generated reports

Hand-written by physician, RECENT PATIENT HISTORY: This morning, at about 8:00, the patient manifested mouth deviation to the left and dysarthria. The GP was called, found a deficit of the VII inferior right nerve and sent the patient to the emergency room of the OSM. She was then transferred to the stroke unit with suspected cerebral ischaemia.

Computer-generated, RECENT PATIENT HISTORY: On 28/1/2005, at 8:00 am, the patient manifested mouth deviation to the left side and dysarthria. She called her GP and then she was transferred to the stroke unit.

Hand-written by physician, PAST CLINICAL HISTORY: In 1978 TVP, in 1989 inferior AMI. Antiplatelet and antihypertensive therapy. Hypertension detected in 1988, therapy unknown. In the same year, appearance of angina and sensation of missing heart beat. In 2002, she had an event of "mental confusion" associated with dizziness. She underwent cerebral TC which revealed left cerebellum ischaemia.

Computer-generated, PAST CLINICAL HISTORY: Patient was affected: in 1978 by venous thrombosis and embolism; in 1988 by essential hypertension, therapy unknown, and by angina pectoris, with sensation of missing heart beat; in 1989 by acute myocardial infarction. Antiplatelet and antihypertensive therapy. In 2002 by cerebral artery occlusion, with an event of "mental confusion" associated with dizziness. Cerebral TC: left cerebellum ischaemia.
REFERENCES

1. Baud R, Ruch P. The future of natural language processing for biomedical applications. Int J Med Inform. 2002 Dec 4;67(1-3):
2. Rodriguez MD, Favela J, Martinez EA, Munoz MA. Location-aware access to hospital information and services. IEEE Trans Inf Technol Biomed. 2004 Dec;8(4):448-55
3. Bergeron B. Pervasive clinical computing: data acquisition. MedGenMed. 2003 Dec 09;5(4):27
4. Giorgino T, Azzini I, Rognoni C, Quaglini S, Stefanelli M, Gretter R, Falavigna D. Automated spoken dialog system for hypertensive patient home management. International Journal of Medical Informatics, 2004 Apr
5. Baud RH, Weber P, Lovis Ch. Coding in context. PCSE 2001, Brugge
6. Huske-Kraus D. Text generation in clinical medicine: a review. Methods Inf Med. 2003;42(1):51-60
7. Apkon M, Singhaviranon P. Impact of an electronic information system on physician workflow and data collection in the intensive care unit. Intensive Care Med. 2001 Jan;27(1):122-30
8. Lovis CL, Lamb A, Baud R, Rassinoux AM, Fabry P, Geissbuhler A. Clinical documents: attribute-values entity representation, context, page layout and communication. AMIA Annu Symp Proc. 2003:396-400
9. Johnson KB, Cowan J. Clictate: a computer-based documentation tool for guideline-based care. J Med Syst. 2002 Feb;26(1):47-60
10. SPREAD (Stroke Prevention and Educational Awareness Diffusion) "Ictus cerebrale: linee guida italiane di prevenzione e trattamento", March 2003, www.spread.it
Silvia Panzarasa, Silvana Quaglini, Mauro Pessina, Anna Cavallini, Giuseppe Micieli

11. Panzarasa S, Maddè S, Quaglini S, Pistarini C, Stefanelli M. Evidence-based careflow management systems. Journal of Biomedical Informatics, 35:123-139, 2002
12. Micieli G, Cavallini A, Quaglini S. Guideline compliance improves stroke outcome: a preliminary study in 4 districts in the Italian region of Lombardia. Stroke, 33:1341-1347, 2002
Author: Silvia Panzarasa
Institute: Consorzio di Bioingegneria ed Informatica Medica (UPIT)
Street: via Ferrata 1
City: Pavia
Country: ITALY
Email: [email protected]
Supporting Factors to Improve the Explanatory Potential of Contrast Set Mining: Analyzing Brain Ischaemia Data

N. Lavrac(1,2), P. Kralj(1), D. Gamberger(3) and A. Krstacic(4)

1 Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
2 University of Nova Gorica, Vipavska 13, 5000 Nova Gorica, Slovenia
3 Rudjer Boskovic Institute, Bijenicka 54, 10000 Zagreb, Croatia
4 University Hospital of Traumatology, Draskoviceva 19, 10000 Zagreb, Croatia
Abstract— The goal of exploratory pattern mining is to find patterns that exhibit yet unknown relationships in data and to provide insightful representations of the detected relationships. This paper explores contrast set mining and an approach to improving its explanatory potential by using so-called supporting factors, which provide additional descriptions of the detected patterns. The proposed methodology is demonstrated on a medical data analysis problem: distinguishing between similar diseases in the analysis of patients suffering from brain ischaemia. Keywords— Exploratory data analysis, contrast set mining, subgroup discovery, supporting factors, brain ischaemia
I. INTRODUCTION

Data analysis in medical applications is characterized by the ambitious goal of extracting potentially new relationships from data, and by providing insightful representations of the detected relationships. Methods for symbolic data analysis are preferred, since highly accurate but non-interpretable classifiers are frequently considered useless for medical practice. The task of descriptive induction is to construct patterns or models describing data properties in a symbolic, human-understandable form. Descriptive induction methods such as subgroup discovery [1], contrast set mining [2] and emerging patterns [3] are specifically designed to extract patterns (in the form of rules) from class-labeled data. Unlike methods for inducing classification models (such as decision tree induction [4] and classification rule learning [5]), the patterns discovered by descriptive induction methods represent individual chunks of knowledge and are appropriate for being interpreted one by one. The descriptive induction task is not concluded when individual rules are discovered. A property of the discovered rules is that they contain only the minimal set of principal characteristics of the target class that distinguish the target class examples (positive examples) from the control set (negative examples). For interpretation and understanding purposes, other properties that support the detected rules are
also relevant. In subgroup discovery these properties are called supporting factors. They are used for better human understanding of the principal factors and for support in the decision making process [6]. A special data mining task dedicated to finding differences between contrasting groups is contrast set mining [2]. In our recent work [7] we have shown the similarity of contrast set mining and subgroup discovery and proposed a method for contrast set mining through subgroup discovery. The focus of this paper is to extend the concept of supporting factors from subgroup discovery to contrast set mining. We present our approach on the problem of discriminating between two groups of ischaemic brain stroke patients: patients with thrombolic stroke and those with embolic stroke. This paper is organized as follows: Section II introduces the brain ischaemia data analysis problem. Section III presents the subgroup discovery approach to contrast set mining, including the results on the brain ischaemia data. Section IV presents the statistical approach to discovering supporting factors in subgroup discovery and its adaptation to contrast set mining, as well as the results and the medical interpretation of the discovered contrast sets from the brain ischaemia data.

II. THE BRAIN ISCHAEMIA DATA ANALYSIS PROBLEM

A stroke occurs when the blood supply to a part of the brain is interrupted, resulting in tissue death and loss of brain function. Thrombi or emboli due to atherosclerosis commonly cause ischaemic arterial obstruction. Atheromas, which underlie most thrombi, may affect any major cerebral artery. Atherothrombotic infarction occurs with atherosclerosis involving selected sites in the extracranial and major intracranial arteries. Cerebral emboli may lodge temporarily or permanently anywhere in the cerebral arterial tree.
They usually come from atheromas (ulcerated atherosclerotic plaques) in extracranial vessels or from thrombi in a damaged heart (from mural thrombi in atrial fibrillation). Atherosclerotic or hypertensive stenosis can also cause a stroke.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 157–161, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Embolic strokes, thrombolic strokes and strokes caused by stenosis of blood vessels are categorized as ischaemic strokes. 80% of all strokes are ischaemic, while the remaining 20% are caused by bleeding [8]. The brain ischaemia database that is the focus of our analysis consists of records of patients who were treated at the Intensive Care Unit of the Department of Neurology, University Hospital Center "Zagreb", Zagreb, Croatia, in the year 2003. In total, 300 patients are included in the database:
• 209 patients with a computed tomography (CT) confirmed diagnosis of brain stroke: 125 with embolic stroke, 80 with thrombolic stroke, and 4 undefined;
• 91 patients who entered the same hospital department with adequate neurological symptoms and disorders, but were diagnosed (based on the outcomes of neurological tests and CT) as patients with transient ischaemic brain attack (TIA, 33 patients), reversible ischaemic neurological deficit (RIND, 12 patients), and severe headache or cervical spine syndrome (46 patients).
Patients are described with 26 descriptors representing anamnestic, physical examination, laboratory test and ECG data, and their diagnosis.

III. CONTRAST SET MINING THROUGH SUBGROUP DISCOVERY
A data mining task devoted to finding differences between groups is contrast set mining (CSM). It was defined by Bay and Pazzani [2] as "finding conjunctions of attributes and values that differ meaningfully across groups". It was later shown that contrast set mining is a special case of a more general rule discovery task [5]. Finding all the patterns that discriminate one group of individuals from all other contrasting groups is not appropriate for human interpretation. Therefore, as is the case in other descriptive induction tasks, the goal of contrast set mining is to find only the descriptions that are "unexpected" and "most interesting" to the end-user [2]. On the other hand, a subgroup discovery (SD) task is defined as follows: given a population of individuals and a property of those individuals that we are interested in, find population subgroups that are statistically "most interesting", i.e., are as large as possible and have the most unusual statistical (distributional) characteristics with respect to the property of interest [1]. Putting these two tasks in a broader rule learning context, note that there are two main ways of inducing rules in multi-class learning problems: learners either induce the rules that characterize one class compared to the rest of the data
(the standard one-versus-all setting, used in most classification rule learners), or alternatively, they search for rules that discriminate between all pairs of classes (known as the round robin approach in classification rule learning, proposed by [9]). Subgroup discovery is typically performed in a one-versus-all rule induction setting, while contrast set mining implements a round robin approach (of course, with different heuristics and goals compared to classification rule learning). Even though the definitions of subgroup discovery and contrast set mining seem different, the tasks are compatible [7]. From a dataset of class-labeled instances (the class label being the property of interest), by means of subgroup discovery [1] we can find contrast sets in the form of short interpretable rules. Note, however, that in subgroup discovery we have only one property of interest (the class) for which we are building subgroup descriptions, while in contrast set mining each contrasting group can be seen as a property of interest. It is easy to show that a two-group contrast set mining task CSM(G1;G2) can be directly translated into the following two subgroup discovery tasks: SD(Class = G1 vs. Class = G2) and SD(Class = G2 vs. Class = G1). Since this translation is possible for a two-group contrast set mining task, it is, by induction, also possible for a general contrast set mining task. Our experiments show that the round robin approach is not appropriate when looking for characteristic differences between two similar diseases if data about normal (healthy) people is also available. The reason is that the algorithm could, by coincidence, find features that distinguish between the two diseases but are at the same time characteristic of normal people. Therefore we use the one-versus-all approach, which is standard in subgroup discovery.
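The CSM-to-SD translation and the two induction settings can be illustrated with a toy sketch; the record layout and the five example records are invented for the illustration, not taken from the actual brain ischaemia data.

```python
# Two-group contrast set mining CSM(G1, G2) cast as subgroup discovery
# tasks, plus the one-versus-all setting used in the paper.

def split(records, positive, negatives=None):
    """SD(Class=positive vs the chosen negatives); negatives=None -> one-vs-all."""
    pos = [r for r in records if r["class"] == positive]
    neg = [r for r in records
           if r["class"] != positive
           and (negatives is None or r["class"] in negatives)]
    return pos, neg

def rates(rule, pos, neg):
    """True/false positive rate of a conjunctive rule over the two sets."""
    tpr = sum(rule(r) for r in pos) / len(pos)
    fpr = sum(rule(r) for r in neg) / len(neg)
    return tpr, fpr

records = [
    {"class": "emb", "af": "yes"}, {"class": "emb", "af": "yes"},
    {"class": "thr", "af": "no"},  {"class": "thr", "af": "yes"},
    {"class": "normal", "af": "no"},
]
rule = lambda r: r["af"] == "yes"

# Round robin: CSM(emb, thr) = SD(emb vs thr) + SD(thr vs emb)
print(rates(rule, *split(records, "emb", negatives={"thr"})))
# One-versus-all: SD(emb vs thr+normal), as used for the brain ischaemia data
print(rates(rule, *split(records, "emb")))
```

In the one-versus-all split the normal-CT records enter the negative set, so a feature that is also common among normal people inflates the false positive rate and is penalized, which is the motivation stated above.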
To find the characteristics of embolic patients, we perform subgroup discovery on the embolic group compared to the rest of the patients (thrombolic and those with a normal CT). Similarly, when searching for characteristics of thrombolic patients, we compare them to the rest of the patients (embolic and those with a normal CT). In this setting, we ran the contrast set mining experiment with the Orange [10] implementation of the Apriori-SD subgroup discovery algorithm [11] with the following parameters: minimal support = 15%, minimal confidence = 30%, k = 5. The results are displayed in Figures 1 and 2. Strokes caused by embolism are most commonly caused by heart disorders. The first rule displayed in Figure 1 has only one condition, confirming the medical knowledge that atrial fibrillation (af = yes) is an indicator for brain stroke. The combination of features in the second rule also shows that patients with antihypertensive therapy (ahyp = yes) and antiarrhythmic therapy (aarrh = yes), i.e. patients with heart disorders, are prone to embolic stroke.
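For reference, the two pruning parameters of the Apriori-SD run can be computed as below. The toy records loosely echo the second rule of Figure 1 and are invented; the definitions follow the usual association-rule reading (support of class-and-conditions, confidence as precision), a simplification of Apriori-SD's weighted covering scheme.

```python
# Support and confidence of a class-labeled rule, the quantities bounded
# by the run's thresholds (minimal support = 15%, minimal confidence = 30%).

def support_confidence(rule, target_class, records):
    covered = [r for r in records if rule(r)]
    hits = [r for r in covered if r["class"] == target_class]
    support = len(hits) / len(records)
    confidence = len(hits) / len(covered) if covered else 0.0
    return support, confidence

records = (
    [{"class": "emb", "ahyp": "yes", "aarrh": "yes"}] * 6
    + [{"class": "emb", "ahyp": "no", "aarrh": "no"}] * 4
    + [{"class": "other", "ahyp": "yes", "aarrh": "yes"}] * 2
    + [{"class": "other", "ahyp": "no", "aarrh": "no"}] * 8
)
rule = lambda r: r["ahyp"] == "yes" and r["aarrh"] == "yes"
sup, conf = support_confidence(rule, "emb", records)
print(sup, conf)   # the rule survives only if sup >= 0.15 and conf >= 0.30
```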
Fig. 1 Characteristic descriptions of embolic patients displayed in the bar chart subgroup visualization: on the right-hand side the positive cases, in our case embolic patients, and on the left-hand side the others (thrombolic and normal CT).
Fig. 2 Characteristic descriptions of thrombolic patients displayed in the bar chart subgroup visualization
Thrombolic stroke is most common in older people, and often there is underlying atherosclerosis or diabetes. In the rules displayed in Figure 2 the features representing diabetes do not appear. The rules rather describe patients without heart (or other) disorders but with elevated diastolic blood pressure and fibrinogen. High cholesterol, age and fibrinogen values appear characteristic for all ischaemic strokes.

IV. SUPPORTING FACTORS

Exploratory pattern discovery is not concluded when individual rules are discovered. The interpretation and insightful knowledge discovery is the goal that needs to be further pursued. As shown in the previous section, some rules can be interpreted directly. But the discovered rules contain only a minimal set of principal differences between the detected subset of target (positive) and the control (negative) class examples, in our case up to four features per rule. For a domain expert, in our case a medical doctor, information about other characteristics that support and reinforce the discovered patterns is very relevant.

A. Supporting factors in subgroup discovery

In subgroup discovery the factors that appear in subgroup descriptions are called the principal factors, while the additional properties that are also characteristic for the detected subgroup are called supporting factors. They are used for better human understanding of the principal factors and for support in the decision making process [12]. The supporting factor detection process is repeated, for every detected subgroup, for every attribute separately. For numerical attributes the mean values are computed, while for categorical attributes the relative frequency of the most frequent or medically most relevant category is computed. The mean and relative frequency values are computed for three example sets: the subset of positive examples included in the pattern, the set of all positive examples, and the set of all negative examples (the control set). The necessary condition for an attribute to be potentially used to form a supporting factor is that its mean value or the relative frequency of the given attribute value must be significantly different between the target pattern and the control example set. Additionally, the values for the pattern must be significantly different from those in the complete positive population: if there is no such difference, the factor supports the whole positive class and is not specific for the pattern. The statistical significance between example sets can be determined using the Mann-Whitney test for numerical attributes and the chi-square test of association for categorical attributes. A practical tutorial on using these tests can be found in [13] (Ch. 11a and 8, respectively). The decision on which statistical significance is sufficiently large can depend on the medical context. We set the cut-off values at P < .01 for the significance of the difference with respect to the control set and P < .05 for the significance with respect to the positive set.

B. Supporting factors for contrast sets

Even though contrast set mining and subgroup discovery are very similar, there is a crucial difference between these two data mining techniques: in subgroup discovery there is only one property of interest and the goal is to find characteristics of the individuals that have this property. In contrast set mining there are several groups of individuals and the goal is to find differences between the individuals belonging to these groups. Therefore the notion of supporting factor from subgroup discovery cannot be directly adopted in the contrast set mining situation. We propose, and show in our experiments, a way of extending supporting factors from subgroup discovery to contrast set mining. Instead of presenting to the domain expert only the supporting factors for the positive class, we also show the distribution (for discrete attributes) or the average (for numeric attributes) of the attributes appearing in the supporting factor for the negative set and for the entire positive set. This is similar to the work presented in [14], but the methodology proposed here is tailored to help explain contrast sets. Since the interpretation of all the patterns discovered and presented in Section III is outside the scope of this paper, we focus only on two contrast sets:

Contrast set 1: (TPr=0.4, FPr=0.14) ahyp=yes & aarrh=yes → class=emb
Contrast set 2: (TPr=0.56, FPr=0.2) age>66 & trig>1 & af=no & acoag=no → class=thr

The first of the selected contrast sets is intuitive to interpret, since both primary factors are treatments for cardiovascular disorders. The supporting factors for this set are shown in Table 1. We can see that the supporting factors (including the two primary factors) for this contrast set are all about cardiovascular disorders, and therefore they substantiate the original interpretation. It is therefore legitimate to say that embolic stroke patients are patients with cardiovascular disorders, while cardiovascular disorders are not characteristic for thrombolic stroke patients.

Table 1 Supporting factors for contrast set 1
             CS1     thrombolic   embolic
fo high      0.82    0.73         0.76
af = yes     80%     13%          53%
ahyp = yes   100%    81%          70%
aarrh = yes  100%    19%          45%

Table 2 Supporting factors for contrast set 2

             CS2     embolic      thrombolic
age high     74.2    69.85        69.29
chol high    6.30    5.69         6.59
fibr high    5.25    4.51         4.85
fo low       0.64    0.76         0.73
af = no      100%    47%          88%
smoke = no   73%     46%          55%

The second selected contrast set is vague and not directly connected with medical knowledge. High age and triglyceride values are characteristic of thrombolic stroke, but the boundary values in the contrast set are not high. The rest of the features in this contrast set say no atrial fibrillation and no anticoagulant therapy: again nothing specific. The supporting factors for this set are shown in Table 2. They include high cholesterol and fibrinogen, low fundus ocular values and non-smoking. These patients are old and do not have cardiovascular disorders. These examples indicate how supporting factors reinforce the primary factors and help the interpretation move from speculation toward legitimate conclusions.

V. CONCLUSIONS

We have generalized the notion of supporting factors from subgroup discovery to contrast set mining. We have applied the proposed methodology in the analysis of the brain ischaemia domain and have obtained interpretable and useful contrast sets. The experiments show how much benefit can be gained from such in-depth analysis. The presented approach to the detection of supporting factors nicely supplements contrast set mining and can also be easily implemented in domains with a very large number of attributes (e.g. gene expression domains).

REFERENCES

1. Wrobel S (1997) An algorithm for multi-relational discovery of subgroups. In Proc. of the First European Conference on Principles of Data Mining and Knowledge Discovery, 1997, pp 78-87, Springer
2. Bay S D, Pazzani M J (2001) Detecting group differences: mining contrast sets. Data Min. Knowl. Discov., 5(3):213-246
3. Dong G, Li J (1999) Efficient mining of emerging patterns: discovering trends and differences. In Proc. of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1999, pp 43-52
4. Quinlan J R (1993) C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers Inc.
5. Clark P, Niblett T (1989) The CN2 induction algorithm. Machine Learning, 3(4):261-283
6. Gamberger D, Lavrac N, Krstacic G (2003) Active subgroup mining: a case study in coronary heart disease risk group detection. Artif. Intell. Med., 28:27-57
7. Kralj P, Lavrac N, Gamberger D, Krstacic A (2007) Contrast set mining through subgroup discovery: applied to brain ischaemia data. In Proc. of the 11th Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2007, in press
8. Victor M, Ropper A H (2001) Cerebrovascular disease. In Adams and Victor's Principles of Neurology, 2001, pp 821-924
9. Fürnkranz J (2001) Round robin rule learning. In Proc. of the 18th International Conference on Machine Learning, 2001, pp 146-153
10. Demsar J, Zupan B, Leban G (2004) Orange: From Experimental Machine Learning to Interactive Data Mining, White Paper (www.ailab.si/orange), Faculty of Computer and Information Science, University of Ljubljana
11. Kavsek B, Lavrac N (2006) APRIORI-SD: adapting association rule learning to subgroup discovery. Appl. Artif. Intell., 2006, pp 543-583
12. Gamberger D, Lavrac N, Krstacic G (2003) Active subgroup mining: a case study in coronary heart disease risk group detection. Artif. Intell. Med., 28:27-57
13. Lowry R (2007) Concepts and applications of inferential statistics. http://faculty.vassar.edu/lowry/webtext.html
14. Lavrac N, Cestnik B, Gamberger D, Flach P (2004) Decision support through subgroup discovery: three case studies and the lessons learned. Mach. Learn., 57:115-143

Author: Petra Kralj
Institute: Jozef Stefan Institute
Street: Jamova 39
City: 1000 Ljubljana
Country: Slovenia
Email: [email protected]
A simple DAQ-card based bioimpedance measurement system

T. Zagar and D. Krizaj

Faculty of Electrical Engineering, Laboratory for Bioelectromagnetics, University of Ljubljana, Slovenia
Abstract— A custom-made DAQ-card based bioimpedance measurement system is presented. The signals are processed by the digital lock-in technique. The system was tested on an electrical model of skin with underlying tissues over a frequency range of 20 Hz to 1 MHz. The measurements performed directly with the DAQ card are compared to measurements with an instrumentation amplifier interface. The highest accuracy achieved without special calibration and compensation is about 0.5 % for the impedance magnitude and 0.02° for the impedance phase angle in the low-frequency region, whereas in the high-frequency region the respective values are approximately 1 % and 1°.
In this paper, the design of a single-channel multi-frequency impedance measuring device based on digital lock-in detection and undersampling is described. The presented design is not conceptually new; rather, the aim of the study is to present a custom-made bioimpedance measuring system built from devices that can be found in almost every laboratory, and to evaluate its accuracy on a simple electrical model of skin.
Keywords— bioimpedance, electrical model, lock-in technique, measurement system
I. INTRODUCTION

There are various impedance measurement devices to choose from when measuring bioimpedance; however, some of the commercially available equipment is not suitable for direct measurement on biological samples [1]. Additionally, most bioimpedance measuring devices on the market are designed for a specific purpose and can hardly be used for other experiments, so a need to design a custom device arises. Moreover, for many basic laboratory experiments there is no need to use a high-tech, maximal-accuracy bioimpedance measuring device: custom-made solutions are often better suited in terms of their flexibility and adaptability to the specific requirements of the problem. Electrical impedance is by definition the ratio of alternating voltage and current, expressed mathematically in complex notation; despite the apparent simplicity of the definition, a variety of methods and approaches exist for measuring it. An often used method of bioimpedance measurement is the four-electrode method [2]. The current is injected into the sample through one pair of electrodes and the other pair of electrodes is used to measure the resulting voltage drop. If no current flows through the voltage measurement electrodes, there is also no voltage drop across these electrodes, and the measured voltage is the same as the voltage under the electrodes. Further, such devices usually employ some sort of demodulation method to measure the amplitude and the phase of a signal, which is nowadays mostly done by digital signal processing.

II. MEASUREMENT SYSTEM OVERVIEW

A. Employed hardware
No application-specific hardware was designed for this study. The voltage was measured directly by a USB DAQ card (NI USB-6211), and the current was also measured by the DAQ card, as the voltage drop on a resistor R of nominal value 47 Ω (Fig. 1). These two voltage-measuring channels were configured to operate in differential mode. The excitation voltage was set to a fixed value of 1.25 V and generated by an Agilent 33220A signal generator. The frequency of the generator was controlled through the USB link with a personal computer.

B. Digital lock-in technique

Lock-in amplifiers are commonly used to detect minute signals buried in noise [3]. The principle of the lock-in technique is quite straightforward: if the input signal vi is given by (1):
vi = A sin(ωt + φ) + n(t),    (1)
where n(t) is random (white) noise, the amplitude A and the phase angle φ can be obtained by multiplication with reference sine and cosine functions of the same frequency (ωr) as the input signal. The output of such a multiplication is (2):
vo,sin = (A/2) [cos(φ) − cos(2ωr t + φ)] + n(t) sin(ωr t)
vo,cos = (A/2) [sin(φ) + sin(2ωr t + φ)] + n(t) cos(ωr t)    (2)
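The signal terms of (2) follow directly from the product-to-sum identities applied to the sinusoid in (1), restated here for reference:

```latex
\begin{aligned}
A\sin(\omega_r t+\varphi)\sin(\omega_r t) &= \tfrac{A}{2}\bigl[\cos\varphi-\cos(2\omega_r t+\varphi)\bigr],\\
A\sin(\omega_r t+\varphi)\cos(\omega_r t) &= \tfrac{A}{2}\bigl[\sin\varphi+\sin(2\omega_r t+\varphi)\bigr].
\end{aligned}
```

The 2ωr terms and the noise products average towards zero over an integer number of periods, which is what the subsequent averaging step exploits.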
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 182–185, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
If the output is averaged over an integer number of periods the result equals (3):
vo sin = (A/2) cos(ϕ) + (An/2) sin(ϕn)
vo cos = (A/2) sin(ϕ) + (An/2) cos(ϕn),    (3)
where An and ϕn are the amplitude and the phase angle of the spectral component of the noise n(t) at frequency ωr. A signal has to be sampled over at least one complete period to get accurate results when averaging, which means that the number of samples (N) satisfies condition (4), where k is an integer (denoting the number of sampled periods), fs is the sampling frequency and f is the frequency of the sampled signal:
N = k fs / f.    (4)
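The multiply-and-average demodulation of (1)–(3) and the coherent-sampling condition (4) can be sketched in a few lines. This is an illustrative implementation, not the authors' code; the signal values are made up for the example:

```python
import numpy as np

def lockin(v, fs, f):
    """Digital lock-in: recover amplitude A and phase phi of the
    component of v at frequency f (Hz), sampled at fs (Hz), by
    multiplying with reference sine/cosine and averaging (Eqs. 1-3)."""
    t = np.arange(len(v)) / fs
    vo_sin = np.mean(v * np.sin(2 * np.pi * f * t))  # -> (A/2) cos(phi)
    vo_cos = np.mean(v * np.cos(2 * np.pi * f * t))  # -> (A/2) sin(phi)
    A = 2 * np.hypot(vo_sin, vo_cos)
    phi = np.arctan2(vo_cos, vo_sin)
    return A, phi

# Coherent sampling, Eq. (4): N = k * fs / f with integer k
fs, f, k = 50_000.0, 1_000.0, 20
N = int(round(k * fs / f))          # 1000 samples = 20 full periods
t = np.arange(N) / fs
rng = np.random.default_rng(0)
v = 0.5 * np.sin(2 * np.pi * f * t + 0.3) + 0.05 * rng.standard_normal(N)
A, phi = lockin(v, fs, f)           # A close to 0.5, phi close to 0.3 rad
```

Applying the same demodulation to the voltage and the current channel and dividing the two complex results yields the complex impedance.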
In this case either fs, f, or both frequencies have to be adjusted to fulfil condition (4). However, if the number of samples is high enough, condition (4) can be violated and the average value of the signal still comes close to its true value. This is the case with the presented design, where the number of samples was set to 50,000.

C. Sampling frequency

In the presented system a fixed sampling frequency of 50 kHz was used for frequencies above 2.5 kHz. A sampling frequency of 1 kHz was used for frequencies below 2.5 kHz because of inadequate settling time in the low-frequency region when testing the system on the model. The number of samples in this case was set to 2000.

D. Electrical model of skin and underlying tissues

The system was tested on a simple electrical model of skin with underlying tissues, as shown in Fig. 1. The element values were chosen on the basis of previous experience so that the two-terminal impedance of the model approximately matches the two-terminal impedance of living human skin.

[Fig. 1 Measurement system with a simple model of skin and deeper viable tissues]

The impedance Ze represents the joined electrode impedance and impedance of the stratum corneum. The value of Re2 was set to 10 Ω and matches the absolute value of the impedance obtained with a precision LCR meter when measuring two Ag/AgCl cup electrodes (cup 8 mm in diameter, filled with an electroconductive gel) placed in direct contact with each other. The value of Re1 was set to 10 kΩ and matches the value [4] obtained after several strippings of the stratum corneum. This is also in accordance with [5], where the low-frequency impedance varied from 10 kΩ to 1 MΩ. The value of Ce was chosen to be 10 nF. The impedance Z represents deep viable tissues. The chosen values agree with the fact that the main electrical impedance resides in the stratum corneum, while the impedance of the other layers is several orders of magnitude lower [6]. The values were R1 = R2 = 33 Ω and Cz = 100 nF.

III. RESULTS

First the impedance Z (marked in Fig. 1) was measured without the Ze impedances and compared to the value obtained with the LCR meter (Fig. 2).
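For comparison with the measured curves, the theoretical impedance of such an R-C model can be computed directly. The exact topology of Fig. 1 is not reproduced in the text, so the sketch below assumes the common arrangement of a series resistor feeding a parallel R-C for both Ze and Z; treat it as illustrative:

```python
import numpy as np

def z_series_parallel_rc(f, r_series, r_par, c_par):
    """Impedance of r_series in series with (r_par parallel c_par).
    f may be a scalar or an array of frequencies in Hz."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    return r_series + r_par / (1 + 1j * w * r_par * c_par)

# Deep-tissue impedance Z with the paper's values R1 = R2 = 33 ohm,
# Cz = 100 nF (topology assumed: R2 + (R1 || Cz))
f = np.logspace(1, 6, 101)                     # 10 Hz .. 1 MHz
Z = z_series_parallel_rc(f, 33.0, 33.0, 100e-9)
mag = np.abs(Z)                                # ~66 ohm at low f, ~33 ohm at high f
phase_deg = np.degrees(np.angle(Z))

# Electrode/stratum-corneum impedance Ze: Re2 = 10 ohm, Re1 = 10 kohm, Ce = 10 nF
Ze = z_series_parallel_rc(f, 10.0, 10e3, 10e-9)
```

At low frequency the capacitor is effectively open, so |Z| tends to R1 + R2; at high frequency it shorts R1 and |Z| tends to R2.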
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
T. Zagar and D. Krizaj

[Fig. 2 Comparison of the impedance Z measured with the DAQ-card to the measurement with the LCR-meter (magnitude error in % and phase error in ° vs. frequency, 10 Hz–1 MHz)]

[Fig. 3 Impedance Z measured as shown in Fig. 1 (magnitude in Ω and phase angle in ° vs. frequency; curves: amplifier, ref. value, without amp.)]
To get an accurate value for the impedance phase angle, the sampling time between the channels (conversion time) has to be accounted for (Fig. 5). There is another possible source of error in the presented model: from Fig. 1 it can be concluded that relatively high common-mode voltages are present in the system. The voltage difference to be measured is approximately 3 mV (when a voltage of 1 V is applied to the system) and floats on a voltage of almost 0.5 V, which is about 170 times higher. An amplifier with high common-mode rejection is required to amplify this signal successfully. At least in the low-frequency region the CMRR of the used DAQ-card is high enough (100 dB from DC to 60 Hz) and should reject the common voltage.
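The phase error introduced by sequential multiplexed sampling grows linearly with frequency, so it can be removed numerically once the inter-channel delay is known. A small sketch (the 4 µs delay is a made-up example value, not the card's specification):

```python
def correct_phase(phase_meas_deg, f, t_delay):
    """Remove the apparent phase shift caused by the two DAQ channels
    being sampled t_delay seconds apart (multiplexed acquisition).
    The sign assumes the second channel is sampled later; flip it if
    the channel order is reversed."""
    return phase_meas_deg - 360.0 * f * t_delay

# A 4 us channel-to-channel delay already produces a large spurious
# phase at high frequency: 360 * 100e3 * 4e-6 = 144 degrees at 100 kHz
err_deg = 360.0 * 100e3 * 4e-6
corrected = correct_phase(150.0, 100e3, 4e-6)   # 150 - 144 = 6 degrees
```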
There is an offset in the low-frequency region of the impedance magnitude, which can be compensated. The phase error from about 100 Hz to 400 Hz is the error of the LCR meter. In the medium-frequency region the error gets higher; however, this is due to the voltage-dependent capacitance of the capacitor Cz used in the electrical model: the absolute value of the impedance Z measured with the LCR meter at 1.0 V and at 0.1 V (rms values) differed by approximately 10 %, and the voltage when measuring with the DAQ-card was about 0.08 V (rms) and was not controlled to match the LCR-meter measurement. Further, the behavior of the designed system in the presence of the electrode and skin impedances was tested. The results are shown in Fig. 3. The sampling frequency was set to 50 kHz for the entire frequency span. Quite large discrepancies can be noticed in the low-frequency region when measuring directly with the DAQ-card. This is due to inadequate settling time: when measuring two channels the device has to switch a multiplexer, usually made of switched capacitors, and the settling time of a channel increases if the source impedance is high. The medium-frequency region follows the reference value rather well (the reference value was the impedance Z measured without the Ze impedances directly with the DAQ-card, cf. Fig. 2). In addition, Fig. 4 shows the resulting error when the sampling frequency was set as described in the sampling-frequency section (II C). In this case the low-frequency error when measuring directly with the DAQ-card almost completely vanishes.
Fig. 4 Comparison of the reference value for the impedance Z to the measurement with an instrumentation amplifier interface and to the measurement performed directly with a DAQ-card
[Fig. 5 Phase angle of the impedance Z if a delay between the channels is not taken into account (phase angle in ° vs. frequency, 10 Hz–1 MHz)]

To lower the source impedance of the voltage measured by channel 1 (cf. Fig. 1) and improve the performance of the system, an instrumentation amplifier interface between the model and the DAQ-card was used (Fig. 3, solid line). A similar solution to improve the measurement accuracy was proposed in [7]. A high-speed FET-input instrumentation amplifier (INA111) was employed with a gain of about 10. The amplifier gain/phase characteristics were recorded over the treated frequency span and taken into account when calculating the impedance. The results in the low-frequency region were significantly improved (the achieved accuracy is about 0.5 % for the impedance magnitude and 0.1° for the phase angle); however, the error in the high-frequency region grows larger (about 2.5 % for the impedance magnitude and 4° for the impedance phase angle). Compared to the measurement performed directly with the DAQ-card, the results obtained with the amplifier interface were slightly better in the low-frequency region and worse in the high-frequency region, where the accuracy of the direct measurement was about 1 % and 1° for the impedance magnitude and phase angle, respectively.

IV. CONCLUSIONS

This paper shows that it is possible to achieve reasonable accuracy in bioimpedance measurement with a very simple setup, despite the severe measurement conditions demonstrated on the electrical model. It should be noted that no calibration procedure was performed and the results were still satisfactory; nevertheless, a calibration should be performed for reliable measurements. Moreover, to increase the accuracy, especially in the high-frequency region, some sort of compensation is necessary. When measuring with a specific type of DAQ-card, special attention should be paid to ensuring adequate settling time for the measured signals.

REFERENCES
1. Dudykevych T, Gersing E, Thiel F et al (2001) Impedance analyser module for EIT and spectroscopy using undersampling. Physiol Meas 22(1):19-24
2. Geddes LA (1996) Who introduced the tetrapolar method for measuring resistance and impedance? IEEE Eng Med Biol 15(5):133-134
3. Grimnes S, Martinsen OG (2000) Bioimpedance and Bioelectricity Basics. Academic Press, p 188. ISBN 0-12-303260-1
4. Yamamoto T, Yamamoto Y (1976) Electrical properties of epidermal stratum corneum. Med Biol Eng 14(2):151-158
5. Rosell J, Colominas J, Riu P et al (1988) Skin impedance from 1 Hz to 1 MHz. IEEE T Bio-Med Eng 35(8):649-651
6. Pliquett F, Pliquett U (1996) Passive electrical properties of human stratum corneum in vitro depending on time after separation. Biophys Chem 58(1-2):205-210
7. Gersing E (1991) Measurement of electrical impedance in organs: measuring equipment for research and clinical applications. Biomed Tech 36(1-2):6-11

Author: Tomaz Zagar
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska cesta 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Benefits and disadvantages of impedance-ratio measuring method in new generation of apex-locators

T. Marjanovic1, Z. Stare1

1 Faculty of Electrical Engineering and Computing, Department of Electronic Systems and Information Processing, Zagreb, Croatia
Abstract— One of the most critical procedures for successful endodontic treatment is the determination of root canal length. Electronic devices called apex-locators have been used for working-length measurement for more than forty years. The first apex-locators used direct current but had many drawbacks. To improve the measurement procedure, direct current was replaced by alternating current, and impedance was measured at a single frequency to calculate the position of the apical constriction. Problems with single-frequency apex-locators occurred due to the presence of electrolytes in the canal. Thus, a new generation of locators has been introduced: impedance is now measured at two frequencies, and the position of the apical constriction is calculated from the corresponding impedance ratio. The aim of this paper is to examine the properties of the impedance-ratio method and to compare it with single-frequency measurements. The experiment was carried out in vitro. The exact position of the apical foramen on each tooth was measured with a microscope. The teeth were then placed in freshly mixed alginate, commonly used in such measurements. Impedances were measured at frequencies commonly used by apex-locators, with a Kerr file positioned at the apical foramen and at several positions above and below it (in steps of 0.25 mm). Measurements were performed in a dry canal and in a canal filled with electrolytes usually used in endodontic treatment. Sensitivity to different types of electrolytes and to electrode displacement was calculated for the single-frequency and the frequency-ratio technique in order to investigate the benefits of each. By comparing these with the calculated variation coefficient of the raw measurements, we concluded that the frequency-ratio method (used in the new generation of apex-locators) is more robust to electrolytes, but its sensitivity decreases in the normal condition of a dry canal. To achieve full credibility of this conclusion, in-vivo verification should also be performed.
Keywords— root canal length, electronic apex locator, impedance-ratio measuring method, electrolytes in endodontics, electrode displacement sensitivity

I. INTRODUCTION

The success of endodontic treatment depends on the cleaning of the root canal. The removal of all pulp tissue, necrotic material and microorganisms from the root canal requires minimal disturbance of the surrounding tissue. The apical foramen is not always located at the anatomical tooth apex (Fig. 1), and the distance between them can be up to 3 mm. The apical constriction (also called the minor foramen) is the narrowest part of the root canal, 0.5 to 0.8 mm from the apical foramen (major foramen), depending on tooth type and age [1]. The apical constriction, also described as the cementodentinal junction, is the anatomical and histological landmark where the periodontal ligament begins and the pulp ends. It represents a potential natural barrier between the contents of the canal and the apical tissues (Schilder 1967), and it is generally accepted that the preparation and obturation of the root canal should end at, or short of, the apical constriction.

[Fig. 1 Apical anatomy]

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 206–209, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

The success of the whole treatment depends on the accuracy of determining the position of the apical constriction. Today, the only known accurate way of determining it is to perform the measurement after tooth extraction; therefore research and improvement of measuring methods is highly desirable. The most common way of determining root canal length is by electronic methods. Many studies report on the accuracy achieved by the new generation of electronic apex-locators as well as their extended measurement capabilities in the presence of electrolytes (Fouad et al. 1993, Frank & Torabinejad 1993, Mayeda et al. 1993, Kobayashi 1995). Moreover, it is known that radiographic methods of apical constriction determination are less accurate than the electronic method (Stein & Corcoran 1992), while the apical constriction is often short of the anatomical apex of the tooth (seen in a radiograph) [1].

Electronic apex-locators work on the principle of bioimpedance measurement at one or more frequencies. One (neutral) electrode is placed on the oral mucosa, and the other (active) electrode is connected to an intra-canal instrument, such as a Kerr file, which is moved along the root canal. The impedance between the electrodes is measured, and the distance from the apical constriction is calculated from it. For good accuracy, a significant change of the impedance used for the calculation is required when the position of the Kerr file in the root canal changes. On the other hand, minimal variation of the impedance between different teeth and low dependence on electrolyte presence are needed. The measurement method based on bioimpedance at a single frequency worked well in a dry canal, but is less accurate when agents commonly used in therapy, blood or saliva are present in the canal. For this reason, the impedance-ratio method and a new generation of apex-locators have been introduced. The sensitivity of impedance and impedance-ratio values to electrode displacement near the apical foramen is elaborated in this paper, as well as the influence of commonly used agents and the variation between different teeth.

II. MATERIALS AND METHODS

Measurement was performed in vitro on twenty single-rooted teeth placed in a freshly mixed alginate dental impression material, commonly used as a physical model for apex-locator evaluation [2, 3, 4, 5]. A simple mounting model with a micrometer was built for precise and consistent measurement. A large-area neutral electrode made of stainless steel is placed into the alginate, and as the active (measuring) electrode a Kerr file K-10 or K-15 is used, depending on which fits better into the foramen. The displacement of the Kerr-file tip from the apical foramen is controlled with the micrometer. A Hewlett-Packard HP4284A precision RLC meter is used for the impedance measurement. Measurement and data storage are computer-controlled, and both the real and imaginary parts of the impedance are logged at several frequencies in the range from 100 Hz to 1 MHz.
The measurement range includes the frequencies of 400 Hz and 8 kHz, which are used by the majority of apex-locators (Root ZX, for instance) and are elaborated in detail in this paper. Teeth were kept in saline solution until the experiment. The position of the apical foramen is precisely determined with a microscope, and each tooth is then placed in freshly mixed alginate. Once the canal has been dried with paper points, the measurement of the root canal impedance is performed in the range from 2 mm above the apical foramen to 0.5 mm below it, in steps of 0.25 mm. During the measurements a hysteresis was noticed, so readings are taken only while the file moves downward. Measurements are repeated for the canal moistened with saline solution and for the canal filled with 2.5% sodium hypochlorite solution and calcinase, commonly used in endodontic treatment.

For quantitative comparison of the measuring methods, we have introduced the following parameters: sensitivity factors, coefficient of variation and estimated variability of position. The sensitivity factor of impedance S|Z| is defined as the relative impedance change ΔZ/Z at a single frequency per file-tip displacement:
S|Z| = (ΔZ/Z) / Δd.    (1)
If we define the impedance ratio rZ = Z(400 Hz) / Z(8 kHz), then the sensitivity factor of the impedance ratio Sr is defined as the relative change in impedance ratio ΔrZ/rZ per displacement Δd:
Sr = (ΔrZ/rZ) / Δd.    (2)
The coefficient of variation cv is defined as the ratio of the standard deviation σ to the mean µ:
cv = σ / μ.    (3)
For the purpose of a rough estimation of the variability in the measured position, we define an estimated variability of position Δl for both the single-frequency and the impedance-ratio method as:
Δl = cv / S.    (4)
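The quantities (1)–(4) are straightforward to compute from a table of impedance readings versus file position. A sketch with made-up illustrative numbers (not measured data from this study):

```python
import numpy as np

def rel_sensitivity(x, d):
    """Eqs. (1)/(2): relative change of x per unit displacement d,
    evaluated between adjacent file positions: (dx/x) / dd."""
    return (np.diff(x) / x[:-1]) / np.diff(d)

def coeff_of_variation(x):
    """Eq. (3): cv = sigma / mu (variation across different teeth)."""
    return np.std(x) / np.mean(x)

# Hypothetical |Z| readings (kohm) at 400 Hz and 8 kHz for file
# positions around the apical foramen (illustrative values only)
d = np.array([-0.50, -0.25, 0.00, 0.25, 0.50])   # mm (negative = above)
z400 = np.array([120.0, 95.0, 70.0, 52.0, 40.0])
z8k = np.array([30.0, 26.0, 22.0, 19.0, 17.0])

S_z = rel_sensitivity(z400, d)        # single-frequency sensitivity, 1/mm
r = z400 / z8k                        # impedance ratio r_Z
S_r = rel_sensitivity(r, d)           # impedance-ratio sensitivity, 1/mm

# Eq. (4): estimated variability of position, using cv and S as in Table 1
dl = 0.69 / 0.92                      # cv = 0.69, S = 92 %/mm -> 0.75 mm
```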
III. RESULTS AND DISCUSSION

The frequency dependence of the impedance magnitude differs significantly from tooth to tooth, but all teeth follow a similar curve. For comparison purposes, the impedances are normalized, and the results for clinically important file displacements from the apical foramen are plotted in Figs. 2 to 4.

The majority of single-frequency apex-locators work at or around 1 kHz. They have high sensitivity [6] in the normal clinical condition of a dry canal when the file approaches the apical foramen (Figs. 2 to 4): the relative change in impedance magnitude with file-tip movement is 92 %/mm. But when the canal is not sufficiently dried immediately before taking the measurement, problems can occur: the relative change in impedance magnitude drops to around 30 %/mm at 1 kHz, depending on the type and amount of moisture, as presented later in Table 1.

Two widely accepted approaches exist for apex-locators using two or more frequencies [1, 7]: the voltage-difference and the impedance-ratio method. The voltage-difference method uses the absolute difference of the impedance at two frequencies (which incorporates information on the presence of the electrolyte); the sensitivity of the impedance change with file-tip displacement remains the same as in the already discussed single-frequency case. In ratio-method apex-locators, the impedance ratio at two frequencies represents the position of the file in the root canal; this ratio must depend significantly on file-tip displacement, but not on the presence of electrolytes in the canal or on the type of tooth. The majority of instruments using this method (the widespread Root ZX, for example) take measurements at 400 Hz and 8 kHz. The ratio at these frequencies is analyzed here.

The impedance ratio shows significantly less change in a moistened canal (Fig. 5) than in a dry canal, but a generally lower relative sensitivity to file displacement from the apical foramen can also be noticed (about 60 %/mm, Table 1). Besides the decreased sensitivity of the impedance-ratio method, measurements on different teeth showed an even higher variation coefficient of the impedance ratio than the variation coefficient of the impedance at a single frequency. With the estimated variability of position Δl defined as in (4), and according to Table 1, we can conclude that the single-frequency method gives the best results when the measurement is performed in the normal condition of a dry root canal. However, the ratio method is more accurate if the root canal is moistened.

[Fig. 2 Influence of canal filling, 1.0 mm above apical foramen (normalised impedance vs. frequency, 100 Hz–1 MHz; curves: dried canal, wet canal, hypochlorite, calcinase)]

[Fig. 3 Influence of canal filling, 0.5 mm above apical foramen]

[Fig. 4 Influence of canal filling, file at apical foramen]

[Fig. 5 Influence of canal filling on impedance ratio (impedance ratio vs. displacement of file tip from apical foramen, mm)]
Table 1  Comparison of measuring methods in presence of electrolyte

method              parameter   unit     dry     hyp.    calc.
single frequency    cv,|Z|      -        0.69    0.51    0.63
                    S|Z|        %/mm     92      25      18
                    Δl|Z|       mm       0.75    2.04    3.5
impedance ratio     cv,r        -        1.04    1.13    1.17
                    Sr          %/mm     54      69      74
                    Δlr         mm       1.92    1.64    1.58
Minimal error (in millimeters) can be expected with the single-frequency method in the case of a dry root canal. When electrolyte is present, the accuracy of the single-frequency method decreases drastically. However, if the impedance-ratio method is used, better results are achieved when a conductive medium is present in the canal.

Besides the sensitivity to electrode displacement and the variation in the measured parameter (impedance or impedance ratio), which cause dispersion in determining the apical foramen position (Δl|Z|, Δlr), a systematic error in the measurements can also be noticed when electrolyte is present in the canal. Fig. 5 shows that with the impedance-ratio method the mean reading of working length is expected to be about 0.3 mm longer than it really is when electrolyte is present. In contrast, if a single-frequency measurement is used, readings will be shorter with a conductive medium in the canal (for example -0.5 mm at 5 kHz). The differences between a dry root canal and a canal filled with electrolyte are expected to be much lower in real life than presented in this paper. Normally, only an insufficiently dried (blown) root canal can be expected, while here, for research purposes, the canals were filled to the top. The conclusions are made on the basis of a physical model, and it is necessary to compare them with in-vivo measurements to achieve full credibility [8].
IV. CONCLUSION

Two measuring methods are compared in this paper: the absolute-impedance and the impedance-ratio method. For the purpose of quantitative comparison, quality factors were introduced, and accordingly it has been concluded that the single-frequency method gives the best results in the conditions of a dry root canal. The impedance-ratio method has better tolerance to electrolytes in the canal, but it also shows lower accuracy in normal conditions. In real applications the root canal is never fully filled with agents (as in this research), so a lower influence is expected. Thus, if there is no assurance that the root canal is dry (canals with perforations, bleeding teeth), the impedance-ratio method, with its risk of lower accuracy, is advisable. Although the accuracy of contemporary apex-locators is sufficient for most cases, the situation is not perfect, and further elaboration of single-frequency methods, as well as development of new measuring methods, is encouraged. For the credibility of the drawn conclusions, further research and in-vivo verification is required.

REFERENCES

1. Gordon MP, Chandler NP (2004) Electronic apex locators. Int Endod J 7(7):425-7
2. Jenkins JA, Walker WA, Schindler WG, Flores CM (2001) An in vitro evaluation of the accuracy of the Root ZX in the presence of various irrigants. J Endod 27:209-11. DOI 10.1097/00004770-200103000-00018
3. Kaufman AY, Keila S, Yoshpe M (2002) Accuracy of a new apex locator: an in vitro study. Int Endod J 35:186-92. DOI 10.1046/j.1365-2591.2002.00468.x
4. Kaufman AY, Keila S, Yoshpe M (2002) Accuracy of a new apex locator: an in vitro study. Int Endod J 35:186-192
5. ElAyouti A, Löst C (2006) A simple mounting model for consistent determination of the accuracy and repeatability of apex locators. Int Endod J 39:108-112
6. Stare Z, Protulipac T (2003) Sensitivity of the root canal impedance to electrode displacement: in vivo and in vitro measurement. Proc. 4th Int. Conf. MEASUREMENT, Smolenice, Slovak Republic, 2003, pp 230-233
7. Nam KC, Kim SC, Lee SJ et al (1991) Root canal length measurement in teeth with electrolyte compensation. Med Biol Eng Comput 40(2):200-204. DOI 10.1007/BF02348125
8. Stare Z, Lacković I, Galić N (2001) Evaluation of an in vitro model of electronic root canal measurement. Proc. 9th Mediterranean Conf. on Med. and Biol. Eng. and Comput., Pula, Croatia, 2001, pp 1047-50
Author: Tihomir Marjanovic
Institute: Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Bioimpedance spectroscopy of human blood at low frequency using coplanar microelectrodes

J. Prado, M. Nadi, C. Margo and A. Rouane

Laboratoire d'Instrumentation Electronique Nancy (LIEN), Nancy University, France

Abstract— Dielectric properties of biological substances are usually deduced ex vivo by way of the impedance measurement of a cell loaded with the investigated medium. At low frequency it is well known that the bioimpedance depends on the polarization effects that occur at the electrode interface. Since measurements are affected at frequencies lower than 50 kHz for standard electrodes, black platinum was used to decrease the polarization effects. In this paper, dielectric properties of blood measured at different temperatures are presented for frequencies between 100 Hz and 1 MHz.

Keywords— bioimpedance spectroscopy, microelectrodes, polarisation effect, dielectric properties, blood
I. INTRODUCTION

Impedance spectroscopy of biological tissues has previously been investigated as a technique for non-invasive tissue characterisation [1]. Dielectric properties may be deduced from the bioimpedance measurement obtained through the interaction between an electromagnetic field source and a biological sample. In the frequency range up to 10 MHz, current conduction through tissue is mainly determined by the tissue structure, i.e. the extra- and intracellular compartments and the insulating cell membranes. Therefore, changes in the extra- and intracellular fluid volumes are reflected in the impedance spectra [2].

Different electrode configurations are used to measure bioelectric phenomena. One can distinguish between two basic functional types: macroscopic and microscopic measurement. Electrodes for macroscopic characterisation are used for the bioimpedance measurement of a biological tissue sample or organ. Electrodes for microscopic characterisation have more recently [3] been used for the bioimpedance measurement of a biological cell or a very small cell aggregate; they may be used to characterize extracellular and intracellular fluids, or the cell membrane. In this paper we deal with the biological cell scale, with the goal of optimizing the interface between the electrodes and a biological cell aggregate at low frequency.

The contact of the electrodes with the biological tissue or the electrolyte leads to electrochemical phenomena. This results in a charge distribution in the immediate vicinity of the electrodes and thus in an additional impedance called
“polarization impedance” [4]. This well-known phenomenon is specific to a given electrode-electrolyte interface. The potential due to the polarization impedance depends on the metal-electrolyte combination, the current density, and the frequency [5]. Since water is the primary constituent of both in vivo and in vitro fluids, it is generally assumed that the electrode interface impedance in these fluids is similar to that observed for physiological serum. Considerable data describing the impedance at the electrode-solution interface are available [6].

In therapeutic or diagnostic applications, or in studies of the biological effects of electromagnetic radiation, dosimetric evaluations are greatly affected by the precision of the dielectric parameter values of biological tissues. These parameters are sensitive to many influencing factors, such as the temperature of the target organ. However, these effects remain poorly understood, and the measured values are sparse, taken at various frequencies, and exist only for some organs, as compiled in [7].

II. MATERIAL AND METHODS

Many methods and techniques for the measurement of complex impedance exist and are described in the literature. In this work, the auto-balancing bridge technique, associated with a microsensor based on a multielectrode matrix, was applied to determine the frequency variation of a complex impedance. In the present study, we intended to measure the dielectric properties of blood in the frequency range between 100 Hz and 1 MHz in order to complete previous work at higher frequencies [8].

A. The microsensor

The microsensor consists of an array of sixteen platinum microelectrodes for measurement, with two reference microelectrodes on its surface. This kind of geometry has been employed previously for neural activity recording or stimulation [9], [10], [11]. We have adapted it (Figure 1) for impedance measurements of solutions.
Thus, new design strategies had to be considered in order to optimize the probe performance for bioimpedance measurements at low frequencies [13].
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 186–189, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Bioimpedance spectroscopy of human blood at low frequency using coplanar microelectrodes
187
a PC (Figure 2). Data acquisition and analysis were managed using HPVEE© software. III. RESULTS C. Characterization of the sensor
Figure 1: view of the sensor with its tank The microelectrodes matrix consists of an eighteen microelectrodes (2-ground reference, 2 sets of 8 active electrodes) placed on a plate of substrate glass (1 x 1.2cm). This configuration provides the advantage of an easy fabrication by using standard microelectronic technologies which implies high reliability, low costs and the possibility to integrate other sensors and electronics on the same probe. The active microelectrodes are squares with dimensions of 100x100µm and 180 nm thickness. Microelectrodes reference are rectangular with 1x2.5mm of surface and 180nm thickness. This cell measurement was produced at the Centro Nacional de Microelectronica (CNM-University of Barcelona). The technological process consists in two photolithographic steps starting from a thermal oxydation to grow a thick field layer (800 nm) on 4-in. (~10 cm) P-type ‹100› Si wafers with a nominal thickness of 525 µm and is presented in [8]. B. Electronic Instrumentation The basic measurement system, based on an impedancemeter to which a microsensor is connected, is not described here in details. This sensor is a measurement cell including a Plexiglas tank for the solution under test, the electronic card for data acquisition, the impedancemeter and
The bioimpedance depends on the frequency and on the polarization effects that occur at low frequency. In this paper, the dielectric properties of physiological serum are determined with a monopolar system. The polarization resistance and capacitance of electrodes in contact with standard solutions (potassium chloride) have been investigated. A set of measurements was made at 37 ± 0.5 °C over the 100 Hz to 1 MHz range using a commercial impedancemeter. The measurement microsensor has square coplanar platinum electrodes of 100 µm side and 180 nm height, the applied voltage level being 25 mV. An electrolyte solution of known conductivity was used to characterize the measurement cell. A theoretical model of the impedance was calculated using the finite element method (FEM) for each electrode. The numerical simulation was used to determine the cell factor of the microelectrodes. The effects of polarization on the measured impedance and on the dielectric characteristics were also investigated. Measurements are affected for frequencies lower than 50 kHz with standard platinum electrodes. This limit, due to the polarisation impedance, decreases to 10 kHz when, as is well known, black platinum is used [14].

D. Measurement of blood electrical properties

Measurements were done using standard platinum electrodes and electrodes covered by black platinum, for standard solutions as well as animal and human blood. We present here, as an example, only the relative permittivity and the electric conductivity obtained for human blood at different temperatures between 100 Hz and 1 MHz using microelectrodes covered by black platinum (Figure 3).

Figure 2: View of the sensor and its electronic conditioning (sensor → data acquisition electronic board → impedancemeter → PC)
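The cell factor obtained from the FEM simulation can be cross-checked experimentally with an electrolyte of known conductivity, using the mid-band reading where polarisation is negligible. A sketch with illustrative numbers (not values from the paper), assuming R = k/σ:

```python
def cell_constant(r_measured_ohm, sigma_s_per_m):
    """Cell factor k (in 1/m) from the mid-band resistance of a
    calibration electrolyte of known conductivity: R = k/sigma."""
    return r_measured_ohm * sigma_s_per_m

def conductivity(r_measured_ohm, k_per_m):
    """Conductivity of an unknown sample measured in the same cell."""
    return k_per_m / r_measured_ohm

# Illustrative numbers: 0.01 mol/l KCl has a conductivity of about
# 0.141 S/m at 25 degC; suppose this cell then reads 2.5 kOhm in a
# band where polarisation is negligible.
k = cell_constant(2.5e3, 0.141)   # cell factor of this geometry, 1/m
```

Once k is fixed by calibration, the same cell converts any later resistance reading into a conductivity.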
J. Prado, M. Nadi, C. Margo and A. Rouane
[Figure 3 plot: relative permittivity εr (left axis, 1–10⁸) and conductivity σ in S/m (right axis, 0–1.2) versus frequency (0.1–1000 kHz); curves for εr and σ at 25 °C, 37 °C and 42 °C]
Figure 3: Electric properties of human blood measured with platinum microelectrodes covered by black platinum (εr and σ at 25 °C, 37 °C and 42 °C)

IV. DISCUSSION AND CONCLUSION

For the impedance obtained with the microelectrodes covered with a black platinum layer, approximately 50% of the polarization effects were eliminated for frequencies greater than 10 kHz. For frequencies lower than 10 kHz, however, the electrode-electrolyte interface clearly influenced the measurement of the bioimpedance. These results have shown that the measurement cell can be characterized by determining the dielectric properties with less than 20% error for frequencies above 10 kHz when the microelectrodes are covered by black platinum. The error on the conductivity is mainly due to temperature effects, the electrode/electrolyte impedance and the electrode geometry of the measurement cell. The imaginary part of the complex impedance is strongly influenced by the parasitic capacitances at the microelectrode interface, and thus the permittivity at these frequencies remains too high and difficult to measure. In this first step, the measurement cell was used for a small sample of blood with only one active electrode and one reference electrode. The next step will be to develop comparative measurements between single cells using a multielectrode configuration, since there is no interaction or cross-talk between the microelectrodes. The use of microelectrodes opens up new areas in biomedical applications. Using microelectrode arrays, it
should be possible to monitor cell movement or characterise the dielectric properties of biological cells. Understanding their electromagnetic behaviour and nonlinear phenomena, or analyzing the electromagnetic properties of an isolated cell versus an aggregate of cells [15], are a few examples of the benefits that bioimpedance spectroscopy applications will gain from miniaturization in the near future.
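The permittivity and conductivity curves of Figure 3 follow from the measured complex impedance once the cell factor k is known. A minimal sketch of that conversion, assuming the simple parallel model Y = 1/Z = (σ + jωε₀εr)/k and polarisation effects already removed:

```python
import numpy as np

EPS0 = 8.854187817e-12   # vacuum permittivity, F/m

def dielectric_from_impedance(z, f_hz, k_per_m):
    """Recover conductivity (S/m) and relative permittivity from a
    measured complex impedance: the real part of the admittance gives
    sigma, the imaginary part gives eps_r (k is the cell factor, 1/m)."""
    y = 1.0 / np.asarray(z, dtype=complex)
    omega = 2.0 * np.pi * np.asarray(f_hz, dtype=float)
    sigma = k_per_m * y.real
    eps_r = k_per_m * y.imag / (omega * EPS0)
    return sigma, eps_r
```

The model is the standard conductance-plus-capacitance cell; residual interface capacitance is exactly what corrupts εr at low frequency, as noted in the Discussion.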
ACKNOWLEDGMENT

We thank the Centro Nacional de Microelectronica (CNM) for the fabrication of the measurement cell, and Dr. Antoni Ivorra and Mr. Rodrigo Gomez from the University of Barcelona for their help and advice.
REFERENCES

1. Schwan H P (1963) Determination of biological impedances. Chapter 6 in Physical Techniques in Biological Research, vol 6, Academic Press
2. Foster K R, Schwan H P (1996) Dielectric properties of tissues. Chapter 1 in Polk C, Postow E (eds) Handbook of Biological Effects of Electromagnetic Fields, 2nd edn, CRC Press, pp 27-102
3. Gomez R, Bashir R, Sarikaya A, Ladish M R, Sturgis J, Robinson J P, Geng T, Bhunia A K, Apple H L, Werely S (2001) Microfluidic biochip for impedance spectroscopy of biological species. Biomedical Microdevices 3:201-209
4. Fricke H (1932) The theory of electrolytic polarisation. Phil. Mag. 14:310-318
5. Schwan H P (1966) Alternating current electrode polarisation. Biophysik 3:181-201
6. Grimnes S, Martinsen O G (2000) Bioimpedance and Bioelectricity Basics. Academic Press
7. Gabriel C, Gabriel S, Corthout E (1996) The dielectric properties of biological tissues: I. Literature survey. Phys. Med. Biol. 41:2231-2249
8. Jaspard F, Nadi M (2002) Dielectric properties of blood: an investigation of temperature dependence. Physiol. Meas. 23:547-554
9. Ivorra A, Gomez R, Noguera N, Villa R, Sola A, Palacios L, Hotter G, Aguilo J (2003) Minimally invasive silicon probe for electrical impedance measurements in small animals. Biosensors Bioelectron. 19:391-399
10. Borkholder D A, Bao J, Maluf N I, Perl E R, Kovacs G T (1997) Microelectrode arrays for stimulation of neural slice preparations. J Neurosci Methods 77:61-66
11. Kovacs G T A, Storment C W, Rosen J M (1992) Regeneration microelectrode array for peripheral nerve recording and stimulation. IEEE Trans. Biomed. Eng. 39:893-902
12. Ackmann J J, Seitz M A (1984) Methods of complex impedance measurements in biologic tissue. CRC Crit. Rev. Biomed. Eng. 11:281-311
13. Markx G H, Davey C L (1999) The dielectric properties of biological cells at radiofrequencies: applications in biotechnology. Enzyme and Microbial Technology 25:161-171
14. McAdams E T, Jossinet J (1991) Electrode-electrolyte impedance and polarisation. Innov. Tech. Biol. Med. 12:11-20
15. Pavlin M, Miklavcic D (2003) Effective conductivity of a suspension of permeabilized cells: a theoretical analysis. Biophys. J. 85:719-729

Corresponding author:
Mustapha Nadi
Electronic Instrumentation Laboratory of Nancy, Nancy University
BP 239, Bd des Aiguillettes
54506 Vandoeuvre-lès-Nancy
France
[email protected]
Dielectric properties of water and blood samples with glucose at different concentrations
A. Tura1, S. Sbrignadello1, S. Barison2, S. Conti3, G. Pacini1
1
ISIB-CNR, Padova, Italy, 2 IENI-CNR, Padova, Italy, 3 BC Dynamics, Milan, Italy
Abstract - Impedance spectroscopy has been proposed as a possible approach for non-invasive glycaemia monitoring. However, few quantitative data have been reported about impedance variations related to glucose concentration variations, especially below the MHz band. Furthermore, it is not clear whether glucose directly affects the impedance parameters or only indirectly, by inducing biochemical phenomena. We investigated the impedance variations in glucose-water and glucose-blood samples for increasing glucose values (up to 300 mg/dl). Over the whole frequency range (0.1–10⁷ Hz), glucose-water samples showed large increases in impedance modulus for increasing glucose values (up to 135%). In blood, the impedance modulus showed only slight variations (2%), but again over a wide frequency range. Therefore: i) glucose directly affects the impedance parameters of solutions; ii) its influence on the impedance seems to decrease in high-conductivity solutions, but it is still clearly present.

Keywords - Impedance spectroscopy, glycaemia, diabetes, monitoring, non-invasive
I. INTRODUCTION

In recent years the measurement of tissue and blood impedance through an alternating current has been suggested as a non-invasive approach to determine glycaemia [1]. In [2], it was shown that variations in blood glucose concentration determine significant changes in the impedance of a subject's skin and underlying tissues in a range between 1 and 200 MHz. However, the authors claimed that the observed impedance changes were not due directly to glucose but to biochemical reactions triggered by variations in glucose concentration, which cause variations in the electrolyte balance across the erythrocyte membrane. In other studies, however, impedance variations were found in glucose-water solutions with different glucose concentrations, even though no cellular component was present. This was observed even at glucose concentration values that mimic glycaemic levels in human blood [3]. On the other hand, in [3] the impedance differences were observed only in a relatively narrow frequency range. These partially contradictory results show that it is not completely clear whether glucose directly affects the impedance behavior of a solution, especially when
physiological concentration levels are considered. The aim of this study was to examine possible impedance variations in solutions at different glucose concentrations within the physiological range. We studied glucose solutions both in pure water and in blood, in an in vitro context. Special attention was devoted to the analysis of low frequencies, which were poorly investigated in previous studies, especially in blood.

II. MATERIALS AND METHODS

A. Preparation of samples

A sample of deionized water (18.5 MΩ · cm resistivity, Millipore MilliQ Element system, Billerica MA, USA) was prepared. The same water was used to prepare three glucose-water samples, at glucose concentrations spanning from normal glycaemia to that observed in severe diabetes, i.e. 100, 200 and 300 mg/dl. Each sample consisted of 50 ml of water, to which D-glucose (99.5%, Fluka) was added to reach the indicated concentrations. For the preparation of blood samples we collected 500 ml of bovine blood immediately after slaughter. In the blood container we had previously poured 1 g of potassium oxalate (99.98%, Sigma-Aldrich) and 1.25 g of sodium fluoride (99.99%, Sigma-Aldrich), acting as anticoagulant and glycolytic inhibitor, respectively [4]. We then measured the glucose concentration of the blood sample with two portable glucose meters (Freestyle, TheraSense, and Glucomen, Menarini Diagnostics). We performed two measurements with each meter: the average value was 65 mg/dl. Blood was then stored in a refrigerator at 4 °C. In the following hours we checked the glucose concentration again several times: the differences compared to the first measurements were always within the precision of the meters, confirming that glycolysis was properly inhibited. D-glucose was then added to obtain blood samples with concentrations similar to those indicated above.

B. Impedance measurement

Within 72 hours of sample preparation we performed the impedance measurements with a Solartron 1260 impedance analyzer. For the measurement cell a probe from
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 194–197, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Delta OHM was chosen (SP06T model). As shown in Fig. 1, the cell is characterized by four platinum electrodes, separating stimulation and sensing terminals and thus minimizing possible secondary effects (such as cable inductance or parasitic capacitances) that can influence the accuracy of the impedance measurement [5]. The electrodes are surrounded by a bell: when the cell is immersed into the sample to be studied, the measurement region is delimited and kept constant. The cell also includes a temperature sensor. The cell constant is 0.7. Through the Solartron 1260 we applied a 100 mV r.m.s. voltage to the outer pair of electrodes. The electric current was read through the inner electrodes. We analyzed the impedance of the samples in the 10⁻¹–10⁷ Hz range, measuring at five frequency points per decade. For each sample studied, we performed two independent measurements: after the first measurement the cell was cleaned before immersing it again into the sample. The impedance values presented for each sample are the average of the two measurements. All the impedance measurements were performed with the samples at ambient temperature (23 °C, with maximum variations of ±0.3 °C). All the measurements were corrected through the open-short compensation technique.

III. RESULTS

The impedance modulus of water and of the glucose-water mixtures is reported in Fig. 2. The modulus increased for increasing glucose concentration values in a wide frequency range. More precisely, the frequency range where the differences were most evident (we define it as the reference range) was 0.1–800 Hz. Outside this range, the differences were less clear, as the modulus curves showed relatively frequent intersections at some frequency values. As regards the phase, a decrease was observed for increasing glucose values, though the variations were less marked than those observed in the modulus. The reference range for the phase
Fig. 2 Impedance modulus for glucose-water samples (pure water: empty circles; 100, 200, 300 mg/dl glucose: full circles, triangles, squares, respectively)

was 80 Hz–10⁷ Hz, though from 10⁵ Hz onwards the differences were small. The percentage differences between the blank sample and the sample at the highest glucose concentration, for both modulus and phase in their own reference ranges, were 125±17% (mean±standard deviation) and 43±56%, respectively. Maximum and minimum values were 135% and 63% for the modulus, and 157% and 0.1% for the phase. As regards blood, when looking at the modulus and phase curves over the whole frequency range, almost no variation can be appreciated between the different samples. However, when the analysis was focused on a specific (though still wide) frequency range, some differences emerged. In fact, in the 8–2·10⁶ Hz frequency range there was a slight but evident difference in the impedance modulus: similarly to the glucose-water samples, the modulus increased for increasing glucose concentrations, as shown in Fig. 3. Outside the reported reference range, the modulus curves showed frequent intersections. A similar analysis for the phase showed that there was again a relatively wide frequency range, i.e. 2·10⁵–8·10⁶ Hz, where a slight phase decrease for increasing glucose concentration was observed over the whole range. Thus, in a frequency range which almost covers the whole studied range, at least one of impedance modulus and phase showed slight but clear variations for increasing glucose concentrations. The percentage differences between the sample with endogenous glucose only and that at the highest glucose concentration, for both modulus and phase in their own reference ranges, were 2.00±0.09% and 1.51±0.11%, respectively. Maximum
Fig. 1 Inner part of the measurement cell

Fig. 3 Impedance modulus for glucose-blood samples in a portion of the studied frequency range (blood alone (65 mg/dl glucose): empty circles; 100, 200, 300 mg/dl glucose: full circles, triangles, squares, respectively)
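The open-short correction named in the Methods is the textbook fixture de-embedding: the short measurement captures the series residuals, the open measurement the stray admittance in parallel with the sample. A sketch with made-up fixture values (not the actual Solartron residuals):

```python
def open_short_compensate(z_meas, z_short, z_open):
    """Textbook open/short compensation: subtract the series residual
    seen in the short measurement, then remove the stray parallel
    admittance seen in the open measurement:
        Z_dut = (Zm - Zs) / (1 - (Zm - Zs) / (Zo - Zs))."""
    z_series_removed = z_meas - z_short
    return z_series_removed / (1.0 - z_series_removed / (z_open - z_short))

# Forward-model check with illustrative fixture values: 2 Ohm series
# residual, 1e-5 S stray admittance, true DUT of 100 Ohm.
z_s, y_o, z_dut = 2.0, 1.0e-5, 100.0
z_m = z_s + 1.0 / (y_o + 1.0 / z_dut)   # what the analyzer would read
z_corr = open_short_compensate(z_m, z_s, z_s + 1.0 / y_o)
```

The same function applies unchanged to complex impedances, which is how it is used across a frequency sweep.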
A. Tura, S. Sbrignadello, S. Barison, S. Conti, G. Pacini
and minimum values were 2.24% and 1.68% for the modulus, and 1.60% and 1.27% for the phase.

IV. DISCUSSION

In recent years a strong effort has been devoted to the development of techniques for non-invasive glucose measurement [1]. Some of these approaches have led to the production of non-invasive glucose meters [6], but for several reasons many of them remained at the prototype level. The only one available today is the GlucoWatch [7], and it has several drawbacks [6]. A promising approach for non-invasive measurement of glycaemia is impedance spectroscopy. Some device prototypes have been developed based on this approach, and one of them also reached the market [1,8,9], but it was withdrawn and the company filed for bankruptcy [10]. A new company seems to be working on a similar device [11,12], but at the moment no such device is on the market. In [2] the authors claimed that the measurement of glycaemia through impedance spectroscopy is possible because variations in blood glucose concentration induce transport phenomena of electrolytes through the cell membrane, which result in variations in the dielectric properties of the medium. The most relevant phenomenon is the lowering of the plasma sodium concentration in the presence of hyperglycaemia [13]. In [2] it was claimed that these effects are entirely responsible for the impedance variation of blood and underlying tissues, since glucose variations do not directly affect the dielectric properties of the investigated medium in the MHz band, as also stressed in other studies from the same research group [14,15]. In fact, references were provided to other studies where the effect of variations in glucose concentration was studied in water [16,17]. In [17] it was shown that at glucose concentrations lower than 1 g/cc the dielectric properties of the glucose-water solution are not different from those of pure water. However, a more recent study contradicts these findings.
In [3] the dielectric properties of glucose-water solutions were found to differ for glucose concentration values varying within the physiological range. In particular, the impedance modulus increased for increasing glucose concentrations within the 1 kHz–1 MHz band. The first aim of our study was to reproduce the reported experiments on glucose-water solutions. Our results essentially confirm those of [3], but over an even wider frequency band: in fact, in the whole investigated range, i.e. 0.1 Hz–10 MHz, we observed a significant variation in the impedance modulus, phase, or both, though the greater differences were found for frequencies lower than 100 kHz. Thus, we can claim that variations in glucose concentration, even at low values such as physiological ones, directly affect
the dielectric properties of a solution, independently of other mechanisms that may be induced by glucose variations. On the other hand, it is confirmed that the impedance variations due to variations in glucose are certainly more evident at low frequencies, and this may partially explain why they were not observed in the studies [16,17], where frequencies over 1 MHz were considered. It must also be noted that the partial differences between our results and those of [3] may be due to the use of a different measurement cell. In fact, we used a four-electrode cell instead of a simple two-electrode cell, thus allowing four-terminal measurements less prone to noise effects at medium-high frequencies [5]. Furthermore, we used platinum instead of stainless steel electrodes, the latter being more sensitive to the effects of possible reactions with the solution at low frequencies. Few data can be found in the literature on blood impedance at different frequencies in relation to glucose concentration values. In [2] some impedance data were reported from an in vivo experiment on humans where glycaemia was 100 and 200 mg/dl, though only frequencies above 1 MHz were investigated. It was shown that both impedance modulus and phase differed between the two glucose concentration values in some frequency ranges, and, similarly to our results, higher values were found for the 200 mg/dl concentration. As regards the modulus, which was the impedance parameter of major interest, it was claimed that the sensitivity to glucose changes was between 20 and 60 mg/dl glucose/Ω, and this is in acceptable agreement with our results, though the sensitivity that we found was slightly lower. In fact, the best sensitivity that we observed for the modulus between the 100 and 200 mg/dl samples was about 110 mg/dl glucose/Ω (100 mg/dl: 72.2 Ω; 200 mg/dl: 73.1 Ω), at frequencies around 1 kHz.
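The sensitivity figure quoted above follows directly from the two modulus readings; a two-point sketch of the arithmetic:

```python
def modulus_sensitivity(c1_mg_dl, z1_ohm, c2_mg_dl, z2_ohm):
    """Glucose change per ohm of impedance-modulus change
    (mg/dl per Ohm), from two (concentration, |Z|) readings."""
    return (c2_mg_dl - c1_mg_dl) / (z2_ohm - z1_ohm)

# Values quoted in the text: 72.2 Ohm at 100 mg/dl and 73.1 Ohm at
# 200 mg/dl, around 1 kHz.
s = modulus_sensitivity(100.0, 72.2, 200.0, 73.1)   # ~111 mg/dl per Ohm
```

(100 mg/dl) / (0.9 Ω) ≈ 111 mg/dl per Ω, consistent with the "about 110" stated in the text.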
In [3] the dielectric properties of blood for different glucose concentrations were studied in vivo on hamsters, and variations in the dielectric parameters were observed for glucose concentrations varying between 150 and 300 mg/dl. However, only one frequency value was investigated (10 kHz), and only semi-quantitative results were reported. Some studies investigated the dielectric properties of PBS buffers with suspended erythrocytes for different glucose concentrations [14,15]. In [14] different glucose concentrations were considered, ranging from zero to about 400 mg/dl, and the analysis was performed between 10 kHz and 100 MHz. Variations in the dielectric properties of the buffers were found for the different glucose concentrations. However, the dielectric parameters showed a non-monotonic pattern for increasing glucose values, in contrast to our results. In [15] the analysis was extended to 2 GHz, with similar findings. The authors claimed that this non-monotonic behavior may be due to erythrocyte activity at the membrane level, but no further details were provided. In our study, large variations in the impedance parameters for different glucose values were observed only in water. This suggests that the ability of glucose to induce variations in the dielectric properties of the solution may depend on the conductivity levels involved. In fact, in blood, whose conductivity is much higher than that of water, the impedance variations were modest compared to those in water, but still clearly present. In conclusion, this study investigated the effect of glucose concentration on the impedance of different solutions, i.e. glucose in water and in blood. Few studies have shown the impedance variations of blood for different glucose concentrations within the physiological range, and to our knowledge no study has examined in detail the frequency values below the MHz band: this is one of the main novelties of this study. The advantage of focusing on frequency values below the MHz band for possible future clinical applications may lie in a lower sensitivity to electromagnetic noise in the environment. The study showed that glucose is able to directly affect the impedance of the investigated samples. In blood, slight but clear impedance variations for different glucose values were observed in a wide frequency range, especially below 1 MHz. Possible indirect mechanisms involving cells may only contribute to the observed total variations.
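The band-restricted percentage statistics reported in the Results (mean ± SD over a "reference range") can be computed as follows; the spectra here are toy values, not the measured data:

```python
import numpy as np

def band_percent_difference(f_hz, ref_curve, test_curve, f_lo, f_hi):
    """Mean and standard deviation of the point-by-point percentage
    difference between two spectra, restricted to a reference band,
    as done for the modulus and phase statistics in this study."""
    f = np.asarray(f_hz, dtype=float)
    ref = np.asarray(ref_curve, dtype=float)
    test = np.asarray(test_curve, dtype=float)
    mask = (f >= f_lo) & (f <= f_hi)
    diff = 100.0 * (test[mask] - ref[mask]) / ref[mask]
    return float(diff.mean()), float(diff.std())

# Toy spectra: the band restriction excludes the 1000 Hz point.
m, sd = band_percent_difference([1, 10, 100, 1000],
                                [100, 100, 100, 100],
                                [110, 120, 110, 120], 1, 100)
```

Restricting the statistics to the band where the curves do not intersect is what makes the small (≈2%) blood differences meaningful rather than averaged away.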
ACKNOWLEDGEMENTS

The authors thank Dr. Franceschini for the supply of bovine blood, and Dr. G. Sbrignadello and Dr. M.C. Scaini for their useful comments and help. The study was partially supported by a grant from Regione Veneto (DGR 2702/1009-04) and by CNR in the framework “Ricerca Spontanea a Tema Libero” (Research number: 946).

REFERENCES

1. Khalil OS (2004) Non-invasive glucose measurement technologies: an update from 1999 to the dawn of the new millennium. Diabetes Technol Ther 6:660-697
2. Caduff A, Hirt E, Feldman Y, Ali Z, Heinemann L (2003) First human experiments with a novel non-invasive, non-optical continuous glucose monitoring system. Biosens Bioelectron 19:209-217
3. Park JH, Kim CS, Choi BC et al. (2003) The correlation of the complex dielectric constant and blood glucose at low frequency. Biosens Bioelectron 19:321-324
4. Chan AY, Swaminathan R, Cockram CS (1989) Effectiveness of sodium fluoride as a preservative of glucose in blood. Clin Chem 35:315-317
5. Awan SA, Kibble BP (2005) Towards accurate measurement of the frequency dependence of capacitance and resistance standards up to 10 MHz. IEEE Trans Instrum Meas 54:516-520
6. Tura A, Maran A, Pacini G (2006) Non-invasive glucose monitoring: assessment of technologies and devices according to quantitative criteria. Diabetes Res Clin Pract. In press (DOI: 10.1016/j.diabres.2006.10.027)
7. Tierney MJ, Tamada JA, Potts RO et al. (2001) Clinical evaluation of the GlucoWatch biographer: a continual, non-invasive glucose monitor for patients with diabetes. Biosens Bioelectron 16:621-629
8. Pfutzner A, Caduff A, Larbig M et al. (2004) Impact of posture and fixation technique on impedance spectroscopy used for continuous and noninvasive glucose monitoring. Diabetes Technol Ther 6:435-441
9. Weinzimer SA (2004) PENDRA: the once and future noninvasive continuous glucose monitoring device? Diabetes Technol Ther 6:442-444
10. Wentholt IM, Hoekstra JB, Zwart A et al. (2005) Pendra goes Dutch: lessons for the CE mark in Europe. Diabetologia 48:1055-1058
11. Forst T, Caduff A, Talary M et al. (2006) Impact of environmental temperature on skin thickness and microvascular blood flow in subjects with and without diabetes. Diabetes Technol Ther 8:94-101
12. Caduff A, Dewarrat F, Talary M et al. (2006) Non-invasive glucose monitoring in patients with diabetes: a novel system based on impedance spectroscopy. Biosens Bioelectron 22:598-604
13. Hillier TA, Abbott RD, Barrett EJ (1999) Hyponatremia: evaluating the correction factor for hyperglycemia. Am J Med 106:399-403
14. Hayashi Y, Livshits L, Caduff A (2003) Dielectric spectroscopy study of specific glucose influence on human erythrocyte membranes. J Phys D: Appl Phys 36:369-374
15. Caduff A, Livshits L, Hayashi Y (2004) Specific D-glucose influence on electric properties of cell membrane at human erythrocyte studied by dielectric spectroscopy. J Phys Chem B 108:13827-13830
16. Fuchs K, Kaatze U (2001) Molecular dynamics of carbohydrate aqueous solutions. Dielectric relaxation as a function of glucose and fructose concentration. J Phys Chem B 105:2036-2042
17. Mashimo S, Miura N, Umehara T (1992) The structure of water determined by microwave dielectric study on water mixtures with glucose, polysaccharides, and L-ascorbic acid. J Chem Phys 97:6759-6765

Corresponding author:
Andrea Tura, PhD
ISIB-CNR
Corso Stati Uniti, 4
Padova, Italy
[email protected]
FENOTIP: Microfluidics and Nanoelectrodes for the Electromagnetic Spectroscopy of Biological Cells V. Senez1, A. Treizebré1, E. Lennon3, D. Legrand2, H. Ghandour1, B. Bocquet1, T. Fujii3 and J. Mazurier2 1
IEMN/CNRS-USTL-ISEN, University of Lille, 59652 Villeneuve d’Ascq, France 2 UGSF/CNRS-USTL, University of Lille, 59652 Villeneuve d’Ascq, France 3 CIRMM/IIS, University of Tokyo, 153-8505 Tokyo, Japan
Abstract— This work is focused on the area of ligand-receptor interaction analysis. The purpose is to be able to assign, in real time, a specific crossing pathway to a ligand/receptor pair, without the use of molecular labels. The classification is based on changes in the electrical properties of the cells. Various BioMEMS have been designed and fabricated in order to characterize the variation of the electrical properties of biological cells. We are interested in both dielectric (i.e. polarization) and vibrational (i.e. absorption) spectroscopy. Several devices are currently being tested for low (<10 MHz) and high (>40 GHz) frequency range (LFR & HFR) measurements. In the LFR, we have fabricated coplanar and 3D electrode sensors for impedance measurements. In the HFR, we have designed and processed coplanar waveguides. In the LFR, we have performed static and dynamic measurements on small clusters of cells. In the HFR, we have shown that we can propagate microwaves along submicrometer single wires (Goubau propagation). We are going to use these HFR devices for measurements on small clusters of cells.

Keywords— Living Cell, Cell Signaling, Dielectric & Vibrational Spectroscopy, BioMEMS, Nano-Electrodes and Wires
I. INTRODUCTION

Biological cell analysis is a very important field of research. Currently, the prevailing paradigm for analyzing cellular functions is the study of biochemical interactions using fluorescence-based imaging systems. However, the elimination of the labeling process is highly desirable to improve the accuracy of the analysis. Recent developments in micro- and nanofabrication technologies offer great opportunities for the analysis of biological cells; the combination of microfluidic environments, nano-electrodes/wires and ultra-wide-band electromagnetic engineering will soon make possible the investigation of local (submicrometer-scale) dynamic processes integrating several events at different time scales. In this paper, we present our work, which aims at investigating living cells with the help of MEMS and NEMS (Micro and Nano Electro Mechanical Systems) and ultra-wide-band (DC-THz) electromagnetic characterization techniques. We are working with a well-characterized biological model
(ligand: lactoferrin; receptors: nucleolin and sulfated proteoglycans) [1]. We propose to follow the various phases of assembly between the ligands and the receptors, present or not on a set of mutant CHO cell lines, and the internalization of the ligand/receptor complexes into the cells (Fig. 1).

II. MICRODEVICES AND TECHNOLOGIES

Permittivity measurements across a range of frequencies provide information about the species and their chemical environment. Features in the dielectric spectrum (the polarization relaxations) are classified as α-, β- and γ-dispersions. In the HFR range, the spectrum may exhibit rotation-absorption phenomena. For biological species, access to a broad frequency range is absolutely necessary due to their size and chemical diversity. That is why we are currently investigating different types of structures. We have tested various methods for the immobilization of the cells: i) mechanical (Fig. 2a), ii) chemical (Fig. 2b) and iii) electrical (Fig. 2c).
Fig. 1 Binding of lactoferrin on two mutant CHO cell lines. The CHO 677 line does not allow endocytosis of lactoferrin.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 170–173, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 3 Impedance microsensor (4-point): a) 3D silicon and planar gold electrodes, b) 3D and planar gold electrodes. Channels are made of PDMS.
Fig. 2 a) Single-cell measurement set-up using the deformability of the cell between 3D electrodes; b) patterning and immobilization of CHO cells on a glass substrate by surface treatment with OTS and amine; c) microfluidic chip for dielectrophoretic immobilization of single cells.

In the low frequency range (LFR) (<10 MHz), we have designed and fabricated coplanar and 3D electrode sensors for impedance measurements (Fig. 3a-b). In the HFR, we are using coplanar slotlines (CS) (Fig. 4a) and we are also investigating a waveguide coupled to a Goubau line (GL) (Fig. 4b). The GL is a single-wire line showing low-loss transmission thanks to the excitation of a surface electric wave [2]. BioMEMS are fabricated on silicon or glass substrates with microfluidic parts in polymeric materials. Different polymer materials are available (e.g. polycarbonate, polyethylene, polymethylmethacrylate, polystyrene, polydimethylsiloxane (PDMS)).
Fig. 4 HFR devices: a) slot lines on PPTMDSO; b) Coplanar wave guide to Goubau line on quartz.
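For the coplanar structures, a first-order design check is the quasi-static conformal-mapping formula for the characteristic impedance of a CPW on a thick substrate. A sketch with an illustrative geometry and permittivity (not the fabricated device's dimensions), computing the elliptic integral with the arithmetic-geometric mean:

```python
import math

def ellip_k(k):
    """Complete elliptic integral of the first kind K(k), modulus k,
    via the arithmetic-geometric mean: K(k) = pi / (2 * agm(1, k'))."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def cpw_z0(w_center, gap, eps_r):
    """Quasi-static characteristic impedance of a coplanar waveguide on
    an infinitely thick substrate (standard conformal-mapping result):
    Z0 = (30*pi/sqrt(eps_eff)) * K(k')/K(k), with k = a/b, a = w/2,
    b = w/2 + gap, eps_eff = (eps_r + 1)/2."""
    a = 0.5 * w_center
    b = a + gap
    k = a / b
    kp = math.sqrt(1.0 - k * k)
    eps_eff = 0.5 * (eps_r + 1.0)
    return 30.0 * math.pi / math.sqrt(eps_eff) * ellip_k(kp) / ellip_k(k)

# Illustrative geometry: 50 um centre conductor, 30 um gaps, and an
# assumed substrate permittivity of 3.9.
z0 = cpw_z0(50e-6, 30e-6, 3.9)   # on the order of 80 Ohm
```

Full-wave simulation (as with Microwave Studio in Section III) is still needed once the loaded channel and finite substrate thickness are included; this closed form only sets the starting geometry.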
V. Senez, A. Treizebré, E. Lennon, D. Legrand, H. Ghandour, B. Bocquet, T. Fujii and J. Mazurier
Fig. 5 Gold planar waveguide is deposited on PPTMDSO substrate. The inset shows the texture of gold.
We usually use PDMS, which is optically transparent, amenable to micromolding, biocompatible, and has excellent O2 and CO2 permeability. However, it has several disadvantages: it shows fairly high losses (εr ≈ 2.7, tan δ ≈ 0.04 in the V- and W-bands), its Young's modulus is extremely low (<5 MPa), and its metallization requires a surface treatment (e.g. with a Teflon-like material). That is why we have selected plasma-polymerized tetramethyldisiloxane (PPTMDSO) [3]. A new technological process based on a cold remote nitrogen plasma (Fig. 5) allows us to obtain 50-80 µm thick layers with a rigid texture and very good adhesion to the silicon substrate. This process is now well defined and compatible with a classical microelectronic process.
Fig. 6 Advanced model based on the transport lattice method for cell-electric field interaction in a microfluidic channel: a) geometry and mesh of the structure; b) magnitude and phase of the electrical impedance calculated with SPICE.
III. MODELING AND MEASUREMENT

The accuracy of the electromagnetic characterizations depends on the correct design of the sensors (coplanar electrodes or waveguides, 3D electrodes, GL, etc.). In the LFR, one can solve the Laplace equation with the finite element method (FEM). However, for a realistic dielectric model of the cell, geometrical details prevent the use of FEM due to memory and solver limitations, and we are developing an approach based on Kirchhoff's laws and the transport lattice method (Fig. 6a-b) [4]. In the HFR, one can solve the Maxwell equations and obtain the spatial distribution of the electric and magnetic fields along the CPW, SL and GL (Fig. 7a-b). Simulations have been performed with the Microwave Studio software from CST. In the future we have to work on the high frequency modeling of the cell. In the LFR, we have already performed measurements on small clusters of cells and single cells using planar gold electrodes or 3D silicon electrodes (Fig. 8a-b).
Fig. 7 a) Distribution of the electric field and b) reflection and transmission coefficients for a microwave guide, obtained by solving the Maxwell equations with Microwave Studio from CST.
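The transport-lattice idea reduces the field problem to Kirchhoff's current law on a grid of lumped admittances, which SPICE-like solvers handle efficiently. A minimal toy sketch (our own illustration, not the FENOTIP code): nodal analysis of a rectangular admittance lattice between two electrode columns.

```python
import numpy as np

def lattice_impedance(nx, ny, y_edge):
    """Impedance of an nx-by-ny node lattice between its left and right
    node columns, obtained by nodal (Kirchhoff) analysis.
    y_edge(i, j, i2, j2) gives the complex admittance of each lattice edge."""
    n = nx * ny
    idx = lambda i, j: i * ny + j
    Y = np.zeros((n, n), dtype=complex)           # nodal admittance matrix
    for i in range(nx):
        for j in range(ny):
            for i2, j2 in ((i + 1, j), (i, j + 1)):
                if i2 < nx and j2 < ny:
                    y = y_edge(i, j, i2, j2)
                    a, b = idx(i, j), idx(i2, j2)
                    Y[a, a] += y
                    Y[b, b] += y
                    Y[a, b] -= y
                    Y[b, a] -= y
    left = [idx(0, j) for j in range(ny)]
    right = [idx(nx - 1, j) for j in range(ny)]
    fixed = left + right
    v = np.zeros(n, dtype=complex)
    v[left] = 1.0                                 # 1 V on left electrode, 0 V on right
    free = [k for k in range(n) if k not in fixed]
    if free:                                      # KCL at the unconstrained nodes
        rhs = -Y[np.ix_(free, fixed)] @ v[fixed]
        v[free] = np.linalg.solve(Y[np.ix_(free, free)], rhs)
    i_in = (Y @ v)[left].sum()                    # current injected at the left electrode
    return 1.0 / i_in                             # Z = V / I with V = 1

# Example: a 5x4 lattice where every edge is a 1 kOhm resistor in parallel
# with a 10 pF capacitor (a crude membrane-like RC patch), at 1 MHz
w = 2 * np.pi * 1e6
z = lattice_impedance(5, 4, lambda *e: 1 / 1e3 + 1j * w * 10e-12)
```

Sweeping the frequency and letting `y_edge` return membrane-like or cytoplasm-like admittances per edge gives the kind of impedance spectrum shown in Fig. 6b.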
In Fig. 8, one can observe that the sensor can detect the presence of the cell between the electrodes in the [1 kHz-1 MHz] range. This impedance increase corresponds to the impedance of the cell's membrane (i.e. capacitive). Current experiments aim at studying the effect of receptor-ligand (i.e. lactoferrin-proteoglycan) interactions on the impedance of the membrane. Very preliminary results show a decrease of the impedance during the first ten minutes after the introduction of lactoferrin and a recovery within 30 minutes.

Fig. 8 a) SEM picture of the 3D electrodes used to characterize the single cell; b) ratio of the amplitudes of the electrical impedance (cell+medium/medium) versus frequency.

In the HFR, on-wafer measurements (Fig. 9) have been made with vectorial network analysers (VNA) (see inset in Fig. 9) covering the [40 MHz-110 GHz] and [140 GHz-220 GHz] bands on a back-to-back structure (Fig. 4). The value of the transmission parameter (S21), -5 dB at 140 GHz, experimentally demonstrates the propagation of the wave. These preliminary results show that GLs are promising sensors for BioMEMS applications. The next step is to perform measurements on a single cell with this GL.

ACKNOWLEDGMENT

Parts of this work have been carried out within the FENOTIP project funded by the French National Research Agency (ANR) as PNANO n°05-0244-A3, and the grant of Dr. E. Lennon was supported by the Japan Society for the Promotion of Science (JSPS).
Fig. 9 a) Simulated and measured reflection (S11)/transmission (S21) coefficients of a Goubau line (GL) measured with a vectorial network analyzer (inset); b) schematic of the BioMEMS under fabrication incorporating the GL and the microfluidic channel for single-cell characterization.

REFERENCES
1. Legrand D, Vigié K, Said E, Elass E, Masson M, Slomianny M-C, Carpentier M, Briand JP, Mazurier J, Hovanessian A (2004) Surface nucleolin participates in both the binding and endocytosis of lactoferrin in target cells. Eur J Biochem 271:303-317
2. Wang K, Mittleman D (2004) Metal wires for terahertz wave guiding. Nature 432:376-379
3. Mille V, Bourzgui NE, Medjdoub F, Bocquet B (2004) Design of silicon-pTMDS bio-MEMS with millimeter and sub-millimeter waves transducers. 34th European Microwave Conference 1:169-172
4. Smith K, Gowrishankar T, Esser A, Stewart D, Weaver J (2006) The spatially distributed, dynamic transmembrane voltage of cells and organelles due to 10 ns pulses: meshed transport networks. IEEE Trans Plasma Sci 34:1394-1404
Author: Vincent Senez
Institute: IEMN/CNRS
Street: Avenue Poincaré - BP 60069
City: 59652 Villeneuve d'Ascq
Country: France
Email: [email protected]
Impedance method for determination of the root canal length
D. Krizaj1, J. Jan2 and T. Zagar1
1 University of Ljubljana, Faculty of Electrical Engineering, Lab. for Bioelectromagnetics, Trzaska 25, 1000 Ljubljana, Slovenia
2 University of Ljubljana, Medical School, 1000 Ljubljana, Slovenia
Abstract— Accurate root canal length determination is crucial for a successful endodontic procedure, in which the root canal of the tooth is opened, cleaned and sealed. The impedance method is becoming a standard method for determination of the position of the apical foramen. Current devices rely on determination of the apical foramen from the ratio of impedances measured at two or more frequencies. This investigation aims to determine the parameters that locate the apical foramen with the best possible accuracy. The investigation revealed an optimal ratio of 0.79 determined at frequencies of 5 kHz and 1 kHz, with a standard deviation of about 1 mm.

Keywords— root canal, impedance method, ratio method, impedance spectroscopy.
I. INTRODUCTION

Root canal length determination of a human tooth by measuring the electrical impedance between a file inserted into the root canal of the tooth and an outside electrode applied to the oral mucosa is becoming a standard method in endodontics, a special branch of dentistry. Exact assessment of root canal length is a crucial factor for successful endodontic treatment. Several methods have been proposed to improve the accuracy of the measurements, and several investigations have been performed to evaluate the devices on the market. On the other hand, not much research has been devoted to the investigation of the technique itself and its limitations. The impedance method is tempting due to its simplicity of use as well as the low cost of the device; however, due to the large variation in electrical and morphological properties of teeth, the technique can be regarded as accurate only up to a certain limit.

The first attempts to use the measurement of electrical resistance between the tip of a file placed inside the root canal and an electrode placed on the oral mucosa go back to 1962. Sunada (Sunada, 1962) found that the tip of the file reaches the apical foramen when the electrical resistance drops to about 6.5 kΩ. It was, however, difficult to obtain consistent values, especially due to the polarization effects at the surface of the electrodes that occur with direct (constant) current. An improvement was achieved by an approach suggested by Kobayashi (Kobayashi et al., 1991). Two frequencies of 0.4 kHz and 8 kHz were used, and the ratio instead of the difference between the impedances was used as a measure for determination of the root canal length. Several investigations have been published evaluating the devices for measuring the root canal length (for instance Czerw et al., 1995; Haffner et al., 2005; Venturi and Breschi, 2005) as well as the method itself (Križaj, 2004). In general the results indicate the method is accurate only up to a certain limit. Former investigations were mostly focused on comparisons of different types of apparatus (devices) and not on the method itself. In order to advance our understanding of the operation of these devices, as well as to further improve the measurement techniques, we analyzed the impedance between the file and the oral mucosa by using impedance spectroscopy.
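The two-frequency ratio principle can be sketched in a few lines: as the file advances toward the apex, |Z| at the higher frequency drops faster than at the lower one, and the apex is flagged when the ratio falls below a calibrated threshold. A toy illustration with synthetic readings (the numbers below are invented for demonstration, not measured data):

```python
import numpy as np

# Synthetic |Z| readings (kOhm) versus distance of the file tip from the
# apical foramen (mm); invented values that mimic the qualitative behaviour
distance_mm = np.array([8.0, 6.0, 4.0, 2.0, 1.0, 0.5, 0.0])
z_low  = np.array([120.0, 118.0, 115.0, 100.0, 70.0, 40.0, 12.0])  # lower frequency
z_high = np.array([115.0, 112.0, 107.0,  85.0, 50.0, 24.0,  6.0])  # higher frequency

ratio = z_high / z_low          # Kobayashi-style two-frequency ratio
THRESHOLD = 0.6                 # hypothetical calibrated "apex reached" ratio

reached = ratio < THRESHOLD     # True where the tip is judged at/past the apex
apex_index = int(np.argmax(reached))
```

Unlike a raw resistance threshold (Sunada's 6.5 kΩ), the ratio is largely insensitive to the absolute impedance level, which varies strongly from tooth to tooth.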
II. METHOD

Figure 1 presents the measurement set-up: the file inserted in the root canal serves as one electrode, and the second electrode is placed in the saline surrounding the extracted tooth. Fourteen extracted teeth (10 incisors, 2 canines, 2 premolars) from adults (mean age 57 years, range 45-75 years) were used for the study. They were further separated into two groups: 8 teeth from adults older than 50 years and 6 teeth from adults younger than 50 years (the youngest was 45 years old). The data were analyzed for the two groups separately as well as together. The teeth had mature root apices and a single root canal configuration. Actual canal lengths were established by advancing a size 10 K-file apically until the tip of the instrument was just visible at the apical foramen.
Fig. 1: Measurement set-up. The file, connected to a Quadtech 1920 precision LCR meter, serves as electrode 1; electrode 2 is placed in the NaCl solution; the tooth is held by a fixation.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 174–177, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2: Absolute impedance measured at a frequency of 50 Hz for 14 teeth. Solid lines: older than 50 years; broken lines: younger than 50 years.

Fig. 3: Impedance ratio at frequencies 50 kHz and 500 Hz. The thick line represents the average of all measured curves together with standard deviations.
The tooth prepared for measurements was placed in a normal saline solution and fixed with light-cured composite resin. Impedance was measured between the size 10 K-file inserted in the root canal and the outer metal electrode placed in the saline solution. A Quadtech 1920 precision impedance analyzer (LCR meter) was used for impedance measurements in a frequency range from 20 Hz to 1 MHz. The measurements were repeated at different distances of the file tip from the apical foramen: 8, 6, 4, 3, 2, 1.5, 1, 0.5, 0, -0.5, -1, and -2 mm.

III. RESULTS

For each tooth the tip of the file was gradually set to exact canal lengths and the impedances were measured. After the impedances had been measured for all canal lengths, they were analyzed as a function of the canal length. Figure 2 presents the measured absolute values of impedance for the 14 teeth (8+6) at a selected frequency of 1 kHz and varying canal lengths. It can be seen that the impedance was, for most measurements, almost constant up to about 2 mm from the apical foramen and then decreased with decreasing canal length. Canal length was measured from the apical foramen towards the crown. When the tip of the file reached the apical foramen, an additional decrease of the impedance occurred. However, this decrease was not large and appeared at different values of the impedance (from 1 kΩ to 10 kΩ). This makes determination of the apical foramen directly from the measured impedance inaccurate. Dashed lines indicate the results for teeth from subjects younger than 50 years, and solid lines those from subjects older than 50 years. On average, the teeth from the older subjects gave larger impedances between the file and the saline.
Currently the most frequently used approach for determination of the root canal length is to measure the impedance at at least two different frequencies and use the ratio of the impedances as a measure of the canal length. An example of this approach can be seen in Figure 3 for a ratio of impedances Z(f2)/Z(f1) for the pair of frequencies f2 = 50 kHz and f1 = 500 Hz. In all cases a significant reduction of the ratio of impedances appeared when the tip of the file approached the apex. Typically this drop was observed at lengths smaller than 1 mm. The apical foramen (zero length) could be determined over a certain range of ratios, from about 0.6 to 0.4, indicating that for a fixed ratio several different canal lengths could be obtained. This technique can thus not be expected to be absolutely accurate, but only accurate to within a certain limit. The thicker line in Figure 3 represents the average for all 14 samples with added standard deviations. On average, a value of the impedance ratio of about 0.5 seems to be the optimal choice. On the other hand, for optimal determination of the apical foramen it is more important to find a ratio of impedances that gives the smallest standard deviation around an on-average exactly determined zero canal length (apex). Therefore, we analyzed the data starting from the optimal average ratios at the apical foramen and searched values around them to verify whether a smaller standard deviation of the measured distance from the apical foramen could be obtained. Ten pairs of frequencies, denoted with letters from A to J, were investigated for 5 different frequencies (0.5 kHz, 1 kHz, 5 kHz, 10 kHz and 50 kHz). As a starting point we used the average ratios obtained at the apical foramen (let us call them M) and searched for the smallest mean distance and standard deviation in their vicinity (M+0.05, M-0.05 and M-0.1). In this manner we obtained for each set (from A to J) four average canal lengths and their standard deviations (not shown in Figure 4).
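The search just described (interpolate, for each tooth, the canal length at which its ratio curve crosses a candidate value, then score the candidate by the mean and spread of those lengths) can be sketched as follows. The two teeth are invented for illustration, and the score uses |mean| rather than the signed mean, a slight variant of the paper's min(SD+X) so that overshooting past the apex is also penalized:

```python
import numpy as np

def crossing_length(lengths, ratios, r):
    """Canal length at which a tooth's impedance-ratio curve crosses ratio r.
    Ratios decrease monotonically as the file approaches the apex, so we
    interpolate length as a function of ratio (np.interp needs ascending x)."""
    order = np.argsort(ratios)
    return float(np.interp(r, np.asarray(ratios)[order],
                           np.asarray(lengths)[order]))

def score(teeth, r):
    """|mean| + SD of the apex-distance estimates across teeth (our variant
    of the paper's min(SD + X) criterion)."""
    d = np.array([crossing_length(L, R, r) for L, R in teeth])
    return abs(d.mean()) + d.std()

# Two invented teeth: (lengths from the apex in mm, measured impedance ratios)
teeth = [
    ([4.0, 2.0, 1.0, 0.0, -1.0], [0.95, 0.90, 0.85, 0.79, 0.60]),
    ([4.0, 2.0, 1.0, 0.0, -1.0], [0.94, 0.89, 0.83, 0.77, 0.58]),
]
best = min(np.arange(0.60, 0.95, 0.01), key=lambda r: score(teeth, r))
```

Scanning candidate ratios in this way is the programmatic analogue of stepping through the M, M+0.05, M-0.05, M-0.1 values per frequency pair.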
A number from 1 to 4 is added to the letters from A to J to indicate these data. As both minimal average canal length and minimal standard deviation
are required for optimized canal length determination, we evaluated them in an XY plot in which the X-axis represents the average canal length and the Y-axis the standard deviation, shown in Figure 5. For example, A2 represents the results (average distances) for impedance ratios Z(1 kHz)/Z(500 Hz) determined at the ratio 0.91, and A1 for a ratio of 0.91+0.05. A certain criterion function is required to determine the optimal set of frequencies and the optimal ratio. This criterion should be set by the clinicians. The results show that the standard deviation and the minimal average canal length are correlated: a smaller standard deviation can be expected for a reduced average canal length. Quite a nice linear relationship can be set as SD = 0.88X + 1.1.

We chose a very simple criterion function: the sum of the average distance and its standard deviation should be minimal (min(SD+X)). Using this criterion for all measured teeth, the set H3 was found to be the optimal choice, followed by the sets E4 and A3. The set H3 might, however, not turn out to be the favorable choice, since only a small number of samples fall in the ±0.5 mm range and a length can be determined for only 11 samples. On the other hand, for all 14 samples of the set E4 a length could be determined, and 7 of them fall in the ±0.5 mm range. Set E4 has its ratio determined from the frequencies 5 kHz and 1 kHz at an impedance ratio of 0.79. A comparable result is achieved for the set A3, determined at frequencies 1 kHz and 0.5 kHz at an impedance ratio of 0.91.

Fig. 4: Average distance from apical foramen [mm] for different sets of ratio values and frequencies (sets A to J).

Fig. 5: Standard deviation (SD [mm]) versus average distance (X [mm]) for all evaluated impedance ratios (A1-J4); fitted line y = 0.88x + 1.1.

IV. CONCLUSION

An impedance method used for evaluation of the root canal length has been investigated. Teeth of subjects older than 45 and younger than 75 years were used in the study. The impedance between the file inserted in the root canal and the electrode placed in the saline was measured at frequencies from 20 Hz to 1 MHz. The impedance ratio was investigated as a measure for evaluation of the root canal length. An attempt was made to determine the set of frequencies that would yield the best accuracy in determination of the root canal length. A trade-off between the minimal average distance from the apical foramen and its standard deviation is necessary. For this purpose a simple criterion function was set as the minimum of the sum of the average distance from the apical foramen and its standard deviation. Using this criterion function, with the additional criterion that at the selected impedance ratio the distance from the apical foramen could be determined for all teeth, the best results were obtained for a ratio of impedances equal to 0.79 measured at frequencies of 5 kHz and 1 kHz. In this case the average distance from the apical foramen is -0.07 mm with a standard deviation of 1.04 mm.

REFERENCES
1. Czerw RJ, Fulkerson MS, Donnelly JC, Walmann JO (1995) In vitro evaluation of the accuracy of several electronic apex locators. J Endod 21(11):572-575
2. Haffner C, Folwaczny M, Galler K, Hickel R (2005) Accuracy of electronic apex locators in comparison to actual length - an in vivo study. J Dent 33(8):619-625
3. Križaj D, Jan J, Valenčič V (2004) Modeling AC current conduction through a human tooth. Bioelectromagnetics 25(3):185-195
4. Kobayashi C, Suda H (1994) New electronic canal measuring device based on the ratio method. J Endod 20(3):111-114
5. Sunada I (1962) New method for measuring the length of the root canal. J Dent Res 41:375-380
6. Venturi M, Breschi L (2005) A comparison between two electronic apex locators: an in vivo investigation. Int Endod J 38(1):36-45
Author: Dejan Krizaj
Institute: Faculty of Electrical Engineering
Street: Trzaska 25
City: 1000 Ljubljana
Country: Slovenia
Email: [email protected]
Impedance Spectroscopy of Newt Tails
F.X. Hart1, J.H. Johnson2 and N.J. Berner2
1 Department of Physics, The University of the South, Sewanee, USA
2 Department of Biology, The University of the South, Sewanee, USA
Abstract— We use impedance spectroscopy to characterize the electrical properties of newt tail. The newt model has attracted recent interest because the regeneration of newt tail and limbs may provide insights into tissue engineering for mammals. Impedance spectroscopy could provide a convenient means to monitor the progress of regeneration while various controlling stimuli are applied. In this initial phase of our research we compare the impedance spectra of healthy and necrotic tail tissue and identify a potentially confusing artifact in the spectra due to interneedle capacitance. Keywords— impedance, newt, regeneration.
I. INTRODUCTION Impedance spectroscopy has been used to characterize a wide range of tissue properties. Detailed information regarding its use and the underlying theory may be found in a recent review article [1]. In particular, this technique can be used to monitor the progress of various physiological changes in biological systems [2 – 5]. Our long term goal is to use impedance spectroscopy to monitor the regeneration process in newt tails. Here we describe differences in the dielectric spectra of normal and necrotic tissue in the tail of a newt (Notophthalmus viridescens viridescens). The conductivity of the necrotic tail tissue is less than that of healthy tissue, in contrast to what is commonly observed during tissue degeneration. Moreover, we identify and explain an artifact related to electrode capacitance which appears at high, rather than low frequencies. Tail regeneration in the newt occurs in three phases: wound healing and differentiation; blastema accumulation and growth; and differentiation and morphogenesis [6]. In this process a crucial aspect is the formation of a blastema, which contains mesenchymal stem cells that are able to differentiate to form a regenerate. The blastema begins to grow early in the process of regeneration, around 5-11 days after amputation [7] at the site of the lost appendage. Complete tail regeneration generally takes between 6 and 8 weeks, with a greater rate of tail elongation in newts with more proximal amputations than in those with more distal amputations (see references in [7]). The rate of regeneration is also highly dependent on the temperature at which the experiment is conducted. N. viridescens is usually kept at
25 °C, which is considered the most comfortable temperature for the species (see references in [8]). The newt model provides special insights for regenerative medicine. Brockes and Kumar [9] have compared regeneration in salamanders and mammals. Differentiated mesenchymal cells in the blastema exhibit plasticity and dedifferentiate into other cell types. These regenerating cells also display positional memory, in that they regenerate tissue proper to their original site when transplanted elsewhere. The mechanisms for these processes are not well understood. Brockes and Kumar express the hope that insights obtained from the study of regeneration in the salamander, and the factors which control it, will lead to advances in mammalian tissue engineering and, perhaps someday, to the engineering of a mammalian founder blastemal cell. Determining which factors affect regeneration must involve monitoring the progress of blastema development and subsequent tissue growth while various chemical and physical stimuli are applied. Modalities such as functional MRI, PET scans, or x-ray imaging are suitable for monitoring local changes in blood flow patterns, uptake of drugs, or structure. There is a need for a modality which can examine changes in larger-scale physiology while leaving the examined tissue unaffected by its use. We are beginning a study of the potential of impedance spectroscopy for this purpose and report here some of our initial findings.

II. MATERIALS AND METHODS

Eastern red-spotted newts (Notophthalmus viridescens viridescens, family Salamandridae) were collected by dip net in Franklin County, Tennessee. Newts (2-4 g total mass) were kept in the laboratory in aquaria with 4-5 cm of filtered, conditioned tap water (treated for chlorine, chloramine, and excess ammonia) at about 10 °C in incubators (Percival I-30BLL Biological Incubators). Newts were exposed to natural light-dark cycles and fed cut-up mealworms (Tenebrio sp.) ad libitum.
For experimentation, newts were wrapped in a moistened paper towel with only their tail exposed and placed in a clamp lined with foam rubber. Thus the animals were held firmly, and the tail was readily available. Newt tails became necrotic when the spinal cord was inadvertently severed
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 190–193, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
during a bleeding protocol (not for these experiments). Necrotic tails were red posterior to the cord lesion. The most necrotic tail was missing some flesh ventrally. We performed a series of preliminary measurements to characterize and compare the dielectric spectra of healthy and necrotic newt tail tissue. A pair of nickel-plated brass needles, 0.60 mm in diameter and 13.7 mm apart, served as the electrodes. We embedded the non-pointed ends of the needles in a small, 20 mm thick plexiglass block to maintain mechanical rigidity. The pointed ends of the needles extended 13.0 mm beyond the block into a piece of styrofoam to ensure complete penetration of the tail. We connected the electrodes to a Solartron 1260 Impedance Analyzer, which was controlled by a Dell computer running the program Z60 from Scribner Associates. The Solartron applied 76 frequencies between 1 Hz and 30 MHz with an amplitude of 50 mV. The data were first analyzed using the program ZVIEW from Scribner Associates and then transferred to Microsoft Excel for more extensive modeling.

III. RESULTS
Fig. 1 Impedance spectrum of a healthy newt tail; log frequency (Hz) on the horizontal axis. Squares, ReZ; circles, ImZ; dashed lines indicate the fits provided by the circuit shown in Figure 3.
Figure 1 illustrates an impedance spectrum taken from the tail of a healthy newt. Similar results were obtained from tails of other healthy animals. Three dispersions are apparent. The low frequency dispersion below about 100 Hz is presumably due to the electrode-tissue interface. A weak dispersion near 100 kHz could be attributed to the bulk tissue. These two dispersions have been common features of impedance spectra measured, in-vivo, on frog gastrocnemius [10], octopus arm [11] and crayfish tail muscles [12]. However, the third dispersion centered near 10 MHz does not appear in these muscle spectra. To determine the origin of this third dispersion we submerged the electrodes in ordinary tap water and obtained the spectrum shown in Figure 2. The ReZ values are constant, as expected, between about 10 Hz and 100 kHz. The decrease in ReZ at higher frequencies and the corresponding peak in ImZ, which appears around 400 kHz, must be an artifact. We noted that in the previous muscle studies the inter-needle capacitance was lower because of the larger needle separation used and the resistance was lower because the muscle tissue was more conductive. In those cases the lower resulting RC time constant would have created an artifactual dispersion appearing at frequencies beyond the range of the Solartron. In the present study this artifactual dispersion has been pushed down into the frequency measurement range. Further measurements with different electrode separations and water conductivity confirmed this hypothesis.
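The time-constant argument above checks out numerically. Using the tap-water fit values reported later in the text (Rbulk = 11 kΩ and C = 43 pF, both from the authors' fit), the RC corner frequency lands close to the observed ImZ peak near 400 kHz:

```python
import math

# RC corner frequency of the artifactual interneedle dispersion,
# f_c = 1 / (2*pi*R*C), with the tap-water fit values quoted in the text
R = 11e3     # bulk (tap water) resistance, ohms
C = 43e-12   # interneedle capacitance, farads
f_c = 1 / (2 * math.pi * R * C)   # ~0.34 MHz, near the ImZ peak around 400 kHz
print(f"corner frequency ~ {f_c/1e3:.0f} kHz")
```

With the larger needle separation and more conductive muscle of the earlier studies, both R and C were smaller, pushing f_c above 30 MHz and out of the Solartron's range.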
Fig. 2 Impedance spectrum of tap water; log frequency (Hz) on the horizontal axis. Squares, ReZ; circles, ImZ; dashed lines indicate the fits provided by the circuit shown in Figure 3.
The equivalent circuit used to model the impedance spectra is displayed in Figure 3. A series combination of two parallel resistor/constant-phase-element (CPE) pairs represents, respectively, the electrode/tissue interface and the bulk material, which is in this case the newt tail. A resistance Rhi-f is added to account for the real part of relaxations which occur above 1 MHz. The complex impedance of a CPE is given by Z* = A(iω)^(-n), where ω = 2πf, i = √-1, A and n are parameters, and the * denotes a complex-valued quantity. A parallel CPE/R combination produces in the impedance spectrum a peak in ImZ and a knee in ReZ. This combination is the circuit realization of a Cole-Cole dispersion. The series CPE sequence shown in Figure 3 has been used to describe the electrical behavior of a wide variety of biological [12-14] and non-biological systems [15, 16].
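The equivalent circuit and CPE definition above translate directly into code. A sketch (ours, not the authors' fitting software) that evaluates the model impedance using the healthy-tail parameter values quoted later in the text:

```python
import numpy as np

def z_cpe(freq, A, n):
    """Constant phase element: Z* = A (i*w)^(-n), with w = 2*pi*f."""
    w = 2 * np.pi * np.asarray(freq, dtype=float)
    return A * (1j * w) ** (-n)

def z_parallel(z1, z2):
    return 1.0 / (1.0 / z1 + 1.0 / z2)

def z_model(freq, Ael, nel, Rel, Abulk, nbulk, Rbulk, Rhif, C):
    """Series electrode and bulk R/CPE pairs plus Rhi-f, all shunted by the
    interneedle capacitance C (the circuit of Figure 3)."""
    z_series = (z_parallel(z_cpe(freq, Ael, nel), Rel)
                + z_parallel(z_cpe(freq, Abulk, nbulk), Rbulk)
                + Rhif)
    z_c = 1.0 / (1j * 2 * np.pi * np.asarray(freq, dtype=float) * C)
    return z_parallel(z_series, z_c)

# Healthy-tail fit parameters quoted in the text
f = np.logspace(0, 7.5, 200)   # 1 Hz ... ~30 MHz, matching the Solartron span
z = z_model(f, Ael=1.9e6, nel=0.72, Rel=500e3,
            Abulk=5.0e7, nbulk=0.80, Rbulk=1.1e3, Rhif=3e3, C=6e-12)
```

Plotting ReZ and -ImZ of `z` against log frequency reproduces the qualitative shape of Figure 1: an electrode dispersion at low frequency, a weak bulk dispersion near 100 kHz, and the capacitive roll-off at the top of the band.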
Fig. 3 Equivalent circuit used to model impedance spectra: the electrode impedance Zel (Rel in parallel with a CPE) in series with the bulk impedance Zbulk (Rbulk in parallel with a CPE) and Rhi-f, the whole shunted by the interneedle capacitance C.

This model can be extended to the present situation by the addition of a parallel, interelectrode capacitance as shown in Figure 3. The capacitance per unit length of a pair of long, parallel, circular cylindrical electrodes of radius r and separation S is C/l = πKε0/ln(S/r - 1), where ε0 is the permittivity of free space and K is the dielectric constant of the medium. We approximate the relatively short needles as a parallel combination of a pair of long cylinders embedded in plexiglass and a pair of long cylinders embedded in either water or tissue. For the tap water results shown in Figure 2 this model predicts an interneedle capacitance of about 95 pF. The dashed lines in Figure 2 illustrate the excellent fit to the measured spectra obtained using the model shown in Figure 3. We used the following parameters to achieve this fit: Ael = 3.7x10^5, nel = 0.85, Rel = 10 MΩ; Rbulk = 11 kΩ; Rhi-f = 0; C = 43 pF. No Zbulk term was necessary. The interneedle capacitance obtained in the fit is of the same magnitude as the approximate value calculated above, but is necessarily smaller because of the short length of the actual needle electrodes. The same model was also used to obtain the excellent fits to the measured impedance spectra in Figure 1. In that case Ael = 1.9x10^6, nel = 0.72, Rel = 500 kΩ; Abulk = 5.0x10^7, nbulk = 0.80, Rbulk = 1.1 kΩ; Rhi-f = 3 kΩ; C = 6 pF.

Figure 4 presents the impedance spectrum for a necrotic newt tail. The model of Figure 3 was again used to obtain the excellent fits to the measured spectra. In this case Ael = 2.3x10^6, nel = 0.72, Rel = 11 MΩ; Abulk = 1.2x10^8, nbulk = 0.72, Rbulk = 26 kΩ; Rhi-f = 40 kΩ; C = 5.5 pF. The main difference between the parameters for the normal and necrotic tails is the large increase in the resistances. Several of the other parameters change, but not by a large amount. The increase in resistances is unexpected. We performed similar measurements on a second newt with a necrotic tail and obtained similar, though somewhat smaller, increases in resistance.

Fig. 4 Impedance spectrum of a necrotic newt tail; log frequency (Hz) on the horizontal axis. Squares, ReZ; circles, ImZ; dashed lines indicate the fits provided by the circuit shown in Figure 3.
IV. DISCUSSION

Tissue resistance generally decreases when cells are damaged or killed, because the integrity of the cell membrane is reduced: ions may enter the intercellular fluid, and the membrane provides less of a barrier to current transport. Freezing of apples [17] or chicken [18] significantly increases conductivity. Heating frog muscle [19], rat muscle [5] or apples [13] above about 42 °C damages the membrane and also increases conductivity. Electrical impedance spectroscopy measurements made on a variety of damaged tissues show a decrease in resistivity following injury [20]. For this reason we are surprised by the large increase of resistance of the necrotic tissue compared to normal tail tissue. However, Martinsen et al. [21] observed an initial increase in fish muscle resistance as the muscle went into rigor following sacrifice, but then a subsequent decrease as relaxation followed. We intend to examine the increase in resistance which we observed in order to clarify its origin. We have identified an artifact in the measuring process due to the interneedle capacitance. Electrode capacitances are generally a concern at low frequencies at the tissue/electrode interface, but not at high frequencies, where lead inductance may be more of an issue. We report this observation here so that other researchers may be aware of this potential problem for small tissue samples and needle
electrodes. Otherwise this dispersion might be mistakenly attributed to the tissue being studied.
REFERENCES 1.
Miklavcic D, Pavselj N and Hart F (2006) Electric Properties of Tissues. in "Wiley Encyclopedia of Biomedical Engineering", M. Akay, ed., Wiley, New York, Vol. 6, 3578-3589 2. Blad B, Wendel P, Jonsson and Lindstrom K (1999) An electrical impedance index to distinguish between normal and cancerous tissues. J Med Eng Tech 22: 1-5 3. van Kreel B K, Cox-Reyven N and Soeters P (1998) Determination of total body water by multifrequency bio-electric impedance: development of several models. Med Biol Eng Comput 36: 337-345 4. Schaefer M, Gross W, Ackemann J and Gebhard M M (2002) The complex dielectric spectrum of heart tissue during ischemia. Bioelectrochemistry 58: 171-180 5. McRae D A and Esrick M A (1993) Changes in electrical impedance of skeletal muscle measured during hyperthermia. Int J Hyperthermia 9: 247-261 6. Iten LE and Bryant S V (1973) Forelimb regeneration from different levels of amputation in the newt, Notophthalmus viridescens: Length, rate, and stages. Development Genes and Evolution 173: 263-282 7. Iten LE and Bryant S V (1976) Regeneration from different levels along the tail of the newt, Notophthalmus viridescens. Journal of Experimental Zoology 196: 293-306 8. Baranowitz S A, Maderson P F A and Connelly T G (1979) Lizard and newt tail regeneration: A quantitative study. Journal of Experimental Zoology 210: 17-38 9. Brockes J P and Kumar A (2005) Appendage regeneration in adult vertebrates and implications for regenerative medicine. Science 310: 1919-1923 10. Hart F X, Berner N J and McMillen R L (1999) Modelling the anisotropic electrical properties of skeletal muscle. Phys Med Biol 44: 413-421 11. Hart F X, Toll R B, Berner N J and Bennett N H (1996) The low-frequency dielectric properties of octopus arm muscle measured in vivo. Phys Med Biol 41: 2043-2052
Author: Francis X. Hart
Institute: University of the South
Street: 735 University Avenue
City: Sewanee
Country: USA
Email: [email protected]
Inherently Synchronous Data Acquisition as a Platform for Bioimpedance Measurement
G. Poola1 and J. Toomessoo2
1 Artec Group and Dep. of Electronics, Tallinn University of Technology, Tallinn, Estonia
2 Artec Design, Tallinn, Estonia (IEEE graduate member)
Abstract— Direct sampling of the carrier signal with a dedicated digital signal processor (DSP) is today a widely used method for bioimpedance measurement. Because of the limited sampling rates that general-purpose DSPs provide, and because DSPs are not fully synchronous, it is difficult to guarantee synchronization between the sampling and the generated carrier. Replacing the DSP with an FPGA makes it possible not only to build an inherently synchronous system, but also to create a flexible platform for product development.
Keywords— Synchronous data acquisition, bioimpedance, stimulus generation, FPGA, Ethernet.
I. INTRODUCTION

Direct sampling of the measurement signal is becoming the most widely used method for electrical bioimpedance measurement. Moreover, in the last decade bioimpedance has gained popularity largely through the use of digital signal processors (DSP), as in Fig. 1, and other digital methods for acquiring and processing bioimpedance data in digital form [1-3]. To measure bioimpedance over a wide frequency range, a rather specialized DSP must be used, since the sampling rates that general-purpose DSPs provide are relatively limited. In addition, because DSPs are not fully synchronous, it is very difficult to guarantee precise synchronization between the sampling and the generated excitation signal. Precise synchronization is needed to obtain exact timing of the sampling of the picked-up voltage, which carries the information about the bioimpedance of interest. Exact timing is needed to determine the complex components of the bioimpedance, especially the relatively smaller quadrature component in the presence of the in-phase component. The idea is therefore to overcome these problems by replacing the general-purpose DSP with a special signal processor built on a field programmable gate array (FPGA). This solution makes it possible not only to build an inherently synchronous data acquisition system, but also to create a flexible platform for the development of a whole series of different products based on bioimpedance measurement.
The goal is to combine low-noise digital signal synthesis with high-rate synchronous sampling, avoiding the shortcomings of standard DSP approaches in bioimpedance measurement, and to provide an adaptable platform with reusable infrastructure for similar applications and research purposes, lowering the time to market for new methods and the overall cost of the resulting products. There are multiple different approaches, but they all share basic common functions that have to be solved for such a digital system to be built:
I. Configuration and control
II. Stimulus generation
III. Data acquisition
IV. Data processing
V. Data transfer to host
VI. Electrical isolation
The approaches differ in data processing (the algorithms used), in acquisition techniques (undersampling, synchronous sampling) and in stimulus generation (single frequency, multifrequency). Most of the architectures can be reduced to the steps described above, while some already follow them [1, 4]. Some of these solutions are forced on researchers by technology; for example, a standard DSP cannot handle high-frequency sampling, so undersampling has to be used in high-frequency carrier demodulation.
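The common functions above can be sketched as a minimal software model. The class below is a hypothetical skeleton, not the paper's implementation: the signal model and all names are illustrative, and function VI (electrical isolation) is hardware and has no software counterpart.

```python
import math

class BioimpedanceSystem:
    """Sketch of functions I-V of a digital bioimpedance system
    (illustrative names; not the paper's architecture)."""

    def __init__(self, f_carrier=1.0e5, f_sample=1.0e6):
        # I. Configuration and control
        self.f_carrier = f_carrier
        self.f_sample = f_sample
        self.w = 2 * math.pi * f_carrier / f_sample   # phase step per sample

    def stimulus(self, n):
        # II. Stimulus generation: an ideal sine carrier
        return [math.sin(self.w * k) for k in range(n)]

    def acquire(self, n, gain=0.8, phase=0.2):
        # III. Data acquisition: the picked-up carrier, modelled here as an
        # attenuated, phase-shifted copy of the stimulus
        return [gain * math.sin(self.w * k + phase) for k in range(n)]

    def process(self, samples):
        # IV. Data processing: synchronous I/Q demodulation over whole periods
        n = len(samples)
        i = 2.0 / n * sum(s * math.cos(self.w * k) for k, s in enumerate(samples))
        q = 2.0 / n * sum(s * math.sin(self.w * k) for k, s in enumerate(samples))
        return math.hypot(i, q), math.atan2(i, q)   # magnitude, phase

    def transfer(self, result):
        # V. Data transfer to host (over UDP/IP in the described platform)
        return repr(result)
```

When the sample count spans whole carrier periods, `process()` recovers the gain and phase of the picked-up signal exactly, which is the point of keeping stimulus and sampling synchronous.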
Fig. 1 Standard DSP bioimpedance system
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 202–205, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. PROPOSED APPROACH

A Field Programmable Gate Array (FPGA) based platform for research and product development is the best option when flexibility is needed. It also opens the way to converting the FPGA-based platform into an Application Specific Integrated Circuit (ASIC) should large-scale production of devices arise. An FPGA-based design also enables methods that are not available in standard DSP-based systems.

A. Synchronous sampling

Synchronous sampling reduces processing complexity and avoids the noise resulting from asynchronous sampling of generated stimulus signals [4, 5]. Completely synchronous sampling of the generated stimulus signal is possible when the stimulus stays synchronized with the sampling pulse. This requires an alternative approach to Direct Digital Synthesis (DDS) that generates an inherently synchronous output. The angle accumulator is changed so that the increments always make the accumulator reach the exact equivalent of 90 degrees, so that each quarter period of the signal is synchronous to the system clock. For that purpose the design uses two parameters to set the output frequency: the system clock divider (1) and the angle increment (2), chosen so that the 90-degree equivalent point (3) is always reached on a clock edge. The sampling pulses for the picked-up carrier measurement are derived from these synchronization points, allowing periodic sample collection from exactly the same location on the stimulus signal. This makes it possible to collect samples over many stimulus signal periods. Furthermore, the FPGA-based architecture allows much higher sampling rates, up to 100 million samples per second (MSPS). Higher sampling rates allow substantially more data to be collected from multiple sources, making it possible to detect even subtle changes in the carrier signal. This level of synchronous sampling cannot be achieved with standard DSP designs, as DSPs are not inherently synchronous.

B. Direct digital synthesis

Direct sine calculation is used instead of the Look-Up Table (LUT) generally used in DDS to achieve the best possible signal-to-noise ratio, removing the sine look-up error from the digital domain. This reduces the need to post-process or filter the signal further. The theoretical signal-to-noise ratio (SNR) of a 16-bit signal (4), SNR ≈ 20·log₁₀(2¹⁶) ≈ 96 dB, comes from the quantization noise of a 16-bit conversion, assuming a uniform distribution of input signal values. With direct sine calculation the SNR depends on the precision of the calculation. Adding one bit to the calculation unit increases the design size linearly, whereas adding one bit of precision to a LUT doubles its size. All direct sine calculation algorithms have an inherent error: the calculation is either not long enough (polynomial algorithms) or not very precise (approximation algorithms). The problem with approximation algorithms can be solved by calculating the sine with higher precision and discarding the part of the product that contains the calculation error. This approach can deliver a high-resolution sine for DDS.

III. RESULTS

The developed bioimpedance data acquisition and stimulus generation platform, based on a Xilinx Spartan3 1000 FPGA, enables synchronous signal generation in the Hz to MHz range and has been tested sampling at up to 200 MSPS with two parallel analog-to-digital converters (ADC). The FPGA platform provides reusable infrastructure, with a 100 Mbps optical Ethernet connection, a User Datagram Protocol (UDP)/Internet Protocol (IP) stack, configuration registers and a dedicated control processor, consuming only 15% of the device resources. The optical Ethernet serves as isolation from external noise for medical applications, providing a driver-free, optically isolated and high-speed interface to host-based applications on any operating system with network support. The other components of the synchronous bioimpedance measurement design consume about 50% of the FPGA device (12% sine calculation, 38% sampling control and filters).

The SNR of the signal generator digital output in hardware simulations is around 84 dB for a 16-bit, 1.5 MHz output signal. The 84 dB is not a design barrier; it comes from the chosen internal precision and could be changed for better or worse. The sine calculation is at the moment capable of producing 80 million separate 16-bit sine values per second from a 24-bit input angle, making it suitable for concurrent multifrequency applications [4].
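The quarter-period-synchronous DDS described in Section II can be modelled in a few lines. This is an illustrative sketch under stated assumptions: the 100 MHz clock, the parameter names (`divider`, `m`) and the guard-bit scheme are assumptions, not values or names taken from the paper.

```python
import math

F_CLK = 100e6   # assumed system clock, Hz

def dds_params(divider, m):
    """Choose the angle increment as an exact divisor of 90 degrees:
    dtheta = (pi/2)/m. With the system clock divided by `divider`, the
    output frequency is f = F_CLK / (divider * 4 * m), and the angle
    accumulator hits the exact 90-degree point every m steps."""
    dtheta = (math.pi / 2.0) / m
    f_out = F_CLK / (divider * 4.0 * m)
    return dtheta, f_out

def quarter_sync_points(divider, m, n_periods=1):
    """System-clock indices of the quarter-period synchronization points;
    sampling pulses for the picked-up carrier are derived from these."""
    return [q * m * divider for q in range(4 * n_periods + 1)]

def sine_sample(theta, out_bits=16, guard_bits=8):
    """Direct sine calculation at higher internal precision; the low
    guard bits, which would hold the approximation error in a hardware
    polynomial/approximation unit, are discarded before output."""
    full_scale = 1 << (out_bits - 1 + guard_bits)
    return round(math.sin(theta) * full_scale) >> guard_bits
```

For example, `divider = 2` and `m = 25` give a 500 kHz output from the assumed 100 MHz clock, with a synchronization point every 50 clock cycles, so samples can be collected from the exact same stimulus location over many periods.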
Fig. 2 Current application for the platform

Fig. 3 1024-point DFT of the 16-bit signal, f = 1.5625 MHz

Figure 2 displays the current solution on the platform, intended for a multifrequency, multi-source bioimpedance analyzer. The host-PC-based solution makes it possible to use existing proprietary software tools to design and test different sampling methods, processing algorithms and visualization approaches for bioimpedance data. The graph in Figure 3 is the simulation output of the proposed signal generator; it achieved an SNR of 84 dB with 32-bit internal calculation precision. The high rate of angle-to-sine conversions enables time multiplexing of the sine block for multifrequency signal generation: for a 10 MSPS DAC, up to eight parallel sines can be calculated, while the Spartan3 integrated multipliers form the final output signal.

IV. APPLICATION

Implementing the reusable platform with an FPGA enables rapid research and product development. Reusing the existing infrastructure lets researchers concentrate on developing new methods at lower cost. The described platform makes it possible to proceed with ASIC prototyping and the design of an ASIC-based portable device platform.

Fig. 4 Using the current prototyping platform for ASIC design
A. Prototyping platform

The prototyping platform will enable the design and production of hand-held scale ASIC solutions for bioimpedance measurement. This prototyping phase is possible with an Advanced RISC Machine (ARM) based System on Chip (SoC) providing a Media Independent Interface (MII) and LCD control for display integration. These system-level interfaces are common on many network-oriented SoCs, providing fast and seamless integration of existing components into a prototyping platform. The second task in the prototyping phase is porting the host-PC software to the embedded platform. Figure 4 displays one of the possible ways an ARM-based SoC can be connected to the existing FPGA over the MII interface. The PC-based software would have to be integrated into ARM firmware based on the GNU/Linux operating system, providing a clear Application Programming Interface (API) for future software modules that define the product functionality.
B. Bioimpedance SoC

Integrating the existing platform FPGA with an ARM processor in one SoC will bring a further scale-down and reduce power consumption. A bioimpedance measurement SoC would make a wide range of products possible: different signal generation configurations (different ranges or sets of frequencies) and different interpretations of bioimpedance data would allow the SoC to be used for different purposes. Figure 5 shows the integration of the bioimpedance SoC (BioSoC) into a portable product platform. This platform would enable different bioimpedance solutions to be developed by developing, in fact, only the software modules, making development faster and cheaper.

V. CONCLUSIONS

The designed synchronous sampling platform can collect, process and transport very large amounts of sampled data. This makes it possible to demodulate extremely small bio-modulations from very high frequency carrier signals reaching 5 MHz. The fully synchronous, calculation-based DDS design enables the use of high-resolution digital-to-analog converters, making it possible to produce low-noise carrier signals.
The phase-by-phase approach to the design of bioimpedance measurement devices makes it possible to commercialize the result of every intermediate platform, lowering the risk and development cost of the final target platform. This approach also enables research results in the field of bioimpedance measurement to reach the market faster in the form of final products. The reduction in research and development cost and in the overall Bill Of Materials (BOM) will hopefully reduce the cost of bioimpedance-based medical equipment, making it widely available for hospital patient monitoring, tissue analysis, medical organ diagnostics, sports diagnostics and self health monitoring devices.
ACKNOWLEDGMENT

The authors thank Prof. Mart Min and Toomas Parve for their help and support. The work has been partly financed by the Estonian Science Foundation under grants 5892, 5897 and 7212, and is connected with project 2.2 of the ELIKO Competence Center in Electronics-, Info- and Communication Technologies, Tallinn, Estonia.
REFERENCES

1. Gordon R, Land R, Min M et al. (2005) A virtual system for simultaneous multi-frequency measurement of electrical bioimpedance. Int J Bioelectromagnetism 7(2) (Special Issue: Proc. 2005 BEMNFSI Conf., Minneapolis, MN, USA, May 12-15, 2005), Omnipress, pp 243-246
2. Dudykevych T, Gersing E, Thiel F, Hellige G (2001) Impedance analyser module for EIT and spectroscopy using undersampling. Physiological Measurement 22:19-24
3. Hartov A, Mazzarese R A, Reiss F R et al. (2000) A multichannel continuously selectable multifrequency electrical impedance spectroscopy measurement system. IEEE Trans Biomed Eng 47(1):49-58
4. Min M, Parve T, Annus P, Paavle T (2006) A method of synchronous sampling in multifrequency bioimpedance measurements. Proc. IEEE Instrumentation and Measurement Technology Conf. IMTC 2006, Sorrento, Italy, 24-27 April 2006, pp 1699-1703
5. Min M, Parve T, Poola G (2005) Methods of multi-frequency bioimpedance measurement in implantable and wearable devices. Proc. 3rd European Medical & Biological Engineering Conf. EMBEC 2005, Nov. 20-25, 2005, Prague, Czech Republic, 6 p. (CD)

Author: Gustav Poola
Fig. 5 SoC based bioimpedance measurement device
Institute: Dep. of Electronics, Tallinn University of Technology
Street: Ehitajate tee 5
City: Tallinn
Country: Estonia
Email: [email protected]
Parameter Optimization in Voltage Pulse Plethysmography M. Melinscak
Polytechnic of Karlovac, Karlovac, Croatia

Abstract— Measurement optimization is studied when short voltage pulses stimulate bio-tissue and the transient process is sampled in order to measure the tissue volume changes. The measurement sensitivity depends on the ratio of the sampling instant to the time constant of the transient process (T/τ) and on the ratio of the current sensing resistance to the resistance of the electrode-skin interface (R0/RSX). With variations of R0/RSX and T/τ the sensitivity changes from negative to positive values, and it equals zero for certain R0/RSX and T/τ ratios. The sensitivity is greater when positive than when negative, but it depends on T/τ and R0/RSX. For negative sensitivity, T and R0 can be chosen to maximize the sensitivity and minimize its variations.

Keywords— voltage pulse plethysmography, electrode tissue interface, tissue resistance variation
Fig. 2 The output voltage for normal and deep breathing respiration [1]

I. INTRODUCTION

Plethysmography is a method of measuring volume changes in bio-tissue. Impedance plethysmography is most often applied to measure volume changes due to blood pulsation (Fig. 1) or respiration (Fig. 2), whereby the bio-tissue resistance changes. In this study voltage pulse plethysmography is examined: short voltage pulses activate the bio-tissue and the characteristics of the transient process are measured (Fig. 3, Fig. 4).
Fig. 3 Block diagram of voltage pulse plethysmography

Fig. 1 The output voltage for blood pulsation in the arteries [1]

Fig. 4 Characteristic waveforms of voltage pulse plethysmography as in Fig. 3
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 198–201, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. OPTIMIZATION METHODS

The electrode-skin interface can be modeled with a five-element circuit that can be simplified to a three-element circuit due to the circuit symmetry (Fig. 5) [2]. The impedance of the electrode-skin interface is frequency dependent [3]-[5], and at the high frequencies [2], [6] applied in pulse plethysmography it can be replaced by a series RC circuit (Fig. 5), where a variable resistance ΔRSX is added in series with RSX to simulate the resistance change due to the change of the bio-tissue volume. This resistance change is rather small, typically 0.1-0.5 % [6], [7].

Fig. 5 Equivalent circuits for electrode-skin interface [2]

Fig. 3 shows the block diagram of the voltage pulse plethysmography. One electrode of the circuit is grounded, while the voltage sample is taken at the other electrode; the analysis is performed for this implementation. The other possible implementation grounds the current sampling resistor R0; its analysis is very similar and describes in fact the same transient process. A simple analysis results in the expression for the voltage u(t):

u(t) = U0 · [1 − y/(1 + y + δR) · e^(−x/(1 + y + δR))]   (1)

where y = R0/RSX, δR = ΔRSX/RSX, x = T/τ, τ = RSX·CX.

The voltage variation with constant current sampling resistance R0 and ΔRSX/RSX as a parameter is shown in Fig. 6. It is evident that all curves intersect at one point; that is, the measurement sensitivity equals zero for a certain T/τ ratio with fixed R0/RSX as a parameter.

Fig. 6 Voltage dependence u(t)/U0 with constant sampling resistance R0 and y = 1

The relative measurement sensitivity is defined as the variation of the sampled voltage du (t = T) normalized to the impulse amplitude U0, relative to the relative change of the bio-tissue resistance RSX:

S = (du/U0)/(dR/R) = −y/(y + 1)² · [x/(1 + y) − 1] · e^(−x/(1 + y))   (2)

The measurement sensitivity depends on the voltage sampling instant T and on the resistance of the current sampling resistor. The dependence of the sensitivity with y = R0/RSX as a parameter is shown in Fig. 7, and with x = T/τ as a parameter in Fig. 8.

Fig. 7 Measurement sensitivity S with y = R0/RSX as a parameter

Fig. 8 Measurement sensitivity S with x = T/τ as a parameter
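The closed-form sensitivity (2) follows from differentiating the transient (1) with respect to δR at δR = 0. A short numerical check of that relationship (a sketch, not part of the paper):

```python
import math

def u_over_U0(x, y, dR):
    # Eq. (1): u(t)/U0 at the sampling instant, with x = T/tau,
    # y = R0/RSX and dR = dRSX/RSX (relative bio-tissue resistance change)
    return 1.0 - y / (1.0 + y + dR) * math.exp(-x / (1.0 + y + dR))

def S_closed(x, y):
    # Eq. (2): S = (du/U0)/(dR/R)
    return -y / (y + 1.0) ** 2 * (x / (1.0 + y) - 1.0) * math.exp(-x / (1.0 + y))

def S_numeric(x, y, h=1e-6):
    # central difference of u/U0 with respect to dR around dR = 0
    return (u_over_U0(x, y, h) - u_over_U0(x, y, -h)) / (2.0 * h)

# the two agree over a grid of x = T/tau and y = R0/RSX values,
# and S vanishes at x = 1 + y, where the curves of Fig. 6 intersect
for x in (0.5, 1.0, 2.0, 4.0, 8.0):
    for y in (0.5, 1.0, 1.5, 2.0):
        assert abs(S_closed(x, y) - S_numeric(x, y)) < 1e-6
assert abs(S_closed(2.0, 1.0)) < 1e-15
```

The zero of S at x = 1 + y reproduces the single intersection point of all u(t)/U0 curves noted in the text.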
Fig. 9 Surface plot of measurement sensitivity S depending on T/τ and R0/RSX

Fig. 9 shows a surface plot of the measurement sensitivity S depending on T/τ and R0/RSX. The dependence of the sensitivity on the relative variation of the sampling instant dT/T is defined as:

ST = (dS)/(dT/T) = −y·x/(y + 1)⁴ · [2·(1 + y) − x] · e^(−x/(1 + y))   (3)

and is shown in Fig. 10. The curve ST for a certain R0/RSX passes through zero, and for this value of T/τ the sensitivity curve S has a minimum.

Fig. 10 Dependence of the measurement sensitivity on the voltage sampling instant, ST, with y = R0/RSX as a parameter

The dependence of the sensitivity on the R0/RSX ratio is defined as:

SR = (dS)/(dy/y) = −y/(y + 1)⁵ · e^(−x/(1 + y)) · [y³ + y²·(1 − 3·x) + y·(x² − 2·x − 1) + x − 1]   (4)

and is shown in Fig. 11. The curve SR for a certain T/τ passes through zero, and for this value of R0/RSX the sensitivity curve S has a minimum.

Fig. 11 Dependence of the measurement sensitivity on the R0/RSX ratio, SR, with x = T/τ as a parameter
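Equations (3) and (4) are, respectively, x·∂S/∂x and y·∂S/∂y of the sensitivity (2). A quick numerical confirmation (a sketch, not part of the paper):

```python
import math

def S(x, y):
    # Eq. (2): relative measurement sensitivity
    return -y / (y + 1.0) ** 2 * (x / (1.0 + y) - 1.0) * math.exp(-x / (1.0 + y))

def ST(x, y):
    # Eq. (3): ST = (dS)/(dT/T) = x * dS/dx
    return -y * x / (y + 1.0) ** 4 * (2.0 * (1.0 + y) - x) * math.exp(-x / (1.0 + y))

def SR(x, y):
    # Eq. (4): SR = (dS)/(dy/y) = y * dS/dy
    poly = y**3 + y**2 * (1 - 3 * x) + y * (x**2 - 2 * x - 1) + x - 1
    return -y / (y + 1.0) ** 5 * math.exp(-x / (1.0 + y)) * poly

h = 1e-6
for x in (0.5, 2.0, 4.0, 8.0):
    for y in (0.5, 1.0, 2.0):
        assert abs(ST(x, y) - x * (S(x + h, y) - S(x - h, y)) / (2 * h)) < 1e-6
        assert abs(SR(x, y) - y * (S(x, y + h) - S(x, y - h)) / (2 * h)) < 1e-6

# ST vanishes at x = 2*(1 + y): the extremum of the sensitivity curve S
assert ST(4.0, 1.0) == 0.0
```

The checks confirm that the zeros of ST and SR mark the stationary points of S, as stated in the text about Figs. 10 and 11.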
III. CONCLUSION

The measurement reliability can be improved by maximizing the measurement sensitivity and minimizing its variations with respect to the sampling instant T and to the ratio of the current sampling resistance to the bio-tissue resistance, R0/RSX. The positive sensitivity for a small T/τ ratio is considerably greater than the negative sensitivity, but it varies considerably with changes in the T/τ and R0/RSX ratios. For the negative sensitivity, T and R0 can be chosen so that the absolute sensitivity is at its maximum (the minimum of the sensitivity curve) while the variations with T and R0 are minimized.
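The choice of operating point can be made explicit: setting ST = 0 in Eq. (3) gives x* = 2·(1 + y) as the sampling instant, in units of τ, at which S is stationary on the negative branch. An illustrative sketch (the helper name is hypothetical, not from the paper):

```python
import math

def S(x, y):
    # Eq. (2): relative measurement sensitivity, x = T/tau, y = R0/RSX
    return -y / (y + 1.0) ** 2 * (x / (1.0 + y) - 1.0) * math.exp(-x / (1.0 + y))

def optimal_sampling(y):
    """For a given y = R0/RSX, return x* = 2*(1 + y), where ST = 0
    (S stationary w.r.t. the sampling instant), and the value S(x*, y)."""
    x_star = 2.0 * (1.0 + y)
    return x_star, S(x_star, y)

for y in (0.5, 1.0, 2.0):
    x_star, s_min = optimal_sampling(y)
    # the stationary point is the minimum (most negative value) of S over x
    assert s_min <= min(S(0.9 * x_star, y), S(1.1 * x_star, y))
```

At x*, S(x*, y) = −y·e⁻²/(y + 1)², so the attainable negative sensitivity also depends on the chosen R0/RSX ratio, as the conclusion notes.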
REFERENCES

1. Melinscak M, Santic A (2007) Features of voltage pulse plethysmography. The Fifth IASTED International Conf. on Biomedical Engineering, Innsbruck (accepted for presentation)
2. Santic A, Kovacic D, Bilas V (1999) Some new aspects of electrical impedance and pulse plethysmography. European Medical and Biological Engineering Conf. EMBEC, Vienna, pp 114-115
3. Franks W, Schenker W, Schmutz P et al. (2005) Impedance characterization and modeling of electrodes for biomedical applications. IEEE Trans Biomed Eng 52(7):1295-1302
4. Klyuev AL, Rotenberg ZA, Batrakov VV (2005) Impedance of a passive iron electrode in a solution containing a reducing agent. Russ J Electrochem 41(1):87-90
5. Geddes LA (1972) Electrodes and the measurement of bioelectric events. John Wiley & Sons
6. Santic A (1990) Pulse plethysmography in the blood pressure measurement at the finger. 6th IMEKO Conf. on Measurement in Clinical Medicine, Sopron, pp 29-31
7. Santic A, Stritof T, Bilas V (1998) Plethysmography measurements using short current pulses with low duty-cycle. 20th Ann. Int. Conf. of the IEEE EMBS, Hong Kong, pp 1889-1892

Author: Martina Melinscak
Institute: Polytechnic of Karlovac
Street: Ivan Mestrovic 10
City: Karlovac
Country: Croatia
Email: [email protected]
Separation of electroporated and non-electroporated cells by means of dielectrophoresis
J. Oblak1, D. Krizaj2, S. Amon2, A. Macek-Lebar2 and D. Miklavcic2
1 Institute for Rehabilitation, Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— By exposing cells to high-voltage electric pulses, the permeability of the cell membrane increases significantly. This phenomenon is known as electroporation and is widely used in biotechnology, biology and medicine as a way of introducing into a cell molecules for which no membrane transport mechanisms exist. Besides the cell membrane permeability, the cells' geometrical and electrical properties change significantly due to electroporation. These changes have a large impact on the dielectrophoretic force, which could allow us to separate electroporated from non-electroporated cells. Usually, whether a cell is electroporated is tested by exposing cells to a dye; after such a test the cells are most often not useful for further use. For this reason cell separation based on the dielectrophoretic force could be very useful, because the cells are not destroyed or changed by dielectrophoresis. In this study we report the results of an attempt to separate electroporated and non-electroporated cells by means of dielectrophoresis. In several experiments we managed to separate electroporated and non-electroporated cells suspended in a medium with conductivity 0.174 S/m by exposing them to a non-uniform electric field at a frequency of 2 MHz. Because the experimental results did not match the theoretical predictions entirely, we presume that the cell membrane permittivity decreases after electroporation by at least a factor of ten.
Keywords— electroporation, dielectrophoresis, separation, membrane permittivity

I. INTRODUCTION

In biotechnology, biology and medicine, it is sometimes important to be able to introduce desired extracellular molecules that are normally cell membrane impermeant. Electroporation is a widely used technique for delivering a large variety of impermeable molecules, such as drugs [1] and genes [2], into cells, both in vitro and in vivo. It is a phenomenon in which exposure of a cell to an electric field results in a significant increase of its membrane permeability [3]. Normally we test whether a cell has been successfully electroporated by exposing cells to a dye (propidium iodide, trypan blue, lucifer yellow...) [4]. The cell is thereby destroyed and, as such, cannot be used further. In many cases, researchers would prefer a more convenient method which would allow the separation of electroporated and non-electroporated cells from the cell suspension. A very useful method for the manipulation and separation of microscopic particles is dielectrophoresis [5]. Dielectrophoresis (DEP) is a phenomenon in which a force is exerted on a polarisable particle, e.g. a biological cell, when it is subjected to a non-uniform electric field [6]. In addition to cell manipulation and separation, dielectrophoresis spectra may also be used to derive the electric properties of the cell [7]. In this study we report on measurements of the dielectrophoretic behavior of electroporated and non-electroporated cells.

II. MATERIALS AND METHODS

The dielectrophoretic force is generated through the interaction of the non-uniform electric field and the induced electric dipole of the cell. The direction and magnitude of the dielectrophoretic force depend on the dielectric properties of the cell and of the suspending medium. If we assume that a biological cell is a spherical particle, the dielectrophoretic force is defined as:

FDEP = 2π·R³·εmed·Re[ƒCM]·∇E²,   (1)

where R is the radius of the cell, εmed is the absolute permittivity of the suspending medium, E is the electric field acting on the cell and Re[ƒCM] is the real part of the Clausius-Mossotti factor [6]. Re[ƒCM] describes the Maxwell-Wagner relaxation and is given by:

ƒCM = (ε'cel − ε'med)/(ε'cel + 2·ε'med),   (2)

where ε'cel and ε'med are the complex permittivities of the cell and the medium, defined as ε' = ε − j·(σ/2πf), where ε is the permittivity, σ the conductivity and f the frequency of the electric field. Re[ƒCM] is therefore frequency dependent. The most important frequency of the Re[ƒCM] spectrum is the crossover frequency f0, at which Re[ƒCM] changes its sign, either from negative to positive or vice versa (dielectrophoresis crossover). Aside from the fact that after electroporation the cell membrane becomes more permeable, the cells' geometrical [8] and electrical properties [9] change, which has a large impact on the magnitude and the
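Re[ƒCM] from Eq. (2) can be evaluated directly. In the sketch below the cell is treated as a homogeneous sphere and the effective cell properties are illustrative assumptions (only the medium conductivity 0.174 S/m comes from the text); they are chosen so the sign pattern matches the paper's description of negative DEP at low frequencies and positive DEP at high frequencies.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def re_fcm(f, eps_cel, sig_cel, eps_med, sig_med):
    """Real part of the Clausius-Mossotti factor, Eq. (2),
    with complex permittivity eps' = eps - j*sigma/(2*pi*f)."""
    w = 2.0 * math.pi * f
    e_cel = complex(eps_cel * EPS0, -sig_cel / w)
    e_med = complex(eps_med * EPS0, -sig_med / w)
    return ((e_cel - e_med) / (e_cel + 2.0 * e_med)).real

# illustrative effective-cell parameters (assumed, not from the paper)
eps_cel, sig_cel = 150.0, 0.05   # relative permittivity, conductivity (S/m)
eps_med, sig_med = 80.0, 0.174   # medium M1 conductivity from the text

lo = re_fcm(1e2,  eps_cel, sig_cel, eps_med, sig_med)   # conductivity-dominated
hi = re_fcm(1e10, eps_cel, sig_cel, eps_med, sig_med)   # permittivity-dominated
assert abs(lo - (sig_cel - sig_med) / (sig_cel + 2 * sig_med)) < 1e-3
assert abs(hi - (eps_cel - eps_med) / (eps_cel + 2 * eps_med)) < 1e-3
# opposite signs at the two limits imply a crossover frequency f0 in between
assert lo < 0.0 < hi
```

With these assumed parameters Re[ƒCM] crosses zero somewhere between the two limits; the specific f0 values reported in the paper depend on the measured cell properties.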
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 178–181, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
sign of Re[ƒCM]. Taking into account that the electroporated cells has a different Re[ƒCM] spectra than the nonelectroporated one, we expected that it would be possible to separate cells at a specific frequency of the electric field.
era (Visicam, Visitron Systems, Germany). Each experiment was repeated at least three times.
A. Experiments
According to Eq. (1), the frequency dependent Re[ƒCM] has the most significant impact on cell motion caused by dielectrophoresis. If Re[ƒCM] of a cell is negative, the cell is subjected to negative dielectrophoresis and moves away from the high-field intensity. However, if Re[ƒCM] of a cell is positive, the cell is subjected to positive dielectrophoresis and moves toward the high-field intensity. Re[ƒCM] was calculated from Eq. (2) for the electroporated and nonelectroporated cells suspend in medium M1. Another crucial parameter in Eq. (1) is the gradient of the of the electric field. Several structures are suitable for development of large non-homogeneities of electric field [6]. In this work we used castellated electrode structures. Areas of high- and low-field intensities were determined numerically, using finite element modeling software FEMLAB [www.comsol.com].
The conductivity of the medium has a significant role on the dielectrophoretic force [6] and electroporation [10]. For this reason, medium M1 with conductivity σM1 = 0.174 S/m was prepared. For preparation of medium M1 the following ingredients were dissolved in 200 ml distilled water: 272.2 mg KH2PO4, 348.4 mg K2HPO4, 19.02 mg MgCl2 and 17115 mg sucrose. The conductivity of prepared medium was measured by a Conductometer MA 5950 (Metrel, Slovenia). All experiments were carried out at room temperature (23°C). Mouse melanoma cell line, B16F1, was grown for four days in Eagle's minimum essential medium supplemented with 10% fetal bovine serum (Sigma-Aldrich Chemie GmbH, Deisenhofen, Germany). To meet the required concentration for electroporation, t.i. 2 × 107 cells/ml [10], the cells were diluted with medium M1. A 50 μl drop of diluted cell suspension (containing 106 cells) was placed between two parallel plate stainless steel electrodes spaced 2 mm apart and exposed to electric pulses. A train of eight rectangular pulses amplitude: 230 V, duration: 100 μs, repetition frequency: 1 Hz, was generated and monitored with the Cliniporator (IGEA, Carpi, Italy). Parameters of electric pulses for reversible electroporation in medium M1 were chosen according to Ref. [10]. The dielectrophoretic manipulation of cells was performed by a castellated electrode array, which was fabricated on a 500 μm thick wafer of Pyrex glass using microtechnology processing steps. The microelectrode structures were fabricated in the Laboratory of Microsensor Structures, Faculty of Electrical Engineering, University of Ljubljana, Slovenia. The fabrication of the module with microelectrode array was described in detail previously [11]. The cell suspension was diluted with a medium M1, since the most appropriate concentration to observe cell motion under the microscope is 2 × 106 cells ml. 
After electroporation, the cells were left for 5 min to swell and reseal before being examined by dielectrophoresis. Sinusoidal signals of magnitude 7 Vpp were applied to the pair of electrodes over the frequency range 5 kHz - 50 MHz using a function generator 33250A (Agilent, USA). Cells exposed to the generated non-uniform electric field were observed under the transmission microscope (Zeiss 200, Axiovert, Jena, Germany). The frequency of the electric field was varied and the images were recorded with a camera.
B. Numerical calculations
III. RESULTS
In Fig. 1, the calculated Re[ƒCM] and f0 of the electroporated and non-electroporated cells in medium M1 are illustrated. As can be seen, cells are exposed to negative dielectrophoresis at low frequencies and to positive dielectrophoresis at high frequencies of the electric field. The calculated crossover frequency f0 is around 0.4 MHz for the non-electroporated cells and around 0.2 MHz for the electroporated cells.
Fig. 1 Re[ƒCM] and f0 of the electroporated and non-electroporated cells in medium M1, calculated according to the electric properties of cells and suspending medium.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
J. Oblak, D. Krizaj, S. Amon, A. Macek-Lebar and D. Miklavcic
The distribution of the non-uniform electric field generated with the castellated electrode array at an applied voltage between the electrodes is shown in Fig. 2. If cells are exposed to positive dielectrophoresis, they move toward the edges of the electrodes, where the high field intensity is generated. Exposing cells to negative dielectrophoresis directs their motion toward the middle of the electrodes, where the low-field-intensity areas are. The experiments revealed intense cell motion when cells were exposed to dielectrophoresis in medium M1. At low frequencies of the applied electric field, cells are exposed to negative dielectrophoresis. At a frequency of about 0.4 MHz, the non-electroporated cells started to move toward the edges of the electrodes. The movement of the electroporated cells, however, differed from the theoretical predictions (Fig. 1): the electroporated cells were exposed to negative dielectrophoresis up to a frequency of 10 MHz. Therefore, the crossover frequencies of the non-electroporated (0.4 MHz) and electroporated (10 MHz) cells differ by more than an order of magnitude. We repeated each experiment at least three times, both increasing and decreasing the frequency of the applied signal, and the results were repeatable.

Fig. 2 Plot of the numerically calculated non-uniform electric field generated with the castellated electrode array. Dark shades represent areas of high field intensity, whereas bright shades represent areas of low field intensity.

IV. DISCUSSION
By comparing the theoretical and experimental results we conclude that the calculated f0 of the non-electroporated cells matches the experimentally obtained crossover frequency. At low frequencies the non-electroporated cells move toward the areas of low field intensity and at high frequencies toward the areas of high field intensity. The experimentally determined f0 of the non-electroporated cells is between 0.3 MHz and 0.5 MHz, which corresponds well to the calculated crossover frequency f0 = 0.4 MHz. The calculated f0 = 0.2 MHz of the electroporated cells did not match the experimentally determined crossover frequency: the electroporated cells are exposed to negative dielectrophoresis up to a frequency of about 10 MHz. According to the theoretical predictions, we expected the crossover frequency f0 of the electroporated cells to be lower than that of the non-electroporated ones; the experiments revealed just the opposite. The theoretical predictions for the electroporated cells were based on studies [8, 9] and on the single-shell model of a spherical cell [12]. One possibility is therefore that the single-shell model was too simple to describe a biological cell. A more likely option is that at least one property of the electroporated cell taken from studies [8, 9] is not correct. During electroporation, significant changes appear particularly in the cell membrane. Many studies have investigated the electric conductivity of the membrane after electroporation, but few have addressed the cell membrane permittivity, and it is the membrane permittivity in particular that has a significant influence on f0. If we require the calculated f0 of the electroporated cells suspended in medium M1 (see Fig. 3) to correspond to the experimental results, the cell membrane permittivity must have been reduced by the electroporation. Considering that the non-electroporated and electroporated cells have different f0, the cells could be separated with dielectrophoresis at a frequency of about 2 MHz, see Fig. 3. In several experiments, cells suspended in medium M1 were exposed to dielectrophoresis at a frequency of 2 MHz. On the basis of the cell arrangement around the microelectrode structures it was easy to determine whether the cells were electroporated or not, see Fig. 4.

Fig. 3 Calculated Re[ƒCM] and f0 of the electroporated and non-electroporated cells in medium M1, considering a 10 to 20 times lower permittivity of the cell membrane after electroporation.
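Given the two experimentally observed crossover frequencies, any field frequency between them drives the two populations in opposite directions. The geometric mean is one natural choice of separation frequency and lands exactly at the 2 MHz used in the experiments; a minimal sketch:

```python
# Choosing a separation frequency between the two measured crossover frequencies
f0_nonep = 0.4e6   # Hz, crossover of non-electroporated cells (experiment)
f0_ep = 10e6       # Hz, crossover of electroporated cells (experiment)

# Between the two crossovers, Re[fCM] has opposite signs for the two
# populations, so they experience positive and negative DEP respectively.
f_sep = (f0_nonep * f0_ep) ** 0.5   # geometric mean, 2.0 MHz
```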
Separation of electroporated and non-electroporated cells by means of dielectrophoresis
Fig. 4 Separation of electroporated and non-electroporated cells at a frequency of 2 MHz. Non-electroporated cells are exposed to positive dielectrophoresis, therefore they move toward the edges of the microelectrode. In contrast, electroporated cells move toward the middle of the two adjacent electrodes because they are exposed to negative dielectrophoresis.
V. CONCLUSION
The aim of our study was to investigate the possibility of separating non-electroporated and electroporated cells by means of dielectrophoresis. It was obvious from the experimental results that the behavior of non-electroporated and electroporated cells exposed to dielectrophoresis differs. Since the experimental results did not match the theoretical predictions, we believe that the cell membrane permittivity decreases after electroporation to at least one tenth of its value before the electroporation. By exposing the cells suspended in medium M1 to the non-uniform electric field at a frequency of 2 MHz, we succeeded in separating the non-electroporated and electroporated cells by means of dielectrophoresis.
ACKNOWLEDGEMENTS
The dielectrophoresis experiments could not have been performed without the microelectrode structures prepared by Darko Lombardo and the members of the Laboratory of Microsensor Structures, Faculty of Electrical Engineering, University of Ljubljana, Slovenia. This research was supported by the Slovenian Research Agency.
REFERENCES
1. Marty M, Sersa G, Garbay JR, Gehl J, Collins CG, Snoj M, Billard V, Geertsen PF, Larkin JO, Miklavcic D, Pavlovic I, Paulin-Kosir SM, Cemazar M, Morsli N, Soden DM, Rudolf Z, Robert C, O'Sullivan GC, Mir LM (2006) Electrochemotherapy – An easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: Results of ESOPE study. Eur J Cancer Suppl 4:3-13
2. Neumann E, Kakorin S, Toensing K (1999) Fundamentals of electroporative delivery of drugs and genes. Bioelectrochemistry and Bioenergetics 48:3-16
3. Teissie J, Rols MP (1993) An experimental evaluation of the critical potential difference inducing cell membrane electropermeabilization. Biophys J 65:409-413
4. Kotnik T, Macek-Lebar A, Miklavcic D, Mir LM (2000) Evaluation of cell membrane electropermeabilization by means of a nonpermeant cytotoxic agent. Biotechniques 28:921-926
5. Lynch PT, Davey MR (1996) Electrical Manipulation of Cells. Chapman & Hall, New York
6. Morgan H, Green NG (2003) AC Electrokinetics: colloids and nanoparticles. Microtechnologies and Microsystems series, Institute of Physics Publishing, Bristol
7. Marszalek P, Zielinsky JJ, Fikus M, Tsong TY (1991) Determination of electric parameters of cell membranes by a dielectrophoresis method. Biophys J 59:982-987
8. Pavlin M, Leben V, Miklavcic D (2006) Electroporation in dense cell suspension – Theoretical and experimental analysis of ion diffusion and cell permeabilization. Biochim Biophys Acta 1770:12-23
9. Pavlin M, Kanduser M, Rebersek M, Pucihar G, Hart FX, Magjarevic R, Miklavcic D (2005) Effect of cell electroporation on the conductivity of a cell suspension. Biophys J 88:4378-4390
10. Pucihar G, Kotnik T, Kanduser M, Miklavcic D (2001) The influence of medium conductivity on electropermeabilization and survival of cells in vitro. Bioelectrochemistry 54:107-115
11. Lombardo D, Vrtacnik D, Krizaj D (2004) Microstructures for manipulation of micro and submicron particles using dielectrophoresis. Proc 40th International Conference on Microelectronics, Devices and Materials MIDEM 2004:163-168
12. Pauly H, Schwan HP (1959) Über die Impedanz einer Suspension von kugelförmigen Teilchen mit einer Schale. Z Naturforsch 14b:125-131

Author: Jakob Oblak
Institute: Institute for Rehabilitation
Street: Linhartova 51
City: Ljubljana
Country: Slovenia
Email: [email protected]
Conducting Implant in Low Frequency Electromagnetic Field
B. Valic(1,2), P. Gajsek(2) and D. Miklavcic(3)
1 E-NET Okolje, Ljubljana, Slovenia
2 Institute of Non-ionizing Radiation, Ljubljana, Slovenia
3 University of Ljubljana/Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract — Based on medical images, a finite element model of the human body with an intramedullary nail in the femur was prepared. The model was exposed to a low frequency (50 Hz) electromagnetic field at the intensity of the ICNIRP reference levels for the general public. The calculated current density distribution inside the model was compared to the ICNIRP basic restrictions for the general public. From the results it can be seen that the implant increases the current density in the region where it is in contact with soft tissue. The increase is significant: the ICNIRP basic restrictions are exceeded in a limited volume of tissue in spite of compliance with the ICNIRP reference levels for the general public. On the other hand, the region where a significant increase in current density is observed is limited to a few cubic centimeters only.
Keywords — numerical modeling, finite elements, dosimetry, implants, low frequency exposure
I. INTRODUCTION
With the increasing number of implants in modern medicine an important question has to be clarified: could the ICNIRP basic restrictions [1] be exceeded when an implant is present in the body, in spite of the ICNIRP reference levels being fulfilled? In recent years several papers have been published about the distribution of the electromagnetic field in a human body with an implant. For example, in [2] the SAR distribution and temperature change were calculated around a metallic plate in the head of an RF (100 to 3000 MHz) exposed worker. Virtanen et al. [3] calculated SAR enhancements due to ring- and rod-shaped metallic implants at mobile frequencies (900 and 1800 MHz). In 2006 Virtanen also published a review paper [4] about the interactions between RF electromagnetic fields and passive implants. All of these papers, however, deal with RF electromagnetic fields, whereas for low frequency electromagnetic fields there is no publication. To determine the influence of an implant on the electromagnetic field distribution inside a human, we used numerical modeling to calculate the electromagnetic field distribution in a human with an intramedullary nail in a low frequency (50 Hz) electromagnetic field. We included an intramedullary nail, used to fix broken cancellous bones, because it is one of the longest implants used. Besides the intramedullary nail, the bones of the right leg were also included: femur, patella, fibula and tibia. Material properties were derived from the literature. By appropriate boundary conditions, an electromagnetic field intensity at the reference levels for the general public was generated inside the model. To assess the influence of the implant on the electromagnetic field distribution, models without the implant were calculated as well.

II. MATERIALS AND METHODS

A. Geometry of the model
The geometry of the model is based on the female images from the Visible Human Data Set (VHDS), available from the National Library of Medicine, National Institutes of Health, USA. The images are taken at intervals of 0.33 mm with 0.33 mm resolution. Each of the 5186 images consists of 2048 × 1216 pixels with 24 bit color depth (Fig. 1). Numerical modeling was performed with the finite element method program package Comsol Multiphysics (Comsol, Sweden). Numerical modeling started with defining which tissues to include in the model: the more tissues are included, the more detailed the solution will be, but the mesh also becomes denser and more complicated. We included soft tissue for the whole model, and bones only in the region near the implant. Program package requirements led to seven soft tissue objects: head, upper and lower part of the torso, two arms and two legs. The build-up of each object started with a selection of images, starting with every 150th image (a distance between images of 5 cm). After we manually cleared the surroundings of the desired object (tissue type) in every image, a custom written algorithm replaced the border of the object with a polyline with a predefined number of nodes. Finally, the nodes of all polylines were connected together to obtain a three dimensional object. After the first iteration we analyzed the object and included additional images in the areas where the geometry was not accurate enough. The geometry of the femur is shown in Fig. 2. The geometry of the intramedullary nail was based on X-ray images of a woman with a broken right femur. The images were taken at the University Medical Centre, Ljubljana, Slovenia. Before we were able to include the nail in the model we had to orient the X-ray images with respect to the objects based on the VHDS images. The intramedullary nail inside the femur is shown in Fig. 2 (right).

The calculation region was defined by a block with the dimensions 5 × 5 × 8 m. Because of the large scale of the geometry (the ratio between the smallest and the largest dimension), the automatic mesh algorithm implemented in Comsol Multiphysics was unable to generate a mesh. By splitting the geometry in two it was possible to decrease the scale. As shown in Fig. 3, instead of one geometry we have two: the big geometry is a block with dimensions 5 × 5 × 8 m with a cut-out in the lower centre part, and the small geometry has the same dimensions as the cut-out: 0.5 × 0.5 × 2 m. The geometry of the human body is situated in the small geometry. In both geometries, after minor corrections of the human body geometry and fine-tuned parameters of the mesh generation algorithm, we were able to generate the mesh. The final mesh consisted of 40486 elements in the small geometry and 988 elements in the big geometry.

Fig. 1 An example of the images from the VHDS. The shown one (avf2300.raw) is taken in the region below the knees.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 218–221, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Fig. 2 The geometry of the right femur (left) is based on 15 images. On the right side the intramedullary nail is shown inserted in the femur.

Fig. 3 In the lower part of the big geometry (left) the cut-out is visible, into which the small geometry (right) is inserted by identity boundary conditions.

The quasi-static electromagnetic application mode was used, which is valid when the largest dimension in the geometry is at least one order of magnitude smaller than the wavelength of the electromagnetic field. To obtain an electromagnetic field with the values of the ICNIRP reference levels for the general public at 50 Hz (5 kVm-1 for the electric field and 0.1 mT for the magnetic flux density), the required boundary conditions were defined. Since all materials in the model have the same permeability (Table 1) and the current density in the model is very low (the induced magnetic field can be neglected), the magnetic field distribution inside the model is homogeneous. This means that it is necessary to calculate the magnetic field distribution only in the small geometry and not in both, so we set the magnetic boundary conditions on the boundaries of the small geometry to −0.1 mT in the x direction. For the electric field we applied 40 kV on the upper boundary of the big geometry and grounded the lower boundary (all other four boundaries were set to insulation), resulting in an electric field of 5 kVm-1 in the z direction. Using special boundary conditions, called identity boundary conditions, on the corresponding boundaries of the big and small geometry, it is possible to connect both geometries so that they behave as one when the model is calculated. The dielectric properties of the tissues in the model were taken from the data reported by Gabriel [5, 6]. Because of the dispersion of the 50 Hz data in the literature, we parameterized the tissue specific conductivity and permittivity. It was shown that only the value of the tissue specific conductivity is important.
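The quasi-static assumption and the choice of the applied voltage stated above can be checked with two lines of arithmetic (the dimensions and values are the ones given in the text):

```python
# Quasi-static validity and applied field for the 50 Hz model
c = 3.0e8                   # m/s, speed of light
f = 50.0                    # Hz
wavelength = c / f          # 6.0e6 m, i.e. 6000 km
largest_dim = 8.0           # m, height of the big geometry
assert largest_dim < wavelength / 10   # quasi-static condition easily met

U = 40e3                    # V, potential applied on the upper boundary
E = U / largest_dim         # 5000 V/m, the ICNIRP reference level
```

At 50 Hz the wavelength exceeds the model by more than five orders of magnitude, so the quasi-static approximation is safely justified.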
Table 1 Material properties of objects included in the model

Material      σ (Sm-1)        εr             μr    Source
soft tissue   0.2 – 0.4       10^6 – 10^7    1     [5, 6]
bone          0.005 – 0.009   10^3 – 10^4    1     [5, 6]
implant       4.032 × 10^6    1              1     Comsol Multiphysics
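Why only the conductivity matters at 50 Hz can be seen by comparing the displacement and conduction current densities, ωε0εr versus σ. Even in the worst cases from Table 1 (highest permittivity with lowest conductivity) the ratio stays below one:

```python
import math

EPS0 = 8.854e-12           # F/m, vacuum permittivity
w = 2 * math.pi * 50.0     # rad/s at 50 Hz

def disp_to_cond(eps_r, sigma):
    """Ratio of displacement to conduction current density: w*eps0*eps_r / sigma."""
    return w * EPS0 * eps_r / sigma

# worst cases from Table 1 (highest eps_r paired with lowest sigma)
soft = disp_to_cond(1e7, 0.2)     # ~0.14 for soft tissue
bone = disp_to_cond(1e4, 0.005)   # ~0.006 for bone
```

Since conduction dominates in every tissue, the solution is governed by σ, consistent with the parameterization result reported above.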
To compare the results and determine the influence of the intramedullary nail on the electromagnetic field distribution inside the body, we also calculated a model without the intramedullary nail.

III. RESULTS
Using the finite element method we evaluated the influence of the intramedullary nail in the femur on the distribution of the low frequency (50 Hz) electromagnetic field and compared the results of the models with and without the intramedullary nail to the ICNIRP basic restrictions for the general public. From Fig. 4 it can be seen that the human body disturbs the electric field. The boundary conditions define the unperturbed electric field strength at the value of the ICNIRP reference level for the general public (5 kVm-1). Because the electric field strength inside the human body is low in comparison to the surrounding area, the electric field strength is increased in the region near and above the head. The values are up to 20 kVm-1, which is 4 times higher than in the unperturbed field. At low frequencies (up to 100 kHz) the ICNIRP basic restrictions limit the value of the current density; at 50 Hz the limit for the head and trunk is 2 mAm-2.
Fig. 4 Electric field distribution in the big (left) and small (right) geometry is altered by human body: it is very low inside the human body (blue), but quite high near and above the head. Colorbar in Vm-1 covers the range from 0 to 6 kVm-1, a little above ICNIRP reference level for general public (5 kVm-1). The transparent area is where electric field is higher than 6 kVm-1.
Fig. 5 Current density shown in vertical cross-sections through the centre of the human body and 5, 10 and 15 cm to the right. The colorbar in mAm-2 covers the range from 0 to the ICNIRP basic restriction for general public exposure (2 mAm-2).

From Fig. 5 it can be seen that the current density inside the head and torso is lower than the ICNIRP basic restriction for the general public. In the legs below the knees the current density is higher than the basic restriction (transparent); however, this is irrelevant, since the basic restriction for current density applies only to the head and trunk. The current density inside the bones is low due to their low conductivity. In the last cross-section on the right, the top of the intramedullary nail is shown partially out of the bone and in contact with soft tissue. Just above the top of the intramedullary nail there is a small area in the soft tissue where the current density is higher than the basic restriction (white). The influence of the implant is shown in more detail in Fig. 6. The first and third rows present data for the model with the implant, the other two for the model without it. As shown on the left of the figure, we varied the soft tissue specific conductivity from 0.2 Sm-1 in the first and second rows to 0.4 Sm-1 in the third and fourth. In the first column, the cross-section 0.6 m above the ground is shown, which is just above the knees. In the implanted leg the current density is lower than in the other leg; a similar pattern is seen in the second column, showing the cross-section 0.7 m above the ground. In the fourth column the results just above the intramedullary nail are shown, where the ICNIRP basic restriction for the general public is exceeded. But only 2 cm higher (column 5) there are only minor differences between the results of the models with and without the implant. This shows that the influence of the implant is limited to the region where the implant is situated and that this influence fades quickly with distance.
The maximum 10 g averaged current density in the model is located just above the implant. It is 7 mAm-2, 10 times higher than in the model without the implant. As already mentioned, the ICNIRP basic restriction on current density for the general public is 2 mAm-2.
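A 10 g averaged current density of the kind reported above can be sketched on a voxelized J field. Everything below is an illustrative assumption (tissue density, voxel size, background and hot-spot values), not the model's actual output:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Edge of a cube of tissue with 10 g mass, assuming density 1000 kg/m^3
rho = 1000.0                      # kg/m^3, assumed soft tissue density
side = (0.010 / rho) ** (1 / 3)   # ~0.0215 m

voxel = 0.005                     # m, assumed voxel size of the J field
n = round(side / voxel)           # voxels per cube edge (4 here)

J = np.full((20, 20, 20), 1.5e-3)   # A/m^2, hypothetical background density
J[10:12, 10:12, 10:12] = 50e-3      # hypothetical hot spot near the nail tip

# Maximum of the current density averaged over every 10 g cube
windows = sliding_window_view(J, (n, n, n))
J10g = windows.mean(axis=(-1, -2, -3)).max()
```

The averaging strongly dilutes the local hot spot, which is why the 10 g averaged maximum (7 mAm-2 in the paper) is far below the local peak while still exceeding the 2 mAm-2 basic restriction.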
Fig. 6 Current density in cross-sections parallel to the ground at 0.6 m (just above the knees), 0.7, 0.8, 0.88 (just above the intramedullary nail), 0.9 (just above the femur) and 1 m above the ground, shown for the model with (first and third row) and without (second and fourth row) the implant, for two different specific conductivities of soft tissue (0.2 Sm-1 in the first and second row and 0.4 Sm-1 in the third and fourth row). The colorbar is the same as in the previous figure.
IV. DISCUSSION AND CONCLUSIONS
Using numerical modeling we determined the influence of an intramedullary nail on the electromagnetic field distribution inside a human in a low frequency electromagnetic field. In the region where the intramedullary nail is in contact with soft tissue there is a significant increase in current density: it is 10 times higher than in the model without the implant. This leads to exceeding the ICNIRP basic restrictions on current density for the general public. Except in the leg where the intramedullary nail is implanted, there is no observable difference in the current density distribution in other parts of the body. We defined the geometry of the model from two different sets of medical images (VHDS for the human body and X-ray images for the implant). It would be better to build the geometry of the model from anatomical images of patients with an implant, but whole body 3D images of implanted patients were not available to us. Low frequency exposure was chosen because no paper is available dealing with low frequencies, whereas for RF exposure a few have been presented. The low number of papers in this field, most of them only a few years old, indicates that future work on this model is necessary to obtain more valuable data. For example, different exposures (direction, strength, frequency) or different implants could be included in the model.
ACKNOWLEDGMENT
Thanks to asist. mag. Anže Kristan at the University Medical Centre, Ljubljana, Slovenia for the patient images.
REFERENCES
1. ICNIRP (1998) Guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to 300 GHz). Health Phys 74:494-522
2. McIntosh RL, Anderson V, McKenzie RJ (2005) A numerical evaluation of SAR distribution and temperature change around a metallic plate in the head of a RF exposed worker. Bioelectromag 26:377-388
3. Virtanen H, Huttunen J, Toropainen A, Lappalainen R (2005) Interaction of mobile phones with superficial passive implants. Phys Med Biol 50:2689-2700
4. Virtanen H, Keshvari J, Lappalainen R (2006) Interaction of radio frequency electromagnetic fields and passive metallic implants – A brief review. Bioelectromag 27:431-439
5. Gabriel C, Gabriel S, Corthout E (1996a) The dielectric properties of biological tissues: I. Literature survey. Phys Med Biol 41:2231-2249
6. Gabriel C, Law RW, Gabriel S (1996b) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys Med Biol 41:2251-2269

Author: Dr. Blaz Valic
Institute: E-NET Okolje
Street: Kajuhova 17
City: Ljubljana
Country: Slovenia
Email: [email protected]
Effect of Modulated 450 MHz Microwave on Human EEG at Different Field Power Densities
R. Tomson, H. Hinrikus, M. Bachmann, J. Lass, and V. Tuulik
Department of Biomedical Engineering, Technomedicum of Tallinn University of Technology, Tallinn, Estonia

Abstract — The experiments on the effect of modulated microwaves on human EEG were carried out on two different groups of 14 and 7 healthy volunteers exposed to 450 MHz microwave radiation modulated at 40 and 1000 Hz frequencies. The field power densities at the scalp were 0.16 mW/cm2 for the first and 0.9 mW/cm2 for the second group. The EEG analysis performed for individuals showed that the increase in EEG rhythm energy in both groups was comparable: up to 40% at the lower and up to 30% at the higher level of field power density. Microwave exposure caused statistically significant changes in the EEG rhythm energies for 20% of the subjects in the first and for 14% of the subjects in the second group. Our results suggest that the effect of microwaves on the EEG did not increase with the applied power density.
Keywords — EMF effect, nonthermal effect, EEG analysis.

I. INTRODUCTION
The increasing use of telecommunication devices has raised the problem of possible effects of the radio frequency electromagnetic field (EMF) on human brain physiology. High-level microwave radiation causes heating of tissues, a microwave thermal effect. The effect of microwaves on the human nervous system at levels lower than the thermal limit has become a subject of discussion, and reports of possible non-thermal EMF effects are often contradictory. The difficulties in independently repeating the experimental results cast doubt on these effects, and the mechanisms behind them are still unclear. During recent years our studies have been focused on the effect of modulated low-level microwaves on human EEG theta, alpha and beta rhythms and on mental behaviour [1-6]. In our previous studies, the high variability between the EEG signals of individuals and the very different sensitivity to microwaves among subjects did not allow revealing a statistically significant effect of microwaves for a group, but only for individuals. During all experiments we used a field power density of 0.16 mW/cm2, and the estimated SAR value of 0.35 W/kg was below the thermal effect limit. The rate of persons affected by microwaves has been between 12 and 30%. Our expectation was that the microwave effect and the rate of sensitivity would be higher at higher field power densities. In this paper we concentrate on the evaluation of the EMF effect at different field power densities. The hypothesis is that the changes caused by microwaves in the EEG, and the number of subjects affected, increase at a higher level of microwave field power density. The relative changes in the EEG rhythm energies were selected as a quantitative measure of the effect. Two identical modulation frequencies, 40 and 1000 Hz, were applied at two different levels of microwave power.

II. METHODS

A. Microwave exposure
Microwave exposure conditions were the same for all subjects in a group, except for the field power densities. The 450 MHz microwave radiation was 100% pulse modulated at 40 and 1000 Hz frequencies (duty cycle 50%). The 1 W or 10 W output power was guided by a coaxial cable to a 13 cm antenna located 10 cm from the left side of the head. Estimated from the measured calibration curves, the field power densities at the skin were 0.16 mW/cm2 for the first and 0.9 mW/cm2 for the second group. The calculated SAR values were about 0.35 W/kg for the first [1] and 2 W/kg for the second group.

B. Subjects
The experiments with different modulation frequencies were carried out on two different groups of healthy volunteers. The first group included 14 persons (aged 21-24, 8 male and 6 female); the second group 7 persons (aged 19-21, 3 male and 4 female). All the subjects selected were without any medical or psychiatric disorders. Persons who declared tiredness or sleepiness before the experiment were excluded. All subjects passed the experimental protocol with exposure and sham. During each double blind test session, the exposed and sham-exposed subjects were randomly assigned. The subjects were not informed of their exposure; however, they were aware of the possibility of being exposed. The computer also randomly assigned the succession of modulation frequencies.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 210–213, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The experiments were conducted with the understanding and written consent of each participant. The study was conducted in accordance with the Declaration of Helsinki and was formally approved by the local Medical Research Ethics Committee.

C. Recording protocols and equipment
The experimental protocols were identical for the two groups except for the difference in field power (Fig. 1).

Fig. 1 Time schedule of the recording protocol during ten cycles of microwave exposure at a fixed modulation frequency: each 2 min cycle consists of a first, passive half-period without microwave (Ref) and a second, active half-period with microwave (MW); signals for analysis were selected inside comparison intervals (CI) during the first 30 s of the half-periods.
First, the reference EEG was recorded during 1 minute. After that, the microwave exposure, modulated at the first modulation frequency, was applied during 1 minute. The relaxation pause after the stimulation by microwaves (serving as the reference for the following cycle) was also 1 minute. The reference-exposure cycle was repeated ten times at a fixed modulation frequency; during the ten cycles of microwave exposure, the modulation frequency was always the same. Second, the same procedure of 10 exposure cycles was applied at the second modulation frequency; this step was identical to the previous one, except that the modulation frequency was different. EEG recordings were performed continuously during the whole experimental protocol (40 min). The selection of 40 or 1000 Hz as the first or second modulation frequency was randomly assigned. The second protocol (sham) included the same steps (EEG recording during 40 min), except that the microwave power was switched off. The Cadwell Easy II EEG measurement equipment was used for the EEG recordings. The EEG was recorded using 9 electrodes, placed on the subject's head according to the international 10-20 electrode position system. The channels for analysis were chosen to cover the entire head: frontal FP1, FP2; temporal T3, T4; parietal P3, P4; occipital O1, O2; and the reference electrode Cz. The EEG recordings were stored on a computer with a 400 Hz sampling frequency.
D. Data analysis

First, the energies of the four basic EEG rhythms, theta (4-7 Hz), alpha (8-13 Hz), beta1 (15-20 Hz) and beta2 (21-38 Hz), were extracted from the total EEG signal (0.5-48 Hz) by filtering; elliptic filters with 50 dB attenuation in the stop-band were used. Further analysis followed the method described in our previous studies [1, 3]. The EEG energies of the exposure-cycle half-periods with and without microwave were compared; the first 30 s of the recovery and exposure half-periods of each cycle were selected as the comparison interval. The relative change of the EEG energy for a cycle was calculated as

S = (s1/s2 − 1) × 100 %,

where s1 and s2 are the average energies inside the comparison intervals with and without exposure, respectively. The parameter Sm, the average of S over the 10 cycles of exposure at the same modulation frequency for a subject, was selected as a quantitative measure of the microwave effect. Signal processing was performed in the LabVIEW programming and signal-processing environment. Statistical evaluation was based on the null hypothesis that, for sham recordings, the signal segments with and without exposure are completely equivalent. The natural variance σ² of the segment energies was obtained as the mean of their squared differences for the sham recordings. The quantity x = Sm² σ⁻² is an F-distributed random quantity, and the respective p-values were obtained from the cumulative F-distribution. For post hoc analysis, the modified Bonferroni correction was applied at the 0.05 significance level.
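The cycle-wise energy comparison above can be sketched in code; this is a minimal illustration with synthetic numbers and our own variable names (the original processing was implemented in LabVIEW and is not reproduced here):

```python
def relative_change(s1, s2):
    """Relative EEG energy change for one cycle: S = (s1/s2 - 1) * 100 %,
    where s1 is the mean energy of the exposed half-period and s2 the
    mean energy of the preceding reference half-period."""
    return (s1 / s2 - 1.0) * 100.0

def microwave_effect(exposed, reference):
    """Sm: the average of S over the 10 exposure cycles recorded
    at one modulation frequency for one subject."""
    values = [relative_change(s1, s2) for s1, s2 in zip(exposed, reference)]
    return sum(values) / len(values)

# Synthetic half-period energies for 10 cycles (arbitrary units)
exposed = [1.10, 1.05, 1.20, 1.00, 1.15, 1.08, 1.12, 1.03, 1.18, 1.09]
reference = [1.00] * 10
Sm = microwave_effect(exposed, reference)
print(round(Sm, 1))  # mean relative change in percent -> 10.0
```

A positive Sm indicates that, on average, the energy in the exposed half-periods exceeds that of the reference half-periods, as reported for the alpha and beta bands below.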
III. RESULTS

Values of the parameter Sm were calculated for all subjects at the two modulation frequencies, in the theta, alpha, beta1 and beta2 rhythms and in the FP1-FP2, T3-T4, P3-P4 and O1-O2 channels, for exposed and sham recordings. The values of Sm in the T channels, averaged over the whole groups, are presented in Fig. 2. The main trend is a decrease of the EEG energy with microwave exposure in the theta rhythm and an increase in the alpha, beta1 and beta2 rhythm frequencies. The maximal increase reaches 17% in the first and 12% in the second group.

__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________

R. Tomson, H. Hinrikus, M. Bachmann, J. Lass and V. Tuulik

[Fig. 2: bar charts of S% (sham, 40 Hz, 1000 Hz) for the theta, alpha, beta1 and beta2 rhythms in the T channels of Group 1 and Group 2.]
Fig. 2. Average values of calculated parameters Sm in the EEG T-channels for the whole groups.

[Fig. 3: bar charts of S% (sham, 40 Hz, 1000 Hz) in the beta1 rhythm, T channels, for individual subjects 1-14 of Group 1 and 1-7 of Group 2.]
Fig. 3. Calculated parameters Sm in the EEG beta1 rhythm energy in T-channels for individuals.

Changes in the EEG beta1 rhythm energy for individuals in the T channels are presented in Fig. 3. In the first group there are 3 subjects with an obvious increase (30-40%) in the EEG beta rhythm energy at the 40 Hz modulation frequency. In the second group, two persons demonstrate an increase in the EEG beta rhythm energy (15-30%), also at the 40 Hz modulation frequency. No obvious increase was revealed at the 1000 Hz modulation frequency: in the majority of cases the EEG beta1 rhythm was affected only by the 40 Hz modulation frequency. The results demonstrate that in the majority of experiments the microwave exposure increases the EEG energy level. Statistical analysis for the whole groups was performed on the average values of the changes over the 10 cycles (parameter Sm); it did not reveal significant differences between the sham and exposed results for the groups. Statistical analysis for individual subjects was performed on the parameters S calculated for every exposure cycle, before averaging over the ten cycles of exposure at a fixed modulation frequency. A summary of the statistical analysis for individuals, as identification numbers of the subjects with significant changes, is presented in Table 1.
Tab. 1. Identification numbers of subjects with statistically significant differences between sham and exposed results after Bonferroni correction (p<0.05)

                      Theta   Alpha   Beta1       Beta2
1st group; sham        -       -       -           -
1st group; 40 Hz       -       -       2; 10; 14   2
1st group; 1000 Hz     -       -       -           -
2nd group; sham        -       -       -           -
2nd group; 40 Hz       -       6       6           6
2nd group; 1000 Hz     -       -       -           6
The absence of statistically significant differences between the calculated measures for individuals in the case of sham recordings confirms that the changes are really introduced by the microwave exposure. Most sensitive to exposure was the EEG beta rhythm: all statistically significant changes took place in the EEG beta rhythms.
Effect of Modulated 450 MHz Microwave on Human EEG at Different Field Power Densities
IV. DISCUSSION
The changes in the EEG energy caused by microwave at field power densities lower and higher than the thermal limit are comparable. The values of the relative changes are even slightly higher at the lower microwave power (30-40%) than at the higher one (15-30%). The number of subjects significantly affected is 3 at the lower and 1 at the higher power; the rate of sensitivity is 20% for the first and 14% for the second group. Sensitivity to microwave did not increase with rising applied field power density. Microwave modulated at 40 Hz causes more notable changes, and in more subjects, than microwave modulated at 1000 Hz. A dependence of the microwave effect on the modulation frequency has also been indicated in our previous studies at 7, 14 and 21 Hz modulation frequencies [3, 7]. Our experiments did not reveal a stronger effect of the microwave exposure at the higher power density. The results suggest that the effect caused by modulated microwave does not depend linearly on the level of the applied radiation power; a non-monotonic dependence of the effect on the microwave radiation level has also been reported by other researchers [8]. The proportion of subjects affected by microwave as a nonspecific stressor (14-20%) is not low compared to the rate of multiple chemical sensitivity (harmful effects of exposure to multiple chemically unrelated compounds at doses far below those established to cause harm), estimated to be between 2 and 10% in the general population [9].

ACKNOWLEDGEMENT

These studies have been supported by the Estonian Science Foundation, grant No. 6632.
V. CONCLUSIONS

The results of our experimental study do not confirm the hypothesis of a stronger microwave effect at higher field power density. The experimental data showed that 450 MHz microwave modulated at 40 and 1000 Hz caused comparable changes in the EEG at field power densities lower and higher than the thermal limit. An increase in the applied microwave power did not result in an increase of the effect or of the number of affected individuals. Microwave caused statistically significant changes in the EEG rhythm energies for 20% of subjects at the lower and for 14% of subjects at the higher field level. Our results suggest that microwave exposure causes significant changes in the human EEG and affects a larger part of the population than multiple chemical sensitivity does. The mechanism behind these findings is not clear, and the effects need further investigation.
REFERENCES

1. Hinrikus, H., Parts, M., Lass, J., Tuulik, V. (2004): Changes in Human EEG Caused by Low Level Modulated Microwave Stimulation. Bioelectromagnetics, 25: 431-440.
2. Bachmann, M., Kalda, J., Lass, J., Tuulik, V., Sakki, M., Hinrikus, H. (2005): Non-linear analysis of the electroencephalogram for detecting effects of low-level electromagnetic fields. Medical & Biological Engineering & Computing, 43: 142-149.
3. Lass, J., Hinrikus, H., Bachmann, M., Tuulik, V. (2004): Microwave radiation has modulation frequency dependent stimulating effect on human EEG rhythms. Proceedings of the 26th Annual International Conference of the IEEE EMBS, San Francisco, USA, pp. 4225-4228.
4. Bachmann, M., Säkki, M., Kalda, J., Lass, J., Tuulik, V., Hinrikus, H. (2005): Effect of 450 MHz microwave modulated with 217 Hz on human EEG in rest. The Environmentalist, 25: 165-171.
5. Lass, J., Tuulik, V., Ferenets, R., Riisalo, R., Hinrikus, H. (2002): Effects of 7 Hz-modulated 450 MHz electromagnetic radiation on human performance in visual memory tasks. International Journal of Radiation Biology, 78: 937-944.
6. Rodina, A., Lass, J., Riipulk, J., Bachmann, T., Hinrikus, H. (2005): Study of effects of low level microwave field by method of face masking. Bioelectromagnetics, 26: 571-577.
7. Hinrikus, H., Bachmann, M., Tomson, R., Lass, J. (2005): Non-thermal effect of microwave radiation on human brain. The Environmentalist, 25: 187-194.
8. Gapejev, A., Chmeris, N., Fesenko, Y., Khramov, R. (1994): Resonance effects of a low-intensity modulated extremely high frequency field. Change in the motor activity of the unicellular protozoa Paramecium caudatum. Biophysics, 39: 73-84.
9. Cullen, M.R. (1987): Workers with multiple chemical sensitivities. Occupational Medicine: State of the Art Reviews. Philadelphia: Hanley & Belfus, Inc.; 2: 655-661.

Author: Hiie Hinrikus
Institute: Department of Biomedical Engineering, Technomedicum of the Tallinn University of Technology
Street: 5 Ehitajate Rd
City: Tallinn
Country: Estonia
Email:
[email protected]
EMF Monitoring Campaign in Slovenian Communes
B. Valic1,2, J. Jancar1 and P. Gajsek1
1 Institute of Non-ionizing Radiation, Ljubljana, Slovenia
2 E-NET Okolje, Ljubljana, Slovenia
Abstract— To give Slovenian communes and their inhabitants the possibility to obtain information about the electromagnetic fields in their neighborhood, Forum EMS – an independent project aimed at informing the general public about electromagnetic fields and their biological effects – started a monitoring campaign in 2005. In communes that expressed interest, a remote monitoring station was installed for one week. The electric field intensity was recorded every minute, 24 hours a day. For each location all collected data were evaluated and presented to the interested public as an article in the commune bulletin and on the internet, where the data for all locations are available. In the last two years more than 35 communes have participated in this campaign. The monitoring campaign showed that the typical electromagnetic field exposure due to GSM base stations in urban areas is low: maximum values reach 2 % of the reference level for the I. region of Slovenian legislation, which is 0.2 % of the ICNIRP reference level for the general public. In one case, because of the vicinity of a radio and TV broadcasting tower, a wide-band probe was used instead of the GSM probe; there, the measured electric field was 40 % of the reference level for the I. region of Slovenian legislation (4 % of the ICNIRP reference level for the general public).
Keywords— Monitoring, Electromagnetic fields, Dosimetry.
I. INTRODUCTION

With the increase of electromagnetic field (EMF) sources, it is becoming more and more important to constantly monitor the EMF in the living environment. Forum EMS – a project in Slovenia aimed at objectively and independently informing people about EMF and their biological effects – decided to start an EMF monitoring campaign in Slovenian communes in 2005. All Slovene communes were invited to join the campaign, and 20 of them participated in monitoring every year. During the year, the EMF remote monitoring station is located for one or two weeks in each of those communes. After the monitoring, a report is prepared for every location, presenting the average and maximum values of the electric field strength. To inform the public about the results, an article in the commune bulletin is prepared, and the data are also available on the internet. Until now nearly 40 communes have taken part in this campaign: in 2005: Maribor, Šmartno ob Paki, Brezovica, Zreče, Grosuplje, Novo mesto, Ljubljana, Tržič, Velenje, Kamnik, Sevnica, Trebnje, Vojnik and Domžale; and in 2006: Puconci, Šempeter-Vrtojba, Nova Gorica, Izola, Piran, Koper, Semič, Kočevje, Škofja Loka, Bled, Cerklje, Šenčur, Naklo, Litija, Trbovlje, Hrastnik, Žalec, Šoštanj, Slovenj Gradec, Črnomelj and Kranj.

II. MATERIALS AND METHODS

The remote monitoring system used in this campaign is intended for permanent monitoring of wide-frequency-band EMF at a desired location. It consists of a remote monitoring station and a central unit. The station measures the EMF value at predefined intervals and stores the readings temporarily. Once or twice a day all data are transmitted to the central unit through a bi-directional GSM link. The central unit stores the data and is used to configure the parameters of the station. A detailed description of the system is given below.

A. Remote monitoring station

The remote monitoring station PMM 8055 (PMM, Italy) [1] is an autonomous automatic monitoring system. It consists of measuring and controlling electronics, a probe, a GSM modem, rechargeable batteries and a case with solar cells.
Figure 1 Remote monitoring station.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 234–237, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Thanks to the solar cells and rechargeable batteries, the station can be used indoors and outdoors, independently of a power source. The control software allows fully automatic operation with a wide variety of user-defined settings. The station monitors the field strength over a wide frequency band, depending on the probe. We use three different probes: two narrow-band E-field probes with a measuring range between 0.03 and 30 V/m for two different frequency bands, GSM (925-960 MHz) and UMTS (2110-2170 MHz), and one wide-band E-field probe (100 kHz to 3 GHz) which is less sensitive (0.3-300 V/m). The measurement uncertainty of the probes is given in Table 1.

Table 1 Measurement uncertainty

Probe type   Measurement range [V/m]   Frequency range [MHz]   Expanded measurement uncertainty
GSM          0.03 - 30                 925 - 960               ± 1.8 dB
Wideband     0.3 - 300                 10 - 2500               ± 2.6 dB
UMTS         0.03 - 30                 2110 - 2170             ± 1.8 dB
The field strength is monitored continuously with a configurable time-averaging period; in accordance with Slovenian legislation [2] we use a 6-minute period. The measurements are stored at predefined intervals; we use 1 minute. The results are automatically downloaded to the central unit through the bi-directional GSM link every 12 hours. All measurements taken while the GSM connection is active are marked, since it is necessary to exclude them from the results. The GSM modem allows data transfer to the central unit as well as modification of all parameters of the station. There are two ways to change the parameters. First, this can be done through a web application running on the server, which is part of the central unit; the changed parameters are sent from the server to the station through the GSM link. Second, this can be done by SMS from a mobile phone: using special commands it is possible to change the parameters of the station, or even to receive an SMS from the station with various data, for example the currently measured field value.
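The relation between the 1-minute stored samples and the 6-minute averaging period can be sketched as follows. This is an illustration only, with our own synthetic samples and a simple sliding arithmetic mean; the station's internal averaging algorithm is not described in the paper:

```python
def sliding_six_minute_average(samples):
    """Sliding mean over six consecutive 1-minute E-field samples (V/m),
    yielding one averaged value per full 6-minute window.
    An arithmetic mean is used here purely for illustration."""
    w = 6  # 6-minute window over 1-minute samples
    return [sum(samples[i:i + w]) / w for i in range(len(samples) - w + 1)]

# Seven synthetic 1-minute samples -> two overlapping 6-minute windows
minute_samples = [1.0, 1.2, 1.1, 0.9, 1.0, 1.0, 1.4]
print(sliding_six_minute_average(minute_samples))
```

Each stored 1-minute value thus contributes to several overlapping 6-minute averages, which is what the legislation's averaging period is compared against.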
B. Central unit

The central unit E-Smoguard 1.02 (Clampco Sistemi, Italy) consists of [3]:
• two GSM modems (one for SMS and one for data transfer);
• a terminal management unit which receives the data transmitted from the remote monitoring station and converts them into a protocol accepted by the internet server;
• an internet server, where the parameters of the remote monitoring station can be modified and where all data are available to the user;
• dedicated E-Smoguard software to manage the application (requires Linux).

The internet server has two main tasks: to allow surveying the stored data (shown in Figure 2) and to allow modification of the remote monitoring station parameters using a graphical user interface (shown in Figure 3).

In most of the communes the GSM probe was used, since GSM base stations are the most important sources of EMF. Only in the case of Puconci, where a radio and TV broadcasting tower is located in the middle of the village, was the broad-band field probe used, to evaluate the EMF due to the radio and TV tower.

Figure 2 An example of monitoring results as available on the internet. The green line presents the electric field strength (V/m), whereas the red one presents the reference level according to Slovenian legislation [2], which is, for the GSM frequency, 12.9 V/m. The x axis shows the date and time of measurement.

Figure 3 GUI for modifications of remote monitoring station parameters.
III. RESULTS

The results of EMF monitoring in Slovenian communes show that the measured electromagnetic field values are well below the ICNIRP reference level for the general public [4] and below the reference level for the I. region of Slovenian legislation [2], which is 10 times lower. The highest maximum value, 1.66 V/m, was measured in Domžale, whereas the highest average value, 1.05 V/m, was measured in Maribor. The value of 1.66 V/m represents only 1.7 % of the reference level for the I. region of Slovenian legislation. The results for Puconci, marked with *, are discussed below.

[Figure 4: bar chart of the maximum (Emax) and average (Eavg) electric field strength (V/m, logarithmic scale from 0.01 to 100 V/m) for each monitored commune from Maribor to Kranj, with reference lines at 12.9 V/m and 42.4 V/m.]

Figure 4 Maximum and average results. The GSM probe was used, which measures signals between 925 MHz and 960 MHz. The green line presents the reference level for the I. region of Slovenian legislation at 950 MHz; the red line presents the ICNIRP reference level for the general public at the same frequency. In Puconci (marked with *) the wideband probe was used because of the proximity of the radio and TV broadcasting tower; the wideband probe measured EMF between 10 and 2500 MHz. The monitoring locations are listed in chronological order: from Maribor to Domžale in 2005, and from Puconci to Kranj in 2006.
In the commune of Puconci, the station was located in the village of Pečarovci, where a radio and TV broadcasting tower is situated. Because of this, the wideband probe was used, to measure the EMF at radio and TV frequencies as well. The measured values were higher in Pečarovci than at the other monitored locations: 5.1 V/m maximum and 3.8 V/m average. The reference level differs from the GSM case because of the different frequency: according to Slovenian legislation [2] it is 8.6 V/m, and according to ICNIRP it is 28 V/m for the general public. Taking this reference level into account, the EMF reaches 35 % of the reference level for the I. region of Slovenian legislation and 3.5 % of the ICNIRP reference level for the general public. Based on the results of the monitoring campaign in 2005 and 2006 we can conclude that:
• base stations do not present a significant EMF source that would exceed the current reference levels (Slovenian or international);
• EMF intensities due to base stations are low at all monitored locations; on average they are below 1 % of the reference level for the I. region of Slovenian legislation [2];
• the highest measured EMF value due to base stations was 1.7 % of the reference level for the I. region of Slovenian legislation, measured in Domžale;
• the highest average EMF value due to base stations was below 1 % of the reference level for the I. region of Slovenian legislation, measured in Maribor.
ACKNOWLEDGMENT

This work was supported by Forum EMS – a project in Slovenia aimed at objectively and independently informing people about EMF and their biological effects.
REFERENCES

1. PMM 8055S Remotely operated station for monitoring electromagnetic fields from 5 Hz up to 40 GHz (2002) at http://www.pmm.it
2. Uredba o elektromagnetnem sevanju v naravnem in življenjskem okolju. UL RS, 70/1996
3. E-Smoguard at http://www.e-smoguard.net
4. ICNIRP (1998) Guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to 300 GHz). Health Phys 74: 494-522

Author: Blaz Valic
Institute: Institute of Non-ionizing Radiation
Street: Pohorskega bataljona 215
City: Ljubljana
Country: Slovenia
Email:
[email protected]
Measurements of background electromagnetic fields in human environment
T. Trcek1,3, B. Valic2,3 and P. Gajsek3
1 University of Ljubljana/Faculty of Electrical Engineering, Ljubljana, Slovenia
2 E-NET Okolje, Ljubljana, Slovenia
3 Institute of Non-ionizing Radiation, Ljubljana, Slovenia
Abstract— An extensive measurement campaign was carried out in Slovenia to evaluate the background electromagnetic field intensity in different living environments. As the electromagnetic field meter, the personal exposure meter Antennessa DSP 090, with analysis software for data processing, was upgraded with a commercial GPS receiver and a custom-made data logger unit to save the geographical coordinates of the measurements. Using this measuring system, mounted on a car roof or a bicycle, eight locations across Slovenia were measured: the cities of Ljubljana and Maribor, part of the city of Kranj, part of the city of Koper with the nearby villages Labor, Hrvatini and Tinjan, and the village of Čelje near Ilirska Bistrica. The measured frequency spectrum was limited to GSM, DCS, UMTS, FM and TV signals. The results show that the typical electromagnetic field exposure is low: maximum values merely reach 3 % of the reference levels for the I. region of Slovenian legislation, which is 0.3 % of the ICNIRP reference levels for the general public. The only exception is Tinjan, where a radio tower is located; measurements done in the vicinity of this radio tower exceeded 5 V/m, which is the upper detection limit of the measuring system. The results of this measurement campaign will be presented on an interactive map, open to the general public through the internet, where everyone will be able to obtain the measured value of every single measurement.
Keywords— Antennessa DSP 090, Electromagnetic fields, Dosimetry
I. INTRODUCTION

In a modern lifestyle we are exposed to many different environmental factors: physical, chemical and biological factors from the environment, which have a great impact on our quality of life. At the same time, the public wants to have information about the quality of living environments. Both these factors stimulate various measurement campaigns, among them campaigns measuring background electromagnetic fields. Measuring the background electromagnetic field (EMF) is an effective way to determine the overall exposure of the general public to EMF. At the Institute of Non-ionizing Radiation we used a personal dosimeter with the commercial name Antennessa DSP 090 for this purpose [1]. It is an example of a selective instrument for measuring personal exposure to EMF over a longer time period. To expand the functionality of the personal dosimeter, we connected it to a custom-made data logger with a GPS receiver. This allowed us to measure the background EMF in the human environment and to determine the real exposure of the general public to EMF in different living environments in Slovenia. The final goal of this research was to represent the measurements as an array of points on an interactive map. The color of each point is an indicator of the measured EMF value, which is obtained by clicking on the point. In the future, the results will be available on the internet.

II. MATERIALS AND METHODS

EMF measurements require specific measuring systems and methods. One such system for selective EMF exposure measurement is the Antennessa DSP 090. Figure 1 shows a selective EMF measurement with the Antennessa DSP 090, where the contribution of each component of the EMF spectrum is shown. A personal exposure meter used as a measurement system for measuring the background EMF in the human environment needs to be upgraded with a GPS module and a data logger unit to save the geographical coordinates of the measurements. The whole measurement system includes the dosimeter Antennessa DSP 090, a commercial GPS receiving module and a custom-made logger module used for storing the GPS data and managing the GPS module. Part of the measurement system is also the software installed on a personal computer, which handles the measured data. All system components are shown in figure 2.
Figure 1: Selective EMF measurement with DSP 090
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 222–225, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
A. Antennessa DSP 090

The Antennessa DSP 090 is a personal exposure meter for EMF. Measurement is focused on FM, TV and wireless communication signals such as mobile phones and their base stations; these signals contribute the major part of the EMF in the urban environment. Measurement is selective within each frequency band. The main characteristics of the Antennessa DSP 090 are:
• registering the EM signal 24 hours per day at adjustable intervals;
• insensitivity to interference from user activity;
• 40 dB dynamic range with sensitivity 0.05-5 V/m;
• isotropy;
• FM, TV, GSM, DCS and UMTS frequency ranges;
• distinction between base and subscriber station;
• adjustable sampling interval from 3 to 255 seconds;
• memory autonomy for 7000 measurement records;
• battery autonomy of a few days;
• small size and weight (450 g).
B. GPS module

The GPS module consists of two parts. The first part is a commercial GPS receiving module for receiving the satellite signal and determining the geographical coordinates. The second part is a data logging and managing unit, used to manage the GPS receiving module and to log the GPS data. This unit was developed according to our requirements, which were:
• memory autonomy comparable to the DSP 090;
• permanent data storage (flash card);
• battery autonomy comparable to the DSP 090;
• conversion of the geographical coordinates from GPS to Gauss-Krueger coordinates;
• use of the DSP 090 LED synchronization pulse as the trigger pulse for data logging;
• supported connectivity with a PC.
C. Measurements and data processing

Prior to starting the measurements, the measurement locations were defined. To obtain a representative sample it is necessary to cover enough of the population as well as to include different living environments and EMF sources. By including large cities we covered enough of the population and different EMF sources; but measurements in cities alone are not representative enough, and rural areas had to be included too. In rural areas it is appropriate to include areas with at least one strong EMF source.

Figure 2: Measurement system

Considering all the mentioned criteria, we chose the following places for our measurements: Ljubljana, Maribor, part of Kranj, part of Koper with the nearby villages Labor, Hrvatini and Tinjan, and Čelje near Ilirska Bistrica. Most of the measurements were performed by car, with the measurement system mounted on roof carriers. Some parts of Ljubljana with prohibited car traffic were measured by bicycle. During the measurements, occasional transfer of the recorded data to a computer was necessary for safety reasons. Data from the dosimeter are written in a .mes format recognized by the Antennessa software, which allows exporting the data to the standard MS Excel format. The coordinates from the data logger unit are stored in .txt format and must be combined with the Excel data after the transfer to the computer. After combining, data without value-coordinate pairs were eliminated. It was also necessary to filter out all measurements that were too close together; we decided to filter out all measurements that were less than 20 meters apart. The program code for filtering was written in MS Visual Basic. It first calculates the distance from one measurement to each other measurement and then eliminates all measurements that are closer than 20 meters to a comparable measurement. In principle each measurement has to be compared to every other one, therefore the number of comparisons rises with the square of the number of measurements. For instance, in Ljubljana, where more than 12000 measurements were acquired, 144 million comparisons were needed; the calculation time was approximately half an hour. After filtering, the measurement points were drawn into a graph for a fast survey. For Ljubljana, the graph is shown in figure 3.
[Figure 3: scatter plot of the filtered measurement points in Gauss-Krueger coordinates (x: 458000-468000, y: 98000-106000).]

Figure 3: Graphically presented filtered measurement points
The accuracy of the filtered measurement points was verified at the Environmental Agency, Ministry of the Environment and Spatial Planning of the Republic of Slovenia. Coordinates obtained from a GPS device always have some deviation, which is normally less than 1 m; in the case of an inappropriate position of the visible satellites, the deviation grows. Several measured points with an obviously wrong position were displaced to the most credible position; those with a doubtful position were deleted. An example of the filtered and corrected measurements is shown in figure 4.

Figure 4: Measurement points represented on map

When multiple-frequency and/or multiple-source exposure is present, Slovenian legislation [2] directs that all measurement results be converted from V/m to a percentage of the defined maximum values. The safety index SI is calculated using the equation

SI = Σ (Ei/Emi)²,   (1)

where SI is the safety index in percent, Ei is the measured value of a frequency component and Emi is the maximum allowed value of that frequency component. The maximum allowed value Emi is frequency dependent and has two values at each frequency: Slovenian legislation [2] distinguishes between the I. region area and the II. region area. The II. region area is where normal EMF exposure is allowed; its levels are the same as the ICNIRP reference levels for the general public [3]. The I. region area is where safety is increased, which means that the allowed EMF exposure is decreased by a factor of 10. For the measured frequency bands, the values of Emi in V/m are shown in table 1.
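Equation (1) and the I. region limit values of Table 1 can be combined in a short sketch (function and variable names are ours; the single-band case reproduces the quadratic ratio behind the percentages quoted in the Results, e.g. 2.28 V/m ≈ 3.1 % of the 12.9 V/m GSM limit):

```python
# I. region maximum allowed values Emi (V/m), from Table 1 of this paper
E_LIMIT_REGION1 = {"FM": 8.6, "TV3": 8.6, "TV45": 9.3,
                   "GSM": 12.9, "DCS": 18.2, "UMTS": 19.0}

def safety_index(measured):
    """SI = sum_i (Ei / Emi)^2, expressed in percent (Eq. 1).
    `measured` maps a frequency band name to its measured E (V/m)."""
    return 100.0 * sum((e / E_LIMIT_REGION1[band]) ** 2
                       for band, e in measured.items())

# Single-band check: the maximum GSM value measured in Ljubljana
print(round(safety_index({"GSM": 2.28}), 1))  # -> 3.1 (% of the I. region limit)
```

Because the contributions add quadratically, several bands each at a small fraction of their limit still yield a small combined safety index.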
III. RESULTS

The measuring system, consisting of the Antennessa DSP 090, the GPS receiving module and the data logger, was used to measure the background EMF in the human environment at 8 different locations across Slovenia. In total, almost 20000 measurements were captured; for each measurement the following data were stored: the EMF value for six different frequency ranges, the coordinates, and the date and time. After the filtration (locations closer than 20 meters were excluded), 12059 measurements remained. For the majority of locations the most important EMF source is the GSM system, with FM signals close behind. For Ljubljana, the maximum GSM signal was measured in Ljubljana-Tomacevo, where the electric field strength was 2.28 V/m, or 3.1 % of the I. region area reference levels [2]. Close to this was the maximum FM signal, measured at Ljubljana Castle, near which a radio diffusion system is located; the measured electric field strength was 1.57 V/m, or 3.3 % of the I. region area reference levels. The average values show that the largest part of the overall EMF exposure is contributed by the GSM system: in Ljubljana, the average value of the GSM signal is 0.127 V/m, or merely 0.01 % of the I. region area reference level. The results in Maribor are similar to those in Ljubljana. The value of the maximum FM signal is 0.42 V/m, or 0.42 % of the I. region area reference levels [2], and was measured near the radio diffusion system in Tezno, on Nikova street. In the frequency range of GSM signals, two equivalent maximums were measured, with values of 0.97 V/m, or 0.57 % of the I. region reference levels [2]; the first was measured in Studenci, on Valvasorjeva street near a base station, and the second also in Studenci, near a base station on Koresova street. As in Ljubljana, the average values for Maribor show that the GSM system is the greatest contributor: the average value of the GSM signal in Maribor is 0.1 V/m, or 0.006 % of the I. region reference levels. The other average values are at the lower detection limit of the measuring system (0.05 V/m) and are therefore not of further interest.

IV. CONCLUSION
From the measurements it is clear that overall EMF exposure in human environment is low. Maximum values hardly riches 3% of I. region reference levels of Slovenian legislation [2], and are almost hundred times lower than allowed. Comparing results of different frequency ranges show that the major contributor to overall EMF exposure is GSM system. Only exception is Tinjan, where measurements were done in the vicinity of radio diffusion system. As expected for this location very high values of EMF were measured. Values were greater then 5 V/m, which is upper detection limit of measuring system. Our measurements also shows that exposure to EMF in largest cities are greater then in smaller cities or in rural areas. Beside measurements of additional locations the future work will focus on determining the number of people exposed to certain EMF values rather than the area with these EMF values.
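The 20 m location filter mentioned in the Results section is not specified in detail in the text; a minimal greedy sketch, assuming measurements are (latitude, longitude) pairs and using the haversine formula (function names are hypothetical):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def filter_by_distance(points, min_dist_m=20.0):
    """Greedily keep a measurement location only if it lies at least
    min_dist_m away from every location already kept."""
    kept = []
    for lat, lon in points:
        if all(haversine_m(lat, lon, klat, klon) >= min_dist_m
               for klat, klon in kept):
            kept.append((lat, lon))
    return kept

# two points < 1 m apart collapse to one; a point ~1 km away is kept
pts = [(46.05, 14.50), (46.05, 14.50001), (46.06, 14.50)]
print(filter_by_distance(pts))  # keeps the 1st and 3rd point
```

The exact deduplication order used in the study is unknown; a greedy pass like this is only one plausible realization.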
ACKNOWLEDGMENT This work was partially supported by the Ministry of the Environment and Spatial Planning of the Republic of Slovenia.
Table 1: Maximum allowed value of EMF in V/m

frequency \ area   FM     TV 3   TV 4 & 5   GSM    DCS    UMTS
I. area            8.6    8.6    9.3        12.9   18.2   19.0
II. area           27.5   27.5   29.7       41.1   58.1   61.4

REFERENCES
1. Antennessa at http://www.antennessa.com
2. Uredba o elektromagnetnem sevanju v naravnem in življenjskem okolju (Decree on electromagnetic radiation in the natural and living environment). UL RS, 70/1996
3. ICNIRP (1998) Guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to 300 GHz). Health Phys 74:494-522

Author: Blaz Valic
Institute: Institute of Non-ionizing Radiation
Street: Pohorskega bataljona 215
City: Ljubljana
Country: Slovenia
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Numerical Assessment of Induced Current Densities for Pregnant Women Exposed to 50 Hz Electromagnetic Field
A. Zupanic1, B. Valic2 and D. Miklavcic1
1 University of Ljubljana/Faculty of Electrical Engineering, Laboratory of Biocybernetics, Ljubljana, Slovenia
2 E-NET Okolje, Ljubljana, Slovenia
Abstract— A simple, yet anatomically realistic model consisting of five different tissue types was built to evaluate the effects of a low-frequency electromagnetic field on a pregnant woman and fetus. The model was exposed to the reference levels of the electric and magnetic field recommended by the ICNIRP guidelines. The induced current densities were calculated by the finite element method and the results were compared to the basic restriction recommended by the ICNIRP. The basic restriction was met in all tissues and for all orientations of the electric and magnetic fields, except in the placenta (2.08 mA/m2) when the model was exposed to a vertical electric field and a sagittal magnetic field simultaneously. Since simultaneous exposure to both fields yields higher induced current values in the body, both should always be taken into account in low-frequency dosimetry numerical modeling. Keywords— Induced electric current, Dosimetry, Pregnancy, Finite element method.
I. INTRODUCTION Induced electric current values have been used as dosimetric measures for quantifying interactions with electromagnetic fields in the low-frequency range. The International Commission on Non-Ionizing Radiation Protection (ICNIRP) set the basic restriction for the general public at 50 Hz to 2 mA/m2, averaged over 1 cm2 perpendicular to the current flow, in order to prevent unwanted nerve stimulation of the central nervous system (CNS) [1]. Since the electric current density in the body is very difficult to measure, reference levels for the unperturbed external electric (5 kV/m) and magnetic (80 A/m) field strength were also set. If the reference levels are met, the basic restriction on the electric current density should also always be met. The relationship between the reference levels and the basic restrictions was derived for healthy adults and does not necessarily hold for people with medical implants, children, or pregnant women and the fetus. Anatomically realistic models have been used to determine the induced low-frequency currents in the human body for the last fifteen years. Several studies used high-resolution whole-body models to study the interactions between the electric field and the human body [2,3,4] and concluded that the limits set by ICNIRP were
not exceeded. The same conclusion was reached by Dawson et al., who studied the currents induced by a magnetic field [5], and by Ilvonen et al., who studied the currents induced in the brain by the battery current of digital mobile phones [6]. The first high-resolution pregnant woman model was developed recently by Cech et al. [7]. The body of the mother was segmented into 37 different tissues, and the body of the fetus comprised the skeleton, soft tissue and the CNS. The calculations demonstrated that the basic restrictions were exceeded for the uterus, placenta, bladder and liver, and for the CNS of the fetus, when exposed to a homogeneous vertical electric field. In the other fetal tissues and the other tissues of the mother the restrictions were met for exposure to either the electric field or the magnetic field. All the research so far has concentrated on exposure to the electric or the magnetic field alone, and not on simultaneous exposure to both fields, which is far more common. We developed a simple anatomically realistic model of a pregnant woman comprising five different tissues and calculated the induced currents due to exposure to a 50 Hz electromagnetic field. The results were compared to the ICNIRP basic restrictions and to the results of exposure to the electric field only. II. METHODS A. Development of the pregnant woman model Data for constructing the model were obtained from Shi and Xu [8]. Their CT images of a pregnant woman in the 30th week of gestation covered the portion of the body between the upper thigh and lower breast in 70 slices (without the arms), each 7 mm apart, with a pixel size of 0.938 mm. The segmentation of the cross-sectional images was performed by a semiautomatic algorithm and 26 different tissues were identified, out of which six were used as the basis for our simplified anatomical model: skin, fat and soft tissue, muscle, uterus, placenta and fetus.
The skin was later dropped from the model, after we had established that its presence does not affect the induced current values in the other tissues.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 226–229, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Numerical Assessment of Induced Current Densities for Pregnant Women Exposed to 50 Hz Electromagnetic Field
Table 1 Conductivities of the tissues assumed for 50 Hz

Tissue     Conductivity σ (S/m)
Fat        0.020
Muscle     0.240
Fetus      0.310
Uterus     0.381
Placenta   0.656
Other      0.216
To build the individual 3D tissue models, an algorithm was written in MATLAB R2006a (MathWorks, Massachusetts, US) that used linear interpolation between points on the edges of the individual tissue cross-sections. The tissue models were then assembled into a 3D trunk model. Subsequently the trunk model was inserted into a homogeneous whole-body 3D computer model constructed from the cross-sectional images of the Visible Human Project (National Library of Medicine, US) [9]. The size of the homogeneous model was adapted to the pregnant model by linear scaling. The tissue electric properties in Table 1 were taken from Gabriel et al. [11,12], with some exceptions. For the homogeneous parts of the model a conductivity of 0.216 S/m was chosen, which equals the average conductivity of human tissue at 50 Hz [7]. For the uterus the value reported by Gabriel (0.230 S/m) was increased to 0.381 S/m to account for the increase in blood flow from the 1st to the 30th week of pregnancy [13]. The assumed conductivity of the fetus was calculated as the average of the fetal tissue conductivity and the conductivity of the amniotic fluid, which was not modeled as a separate tissue due to the low resolution of the original images. The placental conductivity was scaled from the uterine conductivity, taking into account the larger content of blood in the placenta [13]. For quasi-static calculations of Maxwell's equations the imaginary part of the complex conductivity, ωε, is small compared to the real part, σ; therefore the relative permittivities εr of all tissues were set to 1. B. Numerical method The meshing and the calculations were performed with the COMSOL Multiphysics (COMSOL, Stockholm, Sweden) software package for physical modeling, which is based on the finite element method (FEM).
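The quasi-static simplification above (setting εr = 1) rests on ωε being negligible against σ at 50 Hz. A quick numerical check, using the Table 1 conductivities and a deliberately generous permittivity bound (the εr value here is only an illustrative upper estimate, not a measured property):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
f = 50.0           # power-line frequency, Hz
omega = 2 * math.pi * f

# Conductivities from Table 1 (S/m)
tissues = {"fat": 0.020, "muscle": 0.240, "uterus": 0.381, "placenta": 0.656}
eps_r = 1e6        # generous upper bound for low-frequency tissue permittivity

# Imaginary part of the complex conductivity, omega * eps
displacement = omega * eps_r * EPS0
for name, sigma in tissues.items():
    print(f"{name:9s} sigma = {sigma:.3f} S/m   omega*eps = {displacement:.2e} S/m")
```

Even with εr as large as 10^6, ωε ≈ 2.8e-3 S/m, an order of magnitude below the lowest tissue conductivity in the model (fat, 0.020 S/m), so the quasi-static assumption holds.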
The FEM calculates partial differential equations (PDEs) on discretized elements of different shapes and sizes, thus enabling very accurate modeling of irregular geometries. Our model of the pregnant woman was discretized into 62,380 tetrahedral elements of sufficient minimum quality (0.12) to guarantee accurate results [14].
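The ICNIRP basic restriction is defined as the current density averaged over 1 cm2 perpendicular to the current flow. For a field sampled on a regular grid, that averaging can be sketched as below; the grid spacing, function name and uniform test field are illustrative, not taken from the model:

```python
import numpy as np

def disk_average(j_normal, dx, center, area_cm2=1.0):
    """Average a sampled field over a circular disk of the given area
    (default 1 cm^2), as in the ICNIRP averaging rule.
    j_normal: 2-D array of the current-density component normal to the
    plane (A/m^2); dx: grid spacing in metres; center: (row, col)."""
    radius = np.sqrt(area_cm2 * 1e-4 / np.pi)   # disk radius in metres
    ny, nx = j_normal.shape
    y, x = np.ogrid[:ny, :nx]
    mask = ((y - center[0]) * dx) ** 2 + ((x - center[1]) * dx) ** 2 <= radius ** 2
    return j_normal[mask].mean()

# sanity check on a uniform field: the disk average equals the field value
j = np.full((200, 200), 2.08e-3)                     # 2.08 mA/m^2 everywhere
print(disk_average(j, dx=1e-3, center=(100, 100)))   # ≈ 2.08e-3
```

In the study the averaging was applied to the current components normal to the coordinate planes of the FEM solution; this sketch only shows the disk-masking step on one such plane.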
The weak form of the quasi-static formulation of Maxwell's equations was used to calculate the electric potential and the magnetic vector potential. Calculations were performed with the COMSOL Spooles direct solver. On average, the computing time was 15 minutes and 1.8 GB of memory was used. The exposure of the isolated pregnant woman to the electromagnetic field was simulated for the combination of a homogeneous 5 kV/m vertical electric field and a homogeneous 80 A/m magnetic field in the frontal or the sagittal orientation, which corresponds to the worst-case scenario of the reference levels published by ICNIRP [1]. We examined whether the basic restriction (the induced current density averaged over 1 cm2) was met in the modeled tissues of the mother and fetus. The average current density was determined by averaging the current components normal to the coordinate planes over circular areas of approximately 1 cm2. III. RESULTS The electric current induced in the pregnant woman is largest in the narrow regions of the body, i.e. in the feet, ankles, knees and neck, as shown in Figure 1. The largest values of the induced current in the trunk are reached in the placenta and at the boundary between the muscle and fat tissue at the back of the body. When the body was exposed only to the magnetic field, the highest current densities were found in the placenta and the uterus, but not in the legs or the neck. Table 2 shows the induced current densities averaged over 1 cm2 in the individual tissues. The highest value of the induced current was calculated in the placenta, where the basic restriction was not met (2.08 mA/m2), when the pregnant woman was exposed to a vertical electric field and
Table 2 Maximum current densities averaged over 1 cm2 in the fat tissue, muscle, fetus, uterus and placenta induced by a frontally (Hf) or sagittally (Hs) oriented magnetic field of 80 A/m, a vertical electric field E of 5 kV/m, and the combinations of the electric and magnetic fields.

Current density averaged over 1 cm2 (mA/m2)
Tissue      Hf     Hs     E      E + Hf   E + Hs
Fat         0.12   0.13   0.13   0.18     0.19
Muscle      0.78   0.87   0.52   1.03     1.14
Fetus       0.71   0.74   0.46   0.76     1.06
Uterus      0.92   1.12   0.62   0.98     1.58
Placenta    1.41   1.42   0.81   1.63     2.08
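The combined-exposure columns of Table 2 fall below the arithmetic sum of the single-field magnitudes because induced current densities add as vectors (triangle inequality). An illustrative sketch with made-up direction vectors, not values from the model:

```python
import numpy as np

# Hypothetical current-density vectors (mA/m^2) at one location:
# one contribution induced by the electric field, one by the magnetic
# field; the directions are purely illustrative.
j_e = np.array([0.0, 0.0, 0.81])   # E-field contribution
j_h = np.array([1.0, 1.0, 0.20])   # H-field contribution

total = np.linalg.norm(j_e + j_h)                       # vector sum magnitude
arithmetic_sum = np.linalg.norm(j_e) + np.linalg.norm(j_h)

print(total <= arithmetic_sum)   # True: triangle inequality
```

Only when the two induced-current vectors are exactly parallel does the combined magnitude reach the arithmetic sum; any misalignment reduces it, which is the behavior seen in the combined columns of Table 2.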
Fig. 1 Electric current density distribution in sagittal cross-sections of the pregnant woman at distances x from the assumed central body plane (from left to right x = 12, 8, 4 and 0 cm). All plots are scaled to a maximum range of 2 mA/m2.
a sagittal magnetic field. In all other tissues the highest values also resulted from the simultaneous exposure to the electric and magnetic fields in the sagittal orientation, while the lowest induced current values were calculated when the pregnant woman was exposed to the vertical electric field only. IV. DISCUSSION Electric current density distributions in a simple, yet anatomically realistic model were calculated for different exposure combinations of electric and magnetic fields. Our results show good agreement with earlier studies where comparison is possible. For example, the induced current density values in the uterus for magnetic field exposure in the frontal and sagittal orientations were determined by Cech et al. to be 0.79 mA/m2 and 1.21 mA/m2, respectively [7], while our results are 0.91 mA/m2 and 1.12 mA/m2. Good agreement was also found between the maximum current density in the muscle published by Dimbylow (scaled from 1 V/m to 5 kV/m), 1.31 mA/m2 [3], and the maximum current density obtained in our simulations, 1.23 mA/m2 (data for maximum current densities not shown). The combined effects of the electric and magnetic fields on the total induced current density are presented in Table 2. It is clear that the induced currents from both fields do not add as a simple sum of the individual induced currents. In fact, the combined values stay below 150% of the larger of the two single-field values, and in the case of the fetus reach as little as 107%. The reason for this is that the directions of the induced currents are not parallel to each other, which results in a lower total current. The basic restrictions for the pregnant woman and the fetus in our study were met in all tissues, except in the placenta for the combined exposure to a vertical electric field and a sagittal magnetic field. When the electric conductivity of the placenta was lowered by 20%, the values of the induced current fell below the ICNIRP limits. Accurate measurements of the dielectric properties of the placenta are thus needed to confirm or disprove our choice of conductivity values, and to establish whether the basic restrictions were truly exceeded in the placenta. V. CONCLUSIONS Our simulations demonstrated that the basic restrictions were exceeded in the placenta when the pregnant woman was exposed to a vertical electric field and a sagittal magnetic field simultaneously. Since simultaneous exposure to electric and magnetic fields induces higher currents in the body, we think that both should always be taken into account in low-frequency dosimetry numerical modeling. Further investigations into the dielectric properties of biological tissues during pregnancy and more detailed models
are required to more accurately evaluate the induced current densities in the pregnant woman and fetus.
ACKNOWLEDGMENT The authors would like to thank Dr Xu of the Rensselaer Polytechnic Institute, Troy, New York for providing us with the original CT images and the segmented images of the pregnant woman. This work was supported by the Slovenian Research Agency.
REFERENCES
1. ICNIRP (1998) Guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields (up to 300 GHz). Health Phys 74:494–522
2. Furse CM and Gandhi OP (1998) Calculation of electric fields and currents induced in a millimeter-resolution human model at 60 Hz using the FDTD method. Bioelectromagnetics 19:293–299 DOI 10.1002/(SICI)1521-186X(1998)19:5<293::AID-BEM3>3.0.CO;2-X
3. Dimbylow PJ (2000) Current densities in a 2 mm resolution anatomically realistic model of the body induced by low frequency electric fields. Phys Med Biol 45:1013–1022 DOI 10.1088/0031-9155/45/4/315
4. Hirata A, Caputa K, Dawson TW et al (2001) Dosimetry in models of child and adult for low-frequency electric field. IEEE Trans Biomed Eng 48:1007–1012 DOI 10.1109/10.942590
5. Dawson TM and Stuchly MA (1998) High-resolution organ dosimetry for human exposure to low-frequency magnetic fields. IEEE Trans Magn 34:708–718 DOI 10.1109/20.668071
6. Ilvonen S, Sihvonen A, Kärkkäinen K et al (2005) Numerical assessment of induced ELF currents in the human head due to the battery current of a digital mobile phone. Bioelectromagnetics 26:648–652 DOI 10.1002/bem.20159
7. Cech R, Leitgeb N and Pedliaditis M (2007) Fetal exposure to low frequency electric and magnetic fields. Phys Med Biol 52:879–888 DOI 10.1088/0031-9155/52/4/001
8. Shi C and Xu X (2004) Development of a 30-week-pregnant female topographic model from computed tomography (CT) images for Monte Carlo organ dose calculations. Med Phys 31:2491–2497 DOI 10.1118/1.1778836
9. Valic B (2006) Vpliv implantov na porazdelitev elektromagnetnega polja v človeku (The influence of implants on the distribution of the electromagnetic field in the human body). Doctoral dissertation, Univerza v Ljubljani, Ljubljana
10. Cech R, Leitgeb N and Pedliaditis M (2007) Fetal exposure to low frequency electric and magnetic fields. Phys Med Biol 52:879–888 DOI 10.1088/0031-9155/52/4/001
11. Gabriel S, Lau RW and Gabriel C (1996) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys Med Biol 41:2251–2269 DOI 10.1088/0031-9155/41/11/002
12. Gabriel S, Lau RW and Gabriel C (1996) The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues. Phys Med Biol 41:2271–2293 DOI 10.1088/0031-9155/41/11/003
13. International Commission on Radiological Protection (1975) Basic anatomical and physiological data for use in radiological protection: reference values. Pergamon, New York
14. COMSOL at http://www.comsol.com/products/multiphysics/ (Feb 21, 2007)

Author: Anze Zupanic
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Regenerative Effects of (-)-epigallocatechin-gallate Against Hepatic Oxidative Stress Resulting from Mobile Phone Exposure
E. Ozgur1, G. Güler1 and N. Seyhan1
1 Gazi University Faculty of Medicine/Biophysics Department, Ankara, Turkiye
Abstract— Many in vivo and in vitro studies have been performed to investigate the biological consequences of RFR (radio frequency radiation) generated by mobile phones and to assess its health risks. The mechanism linking RFR to oxidative damage, and the question of whether dietary antioxidants can alter the oxidative damage caused by mobile phones, are subjects of active investigation. In this study we aimed to investigate whether the antioxidative effects of green tea catechins can inhibit RFR-induced free radical release causing oxidative damage to proteins in guinea pig liver tissue. The RFR was generated by a mobile phone with a digital SAR value of 0.81 W/kg operating at the GSM 1800 MHz frequency. Male guinea pigs were exposed to mobile phone radiation averaging 11.2 V/m, measured during exposure, for 20 minutes on each of 7 days of a week. The activities of the antioxidant enzymes superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px), and the level of malondialdehyde (MDA), were measured in the livers of guinea pigs divided into four groups: control, EGCG-treated, mobile phone-exposed, and both mobile phone-exposed and EGCG-treated. As a result, both the antioxidant enzyme activities and the free radical levels of the mobile phone-exposed and the mobile phone-exposed plus EGCG groups changed significantly (p < 0.05). Keywords— mobile phone, Radio Frequency, free radical, EGCG, antioxidant.
I. INTRODUCTION Radio frequency, or RF, refers to that portion of the electromagnetic spectrum in which electromagnetic waves can be generated by alternating current fed to an antenna. Wireless telephones are hand-held phones with built-in antennas, often called cell, mobile, or PCS phones. These phones are popular with callers because they can be carried easily from place to place. Wireless telephones are two-way radios. When you talk into a wireless telephone, it picks up your voice and converts the sound to radiofrequency energy (or radio waves). The radio waves travel through the air until they reach a receiver at a nearby base station. The base station then sends your call through the telephone network until it reaches the person you are calling. When you receive a call on your wireless telephone, the message travels through the telephone network until it reaches a base station close to your wireless phone. Then the base station sends
out radio waves that are detected by a receiver in your telephone, where the signals are changed back into the sound of a voice [1]. The radiofrequency fields of cellular mobile phones may affect biological systems by increasing free radicals, which appear mainly to enhance lipid peroxidation, and by changing the antioxidant defense systems of human tissues, thus leading to oxidative stress, which can be suppressed by antioxidants such as the green tea catechin (-)-epigallocatechin-gallate (EGCG). Epigallocatechin gallate belongs to the family of catechins. It contains 3 phenol rings and has very strong antioxidant properties. EGCG, which possesses the most potent antioxidant activity of the catechins, is the main active component of green tea leaves. Its possible benefit as a nutritional chemopreventive agent against cancer, atherosclerosis and neurodegenerative diseases is generating increased scientific interest. Epigallocatechin gallate may provide health effects by protecting our cells from oxidative damage by free radicals. A number of chronic diseases have been associated with free radical damage, including cancer, arteriosclerosis, heart disease and accelerated aging. Epigallocatechin gallate can also interfere with many enzyme systems [2,3]. In this research, the level of malondialdehyde (MDA, an index of lipid peroxidation) was used as a marker of oxidative stress-induced hepatic impairment. Superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px) activities were studied to evaluate changes in antioxidant status. II. MATERIAL AND METHOD A. Exposure Details In this investigation, 40 three-month-old guinea pigs were divided into four groups: control, EGCG-treated, mobile phone-exposed, and mobile phone-exposed with EGCG treatment. Both the mobile phone-exposed and the mobile phone-exposed plus EGCG-treated (intraperitoneal) groups were exposed to RFR radiated from a Nokia 3210 mobile phone operating at GSM 1800 MHz.
During the exposure of each guinea pig, the external E field was measured with a NARDA EMR 300 meter and a type 8.3 probe. Measurements were taken for a duration of 10 minutes at 2-second intervals, and the data were saved to a computer connected to the device via a fiber optic cable. All data were averaged before statistical analysis. Guinea pigs were exposed to RFR averaging 11.2 V/m for 10 minutes a day for 7 days, and the liver tissue was analyzed for effects on the activities of SOD and GSH-Px and the levels of MDA.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 214–217, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Table 1 Liver oxidant and antioxidant levels in all groups. Results are expressed as mean ± standard deviation

Group                      MDA (nmol/mg pr)        SOD (U/mg pr)           GSH-Px (U/mg pr)
Controls                   39.47888 ± 3.296332     31.00308 ± 2.491115     15.350475 ± 1.011348066
EGCG-treated               55.3026 ± 8.572794953   29.32763 ± 4.969521     16.54091 ± 3.094963
Exposure group             28.584 ± 4.840640217    28.634 ± 3.122963       49.681525 ± 4.800276674
Exposure group with EGCG   22.40058 ± 3.529115     25.938125 ± 3.548873601 21.60233333 ± 7.989324846

B. Liver collection, storage and biochemical process
After the last day of mobile phone exposure, the liver tissues of control and exposed animals were removed after decapitation. They were immediately frozen in liquid nitrogen and stored at -80 ºC until analysis. MDA levels were determined in the homogenate [4], GSH-Px activities in the supernatant [5], and SOD activity in extract samples according to the methods described elsewhere [6,7]. Protein measurements were made at all stages according to the method of Lowry et al. [8].
C. Other Experimental Features Approximately 3-month-old male guinea pigs, weighing 250-300 g, were obtained from the Turkey Hifzissiha Institute, Ankara, Turkiye. They were maintained on a 12:12 h light-dark cycle and housed in suitable cages in which temperature (22-24 ºC) and humidity were controlled, and were fed ad libitum on standard lab chow and carrots. Since placing more than one animal in a cage would create a stress factor, only one animal was placed in each cage during each RFR exposure period.
Figure 1 MDA level of liver under RFR exposure and EGCG treatment
D. Statistical Analysis Data are expressed as mean ± standard deviation. Statistical analyses with the Kruskal-Wallis test were performed on the biochemical variables to examine differences between groups. III. RESULTS Significant differences were found between the mobile phone exposure groups and the controls for all parameters. SOD activities decreased significantly in the exposure group and in the exposure group with EGCG with respect to controls. However, the decrease of GSH-Px was not significant. MDA levels changed significantly in both mobile phone radiation groups and in the EGCG-treated group. The results are presented in Table 1 and Figures 1-3.
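The Kruskal-Wallis comparison described above can be reproduced with SciPy; the four groups of SOD activities below are synthetic placeholder values for illustration, not the study's measurements:

```python
from scipy.stats import kruskal

# Synthetic SOD activities (U/mg protein) for the four study groups;
# the numbers are illustrative only.
controls = [31.2, 30.1, 32.4, 29.8, 31.9]
egcg     = [29.5, 28.8, 30.2, 29.1, 28.4]
exposed  = [22.1, 23.4, 21.8, 24.0, 22.9]
exp_egcg = [26.3, 25.1, 26.8, 25.9, 24.7]

# Kruskal-Wallis H-test: non-parametric analogue of one-way ANOVA,
# suitable for small samples without a normality assumption
h_stat, p_value = kruskal(controls, egcg, exposed, exp_egcg)
print(p_value < 0.05)   # True for these clearly separated groups
```

A significant omnibus result would normally be followed by pairwise post-hoc comparisons with a multiplicity correction; the paper does not state which, if any, was used.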
Figure 2 SOD activities of liver under RFR exposure and EGCG treatment
Figure 3 GSH-Px activities of liver under RFR exposure and EGCG treatment
IV. CONCLUSIONS The radiofrequency (RF) signals emitted by mobile phones are a type of non-ionizing electromagnetic radiation and are perceived as a health risk. Increasing attention has been paid to the potential of RFR to elicit biological responses, given its widespread application and long-term exposure of users. The biological effects attributed to RF exposure include changes in cell membrane function, metabolism and cellular signal communication, activation of proto-oncogenes, and cell death. There is scientific evidence of health hazards due to RF EMFs, such as neurodegenerative disorders, brain tumors and cancer. Most of these disorders are attributed to increased free radicals, which are atoms or groups of atoms with an odd (unpaired) number of electrons that can be formed when oxygen interacts with certain molecules. Once formed, these highly reactive radicals can start a chain reaction, like dominoes. Their chief danger comes from the damage they can do when they react with important cellular components such as DNA or the cell membrane. Cells may function poorly or die if this occurs. To prevent free radical damage the body has a defense system of antioxidants, molecules that can safely interact with free radicals and terminate the chain reaction before vital molecules are damaged. Although there are several enzyme systems within the body that scavenge free radicals, these may be inadequate in some cases, so antioxidants must also be supplied in the diet. EGCG is one of the powerful antioxidants that reduce the level of free radicals causing cell damage [9]. In light of these studies, we performed a study on mobile phone radiation to analyze the effect of both RF fields and EGCG on free radical levels and antioxidant enzyme activities in an in vivo system, the guinea pig. Yurekli et al. studied the effects of the GSM base transceiver station (BTS) frequency of 945 MHz on oxidative stress in rats and concluded that MDA and GSH levels and SOD activities changed significantly [10]. Another study, by Ozguner et al., investigated the effect of mobile phone exposure on the levels of malondialdehyde (MDA) and nitric oxide (NO) and on the activities of superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GSH-Px), and asked whether an applied antioxidant, caffeic acid phenethyl ester (CAPE), changes the effect of mobile phone radiation. They concluded that tissue MDA and NO levels increased and SOD, CAT and GSH-Px activities were reduced in the mobile phone-exposed group, while CAPE treatment reversed these effects [11]. Moreover, in our study, the changes in hepatic MDA levels and in SOD and GSH-Px activities demonstrate the role of oxidative mechanisms in mobile phone-induced liver tissue damage, and EGCG, via its free radical scavenging and antioxidant properties, ameliorates oxidative liver injury. These results show that EGCG can exhibit a protective effect against mobile phone-induced, free radical mediated oxidative liver impairment in guinea pigs.
REFERENCES
1. FDA at http://www.fda.gov/cellphones/
2. Leung LK, Su Y, Chen R et al (2001) Theaflavins in black tea and catechins in green tea are equally effective antioxidants. J Nutr 131:2248-51
3. Waltner-Law ME, Wang XL, Law BK et al (2002) Epigallocatechin gallate, a constituent of green tea, represses hepatic glucose production. J Biol Chem 277:34933-40
4. Wasowicz W, Neve J, Peretz A (1993) Optimized steps in fluorometric determination of thiobarbituric acid reactive substances in serum: importance of extraction pH and influence of sample preservation and storage. Clin Chem 39:2522-2526
5. Paglia DE, Valentine WN (1967) Studies on the quantitative and qualitative characterization of erythrocyte glutathione peroxidase. J Lab Clin Med 70:158-169
6. Sun Y, Oberley LW, Li Y (1988) A simple method for clinical assay of superoxide dismutase. Clin Chem 34:497-500
7. Durak I, Yurtarslani Z, Canbolat O et al (1993) A methodological approach to superoxide dismutase (SOD) activity assay based on inhibition of nitroblue tetrazolium (NBT) reduction. Clin Chim Acta 214:103-104
8. Lowry OH, Rosebrough NJ, Farr AL et al (1951) Protein measurement with the Folin phenol reagent. J Biol Chem 193:265-275
9. Ravindra T, Ahuja YR, Bhargava SC, Lakshmi NK (2006) Epidemiological study to assess the risk of RF-EMFs from mobile telephony. 4th International Workshop on Biological Effects of Electromagnetic Fields, Crete, Greece, 2006, pp 1205-1208
10. Yurekli AI, Ozkan M, Kalkan T et al (2006) GSM base station electromagnetic radiation and oxidative stress in rats. Electromagn Biol Med 25:177-88
11. Ozguner F, Altinbas A, Ozaydin M (2005) Mobile phone-induced myocardial oxidative stress: protection by a novel antioxidant agent caffeic acid phenethyl ester. Toxicol Ind Health 21:223-30

Author: Elcin Ozgur
Institute: Gazi University Faculty of Medicine, Biophysics Department
Street: Besevler
City: Ankara
Country: Turkiye
Email: [email protected]
The Relation Assessment Between 50 Hz Electric Field Exposure-Induced Protein Carbonyl Levels and the Protective Effect of Green Tea Catechin (EGCG)
A. Tomruk1, G. Guler1 and N. Seyhan1
1 Gazi University Faculty of Medicine/Biophysics Department, Ankara, Turkiye
Abstract— The aim of this study was to determine the oxidation of proteins, measuring protein carbonyl levels (PCO) as biomarkers of oxidative stress, and to establish whether the protective effect of the green tea catechin EGCG can reduce the protein damage initiated by free radicals in guinea pig liver tissue under 50 Hz, 12 kV/m E field exposure. Guinea pigs weighing 250-300 g were used in the study. Protein carbonyl levels (PCO) were measured spectrophotometrically by the Levine method with slight modifications. The Mann-Whitney U test was applied for statistical analysis. Liver protein carbonyl levels of both the 50 Hz, 12 kV/m E field exposure group and the EGCG-administered (intraperitoneal) group were found to be non-significantly decreased compared to the control group. However, the liver protein carbonyl levels of the 50 Hz, 12 kV/m E field exposure + EGCG-administered group were significantly decreased. In view of these results, it was concluded that E field exposure may inhibit the formation of oxidized proteins while reducing the protective effect of EGCG. Keywords— ELF-E Field, protein carbonyl level (PCO), EGCG, catechin, liver.
I. INTRODUCTION

Since Extremely Low Frequency (ELF) Electromagnetic Fields (EMF) are capable of inducing electric currents and fields in the tissues of exposed subjects, research into the possible biological effects of these fields has increased [1,2]. Some studies demonstrated that ELF electric and magnetic fields can affect the rates of DNA, RNA and protein synthesis [3-8]. Scientists have also suggested that ELF electric and magnetic fields cause an increase in free radical activity in living organisms, leading to the formation of excessive amounts of active oxygen forms. Active oxygen forms, which take part in enzyme activities and gene expression and affect membrane structure and function, can cause vital damage to biomolecules [9,10]. Reactive oxygen species (ROS) may damage all types of biological molecules. Oxidative damage to proteins, lipids or DNA may be seriously deleterious, and these damages may be concomitant. However, proteins are possibly the most immediate vehicle for inflicting oxidative damage on cells, because they are often catalysts rather than mere mediators; thus the effect of damage to one molecule is greater [11,12].
Protein carbonyl content is the most general indicator and the most commonly used marker of protein oxidation. The use of protein carbonyl content as a biomarker of oxidative stress provides some advantages, owing to its early formation and relative stability in comparison with other oxidation products: while oxidized proteins are degraded by cells within days, lipid peroxidation products are detoxified within minutes [11,13-15].

Antioxidants are chemicals that reduce the rate of oxidation reactions. Oxidation reactions involve the transfer of electrons from one substance to an oxidizing agent. Antioxidants can slow these reactions either by reacting with intermediates and halting the oxidation reaction directly, or by reacting with the oxidizing agent and preventing the oxidation reaction from occurring. Recent studies have shown that antioxidants play an important role as health promoters in many conditions, such as cancer and aging [16].

Green tea is one of the most popular beverages, and its consumption has been suggested to have many beneficial health effects, including the prevention of cancer and heart disease [17]. Most of the therapeutic benefits of green tea are due to the catechins, which are polyphenols with a flavonoid structure. Polyphenols are "free radical scavengers" which eliminate hydroxyl radicals, superoxide anion radicals, 1,1-diphenyl-2-picrylhydrazyl (DPPH) radicals and other radicals. Four major green tea catechins exist: (-)-epicatechin (EC), (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECg) and (-)-epigallocatechin gallate (EGCG). The catechins comprise approximately 30% to 42% of the total green tea solids, and a typical cup of green tea contains between 300 and 400 mg of polyphenols, of which 10-30 mg is EGCG. EGCG has been shown to be a potent antioxidant in many chemical and biochemical studies [18,19].
The aims of this study were to demonstrate oxidative stress, using protein carbonyl content as a biomarker, and to determine the antioxidative effect of the green tea extract EGCG under oxidative stress in guinea pigs' liver tissues exposed to a 50 Hz, 12 kV/m E field.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 230–233, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. MATERIAL AND METHOD

A. Animals and Exposure System

Electric fields were applied to guinea pigs in wooden cages with dimensions of 80 cm x 80 cm x 18 cm. For the vertical field exposure circuit, copper plates were mounted on the top and bottom faces of the cages. The copper plate spacing was 18 cm and the dimensions of the plates were 80 cm x 80 cm x 0.2 cm. The positive terminal of the power supply was always connected to the upper plate and the negative terminal to the lower plate. Potential differences were controlled continuously throughout the experiment and were kept constant with the aid of the 3-digit LED display of the power supply voltage (TETA T-994 DC&AC). A multimeter connected to the circuit was also used to double-check the potential difference between the parallel plates. The arrangement was chosen so as to keep the distance between the copper plates small with respect to their dimensions, in order to generate a homogeneous electric field in the exposure space. The magnitude of the electric field in the cages was determined not only by theoretical calculation, but also by measurements using a NARDA EFA 300 electric field probe.

Four groups of 7 male white guinea pigs were exposed to a 50 Hz, 12 kV/m vertical electric field. Each group was exposed daily for 8 hours for 7 days, from 9 a.m. to 5 p.m. Twenty-eight guinea pigs were exposed according to this exposure schedule, while 7 guinea pigs, which were not exposed to any electric field, formed a control group.

B. Measurement of Protein Carbonyl Levels

We followed the method described by Levine et al. [20] with slight modifications. Briefly, two tubes of 0.5 ml homogenized liver tissue were taken; one was marked as "test" and the other as "control". Protein in the liver tissue was precipitated by the addition of 20% TCA (w/v). Precipitated protein was collected by centrifugation. The supernatant was carefully aspirated and discarded.
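For a pair of parallel plates whose spacing is small compared to their size, the theoretical calculation mentioned above reduces to the homogeneous-field relation E = V/d. A minimal sketch, using only the values stated in the text (12 kV/m target field, 18 cm plate spacing):

```python
# Sketch: potential difference across parallel plates needed to produce a
# target homogeneous field, E = V / d (valid when spacing << plate size).

def plate_voltage(field_v_per_m: float, spacing_m: float) -> float:
    """Return the plate potential difference V = E * d in volts."""
    return field_v_per_m * spacing_m

# Values from the exposure setup described in the text.
E = 12e3   # target field, V/m (12 kV/m)
d = 0.18   # plate spacing, m (18 cm)
print(f"Required plate voltage: {plate_voltage(E, d):.0f} V")  # 2160 V
```

This is the voltage that the power supply display and the multimeter would be expected to confirm between the plates.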
0.5 ml of 10 mM 2,4-dinitrophenylhydrazine (DNPH) prepared in 2.5 M HCl was added to the test sample, and 0.5 ml of 2.5 M HCl alone was added to the control sample. The contents were mixed thoroughly and incubated in the dark at room temperature for 1 hour, with the tubes shaken intermittently every 10 minutes. Then 0.5 ml of 20% TCA (w/v) was added to both tubes, and the tubes were centrifuged at 11000 g for 3 min to obtain the protein pellet. The supernatant was carefully aspirated and discarded. Finally, the precipitates were washed three times with 1 ml
of ethanol:ethyl acetate (1:1, v/v) to remove unreacted DNPH and lipid remnants. The final protein pellet was dissolved in 0.6 ml of 6 M guanidine hydrochloride and incubated at 37 °C for 10 min. The guanidine was adjusted to pH 2.3 with HCl instead of the trifluoroacetic acid used by Levine et al.; this was the slight modification mentioned previously. Insoluble material was removed by centrifugation. Carbonyl content was determined by taking the spectra of the samples at 355–390 nm. Each sample was read against the control sample (treated with 2.5 M HCl). The carbonyl content was calculated from the peak absorption (370 nm) using an absorption coefficient (ε) of 22,000 M−1 cm−1 and was expressed as nmol/mg protein. The protein content was determined by the Lowry method using BSA as standard [17]. Samples were prepared at a protein concentration of 1 mg/ml. As the samples were 0.5 ml, results were obtained by multiplying the data by two.

C. Other Experimental Features

Approximately 3-month-old male guinea pigs, weighing 250-300 g, were obtained from the Turkish Hıfzıssıhha Institute. They were maintained on a 12:12 h light-dark cycle, housed in suitable cages in which temperature (22-24 °C) and humidity were controlled, and fed ad libitum on standard lab chow and carrots.

D. Statistical Analysis

Statistical analyses were carried out using SPSS software (SPSS 11.5 for Windows, SPSS Inc., Chicago, USA). All data were expressed as mean ± SEM. Values were compared using the Mann-Whitney U test.

III. RESULTS

Protein carbonyl levels in nmol/mg protein are given in Figure 1, which shows that protein carbonyl levels decreased non-significantly in the green tea catechin (EGCG)-administered group (2.17 ± 0.14) and the 50 Hz, 12 kV/m experiment group (1.92 ± 0.15) when compared to the control group (2.33 ± 0.27).
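The carbonyl calculation described above follows the Beer-Lambert law: the molar concentration of DNPH-derivatized carbonyls is A370/(ε·l), which, with protein adjusted to 1 mg/ml, converts directly to nmol/mg protein. A sketch under those stated assumptions (1 cm path length, ε = 22,000 M⁻¹cm⁻¹, and the factor of two for the 0.5 ml sample volume); the absorbance value used below is a hypothetical example:

```python
# Sketch: protein carbonyl content from peak absorbance via Beer-Lambert law.
# Assumptions from the text: 1 cm path, epsilon = 22,000 M^-1 cm^-1, protein
# at 1 mg/ml, and a factor of 2 correcting for the 0.5 ml sample volume.

EPSILON = 22_000.0  # M^-1 cm^-1, absorption coefficient of the DNPH adduct

def carbonyl_nmol_per_mg(a370: float, protein_mg_per_ml: float = 1.0,
                         path_cm: float = 1.0, volume_factor: float = 2.0) -> float:
    molar = a370 / (EPSILON * path_cm)   # mol/L of carbonyl groups
    nmol_per_ml = molar * 1e9 / 1000.0   # convert mol/L -> nmol/ml
    return volume_factor * nmol_per_ml / protein_mg_per_ml

# Hypothetical peak absorbance at 370 nm, not a measured value from the study.
print(round(carbonyl_nmol_per_mg(0.025), 2))  # 2.27 nmol/mg protein
```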
However, the protein carbonyl level of the EGCG-administered group under 50 Hz, 12 kV/m exposure was significantly decreased (1.08 ± 0.10; p=0.015, p<0.05) when compared to the control group (2.33 ± 0.27).
Table 1 Protein carbonyl levels of guinea pigs' liver tissues in experiment and control groups

Groups            Protein carbonyl levels (nmol/mg protein), mean ± SEM
Control           2.33 ± 0.27
EGCG              2.17 ± 0.14
E Field           1.92 ± 0.15
E Field + EGCG    1.08 ± 0.10
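The Mann-Whitney U comparison used for these small independent groups can be sketched in pure Python by direct pair counting. The sample values below are illustrative placeholders, not the study's raw measurements (only group means ± SEM were reported):

```python
# Sketch: Mann-Whitney U statistic for two small independent samples,
# computed by direct pair counting (ties contribute 0.5). The data below
# are hypothetical placeholders, NOT the study's raw measurements.

def mann_whitney_u(x, y):
    """Return (U for x, smaller of the two U statistics)."""
    u_x = sum(1.0 if xi > yi else 0.5 if xi == yi else 0.0
              for xi in x for yi in y)
    u_y = len(x) * len(y) - u_x
    return u_x, min(u_x, u_y)

control = [2.1, 2.4, 2.6, 2.0, 2.5, 2.3, 2.4]   # hypothetical, n = 7
treated = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.1]   # hypothetical, n = 7

u_x, u = mann_whitney_u(control, treated)
print(u_x, u)  # 49.0 0.0 (complete separation of the two groups)
```

The smaller U is compared against critical-value tables (or an exact permutation distribution) to obtain the p-value; full statistical packages such as SPSS, used in the study, perform that step automatically.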
Fig. 1 Protein carbonyl levels of experiment and control groups (∗: p < 0.05)

IV. CONCLUSIONS

Metal-catalyzed protein oxidation, characterized by carbonyl (PCO) formation, loss of protein thiol (-SH) groups, nitrotyrosine (NT) and advanced oxidation protein products (AOPP), is among the major molecular mechanisms causing structural changes in proteins. ROS can damage several amino acid residues. Oxidative damage to amino acid residues or to the peptide backbone of proteins can generate PCO products. In view of this fact, measurement of PCO has been used as a sensitive assay for oxidative damage to proteins [12,21,22,23].

According to studies on the biological effects of ELF-EMFs, it is known that ELF-EMF can cause an increase in free radical formation, leading to lipid peroxidation and oxidative stress. It has also been suggested that ELF-EMF prolongs the life of free radicals [9,24,25]. Antioxidants, in turn, have an important role in the reduction of oxidation reactions.

In this study we used the green tea extract EGCG to examine the mechanisms of antioxidant protection against ELF-E field exposure, and we measured PCO, a product of metal-catalyzed protein oxidation, as a biomarker of oxidative stress. We found decreased PCO levels in the liver tissues of the 50 Hz, 12 kV/m exposure group. The data of this study are given in Table 1 as mean ± SEM. Protein oxidation reactions are initiated by ROS [11,12], and ROS formation is increased by ELF-EMF. However, Table 1 shows a decrease in PCO levels. These results may be explained by ELF-E fields affecting the early formation of stable oxidized proteins, changing the direction of the oxidized protein formation reaction, or leading to structural damage in protein synthesis. These suggestions are supported by a study on the inhibitory effect of ELF-E fields on protein synthesis [26].

REFERENCES

1. Kaune W T, Forsythe W C (1988) Current densities induced in swine and rat models by power frequency electric fields. Bioelectromagnetics 9:1
2. Kaune W T, Gillis M F (1981) General properties of the interaction between animals and ELF electric fields. Bioelectromagnetics 2:1
3. Liboff A R, Williams T, Strong D M, Wistar R (1984) Time-varying magnetic fields: effects on DNA synthesis. Science 223:818-820
4. Goodman R, Abbot J, Henderson A S (1987) Transcriptional patterns in the X chromosome of Sciara coprophila following exposure to magnetic fields. Bioelectromagnetics 8:1-7
5. Canseven A G, Atalay Seyhan N (1996) Is it possible to trigger the collagen synthesis by electric current in skin wounds? Ind J Biochem Biophys 33:223-227
6. Canseven A G, Atalay Seyhan N (2005) Effects of ambient ELF magnetic fields: variations in collagen synthesis of guinea pigs' skin and scaling from animals to human. Gazi Med J 16:160-165 (Turkish)
7. Guler G, Seyhan N (1996) Changes in hydroxyproline levels in electric field tissue interaction. Ind J Biochem Biophys 33:531-533
8. Guler G, Seyhan N, Ozogul C, Erdogan D (1996) Biochemical and structural approach to collagen synthesis under electric fields. Gen Physiol Biophys 15:429-440
9. Sobczak A, Kula B, Danch A (2002) Effects of electromagnetic field on free-radical processes in steelworkers. Part II: Magnetic field influence on vitamin A, E and selenium concentrations in plasma. J Occup Health 44:230-233
10. Halliwell B, Gutteridge J M C (2001) Free Radicals in Biology and Medicine. Oxford Science Publications
11. Dalle-Donne I, Rossi R, Giustarini D, Milzani A, Colombo R (2003) Protein carbonyl groups as biomarkers of oxidative stress. Clinica Chimica Acta 329:23-38
12. Dean R T, Fu S, Stocker R, Davies M J (1997) Biochemistry and pathology of radical-mediated protein oxidation. Biochem J 324:1-18
13. Berlett B S, Stadtman E R (1997) Protein oxidation in aging, disease and oxidative stress. J Biol Chem 272:20313-20316
14. Grune T, Reinheckel T, Davies K J A (1996) Degradation of oxidized proteins in K562 human hematopoietic cells by proteasome. J Biol Chem 271:15504-15509
15. Siems W G, Zollner H, Grune T, Esterbauer H (1997) Metabolic fate of 4-hydroxynonenal in hepatocytes: 1,4-dihydroxynonene is not the main product. J Lipid Res 38:612-622
16. Ilhan A, Gurel A, Armutcu A, Kamisli A, Iraz M, Akyol O, Ozen S (2004) Ginkgo biloba prevents mobile phone-induced oxidative stress in rat brain. Clinica Chimica Acta 340:153-162
17. Yang C S, Landau J M (2000) Effects of tea consumption on nutrition and health. J Nutr 130:2409-2412
18. McKenna D J, Jones K, Hughes K (2002) Botanical Medicines: The Desk Reference for Major Herbal Supplements. The Haworth Herbal Press
19. Yang C S, Maliakal P, Meng X (2002) Inhibition of carcinogenesis by tea. Annu Rev Pharmacol Toxicol 42:25-54
20. Levine R L, Garland D, Oliver C N, Amici A, Climent I, Lenz A G, Ahn B, Shaltiel S, Stadtman E R (1990) Determination of carbonyl content in oxidatively modified proteins. Meth Enzymol 186:464-478
21. Stadtman E R, Levine R L (2000) Protein oxidation. Ann N Y Acad Sci 899:191-208
22. Shacter E (2000) Protein oxidative damage. Methods Enzymol 319:428-436
23. Kayali R, Cakatay U, Telci A, Akcay T, Sivas A, Altug T (2004) Decrease in mitochondrial oxidative damage parameters in the streptozotocin-diabetic rat. Diabetes/Metabolism Research and Reviews 20:315-321
24. Canseven A G, Coskun S, Seyhan N (2005) ELF magnetic fields' effects on lipid peroxidation in lung and kidney. IFMBE Proceedings Vol. 11, 3rd European Medical & Biological Engineering Conference (EMBEC'05), Prague, Czech Republic, 2005, pp 4748-4752
25. Gutteridge J M C (1995) Lipid peroxidation and antioxidants as biomarkers of tissue damage. Clin Chem 41:1819-1828
26. Guler G, Atalay Seyhan N, Altan N (1997) Is it possible to inhibit the effect of free radicals with electric fields? Suppl. 1, Medical & Biological Engineering & Computing, World Congress on Medical Physics and Biomedical Engineering, Nice, 1997, p 46

Author: Arin Tomruk
Institute: Gazi University Faculty of Medicine, Biophysics Department
Street: Besevler
City: Ankara
Country: Turkiye
Email:
[email protected]
Advancing in the quality of the cells assigned for Autologous Chondrocyte Implantation (ACI) method
A. Barlic1, D. Radosavljevic2, M. Drobnic2 and N. Kregar-Velikonja1
1 Educell d.o.o., Ljubljana, Slovenia
2 Department of Orthopaedic Surgery, University Medical Centre, Ljubljana, Slovenia
Abstract— Tissue engineering offers new strategies for developing treatments for the repair of hyaline cartilage. The autologous chondrocyte implantation (ACI) method was initially based on the cultivation of chondrocytes in vitro, followed by injection of the obtained cell suspension below a periosteal flap into the cartilage defect. The problem that arises during cell expansion in monolayer culture is dedifferentiation of the cells – a change from the hyaline to a more fibroblast-like phenotype. After using a 2-dimensional collagen/fibrin scaffold seeded with cells, which could not maintain the chondrocyte phenotype, we are now examining the potential of an alginate/agarose hydrogel. The phenotype status of the cells is monitored by a real-time PCR assay, which enables the observation of changes in gene expression patterns. The results of our study suggest that the 3-dimensional environment of the tested scaffold is suitable for chondrocyte redifferentiation.

Keywords— tissue engineering, chondrocyte, phenotype, 3-dimensional scaffold, real-time PCR.

I. INTRODUCTION
During the past decade, exciting new strategies have emerged for the treatment of patients through a combination of developments in biology, materials science, engineering and medicine. This so-called tissue engineering is mainly focused on the restoration of pathologically altered tissues and organs, based on the transplantation of cells in combination with supportive matrices and biomolecules [1,2]. One such approach is Autologous Chondrocyte Implantation (ACI), used to repair localized articular cartilage lesions. Shortly after the first reported cases [3], the same operation was also performed [4]. Briefly, by arthroscopy a small biopsy of cartilage is taken from an unburdened articular surface, followed by the isolation of chondrocytes and their cultivation under strict aseptic laboratory conditions at the biotech company Educell (Ljubljana, Slovenia). Once the cells have multiplied sufficiently, a suspension of chondrocytes (ChondroArt™ 1D) is implanted into the lesion under a flap of periosteum or a collagenous membrane. In the next-generation product, ChondroArt™ 2D, the cells are seeded on a collagen/fibrin scaffold that still represents a monolayer environment but at least facilitates the surgical procedure. The last-generation product, still under development, is ChondroArt™ 3D, which will provide a three-dimensional environment for the chondrocytes and thus a better phenotype of the cells. It will also enable fixation into the defect by arthroscopic or at least minimally invasive techniques.

It is well known that chondrocytes expanded in monolayer culture undergo a process termed dedifferentiation, in which the chondrocytic phenotype of the cells changes to a fibroblastic one. Instead of synthesizing collagen 2, which accounts for more than 90% of the total collagen in hyaline cartilage, they start synthesizing collagen 1, abundant in fibrous tissue. In addition, synthesis of aggrecan, a major component of the proteoglycan extracellular matrix, is decreased [5-7]. The change is reversible if the cells are seeded in a three-dimensional hydrogel that forces a rounded morphology, as the shape of the cell plays a major role in which genes are expressed [8-11]. More recently, real-time PCR technology has been increasingly used to describe the gene expression profile of chondrocytes [12-14]. In our study, we quantified the expression of genes encoding the most abundant molecules of the extracellular matrix after seeding dedifferentiated chondrocytes into an alginate/agarose scaffold.

II. MATERIALS AND METHODS
A. Chondrocyte culture

Human chondrocytes were procured from the femoral articular cartilage of the knee, either as cartilage biopsies from patients or as samples procured post-mortem. The latter was possible owing to the prolonged viability of chondrocytes in tissue explants stored under an appropriate environment [15]. The cells were isolated and expanded in monolayer culture at a concentration of 3000/cm2. They were cultivated according to the ChondroArt (Educell, Ljubljana, Slovenia) cultivation procedure until 80% confluence was reached and passaged when necessary.

To study redifferentiation capability, alginate/agarose hydrogel (TBF-Banque de Tissus, Mions, France) was
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 249–252, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
prepared according to the manufacturer's recommendations. Dedifferentiated cells of the first passage were seeded homogeneously into the hydrogel at a concentration of 1x10^6 cells/ml and cultivated for 14 days.

B. Real-time PCR

RNA isolation. Total RNA was isolated from chondrocytes using the GenElute™ Mammalian Total RNA Kit (Sigma, St. Louis, USA), following the manufacturer's recommendations.

Reverse transcription. Total RNA (0.3-1 µg) was reverse transcribed using the High Capacity cDNA Archive Kit (Applied Biosystems, Foster City, USA) according to the manufacturer's recommendations.

PCR amplification and analysis. Primers and probes for human GAPDH, collagen types I (Col 1) and II (Col 2), aggrecan (Agr) and versican (Ver) were purchased as TaqMan® Gene Expression Assays – Assays-on-Demand™ (Applied Biosystems, Foster City, USA). cDNA samples were analyzed in duplicate. PCR reactions were performed and monitored using an ABI Prism 7900HT Sequence Detection System, and data analysis was carried out using the SDS 2.1 software (Applied Biosystems, Foster City, USA). The level of expression of each target gene was normalized to a reference housekeeping gene (GAPDH), and the reported values can be used for comparative analyses among samples. Since collagen type II and aggrecan are typical markers of differentiated chondrocytes in hyaline cartilage, as opposed to collagen type I and versican, which are expressed by dedifferentiated chondrocytes, "differentiation indexes" were calculated. They are defined as the ratios of the mRNA levels of collagen type 2 to collagen type 1 (Col 2/Col 1) and of aggrecan to versican (Agr/Ver), related to the expression of collagens and proteoglycans, respectively [14].
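The normalization and differentiation-index computation described above can be sketched with the standard 2^(−ΔCt) relative-quantification formula, where ΔCt is the cycle-threshold difference between the target gene and GAPDH. The Ct values below are hypothetical examples, not the study's data:

```python
# Sketch: GAPDH-normalized relative gene expression via 2**(-dCt), plus the
# differentiation indexes Col 2/Col 1 and Agr/Ver defined in the text.
# The Ct values below are hypothetical examples, NOT the study's data.

def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Target expression normalized to a reference housekeeping gene."""
    return 2.0 ** (-(ct_target - ct_reference))

ct = {"GAPDH": 18.0, "Col2": 24.0, "Col1": 27.0, "Agr": 25.0, "Ver": 28.0}
expr = {gene: relative_expression(c, ct["GAPDH"])
        for gene, c in ct.items() if gene != "GAPDH"}

col_index = expr["Col2"] / expr["Col1"]   # Col 2 / Col 1 (collagens)
agr_index = expr["Agr"] / expr["Ver"]     # Agr / Ver (proteoglycans)
print(round(col_index, 1), round(agr_index, 1))  # 8.0 8.0
```

A higher index indicates a more differentiated (hyaline-like) phenotype, which is how the fold changes in the Results section should be read.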
III. RESULTS

To study the redifferentiation capacity of previously dedifferentiated cells, cells of the 1st passage (P1) were seeded in a monolayer and into alginate/agarose hydrogel to obtain 2nd passage (P2) cells. Differences in the morphology of chondrocytes in a monolayer (A) and in the alginate/agarose culture system (B) are shown in Fig. 1.

Fig. 1 Morphologic appearance of chondrocytes in a monolayer culture (A) and in alginate/agarose hydrogel (B).

Fig. 2 Redifferentiation of 2nd passage chondrocytes in alginate/agarose hydrogel in comparison to 1st and 2nd passage monolayer cells. Normalized gene expressions for Col 2, Col 1, Agr and Ver are presented. Additionally, the calculated differentiation indexes Col 2/Col 1 and Agr/Ver are shown.

In a monolayer, chondrocytes adopted a fibroblastic appearance, while after seeding into the hydrogel they regained a rounded morphology. After the 14-day culture period, the relative gene expression of Col 1, Col 2, Agr and Ver was analyzed for chondrocytes of the 1st and 2nd passage in monolayer culture and compared to 2nd passage cells in alginate/agarose hydrogel (Fig. 2). Col 2 expression was slightly downregulated from P1 to P2 in monolayer, but upregulated 363-fold from P1 to P2 in alginate/agarose-grown cells. Col 1 expression in P2 alginate/agarose cells was 7.2-fold and 4.5-fold higher than in P2 and P1 monolayer cells, respectively. Agr and Ver expression changed to a lesser extent. Agr expression from P1 to P2 monolayer cells was downregulated almost 3-fold, while an increase of 5-fold was observed from P1 monolayer to P2 alginate/agarose cells. Compared to P2
monolayer cells, Agr expression was 14.2-fold higher in P2 alginate/agarose cells. Ver expression increased 1.8-fold in P2 alginate/agarose cells compared to P1 monolayer cells, while in P2 monolayer cells it was downregulated approximately 2-fold. The calculated Col 2/Col 1 differentiation index showed an increase of 145-fold in P2 alginate/agarose cells relative to P1 monolayer cells, and an increase of 321-fold in comparison to P2 monolayer cells. Again, the Agr/Ver index changed less: from P1 to P2 alginate/agarose cells the index increased 4.3-fold, and an increase of 6.7-fold was observed from P2 monolayer to P2 alginate/agarose cells.

IV. DISCUSSION
As the quality of the starting biopsy and the characteristics of the chondrocytes after in vitro expansion are critical factors for the success of transplantation [16,17], the significant down-regulation of cartilage-specific genes in monolayer culture should not be neglected. Suspension of chondrocytes in three-dimensional (3D) hydrogels has been shown to promote the chondrocyte phenotype. Alginate and agarose are natural hydrogels that can be used to encapsulate the cells [9-11]. To test the redifferentiation capability of chondrocytes used for ACI, dedifferentiated chondrocytes were seeded into an alginate/agarose 3D scaffold. Cells were cultured for 14 days, as this incubation period is necessary for sufficient redifferentiation to occur [18].

By real-time PCR we quantified the expression of the genes encoding collagen types 1 and 2, aggrecan and versican, because these genes have often been used to describe the phenotype of chondrocytes [12-14, 18-19]. In addition to relative gene expression values, chondrocyte differentiation indexes were calculated. Differentiation indexes were first used to describe the differences between human articular cartilage in normal and osteoarthritic joints [14], but they can also be used to quantify gene expression in cultured chondrocytes and thus increase understanding of changes in differentiation status [12-13]. Furthermore, this ratio can be used to monitor chondrocytes and their gene expression profiles in experimental models or for therapeutic use in autologous chondrocyte transplantation.

The results of our study confirm that alginate/agarose hydrogel represents a suitable environment for cells cultivated according to the ChondroArt procedure (Educell d.o.o.). Previously dedifferentiated chondrocytes were able to redifferentiate, switching their gene expression towards that of hyaline-like cartilage cells. Thus, "better quality" cells implanted into the patient's cartilaginous lesion would require less time to redifferentiate in situ, consequently shortening the rehabilitation period after the operation.

V. CONCLUSIONS
Our results corroborate previous work indicating that alginate and agarose are interesting culture systems for maintaining chondrocytes in a differentiated state and for inducing redifferentiation. However, growth of cells in hydrogel alone may not provide the cell expansion required for tissue engineering, therapeutic cell culture or medical cartilage repair, because of the cells' low proliferation rate. For this reason, a suitable compromise has to be found between the cell proliferation rate obtainable in monolayer culture and the degree of differentiation obtained in a hydrogel that is acceptable for ACI.
ACKNOWLEDGMENTS

This study was supported in part by the ARRS project L7-7598.
REFERENCES

1. Risbud MV, Sittinger M (2002) Tissue engineering: advances in in vitro cartilage generation. Trends Biotechnol 20:351-356
2. Hardingham T, Tew S, Murdoch A (2002) Tissue engineering: chondrocytes and cartilage. Arthritis Res 4 (suppl 3):S63-S68
3. Brittberg M, Lindahl A, Nilsson A, Ohlsson C, Isaksson O, Peterson L (1994) Treatment of deep cartilage defects in the knee with autologous chondrocyte transplantation. N Engl J Med 331:889-895
4. Radosavljevic D, Drobnic M, Koritnik B, Gorensek M, Pavlovcic V (2005) Clinical overview of the ACI treated patients in the knee over 10 years. Cartilage Weekend. 3rd Symposium on Recent Advances in Cartilage Repair and Tissue Engineering, Portoroz, Slovenia
5. Benya PD, Padilla SR, Nimni ME (1978) Independent regulation of collagen types by chondrocytes during the loss of differentiated function in culture. Cell 15:1313-1321
6. Kuettner KE, Memoli VA, Pauli BU, Wrobel NC, Thonar EJ, Daniel JC (1982) Synthesis of cartilage matrix by mammalian chondrocytes in vitro. II. Maintenance of collagen and proteoglycan phenotype. J Cell Biol 93:751-757
7. Bonaventure J, Kadhom N, Cohen-Solal L, Ng KH, Bourguignon J, Lasselin C, Freisinger P (1994) Reexpression of cartilage-specific genes by dedifferentiated human articular chondrocytes cultured in alginate beads. Exp Cell Res 212:97-104
8. Glowacki J, Trepman E, Folkman J (1983) Cell shape and phenotypic expression in chondrocytes. Proc Soc Exp Biol Med 172:93-98
9. Lee DA, Reisler T, Bader DL (2003) Expansion of chondrocytes for tissue engineering in alginate beads enhances chondrocytic phenotype compared to conventional monolayer techniques. Acta Orthop Scand 74:6-15
10. Chubinskaya S, Huch K, Schulze M, Otten L, Aydelotte MB, Cole AA (2001) Gene expression by human articular chondrocytes cultured in alginate beads. J Histochem Cytochem 49:1211-1219
11. Benya PD, Shaffer JD (1982) Dedifferentiated chondrocytes reexpress the differentiated collagen phenotype when cultured in agarose gels. Cell 30:215-224
12. Marlovits S, Hombauer M, Tamandl D, Vecsei V, Schlegel W (2004) Quantitative analysis of gene expression in human articular chondrocytes in monolayer culture. Int J Mol Med 13:281-287
13. Darling EM, Athanasiou KA (2005) Rapid phenotypic changes in passaged articular chondrocyte subpopulations. J Orthop Res 23:425-432
14. Martin I, Jakob M, Schafer D, Dick W, Spagnoli G, Heberer M (2001) Quantitative analysis of gene expression in human articular cartilage from normal and osteoarthritic joints. Osteoarthritis Cartilage 9:112-118
15. Drobnic M, Mars T, Alibegovic A, Bole V, Balazic J, Grubic Z, Brecelj J (2005) Viability of human chondrocytes in an ex vivo model in relation to temperature and cartilage depth. Folia Biol (Praha) 51:103-108
16. Dell'Accio F, De Bari C, Luyten FP (2001) Molecular markers predictive of the capacity of expanded human articular chondrocytes to form stable cartilage in vivo. Arthritis Rheum 44:1608-1619
17. Peterson L, Minas T, Brittberg M, Nilsson A, Sjogren-Jansson E, Lindahl A (2000) Two- to 9-year outcome after autologous chondrocyte transplantation of the knee. Clin Orthop 374:212-234
18. Gaissmaier C, Fritz J, Krackhardt T, Flesch I, Aicher WK, Ashammakhi N (2005) Effect of human platelet supernatant on proliferation and matrix synthesis of human articular chondrocytes in monolayer and three-dimensional alginate cultures. Biomaterials 26:1953-1960
19. Grunder T, Gaissmaier C, Fritz J, Stoop R, Hortschansky P, Mollenhauer J, Aicher WK (2004) Bone morphogenetic protein (BMP)-2 enhances the expression of type II collagen and aggrecan in chondrocytes embedded in alginate beads. Osteoarthritis Cartilage 12:559-567

Author: Ariana Barlic
Institute: Educell d.o.o., Ljubljana, Slovenia
Street: Letaliska 33
City: Ljubljana
Country: Slovenia
Email: [email protected]
Coalescence of phospholipid vesicles mediated by β2GPI – experiment and modelling
J. Urbanija1, B. Rozman3, A. Iglič2, T. Mareš4, M. Daniel4, V. Kralj-Iglič1
1 Laboratory of Clinical Biophysics, Faculty of Medicine, University of Ljubljana, Lipičeva 2, Ljubljana, Slovenia
2 Laboratory of Physics, Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana, Slovenia
3 Department of Rheumatology, University Medical Centre, Vodnikova 62, Ljubljana, Slovenia
4 Laboratory of Biomechanics, Faculty of Mechanical Engineering, Czech Technical University in Prague, Prague, Czech Republic
Abstract— Collective interactions between giant phospholipid vesicles made of POPC, cardiolipin and cholesterol after the addition of β2GPI may cause the coalescence of membrane buds with the mother vesicle. Using a discrete elastic model of vesicle membrane mechanics, it was shown that the coalescence of the buds depends on the adhesion strength and rigidity of the biomembrane.

Keywords— beta-2-glycoprotein-I, apolipoprotein, elastic model, discretization.
I. INTRODUCTION

The serum protein β2GPI is considered to have a multiplicity of physiological roles, among them a role in blood clot formation. It affects the metabolism of triacylglycerol-rich lipoproteins, the function of thrombocytes, and the activation of endothelial cells ([1] and references therein), and it inhibits the transformation of prothrombin into thrombin [2]. It binds to structures which contain negatively charged phospholipid molecules, such as phospholipid vesicles [3,4], thrombocytes [5], thrombocyte-derived microvesicles and apoptotic cells [6], and serum lipoproteins [7,8], and it mediates cellular recognition of negatively charged phospholipid-exposing microparticles [9-11].

II. METHODS

A. β2GPI antibodies

β2GPI (Hyphen BioMed, France) was aliquoted and stored at -70 °C. In all experiments, the final concentration of β2GPI in phosphate buffer saline (PBS) was 100 mg/L, approximately half the physiological concentration of β2GPI in normal human plasma (about 200 mg/L) [7,12].

B. Giant phospholipid vesicles

GPVs were prepared at room temperature (23 °C) by the electroformation method [13] modified as described in
Tomšič et al. [14]. The synthetic lipids cardiolipin (1,1',2,2'-tetraoleoyl cardiolipin), POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) and cholesterol were purchased from Avanti Polar Lipids, Inc. Appropriate volumes of POPC, cardiolipin and cholesterol, all dissolved in a 2:1 chloroform/methanol mixture, were combined in a glass jar and thoroughly mixed. For charged cardiolipin vesicles, POPC, cholesterol and cardiolipin were mixed in the proportion 2:2:1. For neutral POPC vesicles, POPC and cholesterol were mixed in the proportion 4:1. Cholesterol was added to POPC to increase the longevity of the vesicles. 10 µL of the lipid mixture was applied to platinum electrodes. The solvent was allowed to evaporate in a low vacuum for 2 hours. The coated electrodes were placed in the electroformation chamber, which was then filled with 3 mL of 0.2 M sucrose solution. An AC voltage with an amplitude of 5 V and a frequency of 10 Hz was applied to the electrodes for 2 hours, followed by 2.5 V at 5 Hz for 15 minutes, 2.5 V at 2.5 Hz for 15 minutes and finally 1 V at 1 Hz for 15 minutes. The content of the electroformation chamber was rinsed out with 5 mL of 0.2 M glucose and stored in a plastic test tube. The vesicles were left to sediment under gravity for one day at 4 °C. 200 to 400 μL of the sediment was collected from the bottom of the tube and used for a series of experiments. Before the vesicles were placed into the observation chamber, the sample was gently mixed.

C. Mathematical model

The mechanical behavior of the vesicle membrane is described using an elastic continuum theory in which the bonds between the building blocks of the membrane (phospholipids) are represented as elastic springs. For the sake of simplicity, a two-dimensional problem was considered. The energetically optimal deformed shape is computed by minimizing the elastic energy of a discrete linear model of lipid interactions.
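The discrete spring representation described above can be sketched numerically. The following minimal 2D chain is an illustration only: the spring constant, rest length, bending penalty and node coordinates are assumed values, not the parameters of the actual model.

```python
import math

def elastic_energy(nodes, k_stretch=1.0, rest_len=1.0, k_bend=0.5):
    """Elastic energy of a 2D chain of membrane nodes.

    Stretching: 0.5 * k * (L - L0)^2 for each spring between neighbors.
    Bending: 0.5 * kb * theta^2 at each interior node, where theta is the
    turning angle between successive segments.
    """
    E = 0.0
    # Stretching term over neighboring node pairs
    for (x1, y1), (x2, y2) in zip(nodes, nodes[1:]):
        L = math.hypot(x2 - x1, y2 - y1)
        E += 0.5 * k_stretch * (L - rest_len) ** 2
    # Bending term over node triples
    for a, b, c in zip(nodes, nodes[1:], nodes[2:]):
        t1 = math.atan2(b[1] - a[1], b[0] - a[0])
        t2 = math.atan2(c[1] - b[1], c[0] - b[0])
        theta = (t2 - t1 + math.pi) % (2 * math.pi) - math.pi
        E += 0.5 * k_bend * theta ** 2
    return E

# A straight chain at rest length stores no elastic energy...
straight = [(float(i), 0.0) for i in range(5)]
print(elastic_energy(straight))  # 0.0
# ...while a bent or stretched configuration costs energy.
bent = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (3.0, 0.0)]
print(elastic_energy(bent) > 0.0)  # True
```

Minimizing such an energy over the node positions, subject to the contact constraint, yields the energetically optimal deformed shape.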
The model of coalescence of phospholipid vesicles is based on the equilibrium between the two energy contributions: the adhesion energy (Ea) due to the contact, and the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 246–248, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
bending and stretching elastic energy (Ee) of the membrane due to deformation. The adhesion energy was taken to depend linearly on the contact surface area (Ea = γA). As the deformation is prescribed by the shape of the contact area, the elastic energy depends linearly on an elastic constant K. The total energy of the system is therefore computed as the difference between the elastic and the adhesion energy, normalized to the elastic constant K. The coefficient Γ describes the dependence of the adhesion energy on the dimensionless contact area. The mother and bud vesicles were taken to be spherical before adhesion, with radii of 1 μm and 0.2 μm, respectively. The volume of the bud vesicle was taken as fixed, and the thickness of the membrane was 5 nm.

III. RESULTS AND DISCUSSION

β2GPI caused coalescence of cardiolipin-containing as well as of POPC vesicles. Adhesion to the bottom of the observation chamber occurred simultaneously. Formation of sticky complexes was also observed in the sample containing both kinds of vesicles. This indicates that β2GPI mediates the interaction between charged-charged, charged-neutral and neutral-neutral pairs of membranes. When β2GPI was present in the solution, the bud (Fig. 1A-C) coalesced with the mother vesicle before it could detach from it (Fig. 1D-F). Fig. 2 presents the dependence of the ratio between the energy and the elastic constant on the dimensionless coalescence area for various adhesion coefficients (Γ). As the contact area increases, the elastic energy increases while the adhesion energy decreases. The equilibrium of the system is characterized by the minimum of the total energy. It is evident from Fig. 2 that the contact area in the equilibrium state depends on the adhesion coefficient: it is zero for no adhesion (Γ = 0) and large for strong adhesion (Γ = 14000 nm3).

Fig. 2: Dependence between the dimensionless contact area (A) and the total energy E normalized to the elastic constant (K). The equilibrium shapes of the mother vesicle for various contact areas are shown.

The calculated shapes of the equilibrium states correspond to the shapes observed in the experiment.

IV. CONCLUSIONS

The presented model provides a simplified analysis of the problem, neglecting the fluidity of the bilayer and reducing the problem to two dimensions. However, the good agreement between the experimental measurements and the model simulations shows that the phospholipid vesicle shapes during coalescence mediated by β2GPI may be explained as an interplay between the deformation and contact energies. To describe this phenomenon in more detail, the mathematical model should be developed further.
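The equilibrium condition discussed in the Results, the minimum of the total energy as a function of the contact area, can be illustrated with a short numerical sketch. The quadratic form assumed for the elastic energy and all numerical values here are illustrative placeholders, not the discrete model of the paper.

```python
def total_energy(A, gamma, k_elastic=1.0):
    """Dimensionless total energy E/K for contact area A.

    The elastic (deformation) energy grows with the contact area
    (assumed quadratic here purely for illustration), while the
    adhesion energy gain is linear in the area (Ea = Gamma * A).
    """
    return k_elastic * A ** 2 - gamma * A

def equilibrium_area(gamma, k_elastic=1.0, a_max=5.0, steps=5000):
    """Contact area minimizing the total energy on a discrete grid."""
    areas = [a_max * i / steps for i in range(steps + 1)]
    return min(areas, key=lambda A: total_energy(A, gamma, k_elastic))

# No adhesion (Gamma = 0): the equilibrium contact area is zero.
print(equilibrium_area(0.0))  # 0.0
# Stronger adhesion shifts the energy minimum to a larger contact area.
print(equilibrium_area(2.0) < equilibrium_area(4.0))  # True
```

The qualitative behavior matches the trend reported for Fig. 2: zero contact area without adhesion, and a growing equilibrium contact area as Γ increases.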
Fig. 1: The effect of β2GPI dissolved in phosphate buffer saline (PBS) on a budding vesicle (A-F). The bud (marked by a white arrow) coalesced with the mother vesicle and remained attached to it. Bar denotes 10 μm.

ACKNOWLEDGMENT

This research was supported by the Ministry of Education of the Czech Republic projects MSM 6840770012 and Czech-Slovenian bilateral project No. 9-06-13.
REFERENCES

1. Bevers EM, Janssen MO, Comfurius P, Balasubramanian K, Schroit AJ, Zwaal RF, Willems GM. (2005) Quantitative determination of the binding of β2-glycoprotein I and prothrombin to phosphatidylserine-exposing blood platelets. Biochem J. 386:271-279.
2. Nimpf J, Bevers EM, Bomans PH, Till U, Wurm H, Kostner GM, Zwaal RF. (1986) Prothrombinase activity of human platelets is inhibited by beta 2-glycoprotein-I. Biochim Biophys Acta. 884:142-149.
3. Wurm H. (1984) β2-glycoprotein I (apolipoprotein H) interactions with phospholipid vesicles. Int J Biochem. 16:511-515.
4. Chonn A, Semple SC, Cullis PR. (1995) β2-glycoprotein I is a major protein associated with very rapidly cleared liposomes in vivo, suggesting significant role in the immune clearance of non-self particles. J Biol Chem. 270:25845-25849.
5. Schousboe I. (1980) Binding of beta 2-glycoprotein I to platelets: effect of adenylate cyclase activity. Thromb Res. 19:225-237.
6. Price BE, Rauch J, Shia MA, Walsh MT, Lieberthal W, Gilligan HM, O'Laughlin T, Koh JS, Levine JS. (1996) Antiphospholipid autoantibodies bind to apoptotic, but not viable, thymocytes in a beta 2-glycoprotein I-dependent manner. J Immunol. 157:2201-2208.
7. Polz E, Kostner GM. (1979) The binding of beta 2-glycoprotein-I to human serum lipoproteins: distribution among density fractions. FEBS Lett. 102:183-186.
8. Kobayashi K, Kishi M, Atsumi T, Bertolaccini ML, Makino H, Sakairi N, Yamamoto I, Yasuda T, Khamashta MA, Hughes GR, Koike T, Voelker DR, Matsuura E. (2003) Circulating oxidized LDL forms complexes with β2-glycoprotein I: implication as an atherogenic autoantigen. J Lipid Res. 44:716-726.
9. Balasubramanian K, Chandra J, Schroit AJ. (1997) Immune clearance of phosphatidylserine-expressing cells by phagocytes. The role of beta2-glycoprotein I in macrophage recognition. J Biol Chem. 272:31113-31117.
10. Moestrup SK, Schousboe I, Jacobsen C, Leheste JR, Christensen EI, Willnow TE. (1998) β2-glycoprotein-I (apolipoprotein H) and β2-glycoprotein-I-phospholipid complex harbor a recognition site for the endocytic receptor megalin. J Clin Invest. 102:902-909.
11. Thiagarajan P, Le A, Benedict CR. (1999) β2-glycoprotein I promotes the binding of anionic phospholipid vesicles by macrophages. Arterioscler Thromb Vasc Biol. 19:2807-2811.
12. McNally T, Mackie IJ, Isenberg DA, Machin SJ. (1993) Immunoelectrophoresis and ELISA techniques for assay of plasma beta 2 glycoprotein-I and the influence of plasma lipids. Thromb Res. 72:275-286.
13. Angelova MI, Soléau S, Méléard P, Faucon JF, Bothorel P. (1992) Preparation of giant vesicles by external AC electric fields. Kinetics and applications. Progr Colloid Polym Sci. 89:127-131.
14. Tomšič N, Babnik B, Lombardo D, Mavčič B, Kandušer M, Iglič A, Kralj-Iglič V. (2005) Shape and size of giant unilamellar phospholipid vesicles containing cardiolipin. J Chem Inf Model. 45:1676-1679.

Author: Jasna Urbanija
Institute: Laboratory of Clinical Biophysics, Faculty of Medicine, University of Ljubljana
Street: Lipičeva 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Expression of Smooth Muscle Cells Grown on Magnesium Alloys

S.K. Lu1, W.H. Lee1, T.Y. Tian2, C.H. Chen2, H.I. Yeh2

1 Institute of Mechatronic Engineering, National Taipei University of Technology, Taipei, Taiwan
2 Departments of Internal Medicine and Medical Research, Mackay Memorial Hospital, Taipei, Taiwan
Abstract— The present study compared the behavior of smooth muscle cells grown on various magnesium alloy materials. Human smooth muscle cells (HSMC) were seeded (800 cells/mm2) onto various magnesium alloy sheets, including Mg-Al-Zn alloys (AZ31, AZ91) and an Mg-Al-Mn alloy (AM60), and were cultured in medium with SMGS (Smooth Muscle Growth Supplement). Cells seeded onto gelatin-coated tissue-culture-treated polystyrene dishes were used as controls. Forty-eight hours later, the cells were examined by immunofluorescence microscopy. Three markers were investigated: vimentin, desmin, and SMC α-actin. At 48 hours, the cellularity of all magnesium groups was significantly lower than that of the controls (p<0.05), and immunoconfocal microscopy showed that vimentin, desmin, and SMC α-actin proteins were all less abundant in the metal groups. These results suggest that down-regulation of vimentin, desmin, and SMC α-actin may be a common phenomenon in HSMC grown on magnesium alloys.

Keywords— magnesium alloys, human smooth muscle cells, vimentin, desmin, SMC α-Actin.
I. INTRODUCTION

Previous studies showed that endothelial cells play a critical role in neointima formation after vascular stenting, and the behavior of endothelial cells on material surfaces has therefore been investigated extensively. However, smooth muscle cells are also an important factor in stenting therapy. Laboratory investigation has demonstrated that endothelial cells are involved in the regulation of thrombosis and of the proliferation of the subjacent smooth muscle cells. In addition, complete endothelial coverage is associated with attenuation or even arrest of neointimal growth in the injured segment [1-3, 12, 13]. Nevertheless, in-stent restenosis remains a major drawback of percutaneous coronary intervention using stents, owing to dysfunction of endothelial cells or to proliferation of smooth muscle cells affected by the local environment. More attention therefore needs to be paid to smooth muscle cells in stenting therapy [4-7]. Recent studies have demonstrated that modification of stent materials affects the injury, regeneration, transmigration, and activity profiles of endothelial cells; comparable studies of smooth muscle cells, however, remain scarce. In this study, we therefore examined the growth profiles of HSMC cultured in SMGS medium on various magnesium alloy sheets and evaluated the related expression of vimentin, desmin, and SMC α-actin proteins [6,7,12,13].

II. MATERIALS AND METHODS

A. Metal sheets

Magnesium alloy metallic sheets, measuring 5 mm × 5 mm × 0.1 mm, were machined by wire electrical discharge machining (WEDM). The surface was polished first with 600-grit emery paper and then with 0.3 μm Al2O3 powder, and cleaned with ethanol [8-11].

B. Cell culture

HSMC were isolated as previously described. Cells of passage 4 were seeded (800 cells/mm2) onto the metallic sheets, which were placed at the center of a 35 mm tissue-culture-treated polystyrene dish filled with 8 ml of culture medium with SMGS (SMGS contains fetal bovine serum at 5% v/v final concentration, recombinant human basic fibroblast growth factor, recombinant human epidermal growth factor, and insulin). Forty-eight hours later, the cells were examined by immunofluorescence microscopy [14-16].

C. Immunocytochemistry

For immunolabeling, cells grown on the metallic sheets were fixed with methanol at -20 °C for 5 minutes. After blocking with 0.5% BSA, the cells were incubated with mouse anti-vimentin antibody (1:100), rabbit anti-caldesmon antibody (1:100), and mouse anti-SMC α-actin antibody (1:100) at 37 °C for 2 hours, followed by incubation with a CY3-conjugated anti-mouse antibody (1:500) and a CY3-conjugated anti-rabbit antibody (1:500) at room temperature in the dark for 50 minutes. The cells were then incubated with bisbenzamide (1:1000) in the dark for 15 minutes, mounted, and examined and recorded using a Leica DMBE epifluorescence microscope equipped with a digital camera [14-16].
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 242–245, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
D. Analysis

For comparison of cellularity, cells grown on the metallic sheets were stained with bisbenzamide to make the nuclei visible under the fluorescence microscope and were observed at ×160 magnification. Images of 5 randomly selected rectangular fields were acquired and analyzed using QWIN image analysis software (Leica, Heidelberg, Germany) to count the nuclei. For this purpose, 4 separate cell culture experiments were conducted for each type of material. Data, expressed as mean values (±SD), were compared statistically by t-test. A p value < 0.05 was considered significant [8-11, 14-16].

III. RESULTS

A. Cell density assays

On each magnesium alloy sheet, cells were able to attach and grow on the surface after seeding, but the cell density at 48 hours after seeding varied widely according to the alloy composition (Fig. 1). All magnesium groups had lower cellularity than the control (Fig. 2; all vs control, p<0.01). Among the magnesium alloys, AZ31 had the lowest cellularity (Fig. 2; AZ31 vs AZ91 and AM60, p<0.01). In general, for a seeding density of 800 cells/mm2, the cellularity on each magnesium alloy was less than 50% of the control group [14-16].

Fig. 2. Cellularity of HSMC, expressed as mean ± SD, averaged from at least 3 separate experiments and compared statistically by t-test.

B. Immunofluorescence microscopy

The findings of the immunolabeling examination are shown in Fig. 3 to Fig. 5. Vimentin, caldesmon, and α-actin were abundant in the control group but less expressed in the alloy groups (Fig. 3-5). The pattern of expression was similar among the alloy groups, with AZ31 showing the weakest signal in each immunolabeling examination. These results suggest that magnesium alloys affect the expression of HSMC phenotypes and that the component ratio may play a key role in regulating the expression levels of HSMC phenotypes [14-16].

IV. DISCUSSION
Fig. 1. Blue signal is nucleus. Bar, 100μm.
This study demonstrates that the phenotypic features of smooth muscle cells, including growth profile and expression of proteins such as vimentin, desmin, and SMC α-actin, are altered when the cells are grown on magnesium alloy surfaces. In addition, the changes vary according to the alloy composition. All these findings have clinical implications.
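The group comparison used throughout this study (nucleus counts per field, summarized as mean ± SD and compared by a two-sample t-test at p < 0.05) can be sketched as follows. The counts below are invented placeholder data, and the pooled-variance Student's t-test is a plain illustration of the method named in the Analysis section, not the study's actual measurements.

```python
import math

def two_sample_t(a, b):
    """Pooled two-sample t statistic (equal-variance Student's t-test)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

# Hypothetical nucleus counts per field; illustrative placeholder data only.
control = [412, 398, 405, 420, 391]
az31 = [118, 102, 131, 109, 122]

t = two_sample_t(control, az31)
T_CRIT = 2.306  # two-tailed critical value for alpha = 0.05, df = 5 + 5 - 2 = 8
print(abs(t) > T_CRIT)  # True: the group difference is significant at p < 0.05
```

Comparing |t| against the tabulated critical value for the appropriate degrees of freedom reproduces the significance decision reported as p < 0.05.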
Fig. 3. Red signal is Vimentin and blue signal is nucleus. Bar, 100μm.
Fig. 5. Red signal is α-Actin and blue signal is nucleus. Bar, 100 μm.

In addition, the growth profile on the AZ91 alloy was higher than on the other alloys, although the control group showed the most abundant expression. These data suggest that the magnesium alloy composition affects the growth profile and protein expression of HSMC, which is likely related to the biodegradable nature of these alloys. How to optimize the ratio of alloy components is therefore the key point in regulating growth profile and protein expression [14-16].

V. CONCLUSIONS

In conclusion, HSMC grown on magnesium sheets vary broadly in growth profile and protein expression according to the magnesium alloy composition. Down-regulation of vimentin, desmin, and SMC α-actin is a common phenomenon in cells in such environments. These findings suggest a decreased growth profile of HSMC in arterial segments containing the magnesium alloys examined in the present study, which potentially contributes to the inhibition of proliferation and thrombosis after stent implantation [14-16].
Fig. 4. Red signal is Caldesmon and blue signal is nucleus. Bar, 100μm.
ACKNOWLEDGMENT

This work was supported by grants NSC-94-2622-E-027047 from the National Science Council, Taiwan and MMHE-95003 from the Medical Research Department of the Mackay Memorial Hospital, Taiwan.
REFERENCES

1. Heublein B, Rohde R, Kaese V, Niemeyer M, Hartung W, Haverich A (2003) Biocorrosion of magnesium alloys: a new principle in cardiovascular implant technology? Heart 89:651-656
2. Carlo DM, Huw G, Omer G, Nicolas P, Jan V, Marc B, Koen D, Bernhard H, Roland R, Victor K, Charles I, Raimund E (2004) Drug-eluting bioabsorbable magnesium stent. J Interven Cardiol 17:391-395
3. Witte F, Kaese V, Haferkamp H, Switzer E, Meyer-Lindenberg A, Wirth CJ, Windhagen H (2005) In vivo corrosion of four magnesium alloys and the associated bone response. Biomaterials 26:3557-3563
4. Maier JA, Bernardini D, Rayssiguier Y, Mazur A (2004) High concentrations of magnesium modulate vascular endothelial cell behaviour in vitro. Biochim Biophys Acta pp 6-12
5. Maier JA, Malpuech-Brugere C, Zimowska W, Rayssiguier Y, Mazur A (2004) Low magnesium promotes endothelial cell dysfunction: implications for atherosclerosis, inflammation and thrombosis. Biochim Biophys Acta pp 13-21
6. Witte F, Kaese V, Haferkamp H, Switzer E, Meyer-Lindenberg A, Wirth CJ, Windhagen H (2005) In vivo corrosion of four magnesium alloys and the associated bone response. Biomaterials 26:3557-3563
7. Michael B, Esthie R, Iris B, Arie R, Gad K, Jacob G (2004) Zinc reduces intimal hyperplasia in the rat carotid injury model. Atherosclerosis 175:229-234
8. Yeh HI, Lu SK, Tian TY, Hong RC, Lee WH, Tsai CH (2006) Comparison of endothelial cells grown on different stent materials. J Biomed Mater Res 76A:835-841
9. Lu SK, Yeh HI, Huang CW, Tian TY, Chen CY, Lee WH (2005) Phenotypic change of HUVEC grown on stent material coated with Titanium (Ti) and Tantalum (Ta) compounds. ICBME Int. Conf. 4A3-10, Singapore
10. Yeh HI, Lu SK, Tian TY, Chiang CL, Lee WH, Tsai CH (2004) Down-regulation of connexin43 gap junctions, eNOS, and vWF in endothelial cells grown on stent materials. Int J Cardiol S29, Taiwan
11. Tian TY, Lu SK, Yeh HI, Chiang CL, Lee WH, Tsai CH (2003) Phenotypic change of endothelial cells grown on stent materials of various properties. Acta Cardiol Sin 19:S200, Taiwan
12. Mark PS, Alexis MP, Jerawala H, George D (2006) Magnesium and its alloys as orthopaedic biomaterials: a review. Biomaterials 27:1728-1734
13. Frank W, Jens F, Jens N, Horst AC, Volker K, Alexander P, Felix B, Henning W (2006) In vitro and in vivo corrosion measurements of magnesium alloys. Biomaterials 27:1013-1018
14. Lu SK, Yeh HI, Tian TY, Lee WH (2006) Degradation of magnesium alloys in biological solutions and reduced phenotypic expression of endothelial cells grown on these alloys. BioMed2006, C1-08-023, Malaysia
15. Lu SK, Lee WH, Tian TY, Yeh HI (2006) Down-regulated endothelial nitric oxide synthase in endothelial cells grown on biodegradable materials. 15th ICMMB, A1.1, Singapore
16. Lu SK, Lee WH, Tian TY, Yeh HI (2006) Comparison of endothelial cells grown on various biodegradable magnesium alloys. ISBME, A1.1, Thailand

Address of the corresponding author:
Author: Shao Kuo Lu
Institute: Institute of Mechatronic Engineering, National Taipei University of Technology
Street: 1, Section 3, Chunghsiao East Road
City: Taipei
Country: Taiwan (R.O.C.)
Email: [email protected]
Mesenchymal Stem Cells: a Modern Approach to Treat Long Bones Defects

H. Krečič-Stres1, M. Krkovič2, J. Koder3, E. Maličev4, M. Drobnič5, D. Marolt4,6 and N. Kregar-Velikonja1

1 Educell d.o.o., Ljubljana, Slovenia
2 University Medical Centre Ljubljana, Department for Traumatology, Ljubljana, Slovenia
3 University Medical Centre Ljubljana, Clinical Institute for Radiology, Ljubljana, Slovenia
4 Blood Transfusion Centre of Slovenia, Ljubljana, Slovenia
5 University Medical Centre Ljubljana, Department for Orthopaedic Surgery, Ljubljana, Slovenia
6 Columbia University, Department of Biomedical Engineering, New York, USA
Abstract – Human bone marrow contains a population of bone marrow stromal cells (BMSC) capable of forming several types of mesenchymal tissues, including bone and cartilage. BMSC can be isolated, purified and expanded in cell culture in order to be subsequently implanted in vivo to facilitate bone healing. Our study was designed a) to develop autologous bone tissue constructs ex vivo by seeding BMSC-derived osteoblasts on calcium-triphosphate scaffolds, b) to apply these constructs in patients with a defect of a long bone, and c) to evaluate the healing process. Twenty patients are planned to be involved in the present clinical trial, in which the efficiency of pseudoarthrosis treatment by tissue-engineered bone grafts will be evaluated. One patient has been treated according to the study protocol, and the first results are encouraging.

Keywords – mesenchymal stem cells, BMSC, bone marrow, bone healing, long bone defects, tissue engineering
I. INTRODUCTION

Engineering of human substitute tissues and organs in vitro could potentially provide a safe and unlimited source of viable grafts for clinical use. Bone marrow stroma contains a population of cells (BMSC), also termed mesenchymal stem cells (MSC) or multipotent adult progenitor cells (MAPC), which are progenitors of skeletal tissue components such as bone, cartilage, muscle, the hematopoiesis-supporting stroma, and adipocytes [1,2]. Pluripotent mesenchymal stem cells are present in many adult tissues; however, they are most abundant in bone marrow [1] and fat [3]. In vitro, BMSC are rapidly adherent, clonogenic, and capable of extended proliferation [4]. The multipotent character of the cell population is usually confirmed by differentiation under adipogenic, osteogenic and chondrogenic culture conditions [1]. The development of methods for the isolation, expansion and controlled differentiation of BMSC makes it possible to use these cells as an integral component of various clinical applications of tissue engineering.
Large defects of long bones are usually treated either by the callus distraction method or by abundant spongioplasty using auto- or homologous bone grafts [5]. Bridging a large bone defect by callus distraction requires time and usually an external fixator, both very inconvenient for the patient. In the case of autologous spongioplasty, the morbidity at the donor site is sometimes even higher than at the recipient site. Homologous bone transplantation still presents some minor risk of disease transmission, and pain and the risk of other complications are also considerable drawbacks [5]. Regardless of the technique used, a very compliant patient is needed, and the percentage of failure is still considerable. Therefore, the development of alternative methods for improved treatment of long bone defects is desirable. In the presented clinical trial we intend to evaluate the efficiency of the treatment of long bone defects using a modern tissue-engineering approach: implantation of autologous tissue-engineered bone grafts, prepared from the patient's BMSC-derived osteoblasts seeded on a calcium-triphosphate scaffold.

II. MATERIALS AND METHODS

A. Cell isolation and expansion

The clinical trial "The role of human mesenchymal stem cells for the treatment of long bones defects" was approved by the Ethical Committee of the Republic of Slovenia (No. 49/09/06). The indications for including a patient in this clinical trial are: an osseal defect of a long bone, the absence of clinical and laboratory signs of infection at the grafting site, the absence of soft tissue defects over the grafting site, the patient's compliance and a written informed consent of the patient. BMSC were isolated and expanded according to Pittenger and co-workers [1] with the following modifications. Briefly, a bone marrow aspirate (50 mL) was obtained from the posterior iliac crest of the patient and diluted 4-fold in D-PBS (Invitrogen, USA). Afterwards, it was layered on
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 253–256, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
the top of a density gradient solution, Histopaque®-1077 (Sigma, St. Louis, USA), and centrifuged for 15 min at 935 g. Cells from the Histopaque®-1077/plasma interface were removed, washed twice with D-MEM/F-12 (Invitrogen, USA) and resuspended in complete culture medium: Advanced D-MEM/F-12 (Invitrogen, USA), 6 % autologous serum, 4 mM GlutaMAX™ (Invitrogen, USA), 2 μg/mL Fungizone (Invitrogen, GB) and 50 μg/mL Gentamicin (Invitrogen, GB). Following standard cell counting, cells were plated in tissue culture flasks at a density of ~2 × 10⁴ cells/cm² and incubated in 5 % CO2 at 37 ºC, with fresh medium changes every 4-5 days. Non-adherent cells were removed with subsequent medium changes, and the adherent cells (BMSC) were further cultured. Primary cultures were maintained for ~2-3 weeks up to 90 % confluence, trypsinized and subcultured at a density of ~3 × 10³ cells/cm².

B. Cell differentiation

For osteogenic differentiation, isolated culture-expanded BMSC were incubated in Advanced D-MEM/F-12 (Invitrogen, USA) supplemented with dexamethasone (100 nM) (Sigma, St. Louis, USA), 2-phospho-L-ascorbic acid (50 μM) (Sigma, St. Louis, USA) and β-glycerophosphate (10 mM) (Sigma, St. Louis, USA) [6]. Osteogenic differentiation was carried out in confluent monolayer cultures of the second passage and detected by von Kossa staining [7]. Cell cultures (1 out of 16 tissue culture flasks) were treated with buffered formalin for 30 min and washed with dH2O. Silver nitrate solution (2 %) was added to the samples, which were incubated in the dark for 10 minutes. The samples were then vigorously washed with dH2O and exposed to light for 15 minutes.

C. Preparation of tissue-engineered bone grafts

BMSC-derived osteogenic cells were trypsinized and counted. Porous calcium-triphosphate granules were used as a scaffold; they were pretreated in Advanced D-MEM/F-12 (Invitrogen, USA) under vacuum for 20 min to soak them completely and remove air bubbles from the granules.
The granules were loaded into 2 plastic molds (two of the six wells of a 6-well plate; TPP, Switzerland) (Fig. 2), and approximately 17 × 10⁶ cells were seeded directly onto the granules. Another 17 × 10⁶ cells were mixed with 1.5 ml of Advanced D-MEM/F-12 and 1 ml of fibrinogen (Beriplast, Aventis Behring), and the mixture was applied onto the granules. The granules were "glued" by inducing fibrin clot
formation with the addition of 3 ml of a thrombin (Beriplast, Aventis Behring) and Advanced D-MEM/F-12 mixture (v:v = 1:9). The engineered bone grafts were 7 mL each. Cell viability was checked by MTT staining [8]: a small portion of the seeded scaffold was removed from the bone tissue graft and incubated in MTT solution, and the presence of viable cells attached to the scaffold was evaluated by the development of a violet color.

D. Bone tissue graft application

The surgical procedure used to apply the bone tissue graft was the same as in conventional spongioplasty. A well-vascularised bed for the graft must be present, and all non-vascularised (dead) bone is removed. Proper necrectomy, with removal of all scar tissue in the region of the future graft, is mandatory. If the bone defect after necrectomy is larger than previously assessed from the radiographs, the tissue-engineered bone graft may be combined with a cancellous bone graft.

III. RESULTS

A. BMSC isolation, expansion, osteogenic differentiation and preparation of tissue-engineered bone grafts

BMSC from the patient's bone marrow were successfully isolated, cultured and expanded to a total number of ~35 × 10⁶ cells (~1.5 × 10⁶ cells/mL of bone tissue construct) in two subsequent passages. The osteogenic potential of the BMSC was confirmed in vitro by cultivation of second-passage cells in monolayer cultures under osteogenesis-inducing conditions; dark-stained granules (according to von Kossa) indicated mineral deposition in the cell culture (Fig. 1). Two bone tissue grafts, 7 mL each, were successfully prepared for implantation (Fig. 2). The viability of BMSC in the grafts was detected by MTT staining: live BMSC attached to the surfaces of the granules stained violet (data not shown).
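The v:v = 1:9 thrombin/medium mixture used for gluing the granules can be checked with a small volume-split calculation. The helper function below is a hypothetical illustration of the arithmetic, using only the 3 ml total volume and the 1:9 ratio quoted in the protocol.

```python
def mixture_volumes(total_ml, ratio_a, ratio_b):
    """Split a total volume according to a v:v ratio (a:b)."""
    s = ratio_a + ratio_b
    return ratio_a * total_ml / s, ratio_b * total_ml / s

# 3 ml of thrombin / Advanced D-MEM/F-12 mixture at v:v = 1:9
thrombin_ml, medium_ml = mixture_volumes(3.0, 1, 9)
print(thrombin_ml, medium_ml)  # 0.3 2.7
```

That is, 3 ml of a 1:9 mixture corresponds to 0.3 ml of thrombin made up with 2.7 ml of medium.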
Fig. 1. Cell cultures of the second passage of bone marrow derived mesenchymal stem cells, stained according to von Kossa: A) 14 days after induction of osteogenic differentiation; B) in basal medium (cell nuclei stained with Giemsa).
Fig. 2. Bone tissue grafts; BMSC-derived osteogenic cells, loaded onto the triphosphate granules and glued by fibrin glue.
B. Application of tissue-engineered bone grafts

To date, one pilot patient (a 55-year-old male) with a large bone defect has been treated according to the presented cell-based tissue-engineering approach. The patient had sustained bilateral comminuted femoral fractures in a car accident two years previously. After admission to the emergency room he was operated on, and both femurs were stabilized with intramedullary nails. The fracture of the left femur healed within the normal healing time; however, the fracture of the right femur did not heal at all, and there were clinical and radiological signs of pseudoarthrosis. After removal of the intramedullary nail, the intramedullary canal was reamed, followed by insertion of a new intramedullary nail. Nail locking was followed by a separate skin incision, through which proper necrectomy of bone and removal of fibrous tissue were performed, creating a well-vascularised bed for the graft. Since the defect after bone necrectomy was larger than had been assessed before the surgery, an additional autologous cancellous bone graft had to be harvested from the right iliac crest. The ratio between the cancellous bone graft and the tissue-engineered bone graft (BMSC-derived osteoblasts seeded on the scaffold) was approximately 1:2. Both grafts were mixed together and implanted into the midshaft femoral defect. The graft was covered by the vastus lateralis muscle, which was fixed on the intermuscular
septum, followed by drainage and suturing of the fascia lata, subcutis and skin. The surgical wound healed without any complications, and up to the present time (2 months post-operatively) there has been no sign of bone instability or of loosening of the intramedullary nail or the locking screws. In Figure 3, antero-posterior and lateral views of the defect of the femoral diaphysis after the surgical procedure are shown. As early as one month after surgery (data not shown), smoothening of the edges of the scaffold was seen. Two months after surgery the healing process continued: additional smoothening of the edges of the scaffold, with some callus formation, was seen (Fig. 3B).

IV. DISCUSSION

Despite the high regeneration capacity of bone tissue, the surgical procedures usually used in reparative osteogenesis do not necessarily result in structural and functional recovery (even after treatment with osteoplastic and osteoinductive materials). This state is associated with disintegration or insufficiency of cambial cells in the bone tissue and is designated as osteogenic deficiency [8]. BMSC can be isolated, culture-expanded and differentiated into specialized cells of skeletal tissues such as bone, cartilage and muscle [1,2]. By seeding onto an appropriate biomaterial scaffold and implantation, BMSC can be used to improve bone tissue regeneration in the case of defects due to disease or trauma [9]. Based on encouraging reports of previous clinical studies using bone marrow stromal cells loaded on scaffolds to a) repair large bone defects [8,10,11,12] and b) induce new bone in the upper jaw [13], a clinical trial was designed to evaluate the use of BMSC-derived osteoblasts seeded on a calcium triphosphate scaffold for the treatment of long bone defects in twenty patients. The novelty of our approach is the predifferentiation of progenitor cells into bone-matrix-forming osteoblasts as part of the tissue-engineered graft preparation process.
Preliminary results with one patient (successful cell culture, cell differentiation and graft preparation, subsequent complication-free graft implantation, and indications of successful bone healing) show promise that our approach will substantially improve our ability to repair large defects in long bones.

V. CONCLUSIONS
Fig. 3. Antero-posterior (AP) and lateral (lat) views of the defect of the femoral diaphysis A) 10 days and B) 2 months after the surgical procedure.
H. Krečič-Stres, M. Krkovič, J. Koder, E. Maličev, M. Drobnič, D. Marolt, and N. Kregar-Velikonja

Huge advances have been made in our understanding of the versatility of mesenchymal stem cells and their intrinsic capacity to differentiate into multiple cell lineages. Demands in the field of tissue engineering have intensified the effort to understand the potential therapeutic value of mesenchymal stem cells. The aim of our current clinical trial is to evaluate the effectiveness of an autologous tissue-engineered bone product (BMSC-derived osteoblasts seeded on a calcium triphosphate scaffold), aimed at an improved and less invasive treatment of long bone defects that shortens the healing period and increases the patient's convenience.
ACKNOWLEDGMENTS

We thank Danica Gantar and Nadja Lorber for excellent technical support. The project was supported by the Ministry of Defense, Republic of Slovenia, grant number TP MIR 31 (contract No. TP MIR 06/RR/12), and by the Ministry of Higher Education, Science and Technology, Republic of Slovenia (L4-6325-0311-04/4.06, 3311-01-831/476).

REFERENCES

1. Pittenger MF, Mackay AM, Beck SC, Jaiswal RK, Douglas R, Mosca JD et al. (1999) Multilineage potential of mesenchymal stem cells. Science 284:143-147
2. Flanagan N (2001) Advances in stem cell therapy. Genetic Engineering News 21(9):1,29,61,66
3. Zuk PA, Zhu M, Mizuno H, Huang J, Futrell JW, Katz AJ, Benhaim P, Lorenz HP, Hedrick MH (2001) Multilineage cells from human adipose tissue: implications for cell-based therapies. Tissue Eng 7(2):211-228
4. Bianco P, Riminucci M, Gronthos S, Robey PG (2001) Bone marrow stromal stem cells: nature, biology, and potential applications. Stem Cells 19(3):180-192
5. Heller L, Scot LL (2001) Bone and soft tissue reconstruction. In: Bucholz RW, Heckman JD (eds) Fractures in Adults. Lippincott Williams & Wilkins, New York
6. Jaiswal N, Haynesworth SE, Caplan AI, Bruder SP (1997) Osteogenic differentiation of purified, culture-expanded human MSC in vitro. J Cell Biochem 64:295-312
7. Meinel L, Hofmann S, Karageorgiou V, Zichner L, Langer R, Kaplan D et al. (2004) Engineering cartilage-like tissue using human mesenchymal stem cells and silk scaffolds. Biotech Bioeng 88:688-698
8. Martin I, Muraglia A, Campanile G, Cancedda R, Quarto R (1997) Fibroblast growth factor-2 supports ex vivo expansion and maintenance of osteogenic precursors from human bone marrow. Endocrinology 138(10):4456-4462
9. Fatkhudinov TK, Gol'dshtein DV, Pulin AA, Shamenkov DA, Rzhaninova AA, Gornostaeva SA, Grigor'yan AS, Kulakov AA (2005) Reparative osteogenesis during transplantation of mesenchymal stem cells. Bull Exp Biol Med 140(1):96-99
10. Carter DR, Beaupre GS, Giori NJ, Helms JA (1998) Mechanobiology of skeletal regeneration. Clin Orthop Relat Res S41-55
11. Quarto R, Mastrogiacomo M, Cancedda R, Kutepov SM, Mukhachev V et al. (2001) Repair of large bone defects with the use of autologous bone marrow stromal cells. N Engl J Med 344(5):385-386
12. Kadiyala S, Jaiswal N, Bruder SP (1997) Culture-expanded, bone marrow-derived mesenchymal stem cells can regenerate a critical-sized segmental bone defect. Tissue Eng 3(2):173-185
13. Orozco L, Rodriguez L, Torrico C, Douville J, Hock JM, Armstrong RD et al. (2005) Clinical feasibility study: The use of cultured enriched autologous bone marrow cells to treat refractory atrophic and hypotrophic nonunion fractures. Available at http://scholar.google.com/scholar?hl=en&lr=&q=cache:ss358GFA5XcJ:www.aastrom.com/pdf/Whitepaper_Barcelona051205.pdf+Orozco+clinical+feasibility
14. Hernandez Alfaro (2005) Clinical feasibility study: The use of cultured autologous bone marrow-derived tissue repair cells (TRC) for maxillary sinus floor augmentation in edentulous humans. Available at http://www.aastrom.com/corporate/bone.cfm?pagesect=SinusLift

Author: Hana Krečič Stres
Institute: Educell d.o.o.
Street: Letališka 33
City: Ljubljana
Country: Slovenia
Email: [email protected]
Surface modification of titanium fiber-mesh scaffolds through a culture of human SAOS-2 osteoblasts electromagnetically stimulated

L. Fassina¹,⁵, L. Visai²,⁵, E. Saino²,⁵, M.G. Cusella De Angelis³,⁵, F. Benazzo⁴,⁵ and G. Magenes¹,⁵

¹ Dipartimento di Informatica e Sistemistica, University of Pavia, Pavia, Italy
² Dipartimento di Biochimica, University of Pavia, Pavia, Italy
³ Dipartimento di Medicina Sperimentale, University of Pavia, Pavia, Italy
⁴ Dipartimento SMEC, IRCCS San Matteo, University of Pavia, Pavia, Italy
⁵ Centro di Ingegneria Tissutale (C.I.T.), University of Pavia, Pavia, Italy
Abstract— The surface properties of a biomaterial are fundamental in determining the response of the host tissue. In the present study we followed a particular biomimetic strategy in which electromagnetically stimulated SAOS-2 human osteoblasts proliferated and built their extracellular matrix on a titanium fiber-mesh surface. In comparison with control conditions, the electromagnetic stimulation (magnetic field intensity, 2 mT; frequency, 75 Hz) caused higher cell proliferation and increased surface coating with type-I collagen and decorin (9.8-fold and 11.3-fold, respectively). Immunofluorescence of type-I collagen and decorin showed their colocalization in the cell-rich areas. The use of an electromagnetic bioreactor aimed at modifying the surface of the biocompatible metallic scaffold in terms of cell colonization and coating with extracellular matrix. The superficially modified biomaterial could be used, in clinical applications, as an implant for bone repair.

Keywords— Electromagnetic stimulation, osteoblast, extracellular matrix, surface modification, biomimetics.
I. INTRODUCTION

The properties of a biomedical surface determine the biological response of the host tissue. Castner and Ratner have reviewed the concept of "biocompatibility" and its experimental realization in the fields of biomaterials and surface science; they regard the "biocompatible surfaces" of "biomaterials that heal" as surfaces with the characteristics of a "clean, fresh wound" [1]. Numerous strategies for obtaining biocompatible surfaces have been studied: for instance, non-specific protein adsorption has been inhibited [2] and biomimetic strategies have been developed [3]. In the present work on bone tissue engineering we show a biomimetic strategy that consists of the surface modification of a titanium fiber-mesh with proliferated bone cells and their extracellular matrix produced in loco: during the culture period we applied an electromagnetic wave, because osteoblastic cell function can be electromagnetically modulated in terms of proliferation and differentiation [4].
Bone tissue engineering aims at healing critical-size long bone defects and maxillofacial skeleton defects, for instance by seeding and culturing cells in vitro within a porous biomaterial before implantation. Although biodegradability is a common requirement for scaffolds, this work is aimed at the alternative approach of biointegration, i.e. the in vivo integration of a biostable scaffold [5]. Following the path of scaffold biointegration, we have selected the titanium fiber-mesh scaffold. Titanium is a metal widely used in hip and knee replacements owing to its biocompatibility, particularly with bone. Titanium fiber-mesh has been shown to be a suitable biomaterial for the culture of marrow stromal cells in an effort to create constructs for bone replacement [6]. In vitro, titanium fiber-mesh acts as a scaffold for the adhesion and osteoblastic differentiation of progenitor cells [7]. In vivo, the material reveals itself to be osteoconductive [6]. In addition, coating titanium with calcium phosphates and extracellular matrix proteins enhances bone formation, suggesting that the surface modification of titanium could play an important role in bone tissue engineering [8, 9]. Therefore, we have developed another type of in vitro surface modification that could benefit in vivo bone formation: our aim is to enhance a bone cell culture over titanium fiber-mesh scaffolds, using an electromagnetic bioreactor to coat their surface with cells and bone matrix [4].

II. MATERIALS AND METHODS

Titanium fiber-mesh scaffolds: Titanium fiber-mesh sheets were harvested from Harris-Galante Porous acetabular components (Zimmer). The mesh was composed of sintered non-woven titanium fibers (fiber diameter, 440 μm ± 10 μm; scaffold density, 2.7 ± 0.1 g/cm³; scaffold porosity, 40% ± 3%) (Fig. 1). Cell culture scaffolds (diameter, 12 mm; height, 0.8 mm) were cut from the mesh with a die.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 238–241, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Unseeded titanium fiber-mesh scaffold, 20×

Cells: The human osteosarcoma cell line SAOS-2 was obtained from the American Type Culture Collection (HTB-85, ATCC). The cells were cultured in McCoy's 5A modified medium with L-glutamine and HEPES (Cambrex Bio Science), supplemented with 15% fetal bovine serum, 2% sodium pyruvate, 1% antibiotics, 10⁻⁸ M dexamethasone, and 10 mM β-glycerophosphate (Sigma-Aldrich). Ascorbic acid, another osteogenic supplement, is a component of McCoy's 5A modified medium. The cells were cultured at 37°C with 5% CO₂, routinely trypsinized after confluency, counted, and seeded onto the scaffolds.

Cell seeding: The scaffolds were sterilized by ethylene oxide at 38°C for 8 h at 65% relative humidity. After 24 h of aeration to remove residual ethylene oxide, the scaffolds were placed inside the two culture systems: the "static" culture system, i.e. a standard well-plate far from the electromagnetic bioreactor, and the "dynamic" or "electromagnetic" culture system, i.e. a standard well-plate inside the electromagnetic bioreactor. A cell suspension of 4×10⁵ cells in 100 μl was added onto the top of each scaffold and, after 0.5 h, 1 ml of culture medium was added to cover the scaffolds. Cells were allowed to attach overnight; then the static culture continued in the standard well-plate far from the electromagnetic bioreactor, and the electromagnetic bioreactor was turned on.

Electromagnetic bioreactor: The electromagnetic bioreactor consisted of a carrying structure custom-machined from a tube of polymethylmethacrylate: the windowed tube carried a well-plate and two solenoids whose planes were parallel (Fig. 2). The surfaces of the scaffolds were 5 cm from each solenoid plane, and the solenoids were powered by a Biostim SPT pulse generator (Igea), a generator of Pulsed Electromagnetic Fields (PEMFs).
Given the position of the solenoids and the characteristics of the pulse generator, the electromagnetic stimulation had the following parameters: magnetic field intensity of 2 ± 0.2 mT, amplitude of the induced voltage of 5 ± 1 mV, signal frequency of 75 ± 2 Hz, and pulse duration of about 1.3 ms.
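The stimulation parameters above imply a low duty cycle. A minimal sketch in Python, assuming an idealized rectangular pulse train for illustration (the actual Biostim SPT waveform shape is not specified here):

```python
# Idealized timing of the PEMF stimulation described above:
# 75 Hz repetition rate, ~1.3 ms pulse, 2 mT peak field.
# The rectangular pulse shape is an illustrative assumption.

FREQ_HZ = 75.0      # signal frequency
PULSE_S = 1.3e-3    # pulse duration, seconds
B_PEAK_T = 2e-3     # peak magnetic field, tesla

period_s = 1.0 / FREQ_HZ        # one stimulation period (~13.3 ms)
duty = PULSE_S / period_s       # fraction of each period the field is on

def b_field(t_s):
    """Magnetic field at time t_s for the idealized rectangular pulse train."""
    return B_PEAK_T if (t_s % period_s) < PULSE_S else 0.0

print(round(duty, 4))  # 0.0975, i.e. the field is on under 10% of the time
```

Under this assumption the cells see the field for roughly 1.3 ms out of every 13.3 ms, continuously for 22 days.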
Fig. 2 Electromagnetic bioreactor

The electromagnetic bioreactor was placed into a standard cell culture incubator with an environment of 37°C and 5% CO₂. The dynamic culture was stimulated by the PEMF 24 h per day for a total of 22 days. The culture medium was changed on days 4, 7, 10, 13, 16, and 19.

Standard well-plate culture: The static culture was placed into a different incubator, where the PEMF stimulation was not detectable. The culture medium was changed on days 4, 7, 10, 13, 16, and 19.

Scanning electron microscopy (SEM) analysis: Scaffolds were fixed with 2.5% (v/v) glutaraldehyde solution in 0.1 M Na-cacodylate buffer (pH 7.2) for 1 h at 4°C, washed with Na-cacodylate buffer, and then dehydrated at room temperature in a graded ethanol series up to 100%. The samples were kept in 100% ethanol for 15 min and then critical-point dried with CO₂. The specimens were sputter-coated with gold and observed at 100× magnification with a Leica Cambridge Stereoscan 440 microscope at 8 kV.

DNA content: Cells were lysed by a freeze-thaw method in sterile deionized distilled water. The released DNA content was evaluated with a fluorometric DNA quantification kit (PicoGreen, Molecular Probes). A DNA standard curve, obtained from a known amount of osteoblasts, was used to express the results as cell number per scaffold.

Set of rabbit polyclonal antisera: Dr. Larry W. Fisher (http://csdb.nidcr.nih.gov/csdb/antisera.htm, National Institutes of Health, Bethesda, MD) provided the rabbit polyclonal IgG antibodies against type-I collagen and decorin.

Purified proteins: Decorin [10] and type-I collagen [11].

Indirect immunofluorescence staining: At the end of the culture period, the scaffolds were fixed with 4% (w/v) paraformaldehyde solution in 0.1 M phosphate buffer (pH 7.4) for 8 h at room temperature and washed with PBS (137 mM NaCl, 2.7 mM KCl, 4.3 mM Na₂HPO₄, 1.4 mM KH₂PO₄, pH 7.4) three times for 15 min.
The scaffolds were then blocked by incubating with PAT (PBS containing 1% [w/v]
bovine serum albumin and 0.02% [v/v] Tween 20) for 2 h at room temperature and washed. Anti-decorin and anti-type-I collagen rabbit polyclonal antisera were used as primary antibodies at a dilution of 1:1000 in PAT. The incubation with the primary antibodies was performed overnight at 4°C, whereas the negative controls were incubated overnight at 4°C with PAT instead of the primary antibodies. The scaffolds and the negative controls were washed and incubated with Alexa Fluor 488 goat anti-rabbit IgG (H+L) (Molecular Probes) at a dilution of 1:500 in PAT for 1 h at room temperature. At the end of the incubation, the scaffolds were washed in PBS, counterstained with a solution of propidium iodide (2 μg/ml) to label the cellular nuclei, and then washed. The images were taken by blue excitation (bandpass, 450-480 nm; dichromatic mirror, DM500; barrier filter, BA515) with a fluorescence microscope at 40× magnification. The fluorescence background of the negative controls was almost negligible.

Extraction of the extracellular matrix proteins from the cultured scaffolds and ELISA assay: At the end of the culture period, the cultured scaffolds were washed extensively with sterile PBS (137 mM NaCl, 2.7 mM KCl, 4.3 mM Na₂HPO₄, 1.4 mM KH₂PO₄, pH 7.4) to remove the culture medium, and then incubated for 24 h at 37°C with 1 ml of sterile sample buffer (1.5 M Tris-HCl, 60% [w/v] sucrose, 0.8% [w/v] Na-dodecyl-sulphate, pH 8.0). At the end of the incubation period, the sample buffer aliquots were removed and the total protein concentration, both in the static and in the dynamic system, was evaluated by the BCA Protein Assay Kit (Pierce Biotechnology). The total protein concentration was 1070 ± 125 μg/ml in the static culture versus 2110 ± 165 μg/ml in the dynamic culture (p<0.05). The calibration curves to measure decorin and type-I collagen were obtained by an ELISA assay, using the anti-decorin and anti-type-I collagen antisera.
The results are expressed as fg/(cell×scaffold).

Statistics: Results are expressed as mean ± standard deviation. One-way analysis of variance (ANOVA) with a post hoc Bonferroni test was applied, adopting a significance level of 0.05.
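The Bonferroni adjustment and a two-sample comparison from summary data can be sketched as follows. This is a hedged illustration: the per-group sample size n is an assumption, not a value reported in the paper, and the authors used ANOVA rather than the Welch statistic shown here.

```python
import math

def bonferroni_alpha(alpha, n_comparisons):
    """Bonferroni-adjusted per-comparison significance level."""
    return alpha / n_comparisons

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic computed from summary statistics (mean, SD, n)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Example with the total-protein means quoted above (μg/ml);
# n = 3 replicates per group is an illustrative assumption.
t_stat = welch_t(2110, 165, 3, 1070, 125, 3)
alpha_adj = bonferroni_alpha(0.05, 2)  # e.g. two post hoc comparisons
print(round(t_stat, 2), alpha_adj)
```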
SEM analysis: The SEM images revealed that, owing to the electromagnetic stimulation, the cells proliferated over the available titanium surface. Statically cultured cells were few and essentially organized in a monolayer with a thin, discontinuous extracellular matrix (Fig. 3A), whereas the electromagnetic stimulation induced a 3D modeling of the cell-matrix organization: several cells coated the available titanium surface in a multilayer, and the volume of the surface roughness tended to be filled by cell-matrix clusters growing from the bottom (Fig. 3B). These observations were confirmed by measurement of the DNA content after 22 days of culture: in the static culture the cell number per scaffold grew to 8.5×10⁶ ± 5.1×10⁴, whereas in the dynamic culture it grew to 16.4×10⁶ ± 3.2×10⁴ (p<0.05).

Bone matrix analysis: The immunolocalization (green) of type-I collagen with the associated nuclear counterstaining (red) showed a more intense fluorescence in the dynamically cultured scaffolds, revealing the effects of the electromagnetic stimulation in terms of cell proliferation and building of bone matrix (Fig. 4A and Fig. 4B). The immunolocalization of decorin with the nuclear counterstaining was similar (data not shown). At the end of the culture period, in comparison with the static culture, the electromagnetic stimulation greatly increased the scaffold coating with bone proteins (Table 1).
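A quick arithmetic check of the proliferation result, using the mean cell counts quoted above:

```python
# Cells per scaffold after 22 days, mean values reported above.
static_cells = 8.5e6
dynamic_cells = 16.4e6

fold_increase = dynamic_cells / static_cells
print(round(fold_increase, 2))  # 1.93, i.e. the "around 2-fold" increase
```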
Fig. 3 SEM images of the static (A) and dynamic (B) culture, 100×
III. RESULTS

The osteoblasts were seeded onto the surface of titanium fiber-mesh scaffolds and then cultured in an electromagnetic bioreactor for 22 days. This culture system permitted the study of the SAOS-2 cells as they proliferated and produced their extracellular matrix in an electromagnetically active environment.
Fig. 4 Type-I collagen in the static (A) and dynamic (B) culture, 40×

Table 1 Matrix constituents over the scaffold surface [fg/(cell×scaffold)]

                   Static culture (S)   Dynamic culture (D)   D/S
Decorin            80.24 ± 3.42         908.37 ± 10.62        11.3
Type-I collagen    917.40 ± 150.25      8971.94 ± 380.12      9.8
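The D/S ratios in Table 1 follow directly from the reported means and can be re-derived as a consistency check:

```python
# Mean matrix-constituent values from Table 1, fg/(cell×scaffold).
decorin_static, decorin_dynamic = 80.24, 908.37
collagen_static, collagen_dynamic = 917.40, 8971.94

print(round(decorin_dynamic / decorin_static, 1))    # 11.3
print(round(collagen_dynamic / collagen_static, 1))  # 9.8
```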
IV. DISCUSSION

The aim of this study was the surface modification of titanium fiber-mesh with extracellular matrix and osteoblasts, to make the biomaterial more suitable for bone repair in vivo.

A discussion of the concept of biocompatibility is necessary. When a biomaterial is implanted in a biological environment, a non-specific and non-physiologic layer of adsorbed proteins mediates the interaction of the surrounding host cells with the material surface. Castner and Ratner regard the "biocompatible surfaces" of "biomaterials that heal" as surfaces with the characteristics of a "clean, fresh wound": these "self-surfaces" could elicit a physiological inflammatory reaction around the biomaterials, leading to normal healing [1]. In the present study we have followed a particular biomimetic strategy in which the seeded osteoblastic cells build a biocompatible surface made of extracellular matrix [12].

In order to enhance the coating of the bulk titanium fiber-mesh, an electromagnetic wave was applied to the seeded scaffolds. The electromagnetic stimulation increased cell proliferation around 2-fold. This result is consistent with the reported rise in proliferation in response to an electromagnetic wave [13]. Aaron and Ciombor reported a significant increase in extracellular matrix synthesis when osteoblast-like cells were subjected to an electromagnetic wave [14]. In agreement with that study, the electromagnetic bioreactor caused a significant increase in extracellular matrix synthesis: in comparison with the static culture, the coating with type-I collagen and decorin was enhanced around 9.8-fold and 11.3-fold, respectively. The immunolocalization of the foregoing matrix proteins showed their colocalization in the cell-rich areas.
The electromagnetic stimulation raises the net Ca²⁺ flux in human osteoblast-like cells [15], and, according to Pavalko's diffusion-controlled/solid-state signaling model [16], the increase of the cytosolic Ca²⁺ concentration is the starting point of signaling pathways targeting specific bone matrix genes. In this study, using an electromagnetic bioreactor, we enhanced the biomaterial coating with extracellular matrix; that is, we followed a particular biomimetic strategy in which the seeded cells built a biocompatible surface, making the biomaterial very useful for biointegration [5]. The use of a cell line showed the potential of the culture method; nevertheless, a better result could be obtained with autologous bone marrow stromal cells instead of SAOS-2 osteoblasts, for total immunocompatibility with the patient. In conclusion, we could theorize that the cultured "self-surface" could be used fresh, that is, rich in autologous cells and matrix, or after sterilization with ethylene oxide, that is,
rich only in autologous matrix, as a simpler, storable tissue-engineering product for bone repair.
ACKNOWLEDGMENT Lorenzo Fassina dedicates the study to Francesca Sardi.
REFERENCES

[1] Castner DG, Ratner BD (2002) Biomedical surface science: Foundations to frontiers. Surf Sci 500:28-60
[2] Holland NB, Qiu Y, Ruegsegger M et al. (1998) Biomimetic engineering of non-adhesive glycocalyx-like surfaces using oligosaccharide surfactant polymers. Nature 392:799-801
[3] Sanchez C, Arribart H, Guille MM (2005) Biomimetism and bioinspiration as tools for the design of innovative materials and systems. Nat Mater 4:277-288
[4] Fassina L, Visai L, Benazzo F et al. (2006) Effects of electromagnetic stimulation on calcified matrix production by SAOS-2 cells over a polyurethane porous scaffold. Tissue Eng 12:1985-1999
[5] Fassina L, Visai L, Asti L et al. (2005) Calcified matrix production by SAOS-2 cells inside a polyurethane porous scaffold, using a perfusion bioreactor. Tissue Eng 11:685-700
[6] van den Dolder J, Farber E, Spauwen PH et al. (2003) Bone tissue reconstruction using titanium fiber mesh combined with rat bone marrow stromal cells. Biomaterials 24:1745-1750
[7] van den Dolder J, Bancroft GN, Sikavitsas VI et al. (2003) Flow perfusion culture of marrow stromal osteoblasts in titanium fiber mesh. J Biomed Mater Res 64A:235-241
[8] Vehof JW, van den Dolder J, de Ruijter JE et al. (2003) Bone formation in CaP-coated and noncoated titanium fiber mesh. J Biomed Mater Res A 64:417-426
[9] van den Dolder J, Bancroft GN, Sikavitsas VI et al. (2003) Effect of fibronectin- and collagen I-coated titanium fiber mesh on proliferation and differentiation of osteogenic cells. Tissue Eng 9:505-515
[10] Vogel KG, Evanko SP (1987) Proteoglycans of fetal bovine tendon. J Biol Chem 262:13607-13613
[11] Rossi A, Zuccarello LV, Zanaboni G et al. (1996) Type I collagen CNBr peptides: species and behavior in solution. Biochemistry 35:6048-6057
[12] Fassina L, Visai L, Cusella De Angelis MG et al. (2007) Surface modification of a porous polyurethane through a culture of human osteoblasts and an electromagnetic bioreactor. Technology and Health Care 15:33-45
[13] Bodamyali T, Bhatt B, Hughes FJ et al. (1998) Pulsed electromagnetic fields simultaneously induce osteogenesis and upregulate transcription of bone morphogenetic proteins 2 and 4 in rat osteoblasts in vitro. Biochem Biophys Res Commun 250:458-461
[14] Aaron RK, Ciombor DM (1996) Acceleration of experimental endochondral ossification by biophysical stimulation of the progenitor cell pool. J Orthop Res 14:582-589
[15] Fitzsimmons RJ, Ryaby JT, Magee FP et al. (1994) Combined magnetic fields increased net calcium flux in bone cells. Calcif Tissue Int 55:376-380
[16] Pavalko FM, Norvell SM, Burr DB et al. (2003) A model for mechanotransduction in bone cells: the load-bearing mechanosomes. J Cell Biochem 88:104-112
Author: Lorenzo Fassina, Ph.D.
Institute: Dipartimento di Informatica e Sistemistica
Street: Via Ferrata 1
City: 27100 Pavia
Country: Italy
E-mail: [email protected]
Application of Simplified Ray Method for the Determination of the Cortical Bone Elastic Coefficients by the Ultrasonic Wave Inversion

T. Goldmann¹, H. Seiner²,³ and M. Landa³

¹ CTU in Prague, Faculty of Mechanical Engineering, Department of Mechanics, Biomechanics and Mechatronics, Prague, Czech Republic
² CTU in Prague, Faculty of Nuclear Science and Physical Engineering, Department of Materials, Prague, Czech Republic
³ Academy of Sciences of the Czech Republic, Institute of Thermomechanics, Laboratory of Nondestructive Testing and Material Evaluation, Prague, Czech Republic
Abstract— This work contributes to the methodology of evaluating the elastic properties of cortical bone by ultrasonic wave inversion, whereby the bone is considered to be a linear elastic anisotropic continuum. Velocities of acoustic waves are used as input data for the inverse problem; they are detected experimentally by means of the ultrasonic pulse-echo immersion technique. The geometry of the bone specimens is also incorporated into the algorithm through a model of wave propagation through a curvilinear anisotropic sample based on the simplified ray method. The stability of the resulting data is evaluated by a statistical method based on Monte Carlo simulation. The immersion method based on wave inversion has been shown to be a reliable tool for the determination of some elastic constants only; the remaining coefficients need to be measured or improved by another experimental method. The ultrasonic contact pulse through-transmission technique was rated as an acceptable experimental approach for this purpose, whereas RUS (resonant ultrasound spectroscopy) was found to be an unsuitable method for the measurement of the elastic coefficients of cortical bone tissue.

Keywords— Simplified Ray Method, Cortical Bone, Inverse Problem, Monte-Carlo Simulation, Matrix of Elastic Coefficients.
I. INTRODUCTION

The aim of this study is to contribute to the methodology of evaluating the elastic properties of cortical bone by ultrasonic wave inversion, whereby the bone is considered to be a linear elastic anisotropic continuum. Velocities of acoustic waves are used as input data for the inverse problem; they are detected experimentally by means of the ultrasonic pulse-echo immersion technique. This method was developed on composite structures such as plates and cylindrical shells. The geometry of the bone specimens is also incorporated into the algorithm through a model of wave propagation through a curvilinear anisotropic sample based on the simplified ray method; this is an original approach, and its application to the experimental determination of a bovine femoral sample is the main subject of interest of this work. The stability of the data resulting from the inverse algorithm is evaluated by a statistical method based on Monte Carlo simulation. The suggested approach has a
potential to qualify such measurements performed on fresh bones and also to improve in-situ ultrasonic techniques.

II. MATERIALS AND METHODS

The main aim of this experiment is to explore the possibilities of measuring the matrix of elastic coefficients of cortical bone by means of dynamical, ultrasound-based mechanical tests. The methodology should be nondestructive, ultrasound-based, appropriate for rapid measurement, and undemanding in sample preparation. The ultrasonic pulsed through-transmission method, with the specimen immersed in a liquid between two opposite transducers, was chosen as a suitable technique (Fig. 1). The following experiments were performed on a dry bovine femur. Dry bovine bone was used instead of wet bone [1] in order to determine the elastic properties independently of the natural visco-elastic behaviour of bones. The bone sample was cut into two parts along the bone axis in order to monitor simple wave propagation through one face of the bone, and each part was shape-measured on a CNC milling machine (Fig. 4). During the experiment, just one particular place in the middle part of the bone, localized on the medial side of the bovine femur sample, was examined.
Fig. 1 Experimental set-up for immersion measurement
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 304–307, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 (a) The ray increment according to the Huygens axiom; (b) A successive ray construction through the specimen thickness
of the current wavefront is a new point source and those newly generated wavefronts are superimpose into new wave fronts (Fig. 2). An example of a modelling of the complex interaction of a planar wave in an anisotropic curvilinear specimen is illustrated on Figure 3. The Carbon Fibre Reinforced Plastic (CFRP) tube, the material having the transversely isotropic symmetry, is introduced as a model example. The mode I was used for the determination of coefficients c22 and c12. The remaining coefficient c23 was determined by the simple contact pulse-transmission measurement. To estimate the accuracy of the optimization procedure's results, no appropriate analytical approach is available. The only possible solution is, thus, the Monte Carlo simulation, based on running the whole optimization process several times with randomly distorted input data. The Gaussian statistic made over the set of results is then expected to reveal the reliability of optimized coefficients. In this work, the wave arrival times were determined accurately.
Fig. 3 The interaction of a planar wave with a strongly anisotropic tube: the selection of initial rays; the path of one ray; the complete interaction

Three different measurement modes, C, D and I, were performed. Modes C and D corresponded to horizontal positioning of the bone between the transducer and the reflector, where wave propagation in an axial plane of the bone was observed. The bone geometry was not solved in these modes; it was considered planar in the surroundings of the measuring position. These modes were appropriate for the evaluation of 6 out of 9 elastic coefficients (c11, c33, c44, c55, c66 and c13) of the orthotropic material symmetry. These elastic constants were evaluated from measured quasi-longitudinal and quasi-transverse wave velocities via the solution of an inverse problem of the Christoffel equation [2]. Mode I corresponded to the vertical configuration of the measurement. In this mode, the propagation of the planar wave was observed in the plane perpendicular to the long axis of the bone, so the bone curvature had to be considered. This is resolved by means of the Simplified Ray Method [3]. The technique [2] is based on substituting the wavefront by a closely localized energy flow (ray) at every geometrical point. The Christoffel equation along rays and the ray behaviour at the solid/liquid interface are then solved numerically. Rays in immersion are lines perpendicular to the wavefront; the planar wavefront is replaced by a set of mutually parallel rays. The anisotropy orientation is included in the model by defining the angle of anisotropy orientation at each point of the sample. The rays inside the sample are constructed on the basis of the Huygens axiom, thus each point
III. RESULTS

The stability of the elastic coefficients of the bovine bone sample resulting from the inverse-problem optimization was evaluated by a simulation based on the Monte-Carlo statistical method [2]. Input parameters of this simulation were variations of the specimen thickness, rotations of the sample (modes C, D) or of the reflector (mode I), the temperature of the water bath and the density of the specimen. The Monte-Carlo simulation was repeated 30 times to generate a representative set of output data. The variability of this set is approximately expressed by the usual Gaussian statistical quantities, namely standard deviations. Obviously, the presented standard deviations cannot be treated as absolute, but they provide a valuable insight into how sensitive and stable the optimization procedure is for each particular coefficient. The final resultant coefficients cij in GPa can be expressed in the following form:

$$
c_{ij} =
\begin{pmatrix}
c_{11} & c_{12} & c_{13} & 0 & 0 & 0\\
c_{12} & c_{22} & c_{23} & 0 & 0 & 0\\
c_{13} & c_{23} & c_{33} & 0 & 0 & 0\\
0 & 0 & 0 & c_{44} & 0 & 0\\
0 & 0 & 0 & 0 & c_{55} & 0\\
0 & 0 & 0 & 0 & 0 & c_{66}
\end{pmatrix}
=
\begin{pmatrix}
27.4 \pm 1.6 & 9.1 \pm 3.5 & 8.3 \pm 5.3 & 0 & 0 & 0\\
9.1 \pm 3.5 & 30.3 \pm 2.8 & 8.5 & 0 & 0 & 0\\
8.3 \pm 5.3 & 8.5 & 34.1 \pm 1.7 & 0 & 0 & 0\\
0 & 0 & 0 & 9.3 \pm 0.9 & 0 & 0\\
0 & 0 & 0 & 0 & 7.0 \pm 0.4 & 0\\
0 & 0 & 0 & 0 & 0 & 6.9 \pm 0.5
\end{pmatrix}
\quad (1)
$$
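Given the stiffness matrix (1), the forward step of the inverse problem, computing phase velocities from the Christoffel equation for a chosen propagation direction, can be sketched as below; the density value is an assumed figure for dry cortical bone, not one reported in the paper:

```python
import numpy as np

# Stiffness matrix from Eq. (1), mean values only [GPa -> Pa].
C = np.array([
    [27.4,  9.1,  8.3, 0.0, 0.0, 0.0],
    [ 9.1, 30.3,  8.5, 0.0, 0.0, 0.0],
    [ 8.3,  8.5, 34.1, 0.0, 0.0, 0.0],
    [ 0.0,  0.0,  0.0, 9.3, 0.0, 0.0],
    [ 0.0,  0.0,  0.0, 0.0, 7.0, 0.0],
    [ 0.0,  0.0,  0.0, 0.0, 0.0, 6.9],
]) * 1e9

rho = 2.0e3  # assumed density of dry cortical bone [kg/m^3]

# Voigt contraction of index pairs (i,j) -> 0..5.
VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3, (0, 2): 4,
         (2, 0): 4, (0, 1): 5, (1, 0): 5}

def phase_velocities(n):
    """Solve the Christoffel equation for direction n: the eigenvalues
    of Gamma_ik = C_ijkl n_j n_l / rho are the squared phase velocities."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    G = np.zeros((3, 3))
    for i in range(3):
        for k in range(3):
            for j in range(3):
                for l in range(3):
                    G[i, k] += C[VOIGT[(i, j)], VOIGT[(k, l)]] * n[j] * n[l]
    # Ascending: two quasi-transverse, then the quasi-longitudinal [m/s].
    return np.sqrt(np.linalg.eigvalsh(G / rho))

# Propagation along the bone axis (x3): v_L = sqrt(c33/rho).
print(phase_velocities([0, 0, 1]))
```

Inverting this relation, i.e. searching for the cij that reproduce measured velocities, is the optimization discussed in the text.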
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
T. Goldmann, H. Seiner and M. Landa
Fig. 6 Example of measurement in mode D

Fig. 4 The bone specimen and its shape measurement via a contact probe on the milling machine
Particular coefficients of matrix (1) were evaluated from modes C, D and I (see Materials and Methods). Modes C and D (Fig. 6) were used for the determination of 6 elastic coefficients without considering the general geometry of the bone specimen, i.e. without solving the ray model. Mode I served for the determination of 2 elastic coefficients. This mode corresponds to the vertical configuration of the measurement, so the wave propagation and the elastic constant evaluation for a bone specimen of general geometry must be resolved via the ray method.
The experimental procedure and the elastic constant evaluation in mode I were as follows. The input geometry of the bone specimen for the ray algorithm was obtained by the contact probe on the milling machine (Fig. 4). During the experiment, the bone sample was rotated into the vertical position, so that wave propagation in the plane perpendicular to the bone axis could be observed. Then the ray algorithm was solved for different positionings of the specimen until the ray model was tuned to the measured data. This situation, in fact the final tuning, and the agreement of the experimental data with the ray model are demonstrated in Figure 5. The Christoffel equation along the rays thus obtained and the behaviour of the rays at the solid/liquid interface were solved numerically by means of the inverse problem. The remaining coefficient could not be determined via the immersion technique without additional specimen cutting. This coefficient was evaluated subsequently via a simple pulse-echo contact technique.

IV. DISCUSSION AND CONCLUSION
Fig. 5 The diagram of the wave propagation through the bovine bone mode I. (a) The ray model of the wave interaction with the bovine bone sample; the specimen is stationary, the reflector is rotating, (b) The comparison of the ray model and experimentally obtained data
The original contribution of this work is the application of the ray method to the evaluation of elastic constants of curvilinear anisotropic bone samples. The inverse problem for phase velocities and the sensitivity analysis of the inverse approach based on the Monte-Carlo statistical simulation are also formulated in this contribution. The proposed methodology is usable for the measurement of all 9 elastic coefficients of compact bone, but the specimen must then be cut, which conflicts with the requirement that the entire process be nondestructive. Eight coefficients can be measured non-destructively, but the general specimen shape needs to be considered, which leads to the application of the ray method. The tuning of the ray model to the experiment and the measurement of
the specimen shape are quite laborious. This immersion technique is very suitable for the quick, non-destructive evaluation of 5 constants of a long bone without solving the ray model. The resultant matrix of elastic coefficients (1) of the dry bovine femur evaluated in this work is in line with other data [2], and the original presumption that dry bone is stiffer than wet bone was satisfied.
ACKNOWLEDGMENT

This work was supported by the Ministry of Education project No. MSM 6840770012, the Grant Agency of the Academy of Sciences of the Czech Republic under project No. IBS2076919 and the Institutional project AVOZ2076919.

REFERENCES

1. Pithioux M, Lasayques P, Chabrand (2002) An alternative ultrasonic method for measuring the elastic properties of cortical bone. J Biomech 35:961–968
2. Goldmann T (2006) Propagation of Acoustic Waves in Composite Materials and Cortical Bone. PhD thesis, CTU in Prague, Prague
3. Cerveny V (2001) Seismic Ray Theory. Cambridge University Press, Cambridge

Author: Tomas Goldmann
Institute: CTU in Prague, Faculty of Mechanical Engineering, Dept. of Mechanics, Biomechanics and Mechatronics
Street: Technicka 4
City: Prague 6
Country: Czech Republic
Email: [email protected]
Bending Stiffness of Odontoid Fracture Fixation with One Cortical Screw – Numerical Approach

L. Capek1, P. Buchvald2

1 Department of Applied Mechanics, Technical University of Liberec, Liberec, Czech Republic
2 Department of Neurosurgery, Hospital of Liberec, Liberec, Czech Republic
Abstract— Anterior screw fixation of odontoid fractures is nowadays a standard clinical operation. Few biomechanical studies deal with the problem of whether one or two screws should be used for this type of stabilisation. This study used an in vitro experiment and finite element simulation to investigate the bending stiffness, in the sagittal plane, of a stabilised type II odontoid fracture of the second cervical vertebra. A comparative study of one cortical screw and one cannulated screw was performed. The results indicated that there are no significant differences between them. In addition, from the bending stiffness point of view, tightening the screw to higher levels seems quite useless: the stiffness is not higher, while the higher stresses may diminish the bone's strength and stiffness, according to Wolff's law of functional adaptation. Thus the most reasonable approach might be to tighten the screws to a certain moment, as is done in dental surgery.

Keywords— cervical spine, odontoid fracture, finite element analysis, bending stiffness.
I. INTRODUCTION

The upper cervical spine and its junctions are slightly different from the rest of the cervical spine complex. The first cervical vertebra, the atlas, has no body, and the second cervical vertebra, the axis, has its odontoid. The junctions differ as well; there are no intervertebral disks between the first and second cervical vertebrae and the base of the skull. Fractures of the odontoid process are the most common injury of the upper cervical spine; they account for about one quarter of all cervical spine fractures [1,2,3]. Flexion is generally supposed to be the loading vector causing odontoid fractures [2,3]. Fractures of the second vertebra, particularly those of the odontoid, have been analyzed from the point of view of treatment [5,6]. Anderson and D'Alonzo (1974) identified three distinct types of fracture according to the anatomical location of the fracture line and classified them as Type I, II and III odontoid fractures. Unfortunately, the unstable Type II is also the most common [1,2,3]. Historically, posterior odontoid stabilisation was used, either by wire, clamps or two transarticular screws [6]. Nowadays anterior stabilization dominates, thanks to several advantages, firstly
described by Nakanishi and Böhler [4,7] (Figure 1). Alternatively, instead of two screws, only one screw may be used [4]. Biomechanical studies dealing with the biomechanics of odontoid fracture stabilization are rare. Perhaps the best analyses were made by Graziano, Doherty and Sasso [7,8,9] in the 1990s, where the biomechanics of the odontoid fracture and different stabilization techniques were compared by in vitro experiments. They used one or two 3.5 mm cortical screws for anterior stabilization of the odontoid fracture. From the mechanical point of view, no significant differences were found, in either bending or rotational stiffness, between one and two screws in anterior stabilization [7,8,9]. With some exaggeration we can say that after twenty years no significant changes concerning stabilization of the odontoid process have been made. Two screws are still preferred, owing to the better rotational stability. The aim of this article is to quantify the bending stability in the sagittal plane of the following system: the body of the vertebra, the odontoid and one cortical screw. To the authors' knowledge, a numerical approach to the odontoid fracture stabilized with one cortical screw has not yet been carried out.
Fig. 1 Anterior fixation of the odontoid process with two cortical screws
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 270–273, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Bending stiffness of odontoid fracture fixation with one cortical screw – numerical approach
II. MATERIALS AND METHOD

A. Experiment in vitro

The second vertebra was removed from three human spinal columns; donor ages ranged from 65 to 82 years. All soft tissues except the ligamentum transversum atlantis were removed. Given the limited number of specimens available, the authors decided to produce the Type II fractures of the dens using an osteotome, according to [7]. The separation line was horizontal, in the odontoid "necking". The separated dens was stabilized with a single 3.5 mm cortical screw punching the apex dentis. After stabilization, radiographic images were taken (Figure 2). The tightening torque for each specimen was measured by a torque wrench (Table 1).

Fig. 3 Preload for different torque and friction values

B. Virtual implantation
Fig. 2 Anterior fixation of the odontoid process with one cortical screw

When the screw is tightened, the tightening torque is applied as a moment to the head of the screw. Due to the tightening moment, a preload is generated in the screw.

Table 1 Tightening torque – stabilization with one cortical screw

Specimen        Female_82   Female_68   Male_62
Moment [Ncm]    28 ± 1      42 ± 1      55 ± 1
The simplified relationship between the tightening torque and the preload can be determined by the following equation:

$$M_1 = \tfrac{1}{2}\, d_2\, F_0 \tan(\gamma + \varphi) \quad (1)$$

where $M_1$ is the tightening torque, $d_2$ is the pitch diameter, $\gamma$ is the helix angle of the thread and $\varphi$ is the angle of friction. The amount of the preload for the different tightening moments and friction coefficients varies from 400 to 1000 N (Figure 3).
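Equation (1) can be inverted for the preload, F0 = 2·M1/(d2·tan(γ + φ)). The sketch below evaluates it for the measured torques of Table 1; the pitch diameter, thread pitch and friction coefficients are assumed, illustrative values, not ones given in the paper:

```python
import math

def preload(M, d2, pitch, mu):
    """Invert Eq. (1): F0 = 2*M / (d2 * tan(gamma + phi))."""
    gamma = math.atan(pitch / (math.pi * d2))  # thread helix angle
    phi = math.atan(mu)                        # friction angle
    return 2.0 * M / (d2 * math.tan(gamma + phi))

# Assumed thread geometry for a 3.5 mm cortical screw (illustrative).
d2 = 3.0e-3      # pitch diameter [m]
pitch = 1.25e-3  # thread pitch [m]

for M_ncm in (28, 42, 55):        # measured torques from Table 1
    M = M_ncm * 1e-2              # N*cm -> N*m
    for mu in (0.2, 0.4):         # assumed friction coefficients
        F0 = preload(M, d2, pitch, mu)
        print(f"M = {M_ncm} Ncm, mu = {mu}: F0 = {F0:.0f} N")
```

With these assumed parameters the preloads fall roughly in the 300-1100 N band, consistent with the 400-1000 N range quoted in the text.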
The basis of the designed model is a set of digital CT data. These data were obtained from a CT scanner (Philips) in 0.75 mm thick slices and stored on CDs in the Digital Imaging and Communications in Medicine (DICOM) format. They were used as input into the modeling software 3D-Doctor (Able Software). The CT slices undergo segmentation to extract the vertebra information. This software provides semiautomatic algorithms; nevertheless, each slice must be checked and manually corrected. The edges of the inner (spongy region) and outer borders of the axis are identified, and their contours are converted from a pixel-map to a vector-map representation for each slice. The contours are imported into the software Imageware 9 (UGS Co.), where the contours are joined into a surface model of the axis. Only one half was modeled; the whole model was obtained by mirroring this half about the plane of symmetry. Two different kinds of screw were modeled: a 3.5 mm cortical screw and a 3.5 mm cannulated one with the same thread geometry. To simplify the computation, the threads of the screw were modeled as separate annuli with a triangular section, placed at intervals of the thread pitch. The screw model was then positioned inside the vertebra model and cut out from the vertebra model. There were three positions for the screws: 1. the solid cortical screw punching the apex dentis; 2. the solid cortical screw wholly in the spongy bone; 3. the cannulated screw punching the apex dentis. The insertion of the screw was always at the edge of the anterior caudal surface of the vertebral body, at an angle of 8°. Finally, the odontoid was cut in the necking.
C. Mesh generation and material properties
III. RESULTS
A semi-automated process of three-dimensional meshing was carried out in the software I-DEAS 9 (UGS Co.) with four-node tetrahedral solid elements (Fig. 4). The full finite element model consisted of about 200 000 elements, depending on the complexity of the model. Material properties for the parametric study were taken from [10] (Table 2).

Table 2 Material properties [10]

Material                  Titanium   Cortical bone   Spongy bone
Young's modulus [MPa]     117 000    10 000          100
Poisson ratio [-]         0.3        0.29            0.29
Looking at the loading phases of the odontoid under displacement in the midsagittal plane, we can see that the behaviour of the models is the same in all cases. In the first phase, the whole model rotates backwards (dorsally) with all its parts together. In the second phase, the screw is bent and the cut odontoid moves relative to the body. In the third phase, the cut odontoid rides up on the vertebral body on the dorsal side of the cutting plane. The bending stiffness was determined as the slope of the most linear portion of the load-displacement curve. The highest bending stiffness was found for the cortical screw punching the apex dentis, and the lowest for the cortical screw positioned below the cortical layer (Figure 5). The cannulated screw lies in between. Beyond a certain prestress (600 N), the bending stiffness for cortical screws does not increase further, in contrast to the stress values in the material. Agreement with the experiments performed more than ten years earlier is evident: they reached average stiffnesses of 257.5 ± 12.45 and 462 ± 194 N/mm [9]. In our case the bending stiffness varies from 240 to 330 N/mm, depending on the preload.
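Extracting stiffness as the slope of the most linear portion of a load-displacement curve can be sketched as below; the sliding-window best-fit criterion and the synthetic curve are illustrative assumptions, not the paper's exact post-processing:

```python
def stiffness_from_curve(disp, load, window=5):
    """Bending stiffness as the slope of the most linear portion of the
    load-displacement curve: fit a line in every sliding window and
    keep the slope of the best-correlated (highest R^2) one."""
    best_r2, best_slope = -1.0, None
    n = len(disp)
    for s in range(n - window + 1):
        x = disp[s:s + window]
        y = load[s:s + window]
        mx = sum(x) / window
        my = sum(y) / window
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        syy = sum((yi - my) ** 2 for yi in y)
        if sxx == 0 or syy == 0:
            continue
        r2 = sxy * sxy / (sxx * syy)
        if r2 > best_r2:
            best_r2, best_slope = r2, sxy / sxx
    return best_slope

# Synthetic curve [mm, N]: a soft toe region followed by a linear part
# with a slope of 300 N/mm (within the 240-330 N/mm band reported).
disp = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
load = [0, 5, 15, 40, 70, 100, 130, 160, 190, 220, 250]
print(f"bending stiffness ~ {stiffness_from_curve(disp, load):.0f} N/mm")
```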
Fig. 4 Screw-vertebra mesh system without the odontoid

The bones and screws were defined as contact bodies in the software Marc 2005 (MSC.Software Co.). Each node and each element face on the exterior surface of the bodies was therefore treated as a potential contacting node and potential contact face, respectively. The friction coefficient was set to 0.2, according to [11]. To determine the bending stiffness in the sagittal plane of the vertebra, the boundary conditions were applied in the following steps: 1. all nodes of the caudal surface of the vertebral body were constrained in all directions of movement; all nodes on the processus articularis inferior were constrained in the direction perpendicular to these surfaces; the screw was prestressed using the "nodal ties" function in the Marc software [12] with forces of 400, 600, 800 and 1000 N; 2. the odontoid was loaded by displacement in the anterior-posterior direction in the midsagittal plane.
Fig. 5 Bending stiffness as a function of preload

IV. DISCUSSION

There have been only a small number of previous biomechanical investigations of odontoid fracture stabilization. Graziano, Sasso and Doherty [7,8,9] demonstrated in their works that using two screws does not give higher stability to the system than using one screw. Nevertheless, in clinical practice two screws are still preferred, owing to the "better rotational stability".
It should be mentioned that at the collaborating department two screws are still used. It is clear that using two screws can be technically difficult in patients with a small odontoid process. This strongly motivated us to find at least some partial answers to these problems. From the bending stiffness point of view, tightening the screw to higher levels seems quite useless: the stiffness is not higher, while the higher stresses may diminish the bone's strength and stiffness, according to Wolff's law of functional adaptation [13]. Thus the most reasonable approach might be to tighten the screws to a certain moment, as is done in dental surgery [14]. Some doubts might arise about using the cannulated screw instead of a solid one, but it is bending that causes the failure of the screw; the stress value therefore depends on the moment of inertia of the screw. If the screw's moment of inertia is the same, there should be no doubt about its resistance in bending.

V. CONCLUSION

The aim of this work was to validate the more than ten-year-old experiments of the authors Sasso and Graziano [ ], where bending stiffness was tested in vitro. The results of our simulations are in good agreement with them. In addition, we tried to find the bending stiffness as a function of tightening torque. This function should be compared with experiment in future work, considering the "instability" of numerical simulations. In conclusion, the results of this study indicate that there is no difference between using one solid or one cannulated cortical screw as regards bending stiffness.
ACKNOWLEDGMENT

The authors thank MD Petr Hajek from the Department of Anatomy, Charles University in Hradec Kralove, for providing the specimens and for other generous help.
REFERENCES

1. Koivikko M (2005) Cervical spine injuries in adults: diagnostic imaging. PhD thesis
2. Cusick J, Yoganandan N (2002) Biomechanics of the cervical spine Part 4: major injuries. Clinical Biomechanics 17:1–20
3. Yoganandan N et al. (2005) Odontoid fracture in motor vehicle environments. Accident Analysis and Prevention 37:505–514
4. Morandi X et al. (1998) Anterior screw fixation of odontoid fractures. Spine
5. Sung JK (2005) Anterior screw fixation using Herbert screw for type II odontoid process fractures. Journal of Korean Neurosurgery 37:345–349
6. Shilpaker S et al. (2005) Management of acute odontoid fracture: operative techniques and complication avoidance. Neurosurgery Focus
7. Graziano G, Jaggers C, Lee B, Lynch M (1993) A comparative study of fixation techniques for type II fractures of the odontoid process. Spine 18:2383–2387
8. Doherty B et al. (1993) A biomechanical study of odontoid fractures and fracture fixation. Spine 18:178–184
9. Sasso R et al. (1993) Biomechanics of odontoid fracture fixation. Spine 18:1950–1953
10. Teo E, Ng H (2001) First cervical vertebra fracture mechanism studies using finite element method. Journal of Biomechanics 34:13–21
11. Chen I, Lin R, Chang C (2003) Biomechanical investigation of pedicle screw–vertebrae complex: a finite element approach using bonded and contact interface conditions. Medical Engineering and Physics 25:275–282
12. Marc software manual, 2005r2
13. Gefen A (2002) Optimizing the biomechanical compatibility of orthopedic screws for bone fracture fixation. Medical Engineering and Physics 24:337–347
14. Lang L et al. (2003) Finite element analysis to determine implant preload. The Journal of Prosthetic Dentistry 90:539–546

Author: Lukas Capek
Institute: Department of Applied Mechanics, Technical University of Liberec
Street: Halkova 6
City: Liberec
Country: Czech Republic
Email: [email protected]
Biomechanical Analysis of Bolus Processing

T. Goldmann1, S. Konvickova1 and L. Himmlova2

1 CTU in Prague, Faculty of Mechanical Engineering, Department of Mechanics, Biomechanics and Mechatronics, Prague, Czech Republic
2 Institute of Dental Research, 1st Medical Faculty of Charles University, Prague, Czech Republic
Abstract— Clinical observations and mathematical models show that dental implants are influenced by the magnitude of loading. Therefore, knowledge of mandible movement during mastication is important for assessing occlusal and masticatory force vectors. The purpose of this study was to detect the path of movement of the lower jaw and to distinguish the stages of mastication, the duration of bolus processing and the peak amplitude of mastication. Motion analysis was used to record three-dimensional mandible movements. Individualized sensors were rigidly attached to the mandibles of 51 study participants. At the beginning of the measurement, all subjects were asked to move the mandible into extreme positions (maximal opening and maximal lateral movements). Then each subject masticated a bite of hard and of soft food. The duration of bolus mastication and the peak amplitude of the mastication movement in the mesio-distal, cranio-caudal and vestibulo-oral axes, related to the peak amplitude of the marginal movements, were evaluated for each subject. The chewing record of each subject was divided into three phases (chopping, grinding and swallowing), and the duration of mastication and the number of closing movements were evaluated. The results of this pilot study suggest that masticatory movements vary between individuals. Relationships to the directions and magnitudes of the acting chewing force should be examined more precisely, since transversally acting forces during grinding are important factors in implant overloading.

Keywords— Mastication, Kinematics, 3D Motion Analysis, Bolus Processing, Experiment in Vivo
I. INTRODUCTION Mastication is a set of bite movements whereby nutrient is processed. The trajectory of these motions affects the direction and magnitude of masticatory force. This trajectory is markedly influenced by an individual’s unique chewing habits [1]. Mastication consists of several repetitive movements. First, food is bitten. Next, a detached bolus is transported posteriorly towards the pharynx during the fine crushing. Then, the bolus is mixed with saliva and is finally swallowed [2, 3]. Throughout mastication, the lower jaw midline moves along an elliptically shaped curve [4]. The size and the shape of this curve vary depending on the bolus processing phase (chopping, grinding and swallowing) and bolus character (soft, hard, fibrous, etc.) [4, 5].
This pilot study was designed to detect the path of lower jaw movement during mastication, to determine the duration of processing one bite depending on its character (hard and soft aliment), and to analyze the timing of chewing. Knowledge of mandible movement during mastication is important for assessing the dominant anatomical and bite force directions during mastication.

II. MATERIALS AND METHODS

Motion analysis [6] was used to assess three-dimensional (3D) mandible movements. Individual sensors were designed for each subject to trace mandible movements. Black paper skin markers were placed on each subject's face above the eyebrows, on the nose dorsum and above the upper lip. These markers were used to define the local coordinate system in which the motion of the sensor was observed (Fig. 1). The x, y, z coordinates represent mandibular movement in the mesiodistal (protrusion, retrusion), craniocaudal (closing, opening) and vestibulo-oral (laterotrusion, mediotrusion) directions, respectively. The origin of the local (non-stationary) coordinate system was placed at the nose dorsum marker. Kinematical transformation relations between the primary (stationary) coordinate system and the local (non-stationary) coordinate system were determined by means of a kinematical transformation matrix technique.
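A minimal sketch of such a marker-based transformation, building an orthonormal local frame from three skin markers and expressing the sensor position in it, is given below; the marker and sensor coordinates are hypothetical values, and this particular axis construction is an assumption, not the authors' exact formulation:

```python
import numpy as np

def local_frame(origin, p_x, p_plane):
    """Build an orthonormal local frame from three skin markers:
    the origin (nose dorsum), a point defining the x direction, and
    a third point fixing the x-y plane."""
    ex = p_x - origin
    ex = ex / np.linalg.norm(ex)
    ez = np.cross(ex, p_plane - origin)
    ez = ez / np.linalg.norm(ez)
    ey = np.cross(ez, ex)
    return np.column_stack([ex, ey, ez])  # rotation matrix, columns = axes

def to_local(point, origin, R):
    """Express a camera-frame (stationary) point in the local marker frame."""
    return R.T @ (point - origin)

# Hypothetical marker positions in the camera frame [mm].
nose = np.array([0.0, 0.0, 0.0])
brow = np.array([0.0, 50.0, 0.0])
lip = np.array([10.0, -30.0, 0.0])

R = local_frame(nose, brow, lip)
sensor = np.array([5.0, -60.0, 20.0])  # mandible sensor position
print(to_local(sensor, nose, R))
```

Because the frame moves with the head markers, head motion is removed and only mandible motion relative to the skull remains in the transformed coordinates.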
Fig. 1 Example of the sensor and illustration of the local coordinate system, skin markers and sensor
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 300–303, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The motion of the markers and the sensor was scanned by three SONY DCR-TRV900E digital video camera recorders. The recorders were calibrated using a uniquely constructed calibrating cage during the scanning. The synchronization of the video records and the 3D motion reconstruction were performed by direct linear transformation (2D video records read from 3 different positions in real time give the 3D path of the sensor placed on the lower jaw [7]). The raw trajectory processing was performed by the APAS software (Ariel Dynamics, Inc., San Diego, CA). Final results and kinematical transformations between the coordinate systems were evaluated in MATLAB (The MathWorks, Inc., Natick, MA). The chewing of 55 volunteers (23 men, 32 women) with natural dentition was recorded. Each subject took one bite of a hard food (nuts - H) and started to masticate. The same process was repeated with a soft food (pastry - S). The path of the sensor was graphically expressed for each subject and for both measured motions (hard bite (H) and soft bite (S)) (Fig. 2). Each curve of the mandible movement trajectory was segmented into three phases (chopping, grinding and swallowing). The total duration of bolus mastication (hard – tH; soft – tS), the duration of each of the three parts (t1, t2, t3) and
the frequency of closing movements (f) were counted using these curves. The following hypotheses were tested:

I. the duration of hard bolus processing is longer than that of the soft bolus (∑t(H) > ∑t(S)),
II. the duration of each stage of hard bolus processing is longer than that of the soft bolus (t1(H) > t1(S), t2(H) > t2(S), t3(H) > t3(S)),
III. the frequency of closing movements for the hard bolus is greater than for the soft bolus (f(H) > f(S)),
IV. the duration of bolus processing for men is longer than for women (influence of gender), and
V. the duration of bolus processing is longer for older people (influence of age).

III. RESULTS
The chewing movements of 55 subjects with natural dentition were recorded. Due to problems related to attaching the sensor to the front teeth, the records of four subjects could not be used for the analysis. Therefore, statistical analyses were performed on the remaining 51 subjects.
Table 1 Measured durations and frequencies. H – hard food, S – soft food; t1 [s], t2 [s] and t3 [s] – phase durations (chopping, grinding, swallowing); Σt – sum of t1, t2 and t3; Σf – total frequency; Average – arithmetical average of the measured quantity; Max, Min – maximal and minimal values of the measured quantity; SD – standard deviation of the measured quantity.

                         H                                     S
          t1[s]  t2[s]  t3[s]  Σt[s]  Σf[Hz]   t1[s]  t2[s]  t3[s]  Σt[s]  Σf[Hz]
Average    6.5   13.9    7.6   28.0    1.5      4.7   11.3    5.9   21.9    1.5
Minimum    1.2    3.1    0.8    7.8    0.8      0.2    2.5    1.6    6.6    0.9
Maximum   17.7   62.0   34.5   78.0    2.9     11.5   40.1   21.3   52.3    2.9
SD         4.0    9.7    6.1   15.0    0.5      3.1    7.9    3.7   11.1    0.6

Fig. 2 An example of the lower jaw motion during H and S bite processing in the local coordinate system

Fig. 3 Typical trajectory of the chewing movements of one patient (y axis) divided into three stages, to illustrate the individuality of bolus processing
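The closing-movement frequency Σf reported in Table 1 can be estimated from the craniocaudal trace by a simple peak count. The sketch below runs on a synthetic opening/closing signal (all numbers illustrative), not on the recorded data:

```python
import math

def closing_frequency(y, dt):
    """Count closing movements as local maxima of the craniocaudal
    (opening) trace and divide by the record duration."""
    peaks = 0
    for i in range(1, len(y) - 1):
        if y[i] > y[i - 1] and y[i] >= y[i + 1]:
            peaks += 1
    duration = dt * (len(y) - 1)
    return peaks / duration

# Synthetic 10 s opening trace sampled at 25 Hz with 1.5 chews/s,
# matching the average frequency reported in Table 1.
dt = 0.04
y = [10.0 * abs(math.sin(math.pi * 1.5 * k * dt)) for k in range(251)]
print(f"f = {closing_frequency(y, dt):.2f} Hz")
```

On real data the trace would first be smoothed; this plain peak count is only the core idea.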
soft one, Σt(H) > Σt(S) (p=0.001). Also, the duration of each stage of hard bolus processing was longer than that for the soft bolus. This difference was statistically significant for t1(H) > t1(S) (p=0.005) and t2(H) > t2(S) (p=0.025), but not for t3(H) > t3(S) (p=0.064). The third hypothesis, regarding the influence of the bolus character on the frequency of closing movements during mastication (f(H) > f(S)), was not supported, because the same average frequency was found for both types of bolus. Only during the swallowing phase was the frequency of closing movements significantly higher (p=0.003) for the hard bolus than for the soft one. Neither gender nor age influenced the time or frequency of bolus processing, so hypotheses IV and V were not confirmed.
Fig. 4 Typical trajectory of chewing movements of two different patients (in y axis) divided into three stages to illustrate the individuality of the bolus processing
The path of the sensor rigidly connected to the lower jaw was graphically expressed in all axes (x, y, z) for each subject and for each measured motion (hard bite - H, soft bite - S) (Fig. 2). Curves of the mandibular movement trajectory showed that masticatory movements are markedly individual (Fig. 2, 3 and 4). Many of the curves could be easily segmented into phases of bolus chopping, grinding and swallowing (Fig. 2, 3 and 4). Conversely, some subjects chewed in the same manner most of the time (Fig. 3). The bolus character also influenced the trajectory of lower jaw movements (Fig. 4). All measured quantities were averaged; maximal, minimal and standard deviation (SD) values were calculated (Table 1). Data were also examined with respect to subject gender and age. To analyze the dependence of the measured quantities on age, participants were grouped by decade (Table 2). Statistical significance was assessed by paired and unpaired Student t-tests and by Fisher's exact test. The age criterion was evaluated by analysis of variance. The data shown above support the first two hypotheses that the bolus character influences the process duration. The hard bolus was chewed significantly longer than the

Table 2 Gender and age distribution

Age group   20-30   31-40   41-50   51-70
Men            12       3       2       1
Women          16       7       4       3
Total          28      10       6       4
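Fisher's exact test named in the statistics paragraph can be carried out directly on a 2x2 contingency table from hypergeometric probabilities; a minimal sketch (classic textbook counts, not tied to the distribution in Table 2):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table.
    """
    n, row1, col1 = a + b + c + d, a + b, a + c

    def prob(x):  # P(X = x) for X ~ Hypergeometric with these margins
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Classic "lady tasting tea" table: two-sided p = 34/70.
p = fisher_exact_2x2(3, 1, 1, 3)
```

The tolerance factor guards against floating-point ties when deciding which tables are "no more probable" than the observed one.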
IV. DISCUSSION AND CONCLUSION
Results obtained in this motion analysis confirm data from a similar study [1] concerning the influence of individual chewing habits. The overall motion of a point in the defined coordinate system can be used to reconstruct the 3D motion. The experimental design, using one marker as a mandible position-reading instrument, was sufficient to detect the translational degrees of freedom. Three markers were used in [1] in order to detect both the rotational and the translational degrees of freedom, but the rotational movements were not useful for evaluating the jaw movement parameters. The hypotheses that the bolus character affects the processing duration were confirmed. The results show the high individuality of the timing of the chewing maneuver, especially during the chopping and grinding phases. The significance levels obtained when comparing chewing durations show larger differences in the first phase, chopping (p = 0.005), than during grinding (p = 0.025). The timing of swallowing (t3) was not significantly different (p = 0.064). The same frequency during mastication indicates that the bolus character does not influence masticatory movements. A significant difference between frequencies was found during swallowing only, and could reflect a different texture between pastry and nuts (fine particles) at the time of swallowing. Alternatively, this finding could have been influenced by salivation. These findings agree with the theory that mastication is a highly individual process influenced not only by anatomy, but also by a specific manner developed during life. Similar findings were reported by Gerstner [1]. The observed decrease of the amplitude of craniocaudal (opening, closing) movements supports the hypothesis that the stage of bolus processing influences the shape of the curve illustrating the movement of the mandible and thus the
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Biomechanical Analysis of Bolus Processing
direction of the actual chewing force. The computed trajectories agree with observations by Bhatka [4] that the center of lower jaw motion moves along an elliptically shaped curve. Understanding masticatory development and physiological relationships is important for determining the principal anatomical direction during closing movements and the resultant direction of loading during mastication. Such findings can be used to plan treatment and to reconstruct defective dentition from a masticatory point of view, as well as to validate treatment procedures. The results can also affect the design and use of materials for dental implants, their position in the jaws, and the shape of the occlusal surface of bridgework and dentures. The obtained information suggests that masticatory movements vary by individual. Therefore the relationship between mandible movements and the direction and magnitude of the chewing force should be examined more precisely.
ACKNOWLEDGMENT The research has been supported by the Ministry of Education project No. MSM 6840770012 and by the Grant Agency of the Czech Republic under project No. 106/06/0849.
REFERENCES
1. Gerstner G E, Lafia C, Lin D (2005) Predicting masticatory jaw movements from chin movements using multivariate linear methods. Journal of Biomechanics, 38:1991–1999
2. Klepáček I, Mazánek J (2001) Klinická anatomie ve stomatologii. Grada Publishing, Avicenum, Praha
3. Voldřich M (1969) Stomatologická protetika. Státní zdravotnické nakladatelství, Praha
4. Bhatka R, Throckmorton G S et al. (2004) Bolus size and unilateral chewing cycle kinematics. Archives of Oral Biology, 49:559–566
5. Eskitascioglu G, Usumez A et al. (2004) The influence of occlusal loading location on stresses transferred to implant-supported prostheses and supporting bone: a three-dimensional finite element study. The Journal of Prosthetic Dentistry, 91:253–257
6. Zatsiorski V M (1998) Kinematics of Human Motion. Human Kinetics, Champaign, IL
7. Abdel-Aziz Y I, Karara H M (1971) Direct linear transformation from comparator co-ordinates into object space co-ordinates. Proc. ASP/UI symposium on close-range photogrammetry, Am. Soc. of Photogrammetry, Falls Church, VA, pp 1-18
Author: Tomas Goldmann
Institute: CTU in Prague, Faculty of Mechanical Engineering, Dept. of Mechanics, Biomechanics and Mechatronics
Street: Technicka 4
City: Prague 6
Country: Czech Republic
Email: [email protected]
Changes in Biomechanics Induced by Fatigue in Single-leg Jump and Landing
J. Stublar, P. Usenik, R. Kamnik, M. Munih
Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
Abstract— The purpose of this study was to determine the effects of muscle fatigue on the lower extremities. An experimental trial of single-leg landing and jumping was conducted in a group of healthy male subjects. The experimental protocol included series of single-leg landings from an elevated platform, single-leg jumps, and a fatiguing process that was kept at a reasonable level by asking each subject to perform a total of 60 two-legged squats. The results show that there was no significant change in overall shock attenuation before and after the experiment. However, the ankle work decreased, while the hip work increased. Even though the fatigue caused a significant decrease in peak moments at the hip, knee and ankle joints, the significant increase in hip range of motion resulted in constant overall shock attenuation. The results suggest that the lower extremity is able to adapt to fatigue by redistributing work to the larger hip muscles.
Keywords— single-leg jump, fatigue, kinetics, kinematics, muscle
I. INTRODUCTION
In dynamic maneuvers, when impacting with the environment, the mechanical shock experienced by the body must be attenuated by the musculoskeletal system. It has been shown that in sports such as basketball, netball, volleyball, football, gymnastics and aerobic dance, injuries due to landing are prevalent. It has also been demonstrated that the shock in landing is mainly attenuated by eccentric muscle activity. On this basis it was hypothesized that muscle fatigue would significantly decrease the shock attenuation ability of the lower extremities [1]. Various studies have investigated the biomechanics of landing. In [2] it was shown that with fatigue the hip and knee flexion increased and the ankle flexion decreased at touchdown during single-leg landing. Consequently, the hip joint work increased, while the ankle joint work decreased. A similar study showed that in fatiguing single-leg landing the ankle and knee flexion increased, while no significant flexion changes were observed at the hip [3]. In that experiment, the joint work changed in a similar way as in [2] at early fatigue progression, while this trend reversed during further fatiguing and the joint work values returned to their original values. The objective of our study was to verify the changes in the biomechanics of single-leg landing and jumping activity with induced fatigue. In the next section the methodology
of experimental testing of a group of students is presented. In the following section the results are outlined, and they are discussed in the final section.
II. METHODS
A. Subjects
Six healthy male subjects with no severe previous lower extremity injury participated in this study. The age, height and weight of the subjects, who were all left foot dominant, were 22.8 years (SD 1.0), 185.0 cm (SD 6.4) and 92.9 kg (SD 16.8), respectively.
B. Data collection
Ground reaction force (GRF) and kinematics data were collected during the initial preparation session. GRF data were sampled at 100 Hz from an AMTI force plate (AMTI, Inc., Newton, MA, U.S.A.). The kinematics of the body segment movement were obtained by the OPTOTRAK optical system (Northern Digital Inc., Waterloo, Canada), measuring the 3-D positions of active infrared LED markers at a 100 Hz sample rate. Seven markers were placed on the left side of the body according to Table 1.

Table 1 Positioning of LED markers on the human subject

Marker number   Position
1               Left foot (little toe)
2               Left foot (heel)
3               Left ankle
4               Left knee
5               Left hip
6               Mid trunk – left side
7               Left shoulder
C. Experimental protocol
During the practice session subjects were asked to perform four series of single-leg landings and jumps. During each series the subject went through a fatiguing process. Squatting was used to induce fatigue because it is considered a closed kinetic chain movement that typically involves multiple muscle groups, including the hip extensors, knee extensors, and ankle plantar flexors.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 288–291, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Fig. 1 Start position of experimental trial and landing

After a minor individual warm-up, subjects were asked to stand on a 30 cm high elevated platform (see Fig. 1, left) using only their left leg to support themselves and then land as vertically as possible on the force plate (see Fig. 1, right). Participants were instructed to use a toe-heel landing strategy. After performing two single-leg landings, subjects stepped on the force plate and performed two additional single-leg jumps. Subjects repeated this sequence of landings and jumps after a fatiguing process of several squats (10 after the first series, 20 after the second and 30 after the third). At the end, each subject's fatiguing process consisted of 60 squats in total. Upper extremity movement was constrained throughout the experiment by asking subjects to keep their hands on their back.
D. Data analysis
The signals collected from the active markers and the force plate were interpolated and filtered using a fourth-order zero-phase Butterworth filter with a 5 Hz cutoff frequency [4]. The coordinate systems of all the sensors were transformed to coincide with the reference coordinate system placed at the centre point of the force plate. To estimate net muscle moments during the procedure, the body was modeled as a 3-D system of four rigid body segments embodying the foot, shank, thigh and trunk (including upper extremities and head). Collected data were used to compute segmental anthropometric parameters (segment masses, mass centers and inertia tensors), based on the De Leva study [5]. From the segment position, orientation and other anthropometric data, the forces and torques acting on the joints were calculated
recursively using Newton-Euler inverse dynamic analysis [6]. This analysis is based on Newton's law, which states that the sum of the external forces acting on a rigid body, and similarly the sum of the external moments acting on a rigid body, is equivalent to the time change of the linear and angular momentum of the body, respectively. Thus, the human body can be modeled as a chain of constant-mass rigid body segments, whereby for each segment the external forces and moments consist of a net force and a net moment reaction at both the proximal and distal joints and a gravitational force. Additional forces are involved in the segments where interaction with the environment occurs. Ground reaction force vectors acting from the floor on the foot were measured and thus readily used in the analysis. Each joint power at time t was given by:

Pi(t) = Mi(t) · ωi(t)    (1)

where Mi is the resultant joint moment and ωi is the joint angular velocity. The work done by each joint throughout the motion was calculated by integrating the power with respect to time during the trial:

Wi = ∫ Pi(t) dt    (2)

In the equations above, the subscript i denotes the index of the particular joint. One-way ANOVA was used to test for significant differences between unfatigued and fatigued series. The significance level was set at P < 0.05. Peak GRF, range of motion, peak moment, peak power and net work at each joint were compared. All measured data were processed using the Matlab 7.0 software (MathWorks, Inc., Natick, MA, U.S.A.).
III. RESULTS
Because of the insignificant differences among the first three series, the second and the third series were excluded from further analysis. Therefore, the first series represents rested muscles and the last series the muscles after a fatiguing process of 60 squats. Take-off and landing phases of the jump were studied separately. For this reason, all figures were divided into three sections separated by two dashed lines. The left section corresponds to the take-off stage (see Table 2), the middle to the mid-air stage and the right to the landing stage (see Table 3). The GRF was normalized to the subject's body weight (mass times gravitational acceleration), and the moments and net work in all joints were normalized to the subject's mass in order to compare values between all subjects. All presented
Table 2 Group mean (SD) data for the first and the last cycle for the take-off kinematic and kinetic variables

                     First cycle    Last cycle    P-Value
Peak GRF             1.94 (0.18)    1.88 (0.21)   0.258
Range of motion [°]
  Hip                52 (6)         51 (6)        0.967
  Knee               55 (5)         56 (7)        0.593
  Ankle              59 (10)        54 (9)        0.129
Peak moment [Nm/kg]
  Hip                6.31 (1.42)    6.06 (1.05)   0.114
  Knee               4.02 (0.55)    3.99 (0.55)   0.531
  Ankle              3.74 (0.56)    3.44 (0.91)   0.007*
Peak power [W/kg]
  Hip                27.72 (4.46)   25.63 (3.86)  0.044
  Knee               19.85 (2.33)   19.8 (3.92)   0.810
  Ankle              18.63 (5.7)    16.32 (5.56)  0.090
Net work [J/kg]
  Hip                4.73 (0.83)    4.42 (0.81)   0.270
  Knee               3.2 (0.48)     3.18 (0.5)    0.090
  Ankle              2.74 (0.76)    2.36 (0.7)    0.060
  Overall            10.6 (1.46)    9.89 (1.35)   0.090
* Indicates significant difference between first and last cycle (P < 0.05).
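The joint power and work of equations (1) and (2) reduce, for sampled data, to an elementwise product and a trapezoidal integral; a minimal sketch with hypothetical traces (not the measured data):

```python
def joint_power(moment, omega):
    """P_i(t) = M_i(t) * omega_i(t), sample by sample (eq. 1)."""
    return [m * w for m, w in zip(moment, omega)]

def joint_work(power, dt):
    """W_i = integral of P_i over the trial, trapezoidal rule (eq. 2)."""
    return sum(0.5 * (p0 + p1) * dt
               for p0, p1 in zip(power, power[1:]))

# Hypothetical 100 Hz traces over 1 s: constant normalized moment and
# angular velocity, chosen so the work is easy to check by hand.
dt = 0.01                # 100 Hz sampling, as in the protocol
moment = [2.0] * 101     # [Nm/kg]
omega = [1.0] * 101      # [rad/s]
work = joint_work(joint_power(moment, omega), dt)  # 2.0 J/kg over 1 s
```

In the paper the moment traces come from the Newton-Euler inverse dynamics and the angular velocities from the marker kinematics; here both are invented constants.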
results in Table 2, Table 3, Fig. 2, Fig. 3, Fig. 4 and Fig. 5 were averaged over all participating subjects. The standard deviation (SD) across the group is stated in brackets. The difference in mean values throughout the data is smaller than the SD value, so the P-value was used to distinguish significant differences. During the take-off stage the only significant change was the decrease of the peak ankle moment during the last series. Peak moments in the hip and knee joints also decreased during the last cycle, which could correspond to diminished muscle ability due to the fatiguing process.

Table 3 Group mean (SD) data for the first and the last cycle for the landing kinematic and kinetic variables

                     First cycle     Last cycle      P-Value
Peak GRF             2.37 (0.22)     2.11 (0.22)     <0.001*
Range of motion [°]
  Hip                38 (20)         51 (17)         0.019*
  Knee               48 (4)          51 (8)          0.049*
  Ankle              54 (23)         44 (8)          0.023*
Peak moment [Nm/kg]
  Hip                6.81 (0.75)     5.99 (0.6)      <0.001*
  Knee               3.39 (1.14)     3.55 (0.66)     0.437
  Ankle              4.52 (1.2)      3.35 (0.77)     <0.001*
Peak power [W/kg]
  Hip                -22.13 (2.02)   -24.2 (1.64)    0.037*
  Knee               -15.34 (2.33)   -16.25 (1.66)   0.099
  Ankle              -18.41 (2.33)   -10.88 (1.68)   0.074
Net work [J/kg]
  Hip                3.41 (1.57)     4.39 (1.48)     0.023*
  Knee               2.08 (0.68)     2.36 (0.61)     0.113
  Ankle              2.37 (1.22)     1.48 (0.42)     <0.001*
  Overall            7.8 (1.74)      8.19 (1.44)     0.512
* Indicates significant difference between first and last cycle (P < 0.05).

For studying the landing phase, the measurements collected from the single-leg landings from the elevated platform were used. The landings from the elevated platform proved to give more significant results than the single-leg jump. Both landing kinematics and kinetics are significantly different during the last series. The most significant changes were the decrease in peak GRF and in peak moment at the hip and ankle joints. Also significant were the increase in hip and the decrease in ankle range of motion, supporting the hypothesis of a shift of shock attenuation from the ankle to the hip joint. In Fig. 3 the increased motion in all joints during the last series can be observed. This observation is rather surprising considering that, in order to flex a joint, muscles must provide additional work. The effects of fatigue are clearly shown in Fig. 4, where all joint moments are significantly smaller during the landing phase. Important and significant is the observation of the net knee work shown in Fig. 5, which indicates that despite the fatiguing process the knee muscles were able to produce the same amount of work, therefore contributing significantly to maintaining the same shock attenuation.

Fig. 2 Mean GRF. Time zero represents max hip flexion prior to take-off. The end of the time series represents max hip flexion after landing. Two vertical dashed lines represent take-off and landing

Fig. 3 Mean joint range of motion. Time zero represents max hip flexion prior to take-off. The end of the time series represents max hip flexion after landing. Two vertical dashed lines represent take-off and landing
Fig. 4 Mean joint moment. Time zero represents max hip flexion prior to take-off. The end of the time series represents max hip flexion after landing. Two vertical dashed lines represent take-off and landing

IV. DISCUSSION AND CONCLUSIONS
The purpose of this study was to confirm the effect of lower extremity fatigue on shock attenuation. The findings support the hypothesis that the overall shock attenuation stays unchanged despite the fatigue. The significant decrease in the ankle joint's ability to attenuate shock is substituted by an increase of the shock attenuated at the hip joint. The explanation could be found in the size of the hip muscles compared to the ankle muscles. Evidently, in the state of fatigue the human neuromuscular system relies more on the larger and stronger muscles of the hip joint. In this study we observed some differences from the previous studies: the peak joint moment values were in our case almost doubled. On the other hand, the findings of this study agree well with the results of studies of two-legged jumping and landing [7, 8], where the resulting values of the peak moments in all joints are approximately one half of our results. We demonstrated that with fatigue, despite the significant decrease of peak moments in all joints, the increase of the hip joint motion range helps to maintain the same shock attenuation level as was observed in the first, unfatigued series.
Fig. 5 Mean joint net work. Time zero represents max hip flexion prior to take-off. The end of the time series represents max hip flexion after landing. Two vertical dashed lines represent take-off and landing

REFERENCES
1. Mizrahi J, Susak Z (1982) Analysis of parameters affecting impact force attenuation during landing in human vertical free fall. N Engl J Med 11:141-147
2. Coventry E, O'Connor K M (2006) The effect of lower extremity fatigue on shock attenuation during single-leg landing. Clin. Biomech. 21:1090-1097
3. Madigan M L, Pidcoe P E (2003) Changes in landing biomechanics during a fatiguing landing activity. J. Electromyogr. Kinesiol. 13:491-498
4. Kamnik R, Bajd T, Kralj A (1999) Functional electrical stimulation and arm supported sit-to-stand transfer after paraplegia: a study of kinetic parameters. Artif. Organs 23:413-417
5. De Leva P (1996) Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters. J. Biomech. 29:1223-1230
6. Asada H, Slotine J-J E (1986) Robot Analysis and Control. John Wiley & Sons, Chichester
7. Korkusuz F (2004) Comparison of landing maneuvers between male and female college volleyball players. Clin. Biomech. 19:622-628
8. Hara M, Shibayama A (2006) The effect of arm swing on lower extremities in vertical jumping. J. Biomech. 39:2503-2511

Author: Jernej Stublar
Institute: Faculty of Electrical Engineering, University of Ljubljana
Street: Tržaška 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Combination of microfluidic and structure-continual studies in biorheology of blood with magnetic additions
E.Yu. Taran, V.A. Gryaznova and O.O. Melnyk
Kyiv Taras Shevchenko National University, Faculty of Mechanics and Mathematics, Kyiv, Ukraine
Abstract— Multiscale combination of microfluidic and structure-continual studies is used in order to construct the structure-phenomenological theory of stressed state in arbitrary gradient flows of dilute suspension in blood of rigid axially symmetric elongated particles possessing permanent magnetic moment. The obtained rheological equation is used to examine the revealed viscoelastic behaviour of the considered suspension, explore the possibility of control over its rheological properties with the use of an external magnetic field and investigate the dependence of the suspension effective viscosity on the hematocrit value of blood. Keywords— suspensions in blood, magnetic additions, polar fluid, microfluidic study, structure-continual study.
I. INTRODUCTION
Suspensions of magnetically sensitive particles in blood can arise on the addition to blood of particles of medical substances formed on the basis of magnetic carriers [1, 2]. In particular, such suspensions arise on the addition to blood of ferro- or ferrimagnetic microparticles coated with either polysaccharides or proteins, designed for diagnostics or for hyperthermic treatment of cancer. While solving medical problems through the use of such suspensions in blood, the possible consequences of this biomechanical intervention into the human body should be kept in mind. In order to carry out investigations in this area, a magnetobiorheological model of a dilute suspension in blood of rigid, axially symmetric, elongated microparticles possessing a permanent magnetic moment is constructed in this paper by combining microfluidic and structure-continual studies of such suspensions within the frame of the structure-phenomenological approach [3, 4].
II. STRUCTURE-PHENOMENOLOGICAL STUDY OF DILUTE SUSPENSION IN BLOOD
To use the structure-phenomenological method [3, 4] in the construction of constitutive equations defining the stress state in gradient flows of dilute suspensions in blood, we suppose that blood, as the suspension carrier fluid, and the particles suspended in it satisfy the following assumptions:
1. Rigid axisymmetric suspended particles have the same form and dimensions.
2. The characteristic length L of a suspended particle is much smaller than the characteristic dimension l of the suspension macroflow region, but much larger than the characteristic dimension λ of the microstructural elements of the carrier fluid:

λ << L << l .    (1)

3. The no-slip condition is fulfilled on the surface of the suspended particles.
4. The motion of the carrier fluid with respect to the suspended particles is slow.
5. The volume concentration of suspended particles is small; the suspension is assumed to be dilute.
6. Suspended particles possess zero buoyancy.

Assumption 2 is fulfilled if the size of the suspended particles is significantly larger than the characteristic size of the blood microstructural elements, namely, erythrocytes. Assumption 3 is frequently used in the study of blood flows [5]. The fulfillment of the double inequality (1) signifies that the modeling of the stress state in the considered suspension within the framework of the structure-phenomenological approach [3, 4] is two-scale. At the first scale level of modeling, that is, at the scale of the suspended particles, the inequality λ << L together with the fulfillment of assumptions 1, 3–6 allows us to consider the interaction of blood with the suspended particles as hydrodynamic interaction, and therefore to study the considered suspension using the Einsteinian microfluidic approach [6]. At the second scale level of modeling, that is, at the scale of the suspension macroflow region, the inequality L << l allows us to study the considered suspension phenomenologically using structure-continual modeling of the suspension as a whole.
A. Microfluidic study of suspensions in blood.
As mentioned above, we consider the interaction of blood with the suspended particles as hydrodynamic interaction.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 257–261, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Taking into account the complex structure of blood, we model blood by a microcontinuum with internal microrotations and use, as in [5, 7], the Cowin polar fluid [8]
τij = −p δij + 2μ dij + 2k Hij ,    (2)

Λij = α δij Ψrr + (β + γ) Ψij + (β − γ) Ψji    (3)
as a rheological model of blood. In (2), (3), τij and Λij are the viscous stress and couple stress tensors; p is the pressure; dij is the strain rate tensor; Ψmk = Ωm,k, where Ωm is the angular velocity characterizing the own rotation of blood erythrocytes; Hij = εmij (Ωm − ωm), where ωm = (1/2) εmlk vk,l is the regional angular velocity in blood and vk,l is the velocity gradient tensor; δij and εmij are the Kronecker and Levi-Civita symbols; μ, k, α, β, γ are rheological constants. As a hydrodynamic model of the suspended particles, a uniaxial dumbbell with axis of length L is utilized. According to [9], the friction coefficient ξ of the dumbbell beads in the Cowin polar fluid defined by equations (2), (3) does not depend on the flow around the beads:
ξ = ξN (1 + B) ,    (4)

B = N0² / (1 + N0 r/l0 − N0²) .    (5)

In equations (4), (5), ξN is the friction coefficient of a dumbbell bead of radius r in slow translational motion in a Newtonian fluid with the viscosity μ, ξN = 6πμr; the coupling number N0 and the material characteristic length l0 are defined by the expressions [10]

N0 = [k/(μ + k)]^(1/2)   (0 ≤ N0 ≤ 1),    l0 = [(β + γ)/μ]^(1/2) .
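Equations (4) and (5) can be evaluated numerically once μ, k, β + γ and the bead radius r are fixed; a short sketch with illustrative parameter values (not the hematocrit-fitted values the paper takes from [5, 11]):

```python
from math import pi

def bead_friction(mu, k, beta_gamma, r):
    """Friction coefficient xi = xi_N * (1 + B) of a dumbbell bead in the
    Cowin polar fluid, per equations (4) and (5).

    mu, k, beta_gamma -- rheological constants (beta_gamma = beta + gamma)
    r                 -- bead radius
    All parameter values below are assumptions for illustration only.
    """
    xi_newton = 6.0 * pi * mu * r              # Stokes drag, Newtonian fluid
    n0 = (k / (mu + k)) ** 0.5                 # coupling number N0, eq. after (5)
    if n0 == 0.0:                              # Newtonian limit: B = 0
        return xi_newton
    l0 = (beta_gamma / mu) ** 0.5              # material characteristic length
    b = n0**2 / (1.0 + n0 * r / l0 - n0**2)    # micropolar correction B, eq. (5)
    return xi_newton * (1.0 + b)

# Illustrative (hypothetical) values: mu ~ plasma-scale viscosity [Pa s],
# r ~ micron-scale bead; chosen so that r / l0 = 1.
xi = bead_friction(mu=4e-3, k=1e-3, beta_gamma=4e-15, r=1e-6)
```

Since B > 0 whenever k > 0, the polar-fluid drag always exceeds the Stokes value 6πμr, in line with the increase of W discussed at the end of the section.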
In order to model blood by the Cowin polar fluid (equations (2) and (3)), the parameters N0 and l0 were evaluated in [5] as functions of the hematocrit value Cb of blood with the use of the experimental data presented in [11]. It is assumed in this paper that the suspended particles, modeled by uniaxial dumbbells, are magnetically sensitive, namely, they possess a permanent magnetic moment pi = P ni, where P is the value of the permanent magnetic moment and ni is the unit vector characterizing the orientation of an axially symmetric suspended particle, as well as the orientation of its dumbbell model, in the laboratory coordinate system. It is also assumed that the suspension is diluted to such an extent that neither the interaction between the magnetic fields of the suspended particles nor the hydrodynamic interaction between them is taken into account.
Such a modeling allows us to perform the microfluidic study of the considered suspension in blood at the scale of the suspended microparticles using the procedures of the Einsteinian study [6]. In the presence of the external magnetic field Hi, the dynamics of suspended dumbbell particles in gradient flows of the considered suspension in blood is defined by the hydrodynamic forces acting on the beads of the dumbbell

fi(k) = ξ [(−1)^k (L/2)(vi,j nj − ṅi) − v0i]   (k = 1, 2)    (6)

with the angular momentum

Mi(h) = (1/2) L² ξ εijk nj (dks ns − Nk) ,    (7)

and also by the magnetic momentum

Mi(m) = P εilk nk Hl .    (8)

In equations (6)–(8), Ni is the vector characterizing the angular velocity of a suspended particle with respect to the carrier fluid, Ni = ṅi − ωik nk, where the dot over ni denotes the local time derivative and ωik is the velocity vortex tensor; v0i is the migration velocity of a dumbbell with respect to the carrier fluid. The use of equations (6)–(8) in the equations of motion of the dumbbell particles, which without regard for the moment of inertia take the form

fi(1) + fi(2) = 0 ,    Mi(h) + Mi(m) = 0 ,

allows us to obtain

v0i = 0 ,    (9)

ṅi = ωik nk + dik nk − dkm nk nm ni + (P/W)(Hi − nk Hk ni) ,    (10)
where W is the rotational friction coefficient of a uniaxial dumbbell in the carrier fluid of the suspension, W = (1/2) ξ L².
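Equation (10) is an ODE on the unit sphere for the orientation vector ni; it can be integrated with a simple explicit Euler scheme that renormalizes n after each step. A sketch under the assumption of a quiescent flow (d = ω = 0) and an illustrative value of P/W:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orientation_step(n, omega, d, p_over_w, h, dt):
    """One explicit Euler step of eq. (10):
    n_dot_i = omega_ik n_k + d_ik n_k - d_km n_k n_m n_i
              + (P/W)(H_i - n_k H_k n_i),
    followed by renormalization to keep |n| = 1.
    """
    dn = [dot(row, n) for row in d]        # d_ik n_k
    wn = [dot(row, n) for row in omega]    # omega_ik n_k
    ndn = dot(n, dn)                       # d_km n_k n_m
    nh = dot(n, h)                         # n_k H_k
    ndot = [wn[i] + dn[i] - ndn * n[i] + p_over_w * (h[i] - nh * n[i])
            for i in range(3)]
    new = [n[i] + dt * ndot[i] for i in range(3)]
    norm = dot(new, new) ** 0.5
    return [x / norm for x in new]

# Quiescent flow: the orientation relaxes toward the field direction H.
zero = [[0.0] * 3 for _ in range(3)]
n = [1.0, 0.0, 0.0]                        # start perpendicular to H
for _ in range(2000):                      # t = 20 in units of W/P
    n = orientation_step(n, zero, zero, 1.0, [0.0, 0.0, 1.0], 0.01)
```

With flow terms switched off, only the magnetic torque term survives and n aligns with H, which is the mechanism behind the stationary orientation discussed below equation (17).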
According to equation (9), a migration of the suspended particles modeled by uniaxial dumbbells with respect to the carrier fluid is absent. The rotational motion of the suspended particles is defined by the constitutive equation (10) for the unit vector ni characterizing the orientation of the suspended particles. In the frames of this microfluidic study, that is, at the first scale level of modelling, we also obtain the rate of mechanical energy dissipation per unit volume of the suspension

Φ = Φ0 + n0 Φp = Φ0 + n0 (ξL²/2) ⟨Ni Ni − 2 dij Ni nj + dij dik nj nk⟩ ,    (11)

where Φ0 is the rate of mechanical energy dissipation per unit volume of the carrier fluid of the suspension in the absence of suspended particles; n0 is the number of suspended particles per unit volume of the suspension; Φp is the rate of mechanical energy dissipation while flowing around the two beads of the dumbbell with the velocities Ui(k) (k = 1, 2) that are defined by the expressions

Φp = Σk=1,2 ξ Ui(k) Ui(k) ,    Ui(k) = (−1)^k (L/2)(vi,j nj − ṅi) ;

the angular brackets ⟨ ⟩ denote averaging with the use of the distribution function F of the angular positions of the suspended particles, which satisfies the equation

∂F/∂t + ∂(F ṅi)/∂ni = 0 .    (12)

B. Structure-continual study of suspensions in blood.
At the second scale level of modelling, we take into account that the dimensions of the suspended particles are significantly smaller than the characteristic dimension of the suspension macroflow region, L << l. According to the structure-phenomenological approach [3, 4] used in this paper, this property allows us to model the considered suspension by a structural continuum with two internal microparameters, namely ni and Ni, characterizing the orientation of the suspended particles and their relative angular velocity. The rheological equation for stress in the suspension is postulated phenomenologically by the expression

Tij = tij + n0 σij ,    (13)

where tij is the stress tensor in the carrier fluid of the suspension in the absence of the suspended particles, and n0 σij is the stress caused by the presence of n0 suspended particles per unit volume of the suspension. It follows from equation (11), that is, from the expression for the rate of mechanical energy dissipation per unit volume of the suspension obtained in the frames of the microfluidic theory, that the tensor σij in equation (13) must depend on the variables dkm, nl, Ns, that is,

σij = σij(dkm; nl; Ns) .    (14)

The final choice of arguments of the functional relation (14) is determined by the structure of the terms in equation (11) and by the symmetry of a uniaxial dumbbell with respect to its midpoint:

σij = σij(dkm; nl np; Ns nq) .    (15)

Furthermore, it follows from equation (11) that σij has to be a polynomial function of its arguments, linear in dkm and Ni:

σij = (a0 + a1 dkm nk nm) δij + (a2 + a3 dkm nk nm) ni nj + a4 dij + a5 dik nk nj + a6 djk nk ni + a7 ni Nj + a8 nj Ni ,    (16)

where ai (i = 0, …, 8) are constant phenomenological coefficients.

C. Combination of microfluidic and structure-continual studies.
In the third part of the presented structure-phenomenological study of the stress state in the considered suspension in blood, we find the phenomenological coefficients ai (i = 0, …, 8) in equations (13) and (16) by comparing the rate of mechanical energy dissipation per unit volume of the suspension defined by equation (11), determined in the microfluidic part of the theory, with the rate determined, in the same way as in [4],

Φ = Φ0 + n0 σij dij + n0 Ni εijk nj Mk(h)

within the framework of the structure-continual approach. As a consequence, we obtain the rheological equation of a dilute suspension with blood as the carrier fluid

Tij = tij + (1/2) n0 ξ L² (dik nk nj − nj Ni) .    (17)
It is demonstrated in [12] that such an equation as (10) has the stationary solution ( ni = 0, Ni = −ωik nk ) for steady-state shear flows of a suspension in the presence of a steady-state magnetic field H i . This means that the suspended dumbbell particles of the considered suspension in
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
E.Yu. Taran, V.A. Gryaznova and O.O. Melnyk
blood acquire the same stationary orientation defined by the constitutive equation
W (ν_{i,k} n_k − d_km n_k n_m n_i) + P (H_i − n_k H_k n_i) = 0   (18)

under conditions of steady-state gradient flows of the suspension in the presence of a steady-state magnetic field H_i. The rheological equation for stress (17) in such an anisotropic suspension takes the form

T_ij = t_ij + n_0 W ν_{i,k} n_k n_j.   (19)
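In the simple shear flow with a cross magnetic field considered below (vx = 0, vy = Kx, Hx = H), equation (18) reduces to two scalar conditions for the in-plane components of n, which can be checked numerically. A minimal sketch (Python; the closed-form solution coded here is a reconstruction for illustration, and all parameter values are dimensionless and invented):

```python
import math

def stationary_orientation(K, W, P, H):
    # Stationary in-plane orientation n = (nx, ny, 0) of a dumbbell in the
    # simple shear flow vy = K*x with a cross magnetic field along x.
    # From eq. (18): beta*ny**2 + ny - beta = 0 and nx**2 = ny/beta,
    # where beta = K*W/(P*H).  (Reconstruction for illustration.)
    beta = K * W / (P * H)
    ny = (math.sqrt(1.0 + 4.0 * beta * beta) - 1.0) / (2.0 * beta)
    nx = math.sqrt(ny / beta)
    return nx, ny

def residual_eq18(K, W, P, H, nx, ny):
    # x- and y-components of the left-hand side of eq. (18) for n = (nx, ny, 0).
    dnn = K * nx * ny                          # d_km n_k n_m for this flow
    rx = W * (-dnn * nx) + P * H * (1.0 - nx * nx)
    ry = W * (K * nx - dnn * ny) - P * H * nx * ny
    return rx, ry

nx, ny = stationary_orientation(K=1.0, W=1.2, P=1.0, H=1.0)
```

For these illustrative values the computed n is a unit vector and both components of the residual of (18) vanish to machine precision.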
W is the single rheological parameter in equations (18) and (19); it characterizes the interaction of the suspended particles with blood modeled by the Cowin polar fluid (equations (2) and (3)). According to equations (4) and (5), taking into account the couple stresses arising in blood leads to an increase in the rotational friction coefficient of the suspended dumbbell particles, W = W_N (1 + B), as compared with the value W_N = (1/2) ξ_N L² in a suspension with the Newtonian model of blood as the suspension carrier fluid.

III. MAGNETOBIORHEOLOGICAL BEHAVIOUR OF DILUTE SUSPENSION IN BLOOD. RESULTS AND CONCLUSIONS
The obtained constitutive equations (18) and (19) are used to investigate the effect of the shear rate K of the flow, the hematocrit value Cb of blood as the carrier fluid of the suspension, and the strength H of an external magnetic field on the rheological behavior of the considered suspension in a steady simple shear flow vx = 0, vy = Kx, vz = 0 (K = const) in the presence of a cross magnetic field Hx = H, Hy = Hz = 0 (H = const). The calculations show that the considered suspension in blood reveals non-Newtonian dependences of the effective suspension viscosity μa and of a non-zero difference of normal stresses σ1,

μ_a ≡ (T_xy + T_yx)/(2K) = μ + n_0 W_N [√(1 + 4α²(1 + B)²) − 1] / [4α²(1 + B)],   (20)

σ_1 ≡ T_yy − T_zz = n_0 K W_N [√(1 + 4α²(1 + B)²) − 1]^(3/2) / [2√2 α²(1 + B)],   (21)

on the parameter α = K W_N /(P H).
Equations (20) and (21) demonstrate that an increase of the shear rate K of the simple shear flow at fixed strength H of the cross magnetic field leads to a pseudoplastic decrease of the effective suspension viscosity and to an increase of the first difference of normal stresses (the Weissenberg effect). Such variations of the rheological characteristics of the suspension are a consequence of the variation of the orientation of the suspended particles under the action of hydrodynamic forces and the external magnetic field. The explicit dependence of W on the hematocrit value Cb of blood, evaluated in this paper on the basis of the results obtained in [5], allows us to investigate the biorheological properties of the considered suspension. It is demonstrated that the effective viscosity of the suspension and the first difference of normal stresses in it are augmented with increasing hematocrit value Cb of blood, holding K and H fixed. Our calculations also show that the considered suspension exhibits magnetorheological properties. An increase of the strength H of the magnetic field at fixed shear rate K of the simple shear flow leads to an increase of the effective viscosity of the considered suspension in blood and, in addition, to a decrease of the first difference of normal stresses. In this manner, the variation of H may be used as a control factor for the rheological behavior of a suspension arising on addition to blood of particles formed on the basis of magnetic carriers and possessing a constant magnetic moment.
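These qualitative trends can be checked directly from equations (20) and (21). A minimal numerical sketch (Python; all quantities are in dimensionless illustrative units, and the form of equation (21) used here is a reconstruction obtained by substituting the stationary orientation of (18) into (19), not a quoted result):

```python
import math

def mu_a(K, mu=1.0, n0=1.0, W_N=1.0, B=0.2, P=1.0, H=1.0):
    # Effective suspension viscosity, eq. (20); alpha = K*W_N/(P*H).
    a = K * W_N / (P * H)
    c = 1.0 + B
    s = math.sqrt(1.0 + 4.0 * a * a * c * c)
    return mu + n0 * W_N * (s - 1.0) / (4.0 * a * a * c)

def sigma_1(K, n0=1.0, W_N=1.0, B=0.2, P=1.0, H=1.0):
    # First difference of normal stresses, eq. (21) (reconstructed form).
    a = K * W_N / (P * H)
    c = 1.0 + B
    s = math.sqrt(1.0 + 4.0 * a * a * c * c)
    return n0 * K * W_N * (s - 1.0) ** 1.5 / (2.0 * math.sqrt(2.0) * a * a * c)
```

Increasing K at fixed H lowers mu_a (pseudoplasticity) and raises sigma_1 (Weissenberg effect); increasing H at fixed K does the opposite, in line with the statements above.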
REFERENCES
1. Häfeli U, Schütt W and Zborowski M, editors (1997) Scientific and clinical applications of magnetic carriers. Plenum Press, New York
2. Gillies G T, Ritter R C, Broaddus W C et al. (1994) Magnetic manipulation instrumentation for medical physics research. Rev Sci Instrum 65:533–562
3. Shmakov Yu I, Taran E Yu (1970) Structure-continual approach in rheology of polymer materials. Inzh-Fiz Zhurn 18, No. 6:1019–1024 (in Russian)
4. Taran E Yu (1977) Rheological equation of state of dilute suspensions of rigid dumbbells. Prikl Mekhanika 13, No. 4:110–115 (in Russian)
5. Chaturani P, Biswas D (1984) A comparative study of Poiseuille flow of a polar fluid under various boundary conditions with applications to blood flow. Rheol Acta 23, No. 4:435–445
6. Einstein A (1906) Eine neue Bestimmung der Moleküldimensionen. Ann Physik 19:289–306
7. Ariman T, Turk M A, Sylvester N D (1974) The steady and pulsatile flows of blood. J Appl Mech, Trans ASME 41, No. 1:1–7
Combination of microfluidic and structure-continual studies in biorheology of blood with magnetic additions

8. Cowin S C (1968) Polar fluids. Phys Fluids 11, No. 9:1919–1927
9. Erdogan M E (1972) Dynamics of polar fluids. Acta Mech 15, No. 3/4:233–253
10. Cowin S C (1974) The theory of polar fluids. Adv Appl Mech 14:279–347
11. Bugliarello G, Sevilla J (1970) Velocity distribution and other characteristics of steady and pulsatile blood flow in the fine glass tubes. Biorheol 7:85–107
12. Taran E Yu (1978) Influence of electric field on rheological behaviour of dilute suspension of dipole dumbbells in viscoelastic fluid of Oldroyd. Mekhanika Polimerov No. 3:519–524
Address of the corresponding author:

Author: Evgeny Taran
Institute: Kyiv Taras Shevchenko National University / Faculty of Mechanics and Mathematics
Street: Volodymyrska, 64
City: Kyiv
Country: Ukraine
Email:
[email protected]
Elastic Moduli and Poisson's Ratios of Microscopic Human Femoral Trabeculae

J. Hong1, H. Cha1, Y. Park1, S. Lee2, G. Khang3, and Y. Kim4

1 Department of Control and Instrumentation Engineering, Korea University, Republic of Korea
2 Department of Orthopedic Surgery, Korea University, Republic of Korea
3 Department of Biomedical Engineering, Kyung Hee University, Republic of Korea
4 Department of Mechanical Engineering, Dan Kook University, Republic of Korea
Abstract— The multi-directional mechanical properties of human cancellous bone tissue had never been measured using a compressive test with microscopic cubic specimens. In this study, a small-scale compressive testing machine with nanometer resolution and a measurement system for Poisson's ratio with sub-nanometer resolution were developed to measure accurate microscopic mechanical properties of human cancellous bone tissue (CBT). The measured mean longitudinal (E1), postero-anterior (E2), and latero-medial (E3) elastic moduli were 3.47 GPa (S.D. ±0.41), 2.57 GPa (S.D. ±0.28), and 2.54 GPa (S.D. ±0.22), respectively. ANOVA showed that the longitudinal elastic modulus (E1) was significantly (p < 0.01) greater than the postero-anterior (E2) and latero-medial (E3) elastic moduli. For Poisson's ratios, ν12 was significantly (p < 0.01) higher than ν23 and ν31.

Keywords— Cancellous Bone Tissue, Trabeculae, Microscopic Multi-Directional Property Measurement, Human Femoral Head.
I. INTRODUCTION

Age-related bone fractures caused by the reduction of the mechanical performance of CBT (cancellous bone tissue) have tremendous sociological and economic implications for the aging population. For example, hundreds of thousands of age-related hip, spine, and wrist fractures occur annually in the USA alone [1]. The estimated cost of these age-related fractures is more than $7 billion each year and is expected to increase exponentially as the elderly population grows. Knowledge of the biomechanics of cancellous bone would enhance understanding of these important orthopedic pathologies and generate strategies for prevention and treatment. Recently, it has been accepted that these orthopedic pathologies are partly caused by altered CBT quality in the elderly. Thus, characterization of CBT mechanical properties is important in investigating the unknown mechanobiological mechanisms of CBT pathologies. Also, CBT is continuously remodeled in response to local alterations in biomechanical conditions. The remodeling of cancellous bone tissue could be significant for predicting age-related osseopathogenesis. As a result, the measurement of CBT mechanical properties is required to understand the mechanobiological response of bone cells to various osseopathological processes.

The macroscopic structure of cancellous bone is composed of an interconnected series of rods and plates. The basic microstructural units of cancellous bone are called trabeculae. The thickness of trabeculae is about 100 to 640 μm in humans. The trabeculae are not a homogeneous structure: they consist of cancellous bone tissue aligned with their orientation and of complex lacuno-canalicular networks in which osteocytes live and communicate. Unlike cortical bone tissue, which is a relatively uniform bone tissue type and mechanically well characterized, the mechanical properties of wet CBT are not well understood despite numerous efforts since the 1970s. Since the size of trabeculae is microscopic, the direct measurement of the mechanical properties of human CBT is a difficult task. The buckling method was used for unmachined trabeculae obtained from human distal femur (8.69 GPa) and proximal tibia (11.38 GPa) [2]. Since the method assumed a constant modulus with slenderness ratio to deal with inelastic buckling and used inaccurate dimensions of the unmachined trabeculae, the associated errors could be significant. Three-point bending was used for machined trabeculae obtained from human tibia (4.59 GPa) and iliac crests (3.81 GPa) [3, 4]. Four-point bending was used for machined trabeculae obtained from human tibia (5.72 GPa) and vertebra (2.11 GPa) [5, 6]. Since cancellous bone tissue is heterogeneous, localized nonlinear stress distributions cause errors stemming from the use of a simple elastic bending formula. In addition, the concentrated loads at the loading head and support boundaries on the machined biological tissue surface result in an amplified deflection error.
For example, the elastic modulus of cancellous bone tissue can vary from 4.59 GPa to 11.38 GPa for human trabeculae at the same anatomical site depending on the selection of the test method [2, 3]. Nanoindentation has also been applied to the measurement of the elastic modulus of cancellous bone tissue. The elastic moduli measured by this method for human vertebra [7], femur [8], and distal femoral condyle [9] were 13.4, 11.4, and 18.1 GPa, respectively. The
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 274–277, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
nanoindentation method uses an elastic formula [10] with assumed isotropy and an assumed Poisson's ratio of cancellous tissue. However, CBT may not be isotropic. Also, Poisson's ratio at the tissue level had never been measured. In addition, ISO 14577 specifies that nanoindentation shall be performed on a specimen surface that is smooth and free from lubricants and contaminants. Intact CBT is wet with viscous marrow and bone fluid; thus, it may not be free from viscous fluid. Furthermore, bone tissue is known to be a time-dependent material showing creep and relaxation. Therefore, CBT could be a time-dependent material, which affects the unloading-curve behavior during an indentation test. These theoretical and experimental conditions could cause significant errors in the estimation of the elastic modulus. As another experimental method, the uniaxial tensile test of unmachined trabeculae from human tibia was used; since the specimens used were dried, the experimental results from this method were not valid. As reviewed, many studies have been performed to investigate the microscopic mechanical properties of CBT. However, the use of coarse microtesting machines and irregular specimens could cause significant measurement errors for the buckling and the three- or four-point bending methods. Also, the results from the nanoindentation method could contain intrinsic errors due to theoretical assumptions and inappropriate experimental protocols. For the measurement of the macroscopic mechanical properties of CBT, the well-defined uniaxial compression test using cubic and cylindrical bone specimens has been applied. Due to the various technical difficulties of microscopic property measurement, a uniaxial compression test using microscopic cubic CBT samples had never been used.
If a cubic CBT specimen with a dimension of 300 μm is loaded up to an axial strain of 0.5 %, known to be the limit of the elastic range of bone tissue, the resolution of the loading-ram displacement should be less than 40 nm for accurate measurement. Under the same conditions, a resolution of less than 1 nm is required for the accurate direct measurement of Poisson's ratio. As a result, the multi-directional mechanical properties of human CBT had never been measured using a compressive test with microscopic cubic specimens. In this study, a small-scale compressive testing machine was developed to measure accurate microscopic mechanical properties of human CBT. In particular, measurements of the microscopic Poisson's ratio of bone were performed using a specially designed system with sub-nanometer resolution. The purpose of this study was to develop a method by which microscopic multi-directional elastic moduli and Poisson's ratios can be measured by performing three consecutive uniaxial compression tests on the same microscopic cubic specimen of CBT obtained from a human femoral head.
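The quoted resolution figures are consistent with simple arithmetic on the specimen size and strain limit; a sketch (Python; the Poisson's ratio used for the lateral estimate is an assumed typical value, not a measured one):

```python
side = 300e-6          # cube edge: 300 um
strain_limit = 0.005   # 0.5 % axial strain, the elastic limit of bone tissue
poisson = 0.2          # assumed typical Poisson's ratio, for illustration only

axial_disp = side * strain_limit      # 1.5 um of ram travel at full load
lateral_disp = poisson * axial_disp   # ~0.3 um of lateral expansion

# number of resolvable steps across each displacement
axial_steps = axial_disp / 40e-9      # 40-nm ram resolution: tens of steps
lateral_steps = lateral_disp / 1e-9   # 1-nm probe resolution: hundreds of steps
```

A 40-nm ram resolution thus resolves the full axial displacement to within a few percent, and a 1-nm probe resolution resolves the much smaller lateral displacement to well under one percent.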
II. MATERIALS AND METHODS
Figure 1 shows a schematic diagram of the small-scale compressive testing machine. The testing machine had a PZT actuator (PI GmbH, Germany) for the axial loading. The axial-loading PZT actuator could load up to 2000 N with a full displacement range of 120 μm. When the loading displacement was measured with a 12-bit A/D converter, the resolution of the testing machine was 30 nm. To measure the microscopic Poisson's ratio of the specimens, two PZT actuators (PI GmbH, Germany) were utilized. A microelectrode probe with a tip diameter of 10 μm was attached to the end of each PZT actuator. The system had a resolution of 0.3 nm with a full displacement of 15 μm when a 20-bit A/D converter was used. A total of twenty-one cubic CBT specimens from seven fresh human femoral heads (14, 39, 59, 61, 69, 75, and 77 years) were fabricated using a micro-milling machine with a resolution of 10 μm (EGX-300, Roland, Japan). Based on the trabecular trajectory of the femoral head, the specimens were obtained in the superior-to-fovea direction. The wet cubic specimens (Figure 2 (a)) with a dimension of 300 μm were loaded up to an axial strain of 0.5 %, which is within the elastic range. For the measurement of the microscopic Poisson's ratio during the axial loading (Figure 2 (b), (c)), the following experimental procedure was used. To assure subtle contact between a surface of
Fig. 1
the specimen and the tip of the microelectrodes, 5 V was applied to one of the microelectrodes. An effective contact was then defined when a current of more than 0.29 mA through the wet specimen was measured. The first contact was made at the start of the experiment; the contact was then removed during the loading. At the maximum axial loading, the second contact was performed. By measuring the difference of the contact distances, the lateral deformation could be determined for obtaining the microscopic Poisson's ratio of bone. Due to the technical difficulties of handling the small specimens, the orthotropic symmetries reported for macroscopic CBT [11] were assumed for the measurement of Poisson's ratios. As indicated in Figure 3, an orthogonal coordinate system was assigned to the specimens. The direction of the trabecular trajectory was set to be x1 (longitudinal axis), and the others were set as x2 (postero-anterior axis) and x3 (latero-medial axis). Poisson's ratios based on the assumption of orthotropic symmetries were defined as in equations (1), where Δl_i are the deformations in the direction of x_i (i = 1, 2, 3).
ν12 = Δl2/Δl1,  ν23 = Δl3/Δl2,  ν31 = Δl1/Δl3.   (1)
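With the lateral deformations measured by the probe contacts, the ratios in equations (1) are direct quotients. A minimal sketch (Python; the numerical deformation values are invented for illustration):

```python
def poisson_ratios(dl1, dl2, dl3):
    # Eq. (1): dl_i is the deformation measured along axis x_i during the
    # three consecutive compression tests (orthotropic symmetry assumed).
    nu12 = dl2 / dl1
    nu23 = dl3 / dl2
    nu31 = dl1 / dl3
    return nu12, nu23, nu31
```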
Fig. 3

III. RESULTS

Figure 4 shows the mean stress-strain relationships for the three assumed orthotropic directions of femoral head cancellous bone tissue. Table 1 presents the mean microscopic elastic moduli and Poisson's ratios (average of three specimens per femoral head) of cancellous tissue. The measured mean longitudinal (E1), postero-anterior (E2), and latero-medial (E3) elastic moduli were 3.47 GPa (S.D. ±0.41), 2.57 GPa (S.D. ±0.28), and 2.54 GPa (S.D. ±0.22), respectively. The mean Poisson's ratios were 0.199, 0.152, and 0.154, respectively. ANOVA showed that the longitudinal elastic modulus (E1) was significantly (p < 0.01) greater than the postero-anterior (E2) and latero-medial (E3) elastic moduli. No significant difference was found between the postero-anterior (E2) and latero-medial (E3) elastic moduli. For Poisson's ratios, ν12 was significantly (p < 0.01) higher than ν23 and ν31. There was no difference between ν23 and ν31.
Fig. 2
Fig. 4
IV. DISCUSSION

Table 1

A method has been described to measure the microscopic mechanical properties of human cancellous bone tissue obtained from trabeculae in the femoral head. Cancellous bone tissue from the femoral head exhibited anisotropy of elastic modulus and Poisson's ratio, although only twenty-one specimens from seven cadavers ranging from 14 to 77 years were used in this preliminary study. Since the obtained relationships showed that E1 > E2 ≈ E3 and ν12 > ν23 ≈ ν31, the mechanical properties measured in the three directions did not exhibit as much anisotropy as might have been expected. More research using a larger number of specimens is required to fully understand the anisotropic characteristics of human cancellous bone tissue. Since there were no data obtained using fresh trabeculae of the human femoral head, the measured elastic moduli could not be compared directly at the microscopic level. Since the reasonably measured elastic moduli using fresh trabeculae of human tibia [2, 4], iliac crest [3], and
vertebra [5] range from 2.11 to 5.72 GPa, the obtained elastic moduli could be reasonable values. The measurement of Poisson's ratio, as well as of its multi-directional properties in cancellous tissue, was performed for the first time in this study. To check the validity of the measurement system, verification tests using a rectangular parallelepiped 2024-T3 aluminum specimen (0.5 × 0.5 × 1 mm³) with a Poisson's ratio of 0.33 were performed. The measured Poisson's ratio at 0.3 % applied strain was 0.332 ± 0.00018 over five repetitions. Thus, the measured Poisson's ratios could be considered accurate.
ACKNOWLEDGMENT

The study was supported by project 10022716-2006-21 of the Korean Ministry of Commerce, Industry and Energy.
REFERENCES

1. Keaveny TM and Hayes WC (1993) A 20-year Perspective on the Mechanical Properties of Trabecular Bone. J Biomech Eng 115:534-542
2. Townsend PR, Rose RM and Radin EL (1975) Buckling Studies of Single Human Trabeculae. J Biomech 9:99-201
3. Choi K, Kuhn JL, Ciarelli MJ and Goldstein SA (1990) The Elastic Moduli of Human Subchondral, Trabecular, and Cortical Bone Tissue and the Size-Dependency of Cortical Bone Modulus. J Biomech 23:1103-1113
4. Kuhn JL, Goldstein SA, Choi K, London M, Feldkamp LA and Matthews LS (1989) A Comparison of Trabecular and Cortical Tissue Moduli from Human Iliac Crests. J Orthop Res 7:876-884
5. Choi K and Goldstein SA (1992) A Comparison of the Fatigue Behavior of Human Trabecular and Cortical Bone Tissue. J Biomech 25:1371-1381
6. Riemer BA, Eadie JS, Wenzel TE, Weissman DE, Guo XE and Goldstein SA (1995) Quantification of Vertebral Trabecular Bone Tissue Microstructure. 41st Annual Meeting of ORS 2:528
7. Rho JY, Tsui TY and Pharr GM (1997) Elastic Properties of Human Cortical and Trabecular Lamellar Bone Measured by Nanoindentation. Biomaterials 18:1325-1330
8. Zysset PK, Guo XE, Hoffler CH, Moore KE and Goldstein SA (1998) Mechanical Properties of Human Trabecular Bone Lamellae Quantified by Nanoindentation. Tech Health Care 6:429-432
9. Turner CH, Takano Y, Tsui TY and Pharr GM (1999) The Elastic Properties of Trabecular and Cortical Bone Tissues are Similar: Results from Two Microscopic Measurement Techniques. J Biomech 32:437-441
10. Oliver WC and Pharr GM (1992) An Improved Technique for Determining Hardness and Elastic-Modulus Using Load and Displacement Sensing Indentation Experiments. J Mater Res 7:1564-1583
11. van Rietbergen B, Odgaard A, Kabel J and Huiskes R (1996) Direct Mechanics Assessment of Elastic Symmetries and Properties of Trabecular Bone Architecture.
J Biomech 29:1653-1657

Author: Junghwa Hong
Institute: Korea University
Street: #341, Division of Life Science, Korea University, 5-1 Anam-Dong, Sungbuk-Gu
City: Seoul
Country: Republic of Korea
Email:
[email protected]
Elasticity Distribution Imaging of Sliced Liver Cirrhosis and Hepatitis using a novel Tactile Mapping System Y. Murayama1, T. Yajima1, H. Sakuma2, Y. Hatakeyama, C.E. Constantinou3, S. Takenoshita2, S. Omata1 1
Nihon University, NEWCAT Institute, Fukushima, Japan, 2 Fukushima Medical School, Fukushima, Japan 3 Stanford University Medical School, Stanford, USA
Abstract— In the recent past, it has been indicated that liver consistency can be useful to estimate the functional reserve for hepatectomy; however, it is still unknown how the liver hardens as it develops chronic hepatitis and cirrhosis. In this study, pathological model rats of liver cirrhosis and hepatitis were developed, and the elasticity distribution over their liver sections was measured using a Tactile Mapping system that was specifically designed to measure the two-dimensional elasticity distribution of very thin sliced tissues. The elasticity distribution images were then compared with conventional azan-staining images to identify the tissues. The Young's moduli of the soft normal and hard fibrotic components existing in the sections were statistically compared; there was no significant difference in the elasticity of the two components, but the content ratio of the harder fibrotic matrix was higher in liver cirrhosis.

Keywords— Liver cirrhosis, Hepatitis, Elasticity, Tactile Mapping, Micro Tactile Sensor.
I. INTRODUCTION

Liver cirrhosis is always complicated by hardening, and the empirical evaluation of liver consistency is currently used to plan the surgical strategy. In the recent past, a correlation between liver hardness and the degree of liver fibrosis has been indicated [1], and it has been considered that liver consistency can be useful to estimate the functional reserve for hepatectomy [2-3]. In addition, preoperative quantification of the extra-cellular matrix components laminin, type III collagen and type IV collagen is useful as an index of the hepatic regeneration activity after partial hepatectomy for liver cirrhosis [4]. What needs to be investigated at this juncture is the quantitative measurement of the elasticity of liver components from the cellular level, for further understanding of the biomechanics of liver hardening. We developed a tactile sensor using the phase-shift method [5] and measured the elasticity of living tissues from the cellular level [6, 7] to organs [8], and the importance of tissue elasticity for studying their physiological activity has been indicated. Using this novel tactile sensing technology, we developed a Tactile Mapping system to measure a two-dimensional elasticity distribution map over a very thin
sliced tissue [9]. A micro tactile sensor (MTS), the sensor probe of the Tactile Mapping system, was designed to detect the moment of contact with the tissue surface and to measure the elasticity with very high sensitivity, so that it can measure the elasticity of very thin sliced tissues [10]. In this study, the elasticity distribution images of liver cirrhosis and hepatitis were measured, and the results were compared as regards the amounts of the hard and soft elasticity components.

II. MATERIALS AND METHODS

Animals and slice preparation: Five-week-old Wistar rats were housed in individual cages with a 12-hour light-dark cycle at 22 °C. All animals were maintained on water and rat chow ad libitum until the day before surgery. Liver fibrosis (chronic hepatitis and liver cirrhosis) was induced by intra-peritoneal administration of thioacetamide (TAA) (200 mg/kg, 3 times/wk) for 8 and 12 weeks, respectively. All laparotomies were performed under pentobarbital (30 mg/kg) anesthesia. 100-μm-thick slices were cut using a super micro-slicer (ZERO-1, Dosaka-EM Co. Ltd., Kyoto, Japan) without fixation or freezing.

Elasticity distribution measurement: The elasticity distribution of the sliced fibrotic liver was measured using the Tactile Mapping system (Fig. 1). A 1000 μm × 1000 μm measurement area was scanned. Measurement points were set 40 μm apart in both the x and y directions, for a total of 676 points (26 × 26).

Histopathologic study: After measuring the elasticity, the sections were paraffin-wax processed and azan-stained so that the fibrous tissues were differentially stained blue. The hepatic fibrosis index was then calculated to quantify the degree of liver fibrosis.
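The stated scan geometry fixes the measurement grid completely; a minimal sketch (Python):

```python
# 1000 um x 1000 um area, points 40 um apart in x and y -> 26 x 26 = 676 points
spacing = 40.0  # um
grid = [(i * spacing, j * spacing) for i in range(26) for j in range(26)]
```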
Fig. 1 Tactile Mapping system
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 286–287, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 Azan-stain (left) and elasticity distribution (right) image of the chronic hepatitis
Fig. 3 Azan-stain (left) and elasticity distribution (right) image of the liver cirrhosis

III. RESULTS

Azan-stain and elasticity distribution images of chronic hepatitis and liver cirrhosis are shown in Fig. 2 and Fig. 3, respectively. In both elasticity distribution images the elastic modulus is displayed in gray scale, i.e., the lighter the shade, the harder the area. There were mainly two kinds of components, i.e., normal and fibrotic tissues, in the azan-stain images of both sections. The fibrosis index (content) of the chronic hepatitis and of the liver cirrhosis was found to be 26.4 % and 31.7 %, respectively. Mainly two kinds of components, i.e., a soft and a hard part, can also be seen in the elasticity distribution images of both sections. Comparison of the azan-stain and elasticity distribution images shows that the hard part corresponds well to the fibrotic matrix. From the elasticity distribution images in Fig. 2 and Fig. 3, the Young's moduli of the soft part and the hard part were statistically examined as shown in Fig. 4. Young's modulus at 100 points per hard/soft part was extracted from both figures for the calculation. The Young's modulus of the soft part was 20.6 ± 3.68 kPa and 16.2 ± 2.28 kPa for chronic hepatitis and liver cirrhosis, respectively. The Young's modulus of the hard part was 42.5 ± 6.75 kPa and 38.6 ± 8.35 kPa for chronic hepatitis and liver cirrhosis, respectively. There were no significant differences in elasticity for either the soft part or the hard part.

Fig. 4

IV. CONCLUSIONS

Pathological model rats of liver cirrhosis and hepatitis were developed, and the elasticity distribution over their liver sections was measured. Two components existed within both sections, i.e., normal tissue (soft part) and fibrotic matrix (hard part). Interestingly, the calculated Young's moduli of the soft part and the hard part were found to be the same; however, the content of the hard part was higher for the liver cirrhosis. Therefore, it was concluded that the hardening of the liver is caused not by a hardening of the cells or extra-cellular matrix but by the relative content of soft normal tissue and hard fibrotic matrix.
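The conclusion rests on comparing the content of the hard component rather than its stiffness; that comparison can be sketched as a simple threshold on the elasticity map (Python; the threshold and the map values are invented for illustration, the threshold lying between the reported soft (~20 kPa) and hard (~40 kPa) levels):

```python
def hard_content_ratio(moduli_kpa, threshold_kpa=30.0):
    # Fraction of map points classified as hard fibrotic matrix.
    hard = [e for e in moduli_kpa if e > threshold_kpa]
    return len(hard) / len(moduli_kpa)

# illustrative 10-point map mixing soft (~20 kPa) and hard (~40 kPa) values
ratio = hard_content_ratio([18, 21, 22, 19, 43, 41, 20, 38, 17, 44])
```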
REFERENCES

1. Hatakeyama Y. et al. (2002) Fukushima J. Med. Sci. 48(2):93-101
2. Ono T. et al. (1997) Jpn. J. Gastroenterol. Surg. 30(7):1720-1724
3. Kusaka K. et al. (2000) J. Am. Coll. Surg. 191(1):47-53
4. Sato N. et al. (1999) Jpn. J. Gastroenterol. Surg. 32(8):2085-2094
5. Omata S. and Terunuma Y. (1992) Sens. Actuators A 35(1):9-15
6. Miyazaki H. et al. (2000) Journal of Biomechanics 33:97-104
7. Murayama Y. et al. (2004) Journal of Biomechanics 37:67-72
8. Eltaib M.E.H. and Hewit J.R. (2003) Mechatronics 13:1163-1177
9. Murayama Y. et al. (2005) Sens. Actuators A 120(2):543-549
10. Murayama T. et al. (2004) Sens. Actuators A 109(3):202-207

Author: Yoshinobu Murayama
Institute: Nihon University, NEWCAT Institute
Street: Tokusada, Tamura-machi
City: Koriyama, Fukushima
Country: Japan
Email:
[email protected]
Hip stress distribution may be a risk factor for avascular necrosis of femoral head D. Dolinar1, M. Ivanovski2, I. List2, M. Daniel3, B. Mavcic1, M. Tomsic4, A. Iglic5 and V. Kralj-Iglic2 1
Department of Orthopaedic Surgery, University Medical Center Ljubljana, Ljubljana, Slovenia 2 Institute of Biophysics, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia 3 Technical University Kosice, Kosice, Slovakia 4 Department of Rheumatology, University Medical Center Ljubljana, Ljubljana, Slovenia 5 Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
Abstract— Avascular necrosis of the femoral head (AN) is a hip disorder with various risk factors; however, the underlying mechanisms are not yet understood. In order to elucidate the effect of mechanical factors on AN, we compared a group of hips at risk for AN and a group of healthy hips with respect to the biomechanical parameters: functional angle of the weight-bearing area (ϑF), position of the stress pole (Θ), index of the gradient of the contact stress at the lateral border of the load-bearing area (Gp) and peak contact hip stress (pmax). The test group, representing hips at risk for AN, consisted of 32 male hips contralateral to necrotic hips, while the control group consisted of 46 healthy male hips. The biomechanical parameters were computed with the HIPSTRESS method (based on measurements of geometrical parameters from standard anterior-posterior pelvic radiographs). The average values of the parameters pertaining to the two groups were compared by the unpaired two-sided Student t-test. The functional angle of the weight-bearing area was on average larger (more favorable) in the control group (112.9º ± 13.5º) than in the test group (105.0º ± 12.4º), the difference (7 %) being statistically significant (p < 0.01). The position of the stress pole was more lateral (less favorable) in the test group (15.44º ± 7.23º) than in the control group (11.80º ± 7.58º), the difference (27 %) being statistically significant (p = 0.037). The index of the hip stress gradient was higher (less favorable) in the test group (−17.23 ± 17.16 × 10³ m⁻³) than in the control group (−26.05 ± 16.85 × 10³ m⁻³), the difference (40 %) being statistically significant (p = 0.028), while we found no statistically significant difference in the peak contact stress between the two groups. Our results indicate that a less favorable, steep stress distribution over a smaller load-bearing area is a risk factor for AN.

Keywords— Avascular necrosis, Biomechanics, Hip stress, Femoral head.
I. INTRODUCTION

Avascular necrosis of the femoral head (AN) is characterized by deterioration of the bone tissue. Together with the osteoarthritis secondary to it, it represents a serious orthopaedic problem affecting mostly young and middle-aged populations [1]. In spite of numerous studies, the mechanisms leading to the ischemic and necrotic processes are not yet understood.
In about one third of patients no risk factor can be determined [2], while disorders and risk factors connected to the onset of AN include alcoholism [1], corticosteroid therapy in patients with connective tissue diseases and transplants [1], sickle cell anemia [3], HIV [2, 4], antiphospholipid syndrome [5], pregnancy [2, 6] and some others [7, 8, 9]. It has been suggested that recurrent microfractures in the highly loaded region of the femoral head may lead to microvascular trauma and thereby induce the development of AN [10]. The question can therefore be posed whether biomechanical parameters, such as stresses in the hip, are important in the onset of AN. The HIPSTRESS method enables determination of the resultant hip force in one-legged stance [11] and of the corresponding distribution of contact hip stress [12] in a large number of patients, using data obtained from standard anteroposterior radiographs. The method has been validated in clinical studies [13-18]. It was shown that a small functional angle of the load-bearing area ϑF, an unfavorable distribution of hip stress described by a lateral position of the stress pole Θ, a high index of the gradient of contact hip stress (a large slope of the distribution at the lateral border of the load-bearing area) Gp and an elevated peak contact stress in the hip joint pmax may be related to an increased risk for the development of hip osteoarthritis. While the peak contact stress pmax is a well-known and frequently used quantity, the index of the stress gradient Gp [13] and the functional angle of the load-bearing area ϑF [14] have only recently been given attention; they are therefore briefly described below. The index of the gradient of contact hip stress Gp characterizes the slope of the stress distribution at the lateral border of the load-bearing area. The functional angle of the load-bearing area ϑF describes the amount of the articular sphere that is occupied by the load-bearing area.
The lower (more negative) the index of gradient and the larger the functional angle of the load-bearing area, the more favorable the stress distribution. In a population study it was shown [12] that the change of sign of Gp correlates well with the clinical evaluation of hip dysplasia, i.e. positive values of Gp correspond to dysplastic hips. The functional angle of the load-bearing area ϑF,
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 282–285, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Hip stress distribution may be a risk factor for avascular necrosis of femoral head
which does not critically depend on the size of the pelvis and femur, proved most relevant in samples with large scattering in the size of the geometrical parameters, for example in a group of children [14], or where the radiographic magnification may vary considerably. In such cases the effect of pmax and Gp (which strongly depend on radiographic magnification) cannot be discerned owing to large scattering and poor statistical significance. The aim of this work is to investigate the role of the above biomechanical parameters in the onset of AN.

II. MATERIALS AND METHODS

From the archive of the Department of Orthopaedic Surgery, University Medical Center Ljubljana, Slovenia, we selected standard anterior-posterior radiographs of the pelves and proximal femora of 32 adult male patients (32 hips) who were treated for AN between 1972 and 1991. For patients who were operated on for AN, only radiographs taken before the operation were used. It was assumed that prior to necrosis both hips had had the same geometry. As the necrotic process had already caused changes in the geometry of some hips, the hips contralateral to the necrotic ones were considered in the study. These hips are referred to as hips at higher risk for AN. For comparison, we selected radiographs of 23 male subjects (46 healthy hips) who had had a radiograph of the pelvic region taken at the same institution for reasons other than hip joint disease (e.g. lumbalgia). Only male hips were considered in our study. As the values of peak hip stress depend importantly on gender [19], it is important to have gender-matched groups in the statistical analysis. In our archives we did not find an adequate number of radiographs of female hips with AN fulfilling the inclusion criteria to perform a statistical analysis.
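The group comparison described in this study uses an unpaired two-sided Student t-test on the reported summary statistics. A minimal sketch of how such a comparison can be reproduced from the published means and standard deviations (the numbers are the ϑF values reported in the abstract; the pooled-variance formula is the standard Student t-test):

```python
import math

# Summary statistics for the functional angle thetaF [deg], from the abstract:
# test group (hips at risk for AN) vs. control group.
n1, mean1, sd1 = 32, 105.0, 12.4   # test group
n2, mean2, sd2 = 46, 112.9, 13.5   # control group

# Pooled-variance two-sample t statistic (unpaired Student t-test).
df = n1 + n2 - 2
sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df          # pooled variance
t = (mean2 - mean1) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

print(f"t = {t:.2f}, df = {df}")   # prints: t = 2.63, df = 76
```

With 76 degrees of freedom, t ≈ 2.63 is consistent with the significance level reported for ϑF.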
Three-dimensional biomechanical models [11, 12] were used to estimate the contact stress distribution, characterized by its peak value pmax [12], the location of its pole Θ [12], the index of the contact stress gradient Gp [13] and the functional angle of the load-bearing area ϑF [14]. The input parameters of the model for the resultant hip joint force are geometrical parameters of the hip and pelvis: the interhip distance l, pelvic height H, pelvic width lateral to the femoral head centre C, and the coordinates of the insertion point of the abductors on the greater trochanter (Tx, Tz) in the frontal plane. The model of the resultant hip force is based on the equilibria of forces and torques acting between the body segments. The three-dimensional reference coordinates of the muscle attachment points were taken from the work of Dostal and Andrews [19] and scaled with regard to the pelvic parameters (l, C, H, Tx, Tz) assessed from the anterior-posterior radiographs of each individual subject. In some radiographs of the patients with AN the upper part of the pelvis was not visible; in these patients the contour was extrapolated on the basis of the visible parts. As in some hips with AN the femoral head was considerably flattened superiorly, the centres of rotation on both sides, corresponding to the pre-necrotic situation, were determined by circles fitted to the outlines of the acetabular shells. The contact stress distribution was calculated by assuming that the hip joint consists of a spherical head and a hemispherical acetabular shell separated by an elastic cartilage layer. When unloaded, the head and the shell are concentric, while loading causes a deformation of the cartilage. The head and the shell reach their closest approach at a particular point on an imaginary articular surface; this point is called the stress pole [20]. Integration of the contact stress over the load-bearing area yields the resultant hip joint force. The lateral border of the load-bearing area is defined by the coverage of the head by the acetabulum, while the medial border is defined by the condition of vanishing stress [12]. The value of stress at the pole and the position of the pole on the articular sphere are calculated by solving a system of algebraic equations, one of them nonlinear. The input parameters of the model for the stress distribution are the magnitude and direction of the resultant hip joint force R and the geometrical parameters of the hip: the radius of the femoral head r and the Wiberg centre-edge angle ϑCE. To describe the stress distribution, we determined the biomechanical parameters ϑF, Θ, Gp and pmax for each hip. The parameters pmax and Gp were normalized to the body weight (WB) to isolate the influence of hip geometry on stress. The average values corresponding to the test and the control group were compared by the unpaired two-sided Student t-test.

III.
RESULTS

Table 1 Biomechanical parameters (mean ± standard deviation) in the test group (32 hips contralateral to the necrotic hips) and in the control group (46 normal hips). Statistical significance was determined by the unpaired two-sided Student t-test. The more favorable value is marked with *.

Parameter          Test group        Control group      Difference   p value
ϑF [deg]           105.0 ± 12.4      112.9 ± 13.5*      7%           <0.01
Θ [deg]            15.44 ± 7.23      11.8 ± 7.58*       27%          0.037
Gp/WB [10³ m⁻³]    −17.32 ± 17.16    −26.05 ± 16.85*    40%          0.028
pmax/WB [m⁻²]      2173 ± 785        2090 ± 502*        4%           0.604

Table 1 shows the computed biomechanical parameters: the functional angle of the load-bearing area ϑF, position of
D. Dolinar, M. Ivanovski, I. List, M. Daniel, B. Mavcic, M. Tomsic, A. Iglic and V. Kralj-Iglic
Table 2 Geometrical parameters (mean ± standard deviation) in the test group (32 hips contralateral to the necrotic hips) and in the control group (46 normal hips). Statistical significance was determined by the unpaired two-sided Student t-test. The more favorable value is marked with *.

Parameter    Test group       Control group    Difference   p value
C [mm]       60.0 ± 10.0*     58.5 ± 8.6       3%           0.463
H [mm]       163.0 ± 19.6*    162.4 ± 9.8      0.4%         0.867
l [mm]       203.1 ± 17.5     199.6 ± 8.9*     2%           0.305
x [mm]       12.5 ± 7.6*      7.6 ± 6.4        65%          <0.01
z [mm]       74.7 ± 11.3*     69.7 ± 7.7       7%           0.033
r [mm]       28.5 ± 3.1*      27.7 ± 1.7       3%           0.187
ϑCE [deg]    2173 ± 785       2090 ± 502*      13%          <0.01
the stress pole Θ, the normalized index of the contact stress gradient (Gp/WB) and the normalized peak stress (pmax/WB) in the test group and in the control group. Hips in the test group are on average less favorable with respect to all four parameters ϑF, Θ, Gp/WB and pmax/WB. The differences in ϑF (7%), Θ (27%) and Gp/WB (40%) are statistically significant (p < 0.01, p = 0.037 and p = 0.028, respectively), while the difference in pmax/WB (4%) is not (p = 0.604). The results therefore show a less favorable stress distribution in hips at risk for AN, while there is no significant difference in the absolute values of the contact hip stress. To better understand the differences in the biomechanical parameters, the differences in the geometrical parameters were studied. Table 2 shows the geometrical parameters used in the models for the above biomechanical parameters in the test group and in the control group. The centre-edge angle ϑCE is smaller (less favorable) in the test group than in the control group; the difference (13%) is statistically significant (p < 0.01). However, the insertion point of the effective muscle on the greater trochanter lies farther laterally (component z) and farther inferiorly (component x) in the test group than in the control group, which is more favorable. Both differences (7% and 65%, respectively) are statistically significant (p = 0.033 and p < 0.01, respectively). The other geometrical parameters were also more favorable in the test group; however, the differences were small and statistically insignificant.

IV. DISCUSSION AND CONCLUSIONS

The shape of the stress distribution (described by ϑF, Θ and Gp/WB) differs considerably and statistically significantly between the two groups. In the test group the distribution is steeper, the pole lies more laterally, the gradient index is larger (less negative) and the functional angle
of the weight-bearing area is smaller than in the control group. This renders hips at increased risk for AN less favorable regarding the stress distribution; however, we did not find a statistically significant difference in pmax/WB. The radiographic magnification was not known, as no object of known length was visible in the images. Magnification may vary considerably, contributing to the scattering in the measured distances, and this may be one of the reasons for the poor statistical significance of pmax/WB. The differences in the biomechanical parameters may be explained by the differences in the geometrical parameters. The differences in pelvic height H, pelvic width C and interhip distance l were very small (below 3%) and statistically insignificant, while the difference in the vertical coordinate of the insertion of the effective muscle on the greater trochanter (x) was statistically significant, but this parameter does not strongly influence the biomechanical parameters [21]. The differences in the remaining three parameters (the lateral coordinate of the insertion of the effective muscle on the greater trochanter, the radius of the femoral head and the centre-edge angle) can, however, contribute to the explanation of the differences in the biomechanical parameters. The centre-edge angle ϑCE is the most important parameter in the determination of the contact stress distribution; a larger ϑCE corresponds to a lower pmax/WB and a smaller Gp/WB. Table 2 shows that ϑCE is statistically significantly lower in the test group (p < 0.01), indicating that pmax/WB and Gp/WB would be higher in hips at risk for AN. However, pmax/WB and Gp/WB also depend strongly on the radius of the femoral head (pmax/WB is inversely proportional to the square of r and Gp/WB is inversely proportional to the third power of r). Although the difference in the radii of the two groups is not statistically significant (p = 0.187), the difference (3%) is in favor of hips in the test group.
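The scaling argument above (pmax/WB proportional to r⁻² and Gp/WB proportional to r⁻³) can be made concrete with the mean radii from Table 2. A small illustrative calculation (not from the paper) of how much the roughly 3% larger femoral head in the test group lowers the normalized stress quantities for a fixed resultant force:

```python
# Illustrative check of the head-size scaling discussed in the text:
# pmax/WB ~ r**-2 and Gp/WB ~ r**-3, so the larger head in the test group
# partly offsets its smaller centre-edge angle.
r_test, r_control = 28.5, 27.7          # mean femoral head radii [mm], Table 2

pmax_ratio = (r_control / r_test) ** 2  # relative pmax/WB, test vs. control head size
gp_ratio = (r_control / r_test) ** 3    # relative |Gp/WB|, test vs. control head size

print(f"pmax lowered by {100 * (1 - pmax_ratio):.1f}% by head size alone")  # 5.5%
print(f"Gp lowered by {100 * (1 - gp_ratio):.1f}% by head size alone")      # 8.2%
```

Head size alone thus changes the normalized quantities by several percent, the same order as some of the group differences, which is why the radius must be considered together with ϑCE.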
Further, the lateral position of the insertion of the effective muscle is 7% larger (more favorable) in the test group than in the control group, a statistically significant difference (p = 0.033). The effect of the smaller centre-edge angle is therefore counterbalanced by the effects of the larger femoral head and the more laterally extended greater trochanter. It has been hypothesized that transient osteoporosis (the bone marrow oedema syndrome) may be the initial phase of osteonecrosis of the femoral head [20, 21] and that there may be a common pathophysiology. Transient osteoporosis is connected with recurrent microfractures and microvascular trauma at highly loaded regions of the bone, leading to ischemia of the affected part of the bone [21]. Higher contact hip stress may increase the probability and the extent of microfractures of the affected bone, thereby making repair more difficult. Furthermore, the replicative capacity of osteoblast cells of the intertrochanteric area of the femur in osteonecrosis patients was found to be significantly reduced
compared to patients with osteoarthritis [22]. Elevated contact hip stress could thereby accelerate the processes leading to AN. Our study did not answer the question of whether AN occurs due to common underlying mechanisms; the etiology appears heterogeneous, connected to different underlying mechanisms. Elevated contact hip stress with an unfavorable (steep) stress distribution is yet another possibly relevant mechanism that should be considered in future investigations of the onset and development of AN. Our results indicate that hips with a less favorable (steeper) stress distribution are at greater risk for the development of AN than hips with a more uniform stress distribution.
REFERENCES

1. Mont MA, Hungerford DS (1995) Non-traumatic avascular necrosis of the femoral head. J Bone Joint Surg 77-A:459-469
2. Mahoney CR, Glesby MJ, DiCarlo EF, Peterson MG, Bostrom MP (2005) Total hip arthroplasty in patients with human immunodeficiency virus infection: pathologic findings and surgical outcomes. Acta Orthop 76:198-203
3. JH, Aufranc OE (1972) Avascular necrosis of femoral head in the adult. A review of its incidence in a variety of conditions. Clin Orthop Rel Res 43-62
4. Kirk D (2002) High prevalence of osteonecrosis of the femoral head in HIV infected adults. Ann Int Med 137:17-25
5. Tektonidou MG, Malagari K, Vlachoyiannopoulos PG, Kelekis DA, Moutsopoulos HM (2003) Asymptomatic avascular necrosis in patients with primary antiphospholipid syndrome in the absence of corticosteroid use: A prospective study by magnetic resonance imaging. Arthritis and Rheumatism 48:732-736
6. Cheng N, Burssens A, Mulier JC (1982) Pregnancy and post-pregnancy avascular necrosis of the femoral head. Arch Orthop Trauma Surg 100:199-210
7. Macdonald AG, Bissett JD (2001) Avascular necrosis of the femoral head in patients with prostate cancer treated with cyproterone acetate and radiotherapy. Clin Oncol (R Coll Radiol) 13:135-137
8. Bolland MJ, Hood G, Bastin ST, King AR, Grey A (2004) Bilateral femoral head osteonecrosis after septic shock and multiorgan failure. J Bone Mineral Res 19:517-520
9. Rollot F, Wechsler B, su Boutin le TH, De Gennes C, Amoura Z, Hachulla E, Piette JC (2005) Hemochromatosis and femoral head aseptic osteonecrosis: a nonfortuitous association. J Rheumatol 32:376-378
10. Kim YM, Oh HC, Kim HJ (2000) The pattern of bone marrow oedema on MRI in osteonecrosis of the femoral head. J Bone Joint Surg 82-B:837-841
11. Iglic A, Srakar F, Antolic V (1993) Influence of the pelvic shape on the biomechanical status of the hip. Clin Biomech 8:223-224
12. Ipavec M, Brand RA, Pedersen DR, Mavcic B, Kralj-Iglic V, Iglic A (1999) Mathematical modelling of stress in the hip during gait. J Biomechanics 32:1229-1235
13. Pompe B, Daniel M, Sochor M, Vengust R, Kralj-Iglic V, Iglic A (2003) Gradient of contact stress in normal and dysplastic human hips. Medical Eng Phys 25:379-385
14. Vengust R, Daniel M, Antolic V, Zupanc O, Iglic A, Kralj-Iglic V (2001) Biomechanical evaluation of hip joint after Salter innominate osteotomy: a long-term follow-up study. Arch Orthop Trauma Surg 121:511-516
15. Mavcic B, Pompe B, Antolic V, Daniel M, Iglic A, Kralj-Iglic V (2002) Mathematical estimation of stress distribution in normal and dysplastic human hips. J Orthop Res 20:1025-1030
16. Mavcic B, Slivnik T, Antolic V, Iglic A, Kralj-Iglic V (2004) High contact stress is related to the development of hip pathology with increasing age. Clin Biomech 19:939-943
17. Kralj M, Mavcic B, Antolic V, Iglic A, Kralj-Iglic V (2005) The Bernese periacetabular osteotomy: clinical, radiographic and biomechanical 7-15 year follow-up in 26 hips. Acta Orthop 76:833-840
18. Dolinar D, Antolic V, Herman S, Iglic A, Kralj-Iglic V, Pavlovcic V (2003) Influence of contact hip stress on the outcome of surgical treatment of hips affected by avascular necrosis. Arch Orthop Trauma Surg 123:509-513
19. Dostal WF, Andrews JG (1981) A three-dimensional biomechanical model of the hip musculature. J Biomech 14:803-812
20. Brinckmann P, Frobin W, Hierholzer E (1981) Stress on the articular surface of the hip joint in healthy adults and persons with idiopathic osteoarthrosis of the hip joint. J Biomech 14:149-156
21. Daniel M, Antolic V, Iglic A, Kralj Iglic V (2001) Determination of contact hip stress from nomograms based on mathematical model. Med Eng Phys 23:347-357
22.
Gangji V, Hauzeur JP, Schoutens A, Hisenkamp M, Appelboom T, Egrise D (2003) Abnormalities in the replicative capacity of osteoblastic cells in the proximal femur of patients with osteonecrosis of the femoral head. J Rheumatol 30:348-351

CORRESPONDENCE

Author: Prof. Dr. Veronika Kralj-Iglic
Institute: Institute of Biophysics, Faculty of Medicine, Uni-Lj
Street: Lipiceva 2
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
Model for Muscle Force Calculation Including Dynamic Behavior and Viscoelastic Properties of Tendon M. Vilimek1 1
Czech Technical University in Prague, Fac. of Mechanical Engineering, Dept. of Mechanics, Biomechanics and Mechatronics, Prague, Czech Republic
Abstract— This paper presents a musculotendon model for muscle force calculation based on a Hill-type model including the viscoelastic properties of tendon. The viscoelastic properties are described by the discrete Poynting-Thomson model. The applied Hill-type muscle model accounts for all active and passive properties of skeletal muscle. A differential equation expressing the musculotendon dynamics is derived, with constants that can be evaluated from experimentally measured tendon tension, creep and relaxation data. This model is suitable for forward dynamics problems and dynamic optimization approaches. Keywords— Tendon, Muscle mechanics, Hill-type model, Viscoelastic properties, Soft tissue
I. INTRODUCTION

Two approaches are usually used to express tendon behavior under tension. The first is a linear model of the stress-strain curve that excludes both the toe region and the upper part of the curve with macroscopic failures. The second is to use an analytical equation describing the length-tension curve obtained from experiments [1,2]. These models are usually independent of the loading velocity, which means they do not describe the different length-tension curves at different velocities of muscle shortening. Based on experimentally measured data and a discrete model of viscoelastic material, a model describing the dynamic behavior of tendon was derived.
The other tendon parameters are tendon cross-sectional area, AT, and tendon slack length, LTs. The FT express tendon force acting into the bone and LT is instantaneous tendon length. The equation (4), express the dependency between velocity of tendon elongation, dLT/dt, velocity of elongation of musculotendon, dLMT/dt, and velocity of muscle shortening and/or lengthening, dLM/dt. The parameter α is the pennation angle of muscle fibers. The musculotendon length, LMT (velocity, dLMT/dt ), is the measurable parameter between muscle attachments into the bone.
(η/E1)·dσ/dt + σ·(1 + E2/E1) = η·dε/dt + E2·ε    (1)

dFT/dt = (AT/LTs)·(LT − LTs)·A + FT·B + (AT/LTs)·(dLT/dt)·C    (2)

A = E1·E2/η;   B = −(E1 + E2)/η;   C = E1    (3)

dLT/dt = dLMT/dt − cos(α)·dLM/dt    (4)

FT = F0M·(fv·fLa·fact + fLp)·cos(α)    (5)

fv ⇒ dLM/dt    (6)
The velocity of muscle shortening, dLM/dt, can be obtained from Eq. (5) together with the force-velocity factor fv, Eq. (6) [3,4]. fLa is the muscle active force-length factor and fact expresses the muscle activation. fLp is the passive force-length factor of the muscle. The necessary parameter tendon slack length is difficult to measure; as a first approach it can be calculated theoretically, for example from [5]. The other muscle parameters hidden in Eq. (5) also need to be known. These parameters are the physiological
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 308–309, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 The Poynting-Thomson model of the viscoelastic behavior of tendon. From the stress-strain curves of tendons the viscous, η, and elastic, E1 and E2, parameters of the model were obtained.

cross-sectional area, optimum muscle length and maximal isometric muscle force.
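Once the constants of Eq. (3) are known, Eq. (2) can be integrated numerically for a prescribed tendon length history. The sketch below uses explicit Euler integration with purely illustrative material constants (E1, E2, η, AT and LTs are assumptions, not values from the paper); it shows the expected viscoelastic behavior: the force overshoots during a stretch ramp and then relaxes toward its long-time elastic value.

```python
import numpy as np

# Illustrative (assumed) parameters -- not values from the paper.
E1, E2, eta = 1.2e9, 0.3e9, 5.0e7   # Pa, Pa, Pa*s
AT, LTs = 50e-6, 0.25               # tendon cross-section [m^2], slack length [m]

# Viscoelastic constants of Eq. (3).
A = E1 * E2 / eta
B = -(E1 + E2) / eta
C = E1

def dF_dt(F, L, dL):
    """Right-hand side of Eq. (2): tendon force rate for force F, length L, rate dL."""
    return (AT / LTs) * (L - LTs) * A + F * B + (AT / LTs) * dL * C

# Ramp to 3% strain over 0.5 s, then hold for 0.5 s (explicit Euler, dt << eta/(E1+E2)).
dt, F = 1e-4, 0.0
rate = 0.03 * LTs / 0.5             # dL/dt during the ramp [m/s]
history = []
for k in range(int(1.0 / dt)):
    t = k * dt
    L = LTs * (1 + 0.03 * min(t / 0.5, 1.0))
    dL = rate if t < 0.5 else 0.0
    F += dt * dF_dt(F, L, dL)
    history.append(F)

F_peak, F_end = max(history), history[-1]
F_elastic = AT * 0.03 * E1 * E2 / (E1 + E2)   # long-time (fully relaxed) force
```

With these illustrative constants the long-time force is AT·ε·E1E2/(E1+E2) = 360 N; during the ramp the viscous term raises the force above this value, and after the hold it relaxes back with time constant η/(E1+E2) ≈ 0.033 s.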
III. RESULTS AND DISCUSSION

The described tendon model gives more realistic results for tendon behavior during loading. The model is well suited to musculoskeletal modeling for the simulation of forward dynamics problems. The Poynting-Thomson model does not assume relaxation of the material; for musculotendon modeling it can nevertheless be used, because it is assumed that during 'normal' locomotion the tendons do not relax and are extended up to 4-5% strain. This tendon model is dependent on the strain rate during locomotion. The musculotendon model, Eqs. (2) and (5), including the viscoelastic properties of tendons requires experimental determination of the constants A, B, C, the cross-sectional area, AT, and the tendon slack length, LTs, of each tendon. Knowledge of the muscle belly parameters is also mandatory. This musculotendon model (2) was applied to a simulation of pedaling. A lower extremity musculoskeletal model including 31 muscles was used for this application. The shapes of the obtained muscular forces were smoother than those obtained with a muscle model without tendon dynamics. The maximum differences between the two models occur at the start and end of the movement, where the velocity of pedaling changed significantly. The model including tendon dynamics was also used in simulations of elbow flexion/extension movements. As in the pedaling simulation, the shapes of the musculotendon forces are smoother. To apply the musculotendon model expressed by (2) to inverse dynamics problems with static optimization approaches, the differential equation must be solved in discrete form.

IV. CONCLUSIONS

This paper presented a musculotendon model for muscle force calculation based on a Hill-type model including the viscoelastic properties of tendon. The viscoelastic properties were described by the discrete Poynting-Thomson model. The described tendon model gives more realistic results for tendon behavior during loading; the musculotendon forces acting on the bone usually have a smoother shape. The model is well suited to musculoskeletal modeling for the simulation of forward dynamics problems. It is particularly interesting for studying muscular forces during isometric and eccentric contraction.
ACKNOWLEDGMENT This research study was supported by grant MSM 6840770012.
REFERENCES

1. Delp SL, Loan JP (1995) A graphics-based software system to develop and analyze models of musculoskeletal structures. Comput Biol Med 25:21-34
2. Buchanan TS, et al (2004) Neuromusculoskeletal modeling: Estimation of muscle forces and joint moments and movements from measurements of neural command. J Appl Biomech 20:367-395
3. Zajac FE (1989) Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control. Critical Reviews in Biomedical Eng 17:359-411
4. Vilimek M (2005) The challenges in musculotendon forces estimation in multiple muscle systems. PhD Thesis, CTU in Prague, Prague
5. Vilimek M (2006) A numerical approach in estimation of tendon slack length in individual lower extremity muscles. Proc. of 30th Annual ASB Meeting, Virginia Tech, Blacksburg, VA, Abstract 46

Author: Miloslav Vilimek
Institute: Czech Technical University in Prague, Fac. of Mechanical Engineering, Dept. of Mechanics, Biomechanics and Mechatronics
Street: Technicka 4
City: Prague
Country: Czech Republic
Email: [email protected]
Musculoskeletal Modeling to Provide Muscles and Ligaments Length Changes during Movement for Orthopaedic Surgery Planning C.A. Frigo1 and E.E. Pavan1 1
Politecnico di Milano, Bioengineering Department, Movement Biomechanics and Motor Control Lab, Milan, Italy
Abstract— The estimation of muscle and ligament behavior can be useful in orthopaedic surgery, particularly when functional restoration is to be achieved by means of soft tissue surgery, i.e. whenever a different function must be planned for a muscle. Models of the musculoskeletal system have mostly been used to predict the rate of muscle-tendon lengthening during the most common tasks. In this work a more general approach is proposed, in which individual anthropometry is considered through image processing, and joint kinematics captured in a movement analysis laboratory is used to animate a skeletal model, with the aim of simulating the effects of different surgical solutions on the functioning of the muscle system. To attain this result, the integration of different technologies, models and algorithms was required. After developing a model of the musculoskeletal and ligament system, the procedures for the pre-operative planning of both hip and knee joint replacement were simulated. A surgery planning tool based on the previously created model allowed the surgeon to plan an operation through a three-dimensional visualization of the bones, defining component sizes and improving their positioning by taking into account not only the bone geometry but also the soft tissues spanning the articulations. Since the model is defined on a specific patient, it offers the possibility of increasing the model's specificity with the aim of improving planning accuracy. This planning tool can be useful both in pre-operative planning and during the surgical operation, because the surgeon can develop skills in performing the different steps of the operation. In this work we considered two examples based on the developed model of muscles, bones and ligaments. The main steps of the procedure and the preliminary results are presented, pointing out the feasibility of the planning tool and of the model itself.
Keywords— musculoskeletal modeling, orthopaedic surgery, pre-operative planning, diagnostic imaging
I. INTRODUCTION

There are several situations in which knowledge of how much the muscles and ligaments change their length can be of interest. One is when a joint is replaced by a joint prosthesis; another refers to functional orthopaedic surgery, when a different function is to be planned for a muscle. Several models have been proposed in the past [1-4] to predict muscle-tendon length in the most common motor
acts (walking, in particular). A more general approach is proposed here, in which individual anthropometry is taken into account through bio-image processing, and joint kinematics can be derived from data captured in a movement analysis laboratory. To attain this result, an integration of different technologies, models and algorithms is required. The main steps of this procedure and the preliminary results are presented here.

II. METHODS

A. Musculoskeletal Modeling

As pointed out in the pioneering works of Scott Delp [5-6], a musculoskeletal model is a virtual representation of the body skeleton with muscles attached to the bones in the proper positions, and with joints reproducing the anatomical kinematic constraints as closely as possible. Although software packages are available on the market that can help in designing a human musculoskeletal model, three main points have not been satisfactorily overcome in most cases: 1. Scaling the model to an individual subject (reproducing bone and joint deformities of patients if necessary); 2. Definition of joint kinematics not only in terms of rotation (which could be acceptable only for joints with a fixed centre of rotation, like the hip), but also considering the change of rotation axes that occurs during motion (as at the knee, for example); 3. A proper definition of the action line of the muscle or ligament, taking into account that, over a range of movement, there are usually wrapping surfaces that interfere with the line merely connecting the origin and insertion points. Our musculoskeletal model refers to the pelvis and lower limbs [7]; concerning the foot, it is still in an early phase of development, while the definition of the hip and knee joints is relatively well advanced.

B. Integration of bioimages into the musculoskeletal model

Bioimages have been included in our procedure with the purpose of obtaining individual anthropometric and morphometric data from patients.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 292–295, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

The ideal solution would be to obtain MRI or CT scans of the patients, but usually this is not acceptable even in a clinical environment: CT is an X-ray based system, and MRI needs a relatively long acquisition time, so that subjects have to remain immobile for minutes, which is rarely acceptable for children. In our procedure, a small number of 3-D models of the main skeletal bones are obtained through segmentation of MRI images (see Fig. 1) from healthy, undeformed subjects (males and females, of different ages), and these constitute our database of reference templates. Then, on the basis of traditional radiographs (from at least two orthogonal perspectives), the most suitable template is selected from our database and adapted to the radiographs according to an optimal best-match criterion, which includes local deformations as well. On each relevant bone surface the origins and insertions of the soft tissue fibers are identified, referring either to muscle-tendon complexes or to ligaments. These points undergo the same homeomorphic transformation as the bone surface they are attached to. To provide the surgeon with a palpable model that could help in defining the best surgical technique in cases of severe bone deformities, stereolithography was used in some cases [8].

C. Joint kinematics

Concerning the kinematics of the hip joint, the spherical representation is almost unanimously accepted, and the relative rotation of the femur with respect to the pelvis can be defined by a fixed centre of rotation. In our model this point was the
Fig. 1 MR images of the pelvis and lower legs were segmented to obtain 3-D meshes of the bones, as well as the insertion areas of the soft tissues spanning the hip and knee joints; these were included in the model.
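The template adaptation step described above can be sketched as a least-squares fit of an affine transform between corresponding landmarks, after which muscle attachment points inherit the same transform. This is an illustrative 2-D sketch; the landmark coordinates, function names and the plain affine model are assumptions, not the authors' actual best-match algorithm, which also includes local deformations:

```python
import numpy as np

def fit_affine_2d(template_pts, radiograph_pts):
    """Least-squares affine transform (matrix A, translation t)
    mapping template landmarks onto radiograph landmarks."""
    n = len(template_pts)
    X = np.hstack([template_pts, np.ones((n, 1))])    # design matrix for x' = A x + t
    params, *_ = np.linalg.lstsq(X, radiograph_pts, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

def apply(A, t, pts):
    return pts @ A.T + t

# Hypothetical landmarks: template bone contour points and their
# counterparts identified on a frontal radiograph (here an exact
# scaled + shifted copy, so the fit recovers the transform exactly).
template = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
radiograph = 1.2 * template + np.array([5.0, -3.0])

A, t = fit_affine_2d(template, radiograph)
# A muscle origin marked on the template follows the same transform,
# mirroring how attachment points inherit the bone's deformation.
origin_on_patient = apply(A, t, np.array([[0.5, 1.0]]))
```

In the real procedure a richer, locally deforming match would replace the single affine map, but the principle of carrying the attachment points through the fitted transform is the same.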
centre of the acetabular cup, made to coincide with the centre of the femoral head. A more complicated procedure was developed to describe the kinematics of the knee joint [9-10]. Here it was assumed that the cruciate ligaments are the main constraints between the distal femur and proximal tibia. To analyze the change in cruciate ligament length, the origin and insertion of each ligament were identified on an MRI scan obtained with the knee straight. Then the relative motion of the distal femur and proximal tibia was analyzed under dynamic fluoroscopy. As these fluoroscopic images are 2-D and do not show soft tissue with sufficient contrast, a best match between the MRI and fluoroscopic images was performed, which allowed us to localize the origins and insertions of the cruciate ligaments in the fluoroscopic images. These points were used to define the relative motion between femur and tibia, which includes rotation and translation of the femoral condyles in relation to the tibial plateau.

D. Length of Muscle-Tendon Fibers and Capsular Ligaments

The periarticular muscles of the hip joint were modeled by eighteen fibers representing the gluteus maximus, minimus and medius, the piriformis, the three adductors (adductor magnus, brevis and longus), and the biarticular muscles (acting at both hip and knee): rectus femoris, sartorius, tensor fasciae latae, long head of the biceps femoris, semimembranosus and semitendinosus (see Fig. 2).
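A minimal sketch of deriving a ligament fiber length from the relative femur-tibia pose: the femoral origin is mapped through the pose recovered at each flexion angle (e.g. from fluoroscopy), and its distance to the tibial insertion is taken as the fiber length. All coordinates, poses and names below are hypothetical illustrations, not the authors' data:

```python
import numpy as np

def rot_z(theta_deg):
    """Planar rotation about the flexion axis (simplified sagittal-plane model)."""
    th = np.radians(theta_deg)
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def ligament_length(origin_femur, insertion_tibia, R, t):
    """Distance between the femoral origin, expressed in the tibial
    frame through pose (R, t), and the tibial insertion point."""
    return float(np.linalg.norm(R @ origin_femur + t - insertion_tibia))

# Hypothetical attachment coordinates (cm) of one cruciate fiber,
# identified on MRI as in the procedure described above.
origin = np.array([0.5, 1.0, 0.0])      # on the distal femur
insertion = np.array([1.0, -2.5, 0.2])  # on the proximal tibia

# Hypothetical femur-tibia poses (rotation + translation) at three
# flexion angles, as would be recovered from dynamic fluoroscopy.
lengths = []
for angle, shift in [(0, [0, 0, 0]), (30, [0.3, -0.1, 0]), (60, [0.6, -0.2, 0])]:
    lengths.append(ligament_length(origin, insertion, rot_z(angle), np.array(shift, float)))
```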
Fig. 2 Pelvis muscles and capsular ligaments of the hip joint were modeled as several fibers, the lengths of which can be estimated during motion of the femur relative to the pelvic bones.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
C.A. Frigo and E.E. Pavan
Afterwards, we determined the size of the prosthetic components most suitable for the specific patient, considering his morphological and biomechanical characteristics. We then simulated the implant positioning by moving the different components on the model. We analyzed and compared different arthroplasties: for the knee, we evaluated the effects of using liners of different thickness (12 mm, 14 mm, 17 mm, 20 mm); for the hip, we compared different femoral neck lengths (short, medium and long) using the same stem size. To analyze the effect of the different implants on the soft tissues, the following movements were considered:
Fig. 3 The knee joint with the femoral and tibial prosthetic components pre-positioned for evaluating ligament lengthening during knee extension (left) and flexion (right).
Regarding the knee joint ligaments, we reproduced in the virtual model the medial collateral ligament (five fibers), the lateral collateral ligament (two fibers) and the arcuate popliteal ligament (see Fig. 3). We also reproduced the posterior cruciate ligament (two fibers) and the posterior capsule. The main deviations from a straight line were taken into account by defining proper 'via points' on the bony prominences. This part, however, still needs further refinement.

E. Simulations

Using the model of the musculoskeletal and ligament system described above, procedures for the preoperative planning of both hip and knee joint replacement were developed. The surgical technique suggested by the manufacturer was followed. The most important part of this phase was defining the position of the prosthetic components relative to the bones, trying to preserve the physiological change in length of both the muscle-tendon complexes and the ligaments. Different prosthetic components were assembled with a CAD tool in order to virtually create different prosthetic configurations. For the knee joint, the femoral resection mask was taken as the reference, because the surgeon can place it in precise relation to anatomical landmarks. For the hip joint, the femoral prosthetic head was used as the reference, since it should be positioned at the hip joint centre, thereby determining the positioning of the acetabular and femoral components.
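Once via points are defined, a fiber length can be computed as the length of the polyline running from the origin, through the via points, to the insertion. A minimal sketch (the coordinates are hypothetical):

```python
import numpy as np

def fiber_length(points):
    """Length of a muscle/ligament fiber modeled as a polyline from
    origin, through any via points, to insertion."""
    pts = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# Hypothetical collateral-ligament fiber (cm): origin, one via point on
# a bony prominence, insertion. The via point keeps the line of action
# from cutting through the bone, so the path is slightly longer than
# the straight origin-insertion line.
straight = fiber_length([[0, 0, 0], [3, 0, 4]])
with_via = fiber_length([[0, 0, 0], [1.5, 0.8, 2.0], [3, 0, 4]])
```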
• flexion-extension and internal-external rotation of the knee (with the hip fixed), to evaluate the collateral ligaments of the knee
• flexion-extension and abduction-adduction of the hip (with the knee extended), to evaluate the muscles acting on the hip
• combined hip and knee flexion-extension, to evaluate muscle-tendon length changes in the biarticular muscles.

For each of the considered implants we calculated muscle and ligament lengths and muscle lever arms, both in the standing position and during the simulated movements. Regarding hip replacement planning, after comparing the different prostheses virtually implanted, we could quantify the increase in soft tissue length associated with the femoral neck length, in both abduction-adduction and flexion-extension. In particular, with the short femoral neck we measured negative variations. Also, for every type of prosthesis, we found that the lever arms changed very little in flexion-extension and more markedly in abduction-adduction; in particular, the lever arms increased with femoral neck length. In a specific case of knee replacement planning, comparison of the data showed that in full knee extension liner thicknesses of both 14 mm and 17 mm were suitable for the model: the estimated variations in ligament lengths were about -3% and 1%, respectively. The same conclusion emerged in flexion: the variations were 0.07% for the 14 mm liner and 0.4% for the 17 mm liner.

III. CONCLUSIONS

Using this kind of surgery planner, the surgeon can plan the operation through a three-dimensional visualization of the bones and can work on them to define the correct size and positioning of the different components. Also, this
model can be tailored to a specific patient, which increases the specificity and accuracy of the planning. This tool can be useful both in pre-operative planning and during the surgical operation, because the surgeon can develop skills in performing the different steps of the operation. We considered two examples based on a model of muscles, bones and ligaments, which served to demonstrate the feasibility of the software and of the model itself.
ACKNOWLEDGMENT The authors would like to thank Lima-Lto (Udine, Italy) for providing the CAD models of the prosthetic components.
REFERENCES

1. Frigo C, Pedotti A (1978) Determination of muscle length during locomotion. In: Asmussen E, Jorgensen K (Eds.) Biomechanics VI-A, vol. 2-A, University Park Press, Baltimore, pp. 355-360
2. Brand RA, Crowninshield RD, Wittstock CE, Pedersen DR, Clark CR, van Krieken FM (1982) A model of lower extremity muscular anatomy. J Biomech Eng 104(4):304-310
3. Hoy MG, Zajac FE, Gordon ME (1990) A musculoskeletal model of the human lower extremity: the effect of muscle, tendon, and moment arm on the moment-angle relationship of musculotendon actuators at the hip, knee, and ankle. J Biomech 23(2):157-169
4. Frigo C, Nielsen J, Crenna P (1996) Modelling the triceps surae muscle-tendon complex for the estimation of length changes during walking. J Electromyography and Kinesiology 6(3):191-203
5. Delp SL, Loan JP, Hoy MG, Zajac FE, Topp EL, Rosen JM (1990) An interactive graphics-based model of the lower extremity to study orthopaedic surgical procedures. IEEE Trans Biomed Eng 37(8):757-767
6. Delp SL, Loan JP (1995) A graphics-based software system to develop and analyze models of musculoskeletal structures. Comput Biol Med 25(1):21-34
7. Frigo C, Pavan E, De Momi E (2004) Musculoskeletal modelling and movement analysis in preoperative surgical planning. Proc. Fourth Annual International Conference on Computer Assisted Orthopaedic Surgery, Chicago, USA, 2004, pp. 287-288
8. De Momi E, Pavan E, Motyl B, Bandiera C, Frigo C (2005) Hip joint anatomy virtual and stereolithographic reconstruction for preoperative planning of total hip replacement. Proc. Computer Aided Radiotherapy and Surgery Conf., Berlin, 2005, pp. 709-712
9. Pavan EE, Pascolini G, Zappata A, Frigo C (2005) Towards a 'no-thigh markers' protocol of gait analysis. Proc. of Europ. Soc. of Movement Analysis in Children, ESMAC 2005, Barcelona, Spain, 2005, pp. 22-24
10. Pavan EE, Taboga P, Frigo C (2006) A mobile axis knee joint model for gait analysis applications. Proc. V World Congress of Biomechanics, Munich, 2006, p. 429
Author: Carlo Frigo
Institute: Bioengineering Department, Polytechnic of Milan
Street: via Golgi, 39
City: I-20133 Milan
Country: Italy
Email: [email protected]
Numerical model of a myocyte for the evaluation of the influence of inotropic substances on myocardial contractility

Bernardo Innocenti, Andrea Corvi

Dipartimento di Meccanica e Tecnologie Industriali, Università degli Studi di Firenze, Firenze, Italy

Abstract— To increase our knowledge of the heart and its physiology, a great number of experimental studies have been conducted. Some of these analyse single myocardial cells; such experiments usually involve many complications, mainly concerning myocyte isolation, low reproducibility and the high number of system-related variables. Based on these considerations, the present work describes the development of a numerical model of the myocyte that makes it possible to simulate its physiological contraction as well as pathological behaviours. The analysis of the single myocyte (the basic unit of cardiac tissue) is a necessary step towards investigating the heart as a single complex system, with the aim of developing a numerical heart model able to simulate both physiological and pathological behaviour and thus reduce the need for experimental trials. The model enables the evaluation of contractility under the effect of three inotropic substances: angiotensin-II, endothelin-I and isoproterenol. It was developed in three phases: first, the behaviour of the sarcomere was analysed and a sarcomere model was developed, which simulates both the physiological activity of the sarcomere and the inotropic effect of the three substances on it; then a model of the myocyte was elaborated using both experimental data obtained from previously defined trials and literature data; finally, the model was validated both against literature results and against data obtained from subsequent experimental trials.

Keywords— Numerical model, sarcomere, myocyte, contractility, inotropism.
I. INTRODUCTION

The cardiac muscle is the subject of study of many researchers, whose experimental work aims to expand knowledge of its physiology. One of the main objects of such research is the myocardial cell, but its study requires particular attention because several complications may affect its use in experimental activity, such as the isolation of the myocytes and the highly random nature of the factors that regulate the cardiac contraction mechanism. The aim of this activity is the development of a mathematical model of the cardiomyocyte that simulates both physiological and pathological behaviour, so as to provide a useful instrument for the mathematical simulation of experimental tests. Moreover, such a model makes it possible to verify the accuracy of test results, allowing the removal of possible outliers and therefore increasing test repeatability. The mathematical model developed in this work also simulates myocyte contraction in the presence of three inotropic substances (isoproterenol, endothelin-I, angiotensin-II).

II. CONTRACTILITY AND INOTROPISM

Contractility is the property of the muscular fibres that permits them to adapt their shape to the load condition without any change in volume. Contractility of the myocardial fibres is defined as the performance of the muscle in physiological conditions for a fixed preload and afterload. Contractility is generally evaluated using parameters such as the percentage of cell shortening, the maximal velocity of shortening and the maximal velocity of re-lengthening. Inotropism is the capacity of some substances and factors to change myocardial contractility. There are positive inotropic factors, which produce an increase in contractility, and negative inotropic factors, which reduce contractility.

III. MATHEMATICAL MODEL OF SARCOMERE

Several types of sarcomere model exist in the literature (mechanical, thermodynamic, molecular, electrostatic); the sarcomere model developed in this work is a mechanical model that starts from Hill's model [1] and can simulate the contraction mechanism of the sarcomere in both physiological and pathological conditions. The model moreover evaluates the contractility variation in the presence of three inotropic substances (angiotensin-II, endothelin-I and isoproterenol). The model considers both the intrinsic elasticity of the muscular fibres and their contractile ability: the sarcomere is represented by an elastic element PE in parallel with a series combination of an elastic element SE and a contractile element CE.
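As a rough numerical illustration of this three-element arrangement, one can combine an exponential passive law for PE with a Gaussian force-length relation for CE; at steady state the series element SE simply transmits the CE force, so the total force is the sum of the two branches. The force laws and all parameter values below are illustrative assumptions, not the authors' identified model:

```python
import numpy as np

def passive_force(L, L0=1.9, k=2.0, c=0.05):
    """Parallel elastic element PE: illustrative nonlinear (exponential)
    law, engaging only above the slack length L0 (lengths in um)."""
    return c * (np.exp(k * max(L - L0, 0.0) / L0) - 1.0)

def active_force(L, Lopt=2.1, width=0.3, Fmax=1.0):
    """Contractile element CE: illustrative Gaussian force-length
    relation peaking at the optimal sarcomere length Lopt."""
    return Fmax * np.exp(-((L - Lopt) / width) ** 2)

def total_force(L, activation=1.0):
    """PE in parallel with the SE+CE branch; at steady state SE
    transmits the CE force unchanged, so the branch force is F_CE."""
    return passive_force(L) + activation * active_force(L)

# Total force at a few sarcomere lengths, full activation.
forces = {L: total_force(L) for L in (1.8, 2.0, 2.2)}
```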
The elastic element PE models the nonlinear elasticity of the connective tissue, the cellular membranes and the collagen; the elastic element SE simulates the intrinsic elasticity of the actin
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 296–299, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
filaments, the myosin filaments and the Z bands, while the CE element simulates the sliding mechanism of the protein filaments of actin and myosin. The parameters that define contractility are subdivided into kinematic parameters, evaluated over time (length, velocity and acceleration), and contraction parameters, evaluated for each cycle (cell shortening, velocity of shortening, velocity of re-lengthening). The model is based on the evaluation of the total contraction force as a function of sarcomere length, subdivided into active force and passive force. Subsequently, knowing the sarcomere mass, the model evaluates the kinematic parameters during the whole contraction and can estimate the other contractility parameters. The effect of the inotropic substances on the contractility parameters is obtained by inserting suitable coefficients that depend on the substance concentration, on the type of substance administered to the sarcomere, and on whether the substances are administered alone or in interaction with the others. Such coefficients change the sarcomere contraction kinematics, i.e. the values of the contraction parameters, according to data present in the literature [2]. The results obtained from the model were compared with experimental data coming from a test campaign conducted by the authors [3]. In that work, tests were conducted on cardiomyocytes from rat left ventricle, in which the three positive inotropic substances were administered in order to observe and analyze the effect of their interaction on muscle contraction. The numerical results obtained by the model agree with the experimental data.

IV. MATHEMATICAL MODEL OF FIBRE

Hill's sarcomere model has often been used to describe the behaviour of the whole cardiac fibre, without considering several aspects: for example, the contribution of other biological structures, the fact that the fibre direction is not unidirectional, and the fact that cell activation can be neither uniform nor simultaneous. Several works in the literature [4-5] do not consider Hill's model suitable for describing the behaviour of a single cardiac fibre, because the contraction of the elements composing the fibre is not simultaneous, and several geometries have been proposed to schematize the fibre. The model proposed in this work is based on the integration of the Pietrabissa-Montevecchi-Fumero model [4] with Haslam's model [6]. It is a parametric model that considers both the distribution of the elements and their spatial structure. The cardiac myocyte is considered as a cylinder, whose bases represent the intercalary disks and whose lateral area
represents the sarcolemma, which covers and delimits the cell. Inside the cardiomyocyte, myofibril bands are schematized as parallel columns connecting the cylinder bases. Even though the myofibrils are not uniformly arranged, branching and merging with one another, the myocyte is modelled as if composed of parallel planes superimposed one upon another. Every portion has a different number of myofibrils connecting two consecutive bases of the section. Moreover, each myofibril is composed of sarcomeres in series. For the model construction it is hypothesized that the columns of a plane are composed of an equal number of sarcomeres, while the columns of different planes may be composed of different numbers of sarcomeres in series. In conclusion, the myocyte is composed of a series of N planes, connected to one another by parallel columns; the number and dimensions of the columns change from plane to plane. The literature reports that the number of myofibrils in a myocyte ranges from 50 to 100 [7]; the length of the myocyte is taken as 80.4 ± 4.3 µm [8] and the stiffness of the sarcomere as 0.31 ± 0.03 N/m [9-10]. In the model these values are imposed using a Monte Carlo method on three normal distributions centred on these values.
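The Monte Carlo assignment of geometry and the standard series/parallel spring combination can be sketched as follows; the function names, the fixed sarcomere count per fibril and the simplification to identical fibrils are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_myocyte():
    """Draw one myocyte geometry from the normal distributions reported
    in the text: 50-100 myofibrils, length 80.4 +/- 4.3 um, sarcomere
    stiffness 0.31 +/- 0.03 N/m. (Sketch; plane layout simplified.)"""
    n_fibrils = int(np.clip(rng.normal(75, 10), 50, 100))
    length_um = rng.normal(80.4, 4.3)
    k_sarcomere = rng.normal(0.31, 0.03)
    return n_fibrils, length_um, k_sarcomere

def myocyte_stiffness(n_fibrils, n_sarc_per_fibril, k_sarc):
    """Springs in series within a fibril (reciprocals add), fibrils in
    parallel (stiffnesses add) -- the usual network combination rules."""
    k_fibril = k_sarc / n_sarc_per_fibril
    return n_fibrils * k_fibril

nf, L, ks = sample_myocyte()
k_total = myocyte_stiffness(nf, n_sarc_per_fibril=40, k_sarc=ks)
```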
Based on the considered geometry, several hypotheses were made: 1) in the series structures, the force exerted by each sarcomere of the series is the same and equal to the total force developed by the entire structure, while in the parallel structures the total force is the sum of the forces developed by the single sarcomeres; 2) in the series structures, the total shortening is the sum of the shortenings of the single sarcomeres, while in the parallel structures the shortening is the same for each sarcomere and equal to the shortening of the structure; 3) in the series structures, the reciprocal of the total stiffness is the sum of the reciprocals of the stiffnesses of the single sarcomeres, while in the parallel structures the total stiffness is the sum of the stiffnesses of the single sarcomeres. By evaluating, with the numerical model of the sarcomere explained in the previous section, the total force and the shortening, and knowing the stiffness of each sarcomere and each plane, the model can evaluate the total force, the length variation and the stiffness of the myocyte.

V. SOFTWARE FOR THE IMPLEMENTATION OF THE MODEL

The sarcomere and fibre models are implemented in software developed with LabVIEW™. The software
VI. VALIDATION OF THE MODEL

To validate the proposed model, the simulation results were compared with data in the literature and with data obtained from an experimental activity. Initially the physiological myocyte model was verified. In this case the trend of the maximum force produced, versus the average length of the sarcomeres, for varying sarcomere stiffness, was tabulated. The validation of the model, concerning the relationship between the total force and the average sarcomere length, was obtained by iterating the model with varying rest length and drawing the curve of the maximum total force. This behaviour was compared with data obtained on human cardiomyocytes [8] (Fig. 1). The model produces results comparable to the literature data: the force grows with increasing average rest length of the sarcomere. Concerning the relationship between the force and the sarcomere stiffness, we used the data obtained by Taubert [11] (Fig. 2); here too the results are compatible with the experimental literature data. To verify the accuracy of the model when the inotropic substances are used, we observed how the percentage shortening of the sarcomere changes with the isoproterenol concentration, when isoproterenol acts alone, when only angiotensin-II or only endothelin-I is also present, and when all three inotropic substances are present. The values obtained by the model are comparable to the data obtained in a previous experimental activity conducted by the authors [3]. In this case too, the model results agree with the experimental data. We can therefore state that the mathematical equations and the coefficient values used to develop the model properly simulate cardiomyocyte contraction, both in physiological conditions and in the presence of the three inotropic substances.
Fig. 1 Comparison between the literature results [8] and the model results: total force vs average sarcomere length.
allows the user to simulate a generic experimental test on a cardiomyocyte. The input parameters are the concentrations of the various substances and the contraction time. The software allows tests on a single sarcomere to be simulated, both in the physiological condition of the fibres/sarcomeres and in the presence of one or more inotropic substances. The output parameters are the total force, the active force and the passive force, as well as the shortening of the fibre, of the planes composing the fibre, and of the sarcomeres making up the model. Moreover, the software allows the myocardial fibre contraction to be simulated, evaluating the kinematic parameters (length, velocity and acceleration of the fibre) versus time.
Fig. 2 Comparison between the literature results [11] and the model results: total force vs sarcomere stiffness.

VII. RESULTS OF THE MODEL

After the validation of the model, it is possible to plot the quantities of interest versus time for each combination of endothelin-I and angiotensin-II, varying the isoproterenol concentration. For a fixed cardiac frequency, initial length and preload, the plot of the total force versus time for the four combinations without isoproterenol is shown in Fig. 3, and with a 10^6 pMol concentration in Fig. 4. The scientific literature shows that the effect of isoproterenol on the total force is predominant compared with the effect of the other substances, and that even an intermediate concentration produces an inotropic effect of the same magnitude as, or higher than, the effect of the combination of angiotensin-II and endothelin-I. The figures show that the presence of endothelin-I reduces the contraction time; moreover, the combination of angiotensin-II and endothelin-I increases the velocity of contraction. A similar process was used to plot the behaviour of the total myocyte shortening versus the isoproterenol concentration, for several combinations of angiotensin-II and endothelin-I. As for the total force, isoproterenol produces a bigger effect than endothelin-I and angiotensin-II, increasing the shortening by about 23% with respect to the physiological condition. The presence of angiotensin-II increases the contraction time, but this effect vanishes in the presence of the other two substances. As observed for the total force, the presence of endothelin-I reduces the contraction time.
as a useful instrument in the study of particular mixtures of drugs that modify cardiac contractility. In fact, once the effect of a single substance on contractility is known, together with its possible interactions with other drugs, it is possible to visualize the response as the concentration changes, without studying the in vivo effect after each change. Moreover, the model can easily be extended by inserting the behaviour of other substances.
Fig. 3 Total force vs time for the four combinations of endothelin-I (ET) and angiotensin-II (AT) without isoproterenol (-: low concentration level; +: high concentration level).
Fig. 4 Total force vs time for the four combinations of endothelin-I (ET) and angiotensin-II (AT) with a 10^6 pMol concentration of isoproterenol (-: low concentration level; +: high concentration level).
VIII. CONCLUSIONS

The present work describes the development of a model that simulates myocyte contraction both in physiological conditions and in the presence of three inotropic substances. The model plots the behaviour of the total, passive and active force versus time during fibre contraction and simulates the cell shortening during contraction. The model assumes non-uniformity both of the initial lengths of the sarcomeres and of their responses to stimuli. Moreover, the model geometry makes it possible to consider the cardiac fibre not simply as a series of homogeneous sarcomeres, but as a complex network of different elements, arranged following a geometry that changes along the myocyte itself. This allows the complex shape and the complex response of the cardiac muscle fibre to be modelled with greater accuracy. The model supports not only the analysis of the mechanical contraction but also the analysis of the myocyte during the administration of three inotropic substances. The model can therefore be used
REFERENCES

1. Hill AV (1938) The heat of shortening and the dynamic constants of muscle. Proc R Soc London B Biol Sci 126:136-195
2. Yasuda S, Lew WY (1997) Lipopolysaccharide depresses cardiac contractility and beta-adrenergic contractile response by decreasing myofilament response to Ca2+ in cardiac myocytes. Circ Res 81:1011-1020
3. Rossi VL, Marini E, Bertolozzi I et al (2005) Triiodothyronine enhances inotropic effects of angiotensin II in non failing myocytes. Fifteenth European Meeting on Hypertension, Milano, Italy, 2005
4. Pietrabissa R, Montevecchi FM (1987) A model of multicomponent cardiac fibre. J Biomechanics 20:365-370
5. Pietrabissa R, Montevecchi FM, Fumero R (1991) Mechanical characterization of a model of a multicomponent cardiac fibre. J Biomed 13:407-414
6. Haslam A (2004) RHC-6 A model of a contracting heart cell. Research Project, University of Sheffield, Department of Computer Science
7. Van der Stap L, McNairnie P (2000) Simulation of the contractile behaviour of an isolated cardiac myocyte. San Diego University, USA
8. Van der Velden J, de Jong JW, Owen VJ et al (2000) Effect of protein kinase A on calcium sensitivity of force and its sarcomere length dependence in human cardiomyocytes. Cardiovascular Res 46:487-495
9. Linari M, Dobbie I, Reconditi M et al (1998) The stiffness of skeletal muscle in isometric contraction and rigor: the fraction of myosin heads bound to actin. Biophysical Journal 74:2459-2473
10. Colomo F, Piroddi N, Poggesi C et al (1997) Active and passive forces of isolated myofibrils from cardiac and fast skeletal muscle of the frog. Journal of Physiology 500:535-548
11. Taubert K, Willerson JT, Shapiro W et al (1977) Contraction and resting stiffness of isolated cardiac muscle: effects of inotropic agents. Am J Physiol Heart Circ Physiol 232:H275-H282

Author: Bernardo Innocenti
Institute: Dipartimento di Meccanica e Tecnologie Industriali, Università degli Studi di Firenze
Street: via di Santa Marta 3
City: Firenze
Country: Italy
Email: [email protected]
Rating Stroke Patients Based on Movement Analysis

A. Jobbagy¹, G. Fazekas²,³

¹ Budapest University of Technology and Economics, Dept. of Measurement and Information Systems, Budapest, Hungary
² Saint John's Hospital, Budapest, Hungary
³ National Institute for Medical Rehabilitation, Budapest, Hungary
Abstract— Impairments and activities of daily living (ADL) of patients with stroke are usually assessed by clinical scales, a rather subjective method. A device and a method are presented here to objectively characterize the movement disorders (impairment) of stroke patients. The finger-tapping and pointing movements of 15 stroke patients were recorded and analysed with PAM (Passive Marker-based Analyzer for Movements), a simple, 2-D, passive marker-based, clinically applicable movement analyser. The result of the objective assessment is compared to human ratings. Movement analysis also gives valuable information about the improvement of motor performance during rehabilitation. Based on the analysis, functional rating can be done with good resolution and accuracy, and the measure of disability can also be determined. Good correlation was found between the results of the movement analysis and the Rivermead Motor Assessment; the rating scales assessing ADL functions give markedly different results.

Keywords— finger-tapping, movement analysis, rating, stroke patients, PAM
I. INTRODUCTION

There are different rating scales for stroke patients; Herndon (ed., 1997) gives a good summary and evaluation. The Barthel Index (Mahoney and Barthel, 1965) is determined by assessing the following activities: level of self-support in feeding, bathing, grooming, dressing, bowels, bladder, toilet use, transfer (bed to chair and back), mobility on level surfaces, and stairs. The Functional Independence Measure, FIM (Cavanagh et al., 2000), consists of 18 parameters, each scored between 1 and 7, a higher value meaning greater ability. Both a physical and a cognitive score can be derived from the FIM, and the sum of all 18 parameters can also be used. Rigidity or paralysis of the thumb results in a much worse score than rigidity or paralysis of any other finger. The Barthel Index exclusively, and the FIM mainly, assess ADL. The Rivermead Motor Assessment, RMA, tests functional motor skills and thus characterizes impairment (Lincoln and Leadbitter, 1979). RMA was developed to aid clinical rehabilitation. Motion analyzers are able to quantitatively assess human movement. However, general-purpose motion analyzers are not optimal for the clinical evaluation of stroke patients: these devices are expensive and require operators with technical skills.

II. MATERIALS AND METHODS

A. The tested subjects

In the Brain Injury Unit of the National Institute for Medical Rehabilitation, 15 patients (9 females, 6 males) participated in the finger and arm movement assessment. All patients were right-handed; 9 had hemiparesis on the left side, 6 on the right. The average time between the onset of the disease and selection for the test was 18 weeks (minimum 2, maximum 55, more than 26 weeks for three patients). The patients had hemiparesis resulting from an upper motor neuron lesion. Twelve patients performed the tests twice on the same day, with at least a 30-minute break between the tests. The actual performance of four patients was assessed twice a week during a four-week period. Hemiparetic inpatients were selected if they were able to understand and perform the movement task. A written consent signed by the patient was required for inclusion. Dementia or loss of the ability to move the fingers were reasons to exclude a patient.

B. Recordings

During one recording session patients performed three hand and arm movements: the finger-tapping test, pointing with the right hand, and pointing with the left hand. Finger-tapping mimics playing the piano: patients lifted their fingers (except the thumbs) simultaneously and then hit the table in the following order: little, ring, middle and index finger. They were asked to perform the movement as fast as they could and at the same time lift their fingers as high as they could. The pointing test consisted of 5 cycles. Two points marked on the table were 40 cm apart; the index finger started on the point closer to the patient, then was lifted and moved to the other point and back five times, as fast as possible. PAM, a simple passive marker-based movement analyser developed for medical/clinical use, was used (Jobbagy
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 266–269, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Rating Stroke Patients Based on Movement Analysis
and Hamar, 2004). Figure 1 and 2 show the trajectories of markers attached to the middle phalanges of fingers of a 79year old female patient (J11) during finger-tapping. Testing started two weeks after the onset of stroke. Upper subfigures illustrate 1.5-s part of the movement of the index (solid), middle (dashed) and ring (dotted) fingers are shown. Bottom diagrams show 8-s parts of the movement of the middle fingers.
Fig 1 Finger-tapping of J11 two weeks after the onset of stroke

Fig 2 Finger-tapping of J11 six weeks after the onset of stroke

C. Evaluation of recordings

The measure of periodicity (PM) of the quasi-periodic finger-tapping movement can be quantified using the singular value decomposition (SVD) method (Stokes et al., 1999). In contrast to Fourier analysis, the signal is broken down into periodic functions of any kind, not only sinusoidal ones. The periodicity of movement (PM) is characterized by the relative weight of the dominant basis function within all functions necessary to describe the complete record, i.e. all periods. This is calculated from the weights (σi) of all basis functions:

PM = σ1² / Σ(i=1..n) σi²

If the movement is strictly periodic then all σi except σ1 are zero; as a result, PM equals 1. In the case of a nearly periodic movement σ1 is dominant but further σi elements are non-zero. The PM value decreases as more vectors are needed to describe all periods of the movement.

Greater amplitude or greater frequency during finger-tapping means faster finger movement, indicating better performance. Since it is easier to execute the movement faster with a smaller amplitude, the product amplitude × frequency of tapping is suggested as an appropriate parameter to characterize speed. This feature, called amxfr (given in cm/s), is determined for each tapping cycle and then averaged over the whole test:

amxfr = (1/n) Σ(i=1..n) Ai / Ti

where Ai is the amplitude of the i-th tapping cycle in cm, Ti is the time period of the i-th tapping cycle in s, and n is the number of tapping cycles during the whole test.

The regularity of the finger-tapping movement is characterized by calculating PM for each tapping finger. Increasing the speed usually decreases the regularity of the movement, so we suggest characterising the performance of a finger during the finger-tapping test by the product of the parameters expressing speed (amxfr) and regularity (PM). Based on more than 300 finger-tapping tests, the Finger-Tapping Test Score, FTTS (Jobbagy et al., 2005), is devised:
FTTS = (PM - 0.6) · amxfr

PM was greater than 0.6 for all fingers of all healthy subjects and for nearly all Parkinsonian and stroke patients; subtracting 0.6 sets the proper relative weight of PM with respect to amxfr. PM is dimensionless, so FTTS is given in cm/s. Based on the scores of the fingers, scores can be calculated for the hands and for the whole person.

A. Jobbagy, G. Fazekas

One hand can be characterized by adding the results of the index, middle, and ring fingers. The score for pointing takes into account both the speed and the regularity of the movement. In contrast to finger-tapping, the amplitude should be constant: the two endpoints of the movement (table contacts) should be the same during the whole test, and a significant change in amplitude means improper execution. Speed, accuracy, and regularity are contradictory requirements. The Pointing Test Score (PTS) takes all these features into account; in addition to regularity, the smoothness of the movement is also considered:
PTS = fr × PM × (1 - hsm) × (1 - hac)

where fr is the average frequency, calculated as the reciprocal of the average time period of the movement (moving the finger from one point to the other and back), PM characterizes the similarity of the five periods, hsm expresses the smoothness of the movement, and hac the accuracy of hitting the marked points. Smoothness is quantified by the deviation of the average marker trajectory from the best-fit second-order curve. There is no "known good" position of the marker at the two endpoints: the marker is fixed to the middle phalanx of the index finger, so it is 2-3 cm from the fingertip, and the distance between the marker and the marked endpoint on the table also depends on the angle of the index finger to the table surface at table contact. Accuracy is characterized by the standard deviation of the marker positions at the two endpoints. Both hsm and hac are normalized; each of these errors can decrease PTS by at most 20%.

III. DISCUSSION

A. Comparison to conventional ratings

Using conventional tests (Hand Movement Scale (HM), Modified Ashworth Scale, FIM, and RMA), neurologists and physiotherapists assessed the ADL and impairment of stroke patients in parallel with the instrumental movement analysis. Spearman's rank correlation was used to compare the results of the different rating methods. The results support that the FIM, the RMA, and the Barthel index measure different abilities. FIM and Barthel index characterize ADL functions; the first six parameters of FIM assess self-care. RMA measures impairment, similarly to FTTS and PTS; however, RMA and PTS test global movement patterns, while FTTS evaluates finer motor movement. For the 14 stroke patients tested there is an excellent correlation between the FIM and the Barthel index, while the RMA scale differs from both, see Table 1.

Table 1 Correlation of conventional rating scales

                 FIM                    RMA scale
Barthel index    r = 0.92 (p < 0.001)   r = 0.61 (p < 0.03)
FIM              ---                    r = 0.65 (p < 0.02)

Both the finger-tapping and the pointing movements were evaluated for 12 stroke patients. For them, FTTS and PTS gave rankings different from the FIM and the Barthel index (see Table 2) but similar to the RMA scale: RMA-FTTS (affected hand): r = 0.50, p < 0.07; RMA-PTS (affected side): r = 0.50, p < 0.07.

FTTS and PTS of the affected side rank the patients similarly: r = 0.55 (p < 0.05). A strong correlation was found between the FTTS values of the affected and the non-affected hand: r = 0.85 (p < 0.001). The reason is the low speed of movement on both sides. In a previous study, Parkinsonian patients achieved quite different speeds with the affected and non-affected hands during finger-tapping (Jobbagy et al., 2005); stroke patients performed the finger-tapping movement at the same speed with both hands. As expected, the correlation between the PTS values of the affected and non-affected side was weak: r = 0.42, p > 0.14.

Table 2 Correlation of FTTS and PTS with FIM and Barthel index

                     FIM                    Barthel index
FTTS affected hand   r = -0.05 (p > 0.85)   r = 0.06 (p > 0.85)
PTS affected side    r = 0.05 (p > 0.85)    r = -0.03 (p > 0.9)
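As an illustration of how these scores could be computed, here is a minimal Python sketch; the helper names and the tapping data are made up for illustration, and the segmentation of the raw record into individual cycles is not shown:

```python
import numpy as np

def periodicity(cycles):
    """PM = sigma_1^2 / sum_i sigma_i^2, computed from the singular values of
    the matrix whose columns are the individual movement cycles (SVD method)."""
    s = np.linalg.svd(np.column_stack(cycles), compute_uv=False)
    return float(s[0] ** 2 / np.sum(s ** 2))

def amxfr(amplitudes_cm, periods_s):
    """Mean amplitude*frequency over the tapping cycles: (1/n) * sum(A_i / T_i), in cm/s."""
    return float(np.mean(np.asarray(amplitudes_cm) / np.asarray(periods_s)))

def ftts(pm, amplitudes_cm, periods_s):
    """Finger-Tapping Test Score: FTTS = (PM - 0.6) * amxfr."""
    return (pm - 0.6) * amxfr(amplitudes_cm, periods_s)

def pts(fr, pm, hsm, hac):
    """Pointing Test Score: PTS = fr * PM * (1 - hsm) * (1 - hac);
    hsm and hac are normalized so each reduces PTS by at most 20%."""
    return fr * pm * (1.0 - hsm) * (1.0 - hac)

# five identical cycles -> strictly periodic record -> PM = 1
t = np.linspace(0.0, 1.0, 100)
cycle = np.sin(2 * np.pi * t)
pm = periodicity([cycle] * 5)

# hypothetical tapping record: 5 cycles of ~3 cm amplitude and ~0.25 s period
score = ftts(pm, [3.0, 2.8, 3.1, 2.9, 3.0], [0.25, 0.26, 0.24, 0.25, 0.25])
print(pm, score)
```

For identical cycles the data matrix has rank 1, so only the first singular value is non-zero and PM evaluates to 1, as the text requires.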
During the parallel finger-tapping movement of the two hands, a synchronisation exists that keeps the speed of both hands the same. The pointing movement involves one arm at a time, and the non-affected side outperforms the affected one. Although similar to RMA, FTTS and PTS give a different measure of movement disorder than the conventional rating scales. Both FTTS and PTS provide neurologists with a new, objective rating method for stroke patients. The method has good reproducibility, and its resolution is better than that of manual assessment.

B. Assessment of increasing performance of patients

The improvement in PM and FTTS for J11 over 24 days is shown in Figures 3 and 4, respectively. The extent is notable: both the regularity and the speed of tapping increased for both hands and for all fingers. The change in PTS was similar. There was substantial improvement for the right (affected) arm, and the score also improved for the left arm. Six weeks after the onset of stroke, the PTS (a good PTS does not require fine motor activity) was equal for the affected and non-affected sides. The results provide neurologists with detailed information concerning the actual state of the patients.

Fig 3 Improvement in PM of J11 after 24 days

Fig 4 Improvement in FTTS of J11 after 24 days

IV. CONCLUSIONS

By using passive marker-based movement analysis, it is possible to assess the actual state of stroke patients with high resolution. The rehabilitation process can be quantitatively characterized using the finger-tapping and pointing tests. Impairment and physical disability can be objectively rated. Patients must cooperate to achieve reliable results, and it must be taken into account that the actual mental state influences performance. Malingering patients can be revealed by repeating the tests; a large scatter of results is suspicious.

Stroke patients perform differently from Parkinsonian patients. The objective measurements show that the movement disorders of stroke patients are more regular: tests repeated on the same day show better reproducibility, and the relative scores of the fingers during the finger-tapping test show much less variation than in the case of Parkinsonian patients. Finger-tapping should be performed by hemiparetic patients separately with the left and right hand; this eliminates the synchronisation of the affected and non-affected sides. A measurement series was taken from another group of stroke patients, and preliminary evaluation supports this test method.

The role of the physiotherapist is decisive. The movement analyser assures high resolution and good reproducibility, but it is the physiotherapist who convinces the patients of the importance of the measurement procedure and calms them down if necessary.

ACKNOWLEDGMENT

This study was supported by OTKA (Hungarian National Research Fund) project "Objective assessment of movement disorders" (T 049357, 2005-2008).

REFERENCES

1. Cavanagh SJ, Hogan K, Gordon V, Fairfax J: Stroke-specific FIM models in an urban population. J Neurosci Nurs. 2000 Feb;32(1):17-21.
2. Herndon RM (ed.): Handbook of Neurologic Rating Scales. Demos Medical Publishing, 1997.
3. Jobbagy A, Hamar G: PAM: Passive Marker-based Analyzer to Test Patients with Neural Diseases. Proc. of 26th Annual Conference of IEEE EMBS, 1-5 Sept. 2004, San Francisco, CA, USA, pp. 4751-4754.
4. Jobbagy A, Harcos P, Karoly R, Fazekas G: Analysis of the Finger-Tapping Test. Journal of Neuroscience Methods, January 30, 2005, Vol. 141/1, pp. 29-39.
5. Lincoln N, Leadbitter D: Assessment of motor function in stroke patients. Physiotherapy, 65 (1979), pp. 48-51.
6. Mahoney FI, Barthel D: Functional evaluation: the Barthel Index. Maryland State Med Journal, 1965; 14:56-61.
7. Stokes V, Lanshammer H, Thorstensson A: Dominant Pattern Extraction from 3D Kinematic Data. IEEE Tr. on BME, 1999 January, pp. 100-106.

Address of the corresponding author:
Author: Akos Jobbagy
Institute: Budapest University of Technology and Economics
Street: Magyar Tudosok krt. 2.
City: Budapest
Country: Hungary
Email: [email protected]
The Dissipation of Suction Waves in Flexible Tubes

J. Feng and A.W. Khir

School of Engineering and Design, Brunel Institute of Bioengineering, Brunel University, Uxbridge, UK

Abstract— Waves with a "suction effect" are observed in pulmonary veins and in coronary arteries; in spite of this, the dissipation of "suction waves" has not been studied before. This paper investigates the dissipation of a "suction wave" (a single negative sinusoidal wave) in the time domain using wave intensity analysis (WIA) in vitro. A pump generates a single half-sinusoidal wave by pushing a piston forward, or a negative half-sinusoidal wave by pulling the piston backward. Simultaneous measurements of pressure and flow were taken at 5-cm steps along a 2-m tube. The separated forward pressure, forward wave intensity, and forward energy were obtained using WIA. The results show that the pressure and velocity pulses of the suction wave are greater than those of the pushing wave, and that the suction wave dissipates to a greater degree. We therefore conclude that, under the same conditions of initial pressure and displaced volume, the "suction" pressure and flow pulses are greater than the "pushing" pulses, and the "suction" wave dissipates faster than the "pushing" wave.
Keywords— Suction wave, wave intensity, dissipation, time domain.

I. INTRODUCTION

Wave dissipation in arteries was previously studied intensively in the frequency domain, focusing on waves with a "pushing effect". Meanwhile, waves with a "suction effect" are observed in pulmonary veins [1] and in coronary arteries [2]; Davies [2] demonstrated that the "suction wave" plays the more important role in coronary blood flow. Although "suction waves" are important in both pulmonary and coronary flow, their dissipation has not been studied before. Most previous work investigated wave dissipation using only measured pressure waves [3]. It is accepted, however, that measured waveforms are the summation of incident and reflected waves; therefore, determining the degree of wave dissipation only by comparing the amplitudes of proximal and distal pressures is invalid [4]. This paper aims to investigate the dissipation of the "suction wave" (a single negative sinusoidal wave) in the time domain using wave intensity analysis (WIA). We also compare the degree of dissipation of the forward "suction wave" with that of the "pushing wave".

II. INSTRUMENTATION

The experimental setup, shown in Fig. 1, is composed of a piston pump, a tank full of water, a reservoir, and latex tubes. The pump generates a single half-sinusoidal wave by pushing the piston forward, or a negative half-sinusoidal wave by pulling the piston backward. Simultaneous measurements of pressure and flow were taken at 5-cm steps along the 2-m tube; pressure was measured using a catheter-tip pressure transducer and flow using an ultrasonic flow probe.

Fig. 1 Experimental setup (pump with pushing and sucking directions, flow probe and pressure transducer catheter, latex tube, PC)

Fig. 2 (a) "Pushing" wave; (b) "Suction" wave. Shaded area represents the compression wave; the initial pressure and the period T are indicated.
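As a small sketch of the piston waveforms just described (the sampling rate and pulse duration below are assumptions for illustration, not values from the paper):

```python
import numpy as np

fs = 1000                 # assumed sampling rate (Hz)
T = 0.2                   # assumed pulse duration (s)
t = np.arange(0, T, 1.0 / fs)

push_velocity = np.sin(np.pi * t / T)        # piston pushed forward: positive half-sinusoid
suction_velocity = -np.sin(np.pi * t / T)    # piston pulled backward: negative half-sinusoid

# The displaced volume is proportional to the time-integral of piston velocity,
# so equal-magnitude profiles displace the same volume in opposite directions.
dt = 1.0 / fs
vol_push = float(np.sum(push_velocity) * dt)
vol_suction = float(np.sum(suction_velocity) * dt)
print(vol_push, vol_suction)
```

Matching the displaced volume of the two waves is what makes the later pushing-vs-suction comparisons fair.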
A. Separation of forward and backward waveforms

The physical meaning of wave intensity (WI) [5] is the flux of energy carried by the waves per unit cross-sectional area; its units are W/m². Normally, WI has two peaks: for "pushing" waves the first peak represents the compression wave and the second the expansion wave, and vice versa for "suction" waves. The separation technique using WIA can be summarized as follows. The propagation of a wave-front in the forward and backward directions can be expressed by the water hammer equation:
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 278–281, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
dP± = ±ρc dU±    (1)

where dP and dU denote the changes in pressure and velocity, ρ is the density of the fluid, c is the wave speed, and "±" indicates the forward and backward directions of wave propagation. The changes of pressure across the wave-front in the forward and backward directions are:

dP± = (dP ± ρc dU) / 2    (2)

The forward wave intensity is:

dI+ = (dP + ρc dU)² / (4ρc)    (3)

The forward wave energy is:

I+ = ∫(0..T) dI+ dt    (4)

where T is the time of one cycle.
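Equations (1)-(4) can be checked numerically. The sketch below builds a purely forward half-sinusoidal wave, for which dP = ρc·dU exactly, so the backward component of Eq. (2) must vanish; the wave speed and sampling are made up, and a discrete sum stands in for the integral of Eq. (4):

```python
import numpy as np

rho, c = 1000.0, 10.0                  # water density (kg/m^3) and an assumed wave speed (m/s)
t = np.linspace(0.0, 0.2, 201)
U = 0.5 * np.sin(np.pi * t / 0.2)      # single half-sinusoidal velocity pulse
P = rho * c * U                        # a purely forward wave satisfies dP = rho*c*dU

dP, dU = np.diff(P), np.diff(U)
dP_fwd = 0.5 * (dP + rho * c * dU)     # Eq. (2), forward direction
dP_bwd = 0.5 * (dP - rho * c * dU)     # Eq. (2), backward direction
dI_fwd = (dP + rho * c * dU) ** 2 / (4 * rho * c)   # Eq. (3)
I_fwd = float(np.sum(dI_fwd))          # discrete analogue of Eq. (4), per-sample convention

print(np.allclose(dP_bwd, 0.0))        # True: no backward component
print(np.allclose(dP_fwd, dP))         # True: all pressure change travels forward
```

The same separation applied to measured P and U splits a mixed record into its forward and backward parts.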
III. RESULTS

Fig. 3 shows the pressure and velocity waveforms at the measurement site 25 cm from the inlet of the 12-mm-diameter tube. The pressure and velocity pulses of the suction wave (Fig. 3 b & d) are greater than those of the pushing wave (Fig. 3 a & c), although the experimental conditions, such as the initial pressure and the displaced volume, are the same. In this tube and at this measuring site, the "suction" pressure pulse is around 3.7 kPa whereas the "pushing" pressure pulse is only 2 kPa; likewise, the "suction" velocity pulse is just under 0.8 m/s whereas the "pushing" velocity pulse is about 0.6 m/s.

Fig. 3 Pressure and velocity pulse of "pushing" (a & c) and "suction" wave (b & d)

Fig. 4 shows the separated pressure waveforms and wave intensity at the same measurement site. The separated pressure pulse of the "suction" wave is greater than that of the "pushing" wave, and the amplitude of the forward wave intensity of the "suction" wave is greater than that of the "pushing" wave. Both the "pushing" and the "suction" wave intensity have two peaks: for the "pushing" wave the first peak represents the forward compression wave (FCW) and the second the forward expansion wave (FEW), and vice versa for the "suction" wave. For both waves, the peak of the FCW is greater than that of the FEW. We also found that the period of one cycle of the "suction wave" is slightly shorter than that of the "pushing wave", although the piston speed, controlled by the supply voltage, was set to the same value.

Fig. 4 Separated forward pressure wave and wave intensity for the "pushing" and the "suction" wave

Fig. 5 shows the normalized dissipation of the separated forward pressure of the suction and pushing waves in the 12-mm-diameter tube at an initial pressure of 5 kPa. Here the normalized forward pressure is the percentage of the forward pressure at any measuring site over that at the inlet. Similarly to the "pushing" wave, the normalized forward pressure of the "suction" wave dissipates exponentially with the traveling distance. The degree of dissipation of the "suction" wave is greater than that of the "pushing" wave, judging by the exponential factors of the regression equations.
[Plots of Figs. 5 and 7 (12-mm tube, 5 kPa); regression annotations recovered from the figures: pushing: y = 0.9729e^(-0.0904x), R² = 0.8939 and y = 0.9296e^(-0.1581x), R² = 0.8897; suction: y = 0.8882e^(-0.3462x), R² = 0.9111 and y = 0.705e^(-0.5588x), R² = 0.9136.]
Fig. 5 The normalized forward pressure dissipates along the traveling distance; the blue line indicates the regression line for the pushing wave and the pink line for the suction wave. The degree of dissipation is expressed as an exponential decay function, in which x indicates the traveling distance and y the percentage of the value at the measurement site over the initial value.

Figs. 6 and 7 show the normalized dissipation of the separated forward wave intensity and forward wave energy of the suction and pushing waves in the 12-mm-diameter tube at an initial pressure of 5 kPa. The results also show that the degree of dissipation of the "suction" wave is greater than that of the "pushing" wave.
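The exponential regressions reported in Figs. 5-7 can be reproduced with a simple log-linear least-squares fit; the data points below are synthetic, chosen only to mimic the reported pushing-wave decay:

```python
import numpy as np

# Hypothetical normalized forward-pressure readings versus distance (m); the
# decay constant mimics the pushing-wave figure, plus small multiplicative noise.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])
noise = np.array([0.3, -0.5, 0.2, 0.4, -0.1, 0.2, -0.3]) * 0.01
y = 0.97 * np.exp(-0.09 * x) * (1.0 + noise)

# Fit y = a * exp(b*x) by ordinary least squares on log(y)
b, log_a = np.polyfit(x, np.log(y), 1)
a = np.exp(log_a)

# Coefficient of determination of the fit in log space
resid = np.log(y) - (b * x + log_a)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((np.log(y) - np.mean(np.log(y))) ** 2)
print(f"y = {a:.4f} * exp({b:.4f} * x), R^2 = {r2:.4f}")
```

The exponent b is the decay factor compared between the pushing and suction waves; a more negative b means faster dissipation.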
[Plot of Fig. 6 (12-mm tube, 5 kPa); regression annotations recovered from the figure: pushing: y = 0.8459e^(-0.4356x), R² = 0.8357; suction: y = 0.631e^(-0.6419x), R² = 0.854.]

Fig. 6 The normalized forward wave intensity dissipates along the traveling distance; the blue line indicates the regression line for the pushing wave and the pink line for the suction wave. The degree of dissipation is expressed as an exponential decay function, in which x indicates the traveling distance and y the percentage of the value at the measurement site over the initial value.
Fig. 7 The normalized forward wave energy dissipates along the traveling distance; the blue line indicates the regression line for the pushing wave and the pink line for the suction wave. The degree of dissipation is expressed as an exponential decay function, in which x indicates the traveling distance and y the percentage of the value at the measurement site over the initial value.
IV. DISCUSSION

Coronary arterial flow is known to be greater during the diastolic phase than during the systolic phase. This phenomenon has been explained by the relaxation of the left ventricle during diastole, which generates a suction wave in the distal coronary arteries [2]. Likewise, the pulsatile nature of pulmonary venous flow has been suggested to be partially attributable to the suction created by the relaxation of the left atrium and the filling of the left ventricle [6]. The investigation of suction-wave dissipation might therefore be of clinical significance for coronary arterial and pulmonary venous flow.

Although the experimental conditions, such as the initial pressure and the displaced volume, were the same, the pressure and flow pulses of the suction wave were much greater than those of the pushing wave. We speculate on the reasons below.

A. Effect of the cross-sectional area of the tube

As the wave propagates along the tube, the pressure, diameter, and flow vary with time, and the changes of these three parameters are interdependent. The decrease in diameter when the "suction" is generated might result in a greater flow pulse. Therefore, the greater pressure and flow pulses of the "suction" wave might be partially attributed to the smaller cross-sectional area of the tube.
B. The mechanism by which the "pushing" and "suction" waves are generated

As discussed in the analysis section, when the "pushing" wave is generated the measured pressure is greater than the undisturbed pressure; in contrast, the measured pressure of the "suction" wave is lower than the undisturbed pressure. The "suction" wave is produced by pulling the piston backward, and there is no gap between the piston and the liquid before the wave is produced. Therefore, while the piston moves backwards, a partial vacuum is created between the piston and the liquid, which causes a sudden decrease of pressure and thus produces the "suction" wave. We believe that atmospheric pressure has no effect on the "pushing" wave but does act on the "suction" wave when the vacuum forms between the piston and the liquid; this different effect of atmospheric pressure on the two waves might be another explanation for the greater dissipation associated with the "suction" wave.

In addition, comparison of the dissipation of the "pushing" and "suction" waves indicates that the pattern of dissipation of the "suction" wave is similar to that of the "pushing" wave but of a greater degree. The reason for the greater dissipation of the "suction" wave is still not clear, but it is speculated to be due to its larger pressure and flow pulses.

V. CONCLUSIONS

We conclude from this study that, under the same conditions of initial pressure and displaced volume, the "suction" pressure and flow pulses are greater than the "pushing" pulses, and the "suction" wave dissipates faster than the "pushing" wave.
REFERENCES

1. Smiseth OA, et al. (1999) The pulmonary venous systolic flow pulse - its origin and relationship to left atrial pressure. Journal of the American College of Cardiology 34(3):802-809.
2. Davies E, et al. (2006) Evidence of a dominant backward-propagating "suction" wave responsible for diastolic coronary filling in humans, attenuated in left ventricular hypertrophy. Circulation 113:1768-1778.
3. Horsten JBAM (1989) Linear propagation of pulsatile waves in viscoelastic tubes. Journal of Biomechanics 22:477-484.
4. Nichols WW, O'Rourke MF (2005) McDonald's Blood Flow in Arteries: Theoretical, Experimental and Clinical Principles. Arnold, Oxford University Press, New York, p. 64.
5. Parker KH, Jones CJH (1990) Forward and backward running waves in the arteries: analysis using the method of characteristics. Transactions of the ASME: Journal of Biomechanical Engineering 112:322-326.
6. Keren G, et al. (1985) Pulmonary venous flow pattern - its relationship to cardiac dynamics. A pulsed Doppler echocardiographic study. Circulation 71:1105-1112.

Author: J. Feng
Institute: School of Engineering and Design, Brunel University
Street: Kingston Lane
City: Uxbridge
Country: UK
Email: [email protected]
Virtual Rehabilitation of Lower Extremities

T. Koritnik, T. Bajd and M. Munih

University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia

Abstract— The paper presents a kinematic model of the human body and a corresponding graphic representation of the human figure in virtual reality. The model was developed to visualize the movements of a subject in a real-time virtual environment on a large display, which represented a virtual mirror. An optical system with active markers was used to assess the movements of the subjects. We conducted an experiment with 10 healthy adults performing a stepping-in-place test in a virtual environment by tracking the motion of a reference virtual figure, which represented the virtual instructor. Both figures, the training subject and the virtual instructor, were superimposed and shown from the desired angle of view. Our aim was to study the abilities of immersion and adaptation to the reference movements through the virtual mirror. The results of this preliminary investigation include basic kinematic and temporal parameters of the stepping movements, providing quantitative evaluation and comparison of the subjects' performance.
Keywords— virtual rehabilitation, virtual mirror, lower extremities training, stepping-in-place

I. INTRODUCTION

Virtual rehabilitation has a relatively short history in the clinical environment. Virtual reality (VR) can provide enhanced information about the activities being performed in the real environment, thereby augmenting natural biofeedback. Visual feedback is the most common way of introducing virtual reality into rehabilitation [1], yet most existing applications and experimental studies focus on the upper extremities. For the rehabilitation and training of the lower extremities we propose a virtual mirror (VM). The VM is a large display in front of which the subject performs the movements. It shows a human-like virtual figure which represents the subject (Fig. 1) and can be viewed from any desired angle; the movements of the figure correspond to the movements of the subject in real time. Observing the figure in the VM alone provides no additional information to the subject compared to a conventional mirror or video camera, apart from the changeable viewing angle; however, the virtual environment allows other interactive elements to be included in the picture. We therefore included another virtual figure in the VM which represented the virtual instructor. Both figures were superimposed on each other, but the virtual instructor was rendered transparent yellow, as opposed to the subject's solid grey figure (Fig. 2).

Fig. 1 Virtual mirror: a large screen showing the movements of the subject in real time
The movements of the virtual instructor can be preprogrammed in any desired fashion, representing the reference for the subject. The subject is then instructed to track the movements of the virtual instructor as closely as possible. This way of visualization and reference tracking can be useful in clinical practice, primarily because it engages the patient's visual biofeedback more actively. Furthermore, it provides tracking of the patient's performance, based on actual measurements and quantitative evaluation. We conducted an investigation featuring the virtual mirror in a stepping-in-place (SIP) test. The SIP test has long been established in the clinical environment for the indication or detection of various diseases and dysfunctions. It has been applied in cases of peripheral vestibular dysfunction [2], stroke patients [3], and Parkinson's disease [4]; however, these SIP applications have not included virtual reality. We propose the SIP in the VM also as a training modality in lower-extremity rehabilitation. In this preliminary study, 10 healthy male subjects performed the SIP by tracking the performance of the virtual instructor. We studied the ability of adaptation in the VM by analyzing and comparing the basic kinematic patterns.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 262–265, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 Superimposed figures of the subject and the virtual instructor in the virtual mirror
II. METHODS AND MEASUREMENTS

A. Kinematic model

The movements of the virtual figure in the VM were based on a simplified kinematic model of the human body. The model consisted of 13 rigid segments representing the body parts: the head, torso, pelvis, upper arms, forearms, thighs, shanks, and feet. The segments were connected by rotational joints as shown in Fig. 3; the head-torso joint, the torso-pelvis joint, the shoulders, and the hips were modeled as 3-degrees-of-freedom (DOF) spherical joints, while the elbows, knees, and ankles were represented as 1-DOF hinge joints.
The axes of knee and ankle rotation were parallel for each leg, so that the segments representing the thigh, shank, and foot all moved in the same plane in space; this plane is defined by a normal vector pointing in the same direction as the axes of knee and ankle rotation. The pelvic segment was considered the base of the model, meaning that all segment positions and orientations were calculated with reference to the pelvic center point, located in the middle of the line connecting the two hip joints. The pelvic segment moved freely in space and therefore had 6 DOF. The model exhibited a total of 30 DOF, of which 24 DOF were encompassed in the joints, 3 DOF represented the position, and the remaining 3 DOF described the orientation of the pelvis. We used the vector-parameters method to compute the forward kinematics of the model [5]. To obtain the joint-variable values, we used active markers to measure positions in space. In general, at least 3 markers per segment are required to obtain its position and orientation unambiguously; in our case this would have required 39 markers. To reduce the number, we placed the markers on the skin directly over the approximate centers of joint rotation and exploited the geometric constraints imposed by the geometry of the model. In this way, the number of markers needed to provide all the required joint variables was reduced to 17. The positions of the markers were measured using the OPTOTRAK (Northern Digital Inc.) system at a 70-Hz sample rate. The pose of the pelvic segment was determined from three markers placed over the posterior superior iliac spines (PSIS) and the lower edge of the sacrum; in addition, the positions of the PSIS and sacral markers were used to calculate the centers of the hip joints [6]. The remaining markers were placed over the knees, ankles, metatarsophalangeal joints, shoulders, elbows, wrists, and on the head, as shown in Fig. 3.
The position of the body was represented by the position coordinates of the pelvic center, expressed as a percentage of the subject's body height (BH) to allow comparison among subjects. The joint angles were calculated from the vectors connecting the neighboring joints, which represented the body segments. Vector cross-products were applied to the consecutive body-segment vectors to obtain the corresponding axes and angles of joint rotation, and the segment coordinate systems.

Fig. 3 Kinematic representation of the human body (left) and placement of the active markers (right)

B. Virtual mirror
A human-like figure was used to visualize the subject's movements in the VM. It comprised 13 rigid segments representing the body parts [7]. The motion of the figure corresponded to the kinematic data obtained from the OPTOTRAK measurements.

__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________

The movements of the figure followed the subject's movements at a 35 Hz refresh rate, without noticeable lag, thereby enabling a convincing perception of the VM. We used VRML 2.0 (Virtual Reality Modeling Language) and MATLAB to implement the VM, exploiting the built-in 3D visualization functions of the graphics processing unit (GPU). Kinematic data were fed into the VRML model in the form of a four-element vector for each joint (the x, y, and z components of the rotation-axis vector and the rotation-angle value, together forming the standard axis-angle notation). The position of the subject was normalized by body height (BH) to enable the use of the same virtual figure for all subjects. The model was represented in the VRML environment as a tree structure of transform nodes:
- pelvis (root node)
- left thigh
- left shank
- left foot
- right thigh
- right shank
- right foot
- torso
- head
- left upper arm
- left forearm
- right upper arm
- right forearm
Since a particular node inherits its initial position and orientation from the preceding node, it was not necessary to compute the forward kinematics of the model to drive the VRML figure; this was performed by the GPU, provided with the appropriate tree structure and axis-angle vectors. Placing the markers on the subject's body was followed by a simple calibration procedure. The subject was instructed to stand still in front of the VM in quiet stance for 3 s, facing straight forward with the feet parallel and the knees fully extended. The median values of the joint angles during the calibration were registered as offsets to the initial pose. The initial pose of the virtual figure was the same, with all angles set to 0; offset-compensated values were then assigned to the virtual figure during the SIP training. Similarly, the median of the pelvic position during the stance was taken as the origin point.

C. SIP experiment

We used the SIP test in VR to assess the subjects' ability to adapt during training of lower-extremity movements. We focused on observing the lower body during the test; therefore, the upper-body kinematics of the VRML model was appropriately simplified. We considered the head, arms, and torso as a single rigid segment (HAT). The resulting model consisted of only 8 segments, and consequently only 11 active markers were needed.
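The offset calibration described above can be sketched as follows (in Python rather than the authors' MATLAB; the sample values are stand-in data, not a recorded measurement):

```python
import statistics

# Joint-angle samples [deg] collected during the 3 s quiet stance;
# at the 70 Hz sample rate a real recording would hold 210 samples.
stance_samples = [1.9, 2.1, 2.0, 2.2, 1.8, 2.0]
offset = statistics.median(stance_samples)  # registered offset to the zero pose

def compensated(angle_deg):
    """Offset-compensated value assigned to the virtual figure."""
    return angle_deg - offset
```

With these samples the registered offset is 2.0°, so a measured knee angle of 45° drives the virtual figure with 43°; the median makes the offset robust against occasional marker jitter during the stance.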
T. Koritnik, T. Bajd and M. Munih
The subjects were instructed to follow the stepping movements of the virtual instructor as closely as possible during the test. The virtual instructor was preprogrammed with two different SIP tasks featuring different cadences. The height of knee lifting (i.e., the hip angle) was set to 45°, while the cadences were 60 and 120 beats per minute (BPM), respectively. The number of step repetitions was 30 for both tasks. The motion of the virtual instructor was obtained by recording the steps of a healthy male subject (25 years) who was well familiarized with the VM. A single step was isolated and adjusted for smoothness and symmetry for each task, and then replayed repeatedly for the test subjects to track. The following parameters were recorded during the test: the rotation-axis vectors and corresponding angles of the HAT, pelvis, thighs, shanks, and feet, and the position of the pelvis. These data were sufficient to replay the subject's SIP performance later, with or without the virtual instructor included in the replay. The test group consisted of 10 healthy male subjects (age 23-39 years; mean value (MV) = 28.5 years, standard deviation (SD) = 4.7 years). None of the subjects had a history of any medical condition that could influence SIP performance.

III. RESULTS

From the recorded data, the maximal knee angle in each step and the stance-phase durations were selected to represent the SIP results. The first 5 steps and the remaining 25 steps were considered separately: the first 5 steps were taken into account individually and represented the subject's initial response to the virtual instructor, while the remaining steps were summarized in terms of MVs and SDs (Table 1).

Table 1 Knee angles and stance durations, steps 6-30

                 max knee angle [°]         stance duration [s]
cadence [BPM]    MV     SD     reference    MV     SD      reference
1 (60)           84.1   10.3   86           1.29   0.075   1.25
2 (120)          82.4   12.9   83           0.59   0.049   0.62
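The summary statistics in Table 1 are plain means and standard deviations over the per-step values; a sketch with synthetic data (the listed angles are made up, not measurements from the study):

```python
import statistics

# Hypothetical per-step maximal knee angles [deg] for steps 6-30
max_knee = [84.0, 85.2, 83.1, 84.5, 83.8]
mv = statistics.mean(max_knee)   # mean value (MV), as tabulated
sd = statistics.stdev(max_knee)  # sample standard deviation (SD)
```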
Fig. 4 shows the maximal knee angle during the first 5 steps for both tasks and all subjects. The dashed line represents the reference angle of the virtual instructor; the grey bars represent the average angle over all subjects in each step, while the error bars indicate the maximal and minimal angles recorded in each step. Stance durations during the first 5 steps are shown in Fig. 5. It was observed during the SIP test that all subjects were able to adapt to the movements of the virtual instructor within the first five steps. When performing the task with the higher cadence, the subjects needed more time on average to adapt their knee angle to the virtual instructor. The stance-phase durations, interpreted together with the knee angles, indicated that some subjects attempted to catch up with the virtual instructor but that their movements did not manifest as articulated steps, as their feet were still touching the ground. During the remaining 25 stepping periods, all subjects were able to track the movements of the virtual instructor without missing any steps and without exhibiting any significant trends. This reinforced the impression that healthy subjects can adapt to the VM quickly, with only the faster cadence being more difficult to follow.

Virtual Rehabilitation of Lower Extremities
Fig. 4 Mean values of maximal knee angles achieved during the first five stepping periods, with maximal and minimal deviations

Fig. 5 Mean stance-phase durations during the first five stepping periods, with maximal and minimal deviations

IV. CONCLUSIONS

The current study offered a preliminary insight into the use of a 3D human-body model and VR as a lower-extremity training modality. The overall complexity of the model resulted in smooth, natural-appearing motion and a convincing representation of the subjects' actual body movements. Introducing a virtual mirror enabled active inclusion of the subjects in the training process. We evaluated the VR adaptation in 10 healthy subjects using the SIP test. When designing the VM SIP experiment, we aimed to keep the number of markers as low as possible. We found optical measurements suitable for the laboratory environment; however, the time needed to prepare the subjects for the SIP test (marker and strober-unit placement, system setup and checking, etc.) was long compared to the duration of the test itself. In addition, the subjects were asked to stand still for most of the setup time. This might raise some concerns when considering SIP training for patients undergoing lower-extremity rehabilitation. In this regard, motion-assessment techniques utilizing computer vision [8], [9] and accelerometers are becoming promising complements to the existing optical measurements.

ACKNOWLEDGEMENT

This work was supported by the Slovenian Research Agency.

REFERENCES
1. Holden MK (2005) Virtual environments for motor rehabilitation: review. Cyberpsychology and Behavior 8(3):187–211
2. Fukuda T (1958) The stepping test: two phases of the labyrinthine reflex. Acta Otolaryngol 50:95–108
3. Garcia RK et al. (2001) Comparing stepping-in-place and gait ability in adults with and without hemiplegia. Arch Phys Med Rehabil 28:36–42
4. Sasaki O et al. (1993) Stepping analysis in patients with spinocerebellar degeneration and Parkinson's disease. Acta Otolaryngol 113:466–470
5. Lenarcic J (1988) Kinematics. In: Dorf R (ed) International encyclopedia of robotics. John Wiley, New York
6. Frigo C, Rabuffetti M (1998) Multifactorial estimation of hip and knee joint centres for clinical application and gait analysis. Gait Posture 8(2):91–102
7. De Leva P (1995) Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters. J Biomech 29:1223–1230
8. Cailette F, Howard T (2004) Real-time markerless human body tracking with multi-view 3-D voxel reconstruction. In: Proceedings of the British Machine Vision Conference, London, September 2004, pp 597–606
9. Ude A (1999) Robust estimation of human body kinematics from video. In: Proceedings of IEEE/RSJ Intelligent Robots and Systems, Kyongju, Korea, October 1999, pp 1489–1494
Author: Tomaž Koritnik
Institute: University of Ljubljana
Street: Tržaška cesta 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
A Web-Based E-learning Application on Electrochemotherapy

S. Corovic, J. Bester, A. Kos, M. Papic and D. Miklavcic

University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia
Abstract— In this paper we present a web-based e-learning application on electrochemotherapy, an effective local tumor treatment employing locally applied high-voltage electric pulses in combination with chemotherapeutic drugs. The antitumor treatment outcome is directly related to the electric field distribution in the target/tumor tissue. As the electric field distribution cannot be displayed during the therapy, we use numerical calculations in combination with web-based tools that allow visualizing and understanding the parameters important for effective electrochemotherapy.

Keywords— electropermeabilization, electrochemotherapy, numerical modeling, electric field distribution, e-learning.
I. INTRODUCTION

Electrochemotherapy (ECT) is an effective approach to tumor treatment employing locally applied high-voltage electric pulses (HV pulses) in combination with chemotherapeutic drugs. ECT is easy and quick to perform, with only minor and acceptable side effects [1]. ECT is performed using either intravenous or intratumoral drug injection, followed by the application of electric pulses, generated by a pulse generator and delivered to the target tissue via appropriate electrodes. In response to the delivery of the HV pulses, a local electric field (E) is established within the treated tissue. A sufficient magnitude of the electric field initiates electropermeabilization of the cell membranes, which allows increased entrance of the drug into the cells and thus improves the effectiveness of electrochemotherapy. The critical value is the reversible threshold of the local electric field (Erev), which causes structural changes in the target tissue: when Erev is reached, the cell membranes are permeabilized and chemotherapeutic drugs enter the target cells. This modality of cell-membrane permeabilization by means of an electric field is termed electropermeabilization. For successful electrochemotherapy, the entire volume of the target/tumor tissue needs to be subjected to a local electric field above the reversible threshold value (Erev). By appropriate selection of the electrodes and of the amplitude of the electric pulses, the needed electric field can be obtained only inside the target/tumor tissue, while damage to the surrounding healthy tissue is prevented or minimized. Thus, the key parameter for a successful antitumor treatment outcome by
means of electrochemotherapy is a sufficiently high local electric field (E) inside the target/tumor tissue. The development of the electrochemotherapy treatment required multidisciplinary expertise, namely collaboration and the exchange of knowledge and experience among experts in the fields of medicine, biology, and engineering. The efficacy of electrochemotherapy can be improved with good knowledge of the parameters of the local electric field, which are crucial for successful tissue electropermeabilization and consequently for the best electrochemotherapy treatment outcome. To make the therapy as efficient as possible, it is of great importance to transfer that knowledge to the practicing clinicians who plan or perform the treatment. To collect, organize, and transfer the acquired knowledge, web-based technologies have become an indispensable tool in modern teaching [2], [3]. Web-based e-learning programs offer more educationally effective and enjoyable learning and teaching methods than conventional methods such as learning through listening to spoken words. Furthermore, the use of web-based e-learning techniques enables the simulation of the users' participation in "hands-on" learning activities, which has been shown to be the most retentive learning method [4]. In this paper we present a web-based e-learning application that was developed to collect, organize, and provide knowledge and experience about electrochemotherapy. One of the main objectives of the presented e-learning application is to demonstrate the importance of the local electric field distribution for effective electropermeabilization of the treated tissue. Namely, the antitumor electrochemotherapy treatment outcome is directly related to the electric field distribution in the tissue.
As the electric field distribution cannot be displayed during the therapy, we use numerical calculations in combination with web-technology tools that allow visualizing and understanding the parameters important for effective electrochemotherapy. The aims of our web application are:
- to provide explanations of the important parameters of the electric field distribution in biological tissues, determined based on experiments and theory;
- to show the comparison of experimental results with theory;
- to make understandable, to people not skilled in electric pulse delivery in vivo, how the electric field distributes in tissues, the importance of the respective geometries of the tissue and the electrodes, how to choose the electrode type, and how to place the electrodes with respect to the target tissue.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 323–326, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The target users of our web-based educational program are the clinicians involved in choosing and carrying out the treatment, medical personnel, patients, and all those who want to learn about the electric field distribution, which is important in electrochemotherapy as well as in other therapeutic modalities based on tissue electropermeabilization, such as gene electrotransfer.

II. METHODOLOGY

The e-learning web application is based on HTML, JavaScript, ASP, and Macromedia Flash web technologies. For graphical illustrations and 3-dimensional visualizations of the electroporation process at the levels of the cell membrane, the cell, and tissues, the software package 3D Studio Max was used. Based on numerical calculations of the electric field distribution carried out with the software packages FEMLAB and Matlab, simpler 2-dimensional and 3-dimensional illustrations were designed using 3D Studio Max, Macromedia Flash, PhotoShop, and CorelDraw. The educational content (textual and graphical information) is published using Hypertext Markup Language (HTML). The e-learning application is integrated into the E-CHO e-learning system developed at the Faculty of Electrical Engineering (University of Ljubljana) by the Laboratory of Telecommunications [www.ltfe.org]. The E-CHO e-learning system is an interactive e-learning environment enabling user authentication, statistical analysis, network traffic measurement, support for video streaming, and the use of various types of communication among users, such as forums, e-mail correspondence, videoconferencing, etc. [5].

III. THE STRUCTURE OF THE E-LEARNING APPLICATION

The e-learning application was developed to provide educational material about the basics of electrochemotherapy, with special emphasis on the electric field distribution within biological tissues and the parameters of the local electric field that are crucial for successful electropermeabilization of the target tissue.
The target tissue is a tumor treated with electric pulses delivered via the electrodes. To obtain an efficient therapy, all clonogenic cells forming the tumor tissue have to be exposed to an electric field intensity above the threshold value. Therefore, the educational content is built around the fact that it is crucial to know which parameters of the local electric field are important to make the tumor treatment as efficient as possible. The main structure of the e-learning application is shown in Fig. 1. The first part of our web-based distance-learning application brings together educational material presenting the basic mechanisms underlying the electroporation process at the levels of the cell membrane, the cell, and tissues, and the basic background on electrochemotherapy (see the first four chapters in Fig. 1). The following chapter, MODELING OF ELECTRIC FIELD DISTRIBUTION, contains an introductory description of the importance of visualizing the electric field distribution by means of numerical modeling. The user is warned about the main errors that can be committed during tumor treatment, such as inadequate electrode geometry and insufficient amplitude of the electric pulses. Further, we stress the fact that the local electric field inside the treated tissue is markedly inhomogeneous and lower than the ratio U/d (applied voltage U over the distance d between the electrodes), due to the specific structure and electric properties of biological tissues (particularly tumor tissues). The application proceeds with the chapter providing educational material about the important parameters of the local electric field distribution needed for successful electrochemotherapy, such as:
- electrode geometry (needle or plate electrodes);
- dimensions of the particular electrode (width, length, diameter...);
- distance between electrodes;
- electrode position with respect to the target tissue;
- electrode orientation with respect to the target tissue;
- geometry of the target tissue;
- geometry of the tissue surrounding the target tissue;
- the contact surface between the electrode and the tissue;
- electric properties of the target tissue, i.e., tissue conductivity;
- electric properties of the surrounding tissue;
- the voltage applied to the electrodes; and
- threshold values of the tissue, Erev and Eirrev.
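The inequality stressed above, E < U/d, can be made concrete with a one-line estimate (hypothetical numbers, not values from the application): the ratio U/d is only the homogeneous upper bound on the field between parallel plate electrodes.

```python
# Nominal field between two parallel plate electrodes; inside real,
# inhomogeneous tissue the local field stays BELOW this ratio.
U = 500.0          # voltage applied to the electrodes [V] (placeholder)
d = 8e-3           # distance between the electrodes [m] (placeholder)
E_nominal = U / d  # [V/m]; dividing by 100 gives V/cm
```

Here U/d is 62 500 V/m (625 V/cm); the field actually reaching the tumor must be obtained numerically, which is why the modeling chapters matter.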
Next, based on explanatory examples, we explain the influence of the shape and position of the electrodes, with respect to the geometrical and electrical properties of the target tissue, on the electric field distribution within the treated tissue. Each chapter concludes with a link to explanatory comments that point out the main conclusions about the educational content of the page. To draw the user's attention, this page opens in a separate window whose size corresponds to the size of the web-page content.

Fig. 1 The structure of the e-learning application about electrochemotherapy

The last chapter, Electric field distribution (E) within 3D models of biological tissues, provides HTML-based educational material consisting of 2D and 3D animations of the electric field distribution in models of cutaneous and subcutaneous tumors. This chapter demonstrates the following tissue models:
- the model of a cutaneous tumor (see Fig. 2a),
- the model of a subcutaneous tumor (see Fig. 2b), and
- the model of a subcutaneous tumor with skin (see Fig. 2c and Fig. 2d).

Fig. 2 Model geometries presented in the chapter Electric field distribution (E) within 3D models of biological tissues: a) cutaneous tumor; b) subcutaneous tumor; c) subcutaneous tumor with thinner skin layer (1 mm) and d) subcutaneous tumor with thicker skin layer (3 mm)

The objective of this part of the e-learning application is to provide interaction with the educational content in order to simulate the "hands-on" learning approach to the parameters of the local electric field distribution that have been explained previously. Namely, by changing different parameters with a mouse click on the buttons in the navigation bar, users can design the needed electric field intensity according to the properties of the target tissue. Based on this, we point out that plate electrodes are more suitable for the treatment of protruding cutaneous tumors, where the entire volume can be held between the electrodes, while in cases where the tumor is seated more deeply in the skin, needle electrodes are to be used. The parameters that can be varied are the applied voltage, the distance between the electrodes, and the electrode shape. The electric field distribution in the 3D models can be played as a 3D animation and displayed in three orthogonal cross-sections (see the navigation bar in Fig. 3). The electric field distribution is displayed in the range from Erev to Eirrev. The region of the tissue below Erev is considered unpermeabilized, and the region exposed to E > Eirrev is irreversibly permeabilized (see the color bar in Fig. 3). In the first two models, of a cutaneous protruding and a subcutaneous non-protruding tumor (Fig. 2a and Fig. 2b), the specific conductivity of the target tissue is equal to that of the surrounding tissue. Since these two models are assumed to be homogeneous, the electric field distribution does not depend on the specific conductivities of the tissues, but only on the electrode size and position and on the amplitude of the applied voltage. By comparing the local electric field distribution within these two models, one can appreciate the influence of the target-tissue geometry on successful electropermeabilization (E > Erev).
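Two small sketches in Python may help fix the ideas (all numerical values are placeholders, not thresholds or conductivities used by the application). The first maps a local field magnitude to the regions of the color bar just described; the second anticipates the two-layer skin model discussed below with a 1-D series (voltage-divider) approximation, which shows why a poorly conducting skin layer takes the larger share of the field.

```python
def classify(E, E_rev, E_irrev):
    """Color-bar regions: below E_rev unpermeabilized, between E_rev and
    E_irrev reversibly permeabilized (the window exploited by ECT),
    above E_irrev irreversibly permeabilized."""
    if E < E_rev:
        return "unpermeabilized"
    return "reversible" if E < E_irrev else "irreversible"

E_rev, E_irrev = 40e3, 80e3  # [V/m], placeholder thresholds
regions = [classify(E, E_rev, E_irrev) for E in (20e3, 55e3, 90e3)]

# 1-D two-layer approximation of electroporation through the skin: in
# steady state the current density is continuous across the interface,
# so sigma_skin * E_skin == sigma_tissue * E_tissue, and the applied
# voltage splits as U == E_skin * t_skin + E_tissue * t_tissue.
sigma_skin, sigma_tissue = 0.02, 0.4     # [S/m], placeholder conductivities
U, t_skin, t_tissue = 600.0, 1e-3, 9e-3  # [V], [m], [m]
E_tissue = U / (t_tissue + t_skin * sigma_tissue / sigma_skin)
E_skin = E_tissue * sigma_tissue / sigma_skin
# E_skin >> E_tissue: the resistive skin absorbs most of the field, so a
# thicker skin layer requires a higher U to reach E_rev underneath.
```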
Namely, for successful electropermeabilization of cutaneous protruding tumors, a lower voltage amplitude needs to be applied to the electrodes than for successful electropermeabilization of a subcutaneous non-protruding tumor, while the shape of and distance between the electrodes are kept constant. The model of the subcutaneous tumor gives the user an insight into the electric field distribution within the target tissue when electroporated through the skin. This model is composed of two layers: the upper layer represents skin tissue with a lower specific conductivity than the more conductive underlying layer. Unlike the previous two homogeneous models, the electric field distribution in this model depends on the geometrical and electrical properties of both tissues. The electric field distribution is presented by two models with two different thicknesses of the skin layer: 1 mm (see Fig. 2c) and 3 mm (see Fig. 2d). The aim of this part of the educational content is to contribute to the understanding of the influence of the specific conductivity and thickness of the skin layer on the electric field distribution within the underlying tissues where the target tissue (a subcutaneous tumor) is located. Thus, the user can appreciate the influence of the skin thickness on the electric field distribution in the target and surrounding tissues. The key message is that, in order to successfully electropermeabilize a target tissue electroporated through the thicker skin layer (3 mm), a higher voltage needs to be applied than for the same target tissue electroporated through the thinner skin layer (1 mm), while the shape of and distance between the electrodes are kept constant. Based on this, the user is offered both basic explanations of the role of the highly resistive skin tissue (especially the stratum corneum) in the electric field distribution within treated tissues and guidelines on how to overcome the highly resistive skin in order to permeabilize the more conductive underlying tissues. In Fig. 3 the electric field distribution inside the cutaneous protruding tumor, obtained with two different amplitudes of applied voltage (Fig. 3a: U = 300 V and Fig. 3b: U = 600 V) using two parallel plate electrodes, is shown as an example. The comparison of these two figures demonstrates that by increasing the voltage applied to the electrodes (for the same tissue geometry, electrode size, and position), the electric field becomes more intense and extends toward the central volume of the tumor. A similar effect can be achieved by extending the electrode length (in our case, by switching from 4 mm to 7 mm electrode length, the entire volume of the encircled region can be subjected to E > Erev). The educational web pages are concluded by a test that gives the user an opportunity to check the acquired knowledge, while allowing the teacher and the web developer to follow the efficacy of the constructed pages and their educational contents. An important property of the educational web application is that it is upgradeable, so that the present contents can be modified and new contents easily incorporated.

IV. CONCLUSIONS

The educational content of our web-based application will contribute to the understanding of the mechanisms underlying the electropermeabilization process in biological tissues. It is especially aimed at providing knowledge about the parameters of the local electric field that are important for successful electrochemotherapy. The objective of the e-learning application is also to give suggestions and guidance to practitioners on how to choose the needed shape and placement of the electrodes, as well as the appropriate amplitude of the electric pulses, in order to improve the electrochemotherapy outcome.
REFERENCES

1. Marty M, Sersa G et al. (2006) Electrochemotherapy – an easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: results of the ESOPE (European Standard Operating Procedures of Electrochemotherapy) study. EJC Supplements 4:3–13
2. Day AJ, Foley DJ (2006) Evaluating a web lecture intervention in a human-computer interaction course. IEEE Trans Educ 49:420–431
3. Humar I, Sinigoj A, Bester J, Hegler OM (2005) Integrated component web-based interactive learning systems for engineering. IEEE Trans Educ 8:664–675
4. Dale E (1969) Audio-Visual Methods in Teaching, 3rd edn. Holt, Rinehart and Winston, New York
5. www.ltfe.org

Author: Selma Corovic
Institute: Faculty of Electrical Engineering, University of Ljubljana
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Fig. 3 Electric field distribution inside the protruding tumor for two different voltages (U) applied to the electrodes: a) U = 300 V and b) U = 600 V
Assessment of a system developed for virtual teaching

M.L.A. Botelho1, D.F. Cunha2, F.B. Mendonca3 and S.J. Calil4

1 Universidade Federal do Triangulo Mineiro (UFTM), Biological Sciences Department (DCB), Uberaba-MG, Brazil
2 Medical Clinical Department, UFTM, Brazil
3 Lidercomp Informatica Ltda, Uberaba-MG, Brazil
4 Biomedical Engineering Department (DEB), Universidade Estadual de Campinas (UNICAMP), Brazil
Abstract - There is today a worldwide search for ways to present electronic courses through the Internet, and a significant number of products exist to meet this demand. Most of these programs, however, are designed for management rather than to help the teacher create and implement a lecture. Among the programs offering such facilities on the Brazilian market, few can be afforded by people in the academic area. We present here the results of a survey to evaluate a Brazilian system that was developed to create and implement a distance-education course.

Keywords - Distance education, E-learning, Biomedical informatics, Internet.

I. INTRODUCTION
In the near future there is a strong probability of convergence between two paradigms of the last century's educational system: conventional teaching and open distance education [1, 2]. Nowadays, the computer is already part of the educational resources, as a complement for improvement and possible change in the quality of the teaching and learning processes [3]. To understand the several types of electronic applications for educational purposes available on the world market, a detailed survey was carried out. The considered systems were installed on a Pentium IV microcomputer and their functionalities were tested. Demos, tutorials, folders, and manuals were also used. The compilation of the most relevant data from this work has already been published [4]; one of its conclusions was that there are few options on the Brazilian market suited to the limited resources of the majority of university teachers. This text, part of a doctoral thesis [4], reports the results of the evaluation of a system developed for the preparation and presentation of virtual classes using the Internet. The basic premise of the project was that the system should be usable by people who are not specialists in computer science and should not demand sophisticated hardware and software resources.
II. METHODS
A. Description of the characteristics of the system to be evaluated

Aiming at an environment as friendly as possible, with intuitive navigation and easy operation, the developed system established basic functionality requirements, such as:
• a standard interface on all screens of the system, with a Help function available during all interactions with the system;
• system installation and operation as simple as possible;
• the educational material already produced by the teachers in their work must be usable, without difficulty, to prepare presentations in the system.
All programs were developed to run on the Windows platform, using Delphi 6 and Microsoft DirectX. The system allows class presentations online or offline (previously recorded lectures). In both presentation modes the interface screen has an identical configuration, as presented in Figure 1, which shows the screen of an online class. There are four small windows: the teacher's image (top left), the slide show (the larger area in the center), the slide title (bottom left), and the chat area (the window below the slide). Two work platforms are offered, one for each user type: the Teacher platform, which has the actions to create and/or alter projects, to perform the presentation, and to choose the presentation configuration; and the Student platform, which has an appearance identical to the teacher's but does not offer the options to create or alter projects.

Figure 1 – Screen format used for the virtual class.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 319–322, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

To create an offline project, the teacher first prepares the slide show, the audio and video of the presentation, and a text file with the topics to be presented. After uploading these three files, the developer can associate any frame of the just-created video with any slide from the slide show, so that during the audio and video presentation the slides change sequentially according to the subject being presented. To create an online project, the system imports the slide show and the topics to be presented, as described previously; however, the image and voice of the teacher are captured and transmitted by the teacher's equipment during the presentation as video streaming. The communication protocol used is TCP/IP. The system can be installed on a computer from a CD or through a link on an Internet site. The contents of an offline class can be accessed using the same procedures and opened in the system with easily identified functions. To attend an online class, the student enters his/her identification and the IP number of the teacher's computer during the class timetable. The system carries out the connection and fetches the FTP server address to download the slide show to the student's computer. During the presentation, the connection is used solely for the audio and video streaming and for the chat.

B. The system evaluation

The system evaluation followed a Test Plan in which each test was the presentation of a class. There were two main phases: the Beta Tests, when the development of the system was not yet completed (the so-called Beta Phase), and the Final Tests, when the programming had been completed.
Here we describe the Final Tests, in which the test participants filled out a spreadsheet to grade the parameters of the Evaluation Criteria, which were based on software engineering theory [5]. These parameters are: Efficiency, Stability, Portability, Usability, Satisfaction Degree, and the Acceptance Test [6]. After each test, teachers and students were invited to fill out a questionnaire. The answers were then statistically analyzed to assess user satisfaction with participating in distance lectures and whether users would like to continue using the developed method.
M.L.A. Botelho, D.F. Cunha, F.B. Mendonca and S.J. Calil
To carry out the tests of the offline class presentation, two teachers prepared lectures on Embryology and Physiology. One hundred and eighty-seven CDs containing the lectures and the questionnaire were distributed to students attending courses in Medicine, Nursing, and Biomedical Science; one hundred answers were obtained. For the tests of the online classes, two teachers gave presentations to thirty-eight students, on medical subjects (Medicine course) and on informatics subjects (Computation and Information Systems courses). The students were asked to fill out a questionnaire that they received by e-mail; twenty-nine answers were obtained.

III. RESULTS
A. The System Performance Evaluation Results: In the tests carried out with the offline classes, all the Evaluation Criteria received the best grades, reaching the top scores. In the tests of the online classes, however, the Evaluation Criteria concerning the general performance of the system (Efficiency, Stability, Satisfaction Degree) presented an average score of 1.25 (on a scale from 0 to 2); the other Criteria reached grade 2.

B. The Students' Answers Results: The analysis of the 129 questionnaires received (online and offline classes) showed that the students rated participation in virtual classes using this environment positively. Figure 2 shows the distribution of the answers to the question about their appreciation of the virtual class; only 2.33% disliked the virtual class system. Figure 3 shows the distribution of the students' answers to the question of whether they would like to attend more classes using this system in their course; only 4.7% were sure of their dislike for this kind of system.
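The 1.25 average reported for the online-class criteria is simply the mean of the per-test grades on the 0-2 scale. The individual grades in this sketch are invented; only the reported average comes from the paper.

```python
# Hypothetical reconstruction of the per-criterion averaging: four online-class
# test sessions graded on a 0-2 scale. The individual grades are illustrative;
# only the reported mean (1.25) is taken from the text.

def average(grades):
    return sum(grades) / len(grades)

efficiency_grades = [1, 1, 2, 1]   # one grade per test session (invented values)
print(average(efficiency_grades))  # → 1.25
```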
Figure 2 – Distribution of the students' answers to the question about their appreciation of virtual classes using this system.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Assessment of a system developed for virtual teaching
Figure 3 – Distribution of the students' answers to the question about their willingness to attend more classes using this system in their course.

Figure 4 shows the students' answers to the question about the use of this system in their courses, divided by the type of class they attended. Among the students who attended the offline classes, only 1% were sure they would not like to have such a system in their course; this kind of certainty was much higher among the online class students (17%).

Figure 5 shows the results of the evaluation of the technical quality of the system's performance, by type of class (offline and online). Two groups of grades are shown: the answers graded Excellent (E) were pooled with those graded Good (G), and the answers graded Bad (B) were pooled with those graded Worst (W). Of those who attended the offline class, 95% marked E or G; in the online presentation group, 77% did the same, and 23% marked B or W. Only the groups that attended the online classes used the chat, and 100% of those students rated it Excellent or Good.

A total of one hundred and seven students (82.95%) chose the option "Possibility to watch the lecture more than once", and forty-seven (36.43%) chose the option "Possibility to attend the class in my own environment, without having to leave home". Note that more than one option could be chosen for this question.
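The grade pooling used for Figure 5 (Excellent with Good, Bad with Worst) can be sketched as follows. The raw answer counts are hypothetical; only the pooled percentages quoted in the text (95% E+G for offline; 77% E+G and 23% B+W for online) come from the paper.

```python
# Sketch of the Figure 5 grade grouping: E and G answers are pooled, as are
# B and W. The per-grade counts below are invented so that the pooled
# percentages match the ones reported in the text.

def pooled_percentages(counts: dict) -> dict:
    total = sum(counts.values())
    good = counts.get("E", 0) + counts.get("G", 0)
    bad = counts.get("B", 0) + counts.get("W", 0)
    return {"E+G": round(100 * good / total), "B+W": round(100 * bad / total)}

offline = {"E": 60, "G": 35, "B": 4, "W": 1}   # illustrative counts per 100 answers
online = {"E": 40, "G": 37, "B": 15, "W": 8}
print(pooled_percentages(offline))  # → {'E+G': 95, 'B+W': 5}
print(pooled_percentages(online))   # → {'E+G': 77, 'B+W': 23}
```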
Figure 5 – Comparison of the technical evaluation, by type of class.

Around 66% of the students did not answer the question on what they disliked most about the system; 17.6% found the environment "cold"; 10.4% marked the option "I missed the colleagues"; and 6% marked the option "I did not like having class using a computer".

C. Results of the Teachers' Answers: The results from the teachers' answers to the Evaluation Questionnaire were:
• 100% answered that they liked the virtual class environment;
• 25% answered that the installation is easy, and 75% found it a bit difficult;
• 100% answered that class preparation presented moderate difficulty, and that the tools offered were easy to use;
• 75% answered that the teaching moment is easy, and 25% said that it presented mild difficulty;
• 50% rated the tools available for the class presentation as excellent, and 50% rated them as good;
• 75% answered that the students liked the new technology, and 25% said that only part of them were interested;
• to the question about what they liked most, all mentioned being able to re-use a class already developed;
• in the Acceptance Test, 75% answered "Yes" and 25% "Perhaps".

IV. DISCUSSION
Figure 4 – Students' answers to the question "Would you like to keep on using the system in your course?", separated by the type of class they attended.
From the development phase through testing, the system designed to prepare and present offline classes did not
present any difficulties, either during its analysis or during its development. Not even the Beta and Final Tests revealed problems, and it received the best grades on all questions in both the students' and the teachers' evaluations. Conversely, the implementation of online classes presented difficulties in all development phases. The biggest difficulty of the programming phase was synchronizing the slides with the teacher's image and voice during the class presentation. This problem was solved by developing a technique, already described [4], that made perfect synchronization possible. During the tests, the principal problem detected with the online classes can be summarized in one teacher's observation: "the system has its performance directly connected to the efficiency of the video transmission". Thus, among the system's functionalities, the audio and video streaming is the critical point. When the computers involved (both the clients and the server) do not have a constant, high-quality Internet connection, the performance of the whole system is reduced. Furthermore, concurrent data traffic also compromises the system's speed: if just one class participant downloads a large file (a movie, for example) during the lecture, the performance of the whole system is likewise reduced. The difficulty in carrying out the online classes can be seen in the grades given to the questions on technical quality for both types of classes, presented in Figure 5: the offline class evaluations are much better.
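The slide/stream synchronization problem discussed above can be illustrated with a generic timestamp lookup. The authors' actual technique is described in [4]; this sketch is only a minimal stand-in, with invented timing data: each slide is tagged with the video time at which it should appear, and the player picks the latest slide whose start time is not after the current playback position.

```python
# Minimal slide-synchronization sketch (NOT the authors' technique from [4]):
# slide_times[i] is the hypothetical video time, in seconds, at which slide
# i+1 should appear. A binary search finds the slide active at any position.

import bisect

slide_times = [0, 95, 240, 410]  # hypothetical start times for slides 1-4

def current_slide(position: float) -> int:
    """Return the 1-based index of the slide active at the given video position."""
    return bisect.bisect_right(slide_times, position)

print(current_slide(0))    # → 1
print(current_slide(100))  # → 2
print(current_slide(500))  # → 4
```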
This work shows that efforts can and should be made to bring Brazilian undergraduate education into the digital world; good results can certainly be expected. The sample of collaborating students and teachers is interested in employing virtual classes in day-to-day work, which demonstrates that Brazilian universities can use this resource to enrich the teaching and learning processes.
V. CONCLUSIONS

From the findings reported here, it can be observed that the speed and quality of Internet transmission, as they stand today, represent a serious limiting factor for the use of open education in Brazil. Differentiated services, such as Internet2, will certainly offer universities better means of implementing online events. On the other hand, the offline presentations made available on the Internet showed few implementation difficulties and great acceptance by the teaching community, as demonstrated. Some specialists in open education suggest that institutions that fail to embrace technological progress will be unable to meet future demands [7, 8]. However, what must change is not simply the course contents or curricula, but how educators see education itself. Skills must be developed to maximize the benefits and the potential that information and communication technologies offer to education [9].

ACKNOWLEDGMENT

The first author was a doctoral student in the PICDT/CAPES/MEC programme.

REFERENCES

1. Moore, M.G., Kearsley, G. (1996) Distance Education: A Systems View. Wadsworth Publishing Company, Belmont (USA).
2. Moran, J.M. (2004) Propostas de Mudancas nos Cursos Presenciais com a Educacao On-line. 11o Congresso Internacional de Educacao a Distancia, 8/09/2004, Salvador, BA.
3. Valente, J.A. (1995) Diferentes Usos do Computador na Educacao. In: Computadores e Conhecimento: Repensando a Educacao. Nied/Unicamp.
4. Botelho, M.L.A. (2007) Concepcao, Desenvolvimento e Avaliacao de um Sistema de Ensino Virtual. Tese de doutorado. Departamento de Engenharia Biomedica, Faculdade de Engenharia Eletrica e de Computacao, Universidade de Campinas.
5. Pfleeger, S.L. (2001) Software Engineering: Theory and Practice. 2nd Ed. Prentice Hall.
6. Botelho, M.L.A., Cunha, D.F., Mendonça, F.M., Calil, S.J. (2006) Avaliacao de um sistema desenvolvido para o ensino virtual. X Congresso Brasileiro de Informatica em Saude, p. 1313-1318, Florianopolis, SC, Brasil.
7. O´Neill, K., Singh, G., O´Donoghue, J. (2004) Implementing eLearning Programmes for Higher Education: A Review of the Literature. Journal of Information Technology Education, v3, p. 313-322. http://jite.org/documents/Vol3/v3p313-323131.pdf
8. Sigulem, D. (2000) Educacao Continuada a Distancia na Área Medica. http://www.virtual.epm.br/cursos/aulas/index.htm
9. Palloff, R.M., Pratt, K. (2002) Construindo comunidades de aprendizagem no ciberespaço – Estrategias eficientes para salas de aula on-line. Artmed Editora, Porto Alegre.

Author: Maria Lucia de Azevedo Botelho
Institute: Departamento de Ciencias Biologicas - UFTM
Street: Praca Manoel Terra
City: Uberaba - MG
Country: Brazil
Email: [email protected]
Biomedical Engineering and Virtual Education

A. Kybartaite, J. Nousiainen, K. Lindroos, J. Malmivuo
Ragnar Granit Institute / Tampere University of Technology, Tampere, Finland

Abstract— This paper briefly presents Biomedical Engineering (BME) in virtual education. BME is a relatively new and highly multidisciplinary field of engineering. Due to its versatility and innovativeness, BME requires special learning and teaching methods. Virtual education is an emerging trend in higher education. Technologies, learning theories, instructions, tutoring, and collaboration incorporated in virtual education can lead to effective learning outcomes. The European Virtual Campus for Biomedical Engineering (EVICAB) is the platform on which traditional biomedical engineering education is transferred to the virtual environment.

Keywords— Biomedical Engineering, Online Education, eLearning, Open Access, Collaboration.
I. INTRODUCTION

Biomedical engineering (BME) is a relatively new field of engineering. It is continuously changing and creating new specialty areas owing to a large flow of information and advancements in technologies. Well-established specialty areas within BME include bioinstrumentation, biomaterials, biomechanics, cellular, tissue and genetic engineering, clinical engineering, medical imaging, rehabilitation, and systems physiology [1]. The field of BME is highly multidisciplinary, as it brings together knowledge from many different sources, such as medicine, technology, and the natural sciences.

Education can be divided into traditional and online forms. Traditional education is based on face-to-face interaction between teacher and students in a class. Online education has the same meaning as virtual, internet-based, or web-based education, or education via computer-mediated communication. It is currently becoming popular in higher-education institutions, as working students are not able to spend most of their time in class; instead, they can access all study-related material on the internet, at the place and time most convenient for them. Due to its versatility and innovativeness, BME needs a special educational environment. For this reason, a common European Virtual Campus for Biomedical Engineering (EVICAB) is under development.
II. TRADITIONAL AND ONLINE EDUCATION
Despite the totally different delivery media, traditional and online education still have much in common. Books, lecture notes, exercises, laboratory works, and final exams are common elements of any class. Nowadays it is possible to convert traditional course elements to online form without content modification or loss of data. Examples of traditional and online class elements are listed in Table 1.

Although the EVICAB project is at an early stage, it already has experience in implementing online material. The exemplary EVICAB course, Bioelectromagnetism, refers to a book that is available to students in printed and web-edited formats [2]. The web book can be accessed globally, by all students, at any time. Video lectures are also provided: the lecturer's talk is recorded and synchronized with the alternating lecture slides. Students can choose the lecture format that best supports their retention of the information.

The final online exam, a new dimension in learning, has also been tested. It was realized as follows: the students attended the examination in a computer class; their identities were checked before the examination began, and the questions were opened on the computers. The computers were connected to the Internet, and the students were allowed to use all available material, including the textbooks. An online examination primarily tests the students' ability to understand the material and draw conclusions from it. The internet examination also allows instructors/lecturers to monitor the progress of the examination via the internet, independently of their location.

It is not enough simply to transfer the traditional material online in order to achieve effective learning outcomes; pedagogical and technical support is also needed, as illustrated in Fig. 1.

Table 1 Traditional and Online Class Elements

  Traditional Class     Online Class
  Books                 eBooks
  Lectures              Audio/video lectures
  Laboratory works      Online laboratory works
  Exercises             Online tests, quizzes
  Final exam            Online exam

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 329–331, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
[Fig. 1 diagram: traditional class elements (books, lectures, exercises, lab works, final exams) mapped to their online counterparts (eBooks, video/audio lectures, tests/quizzes, virtual labs, online exams), supported by technologies (ICT, VLE, standards, open sources), learning theories, course instructions, and tutors/instructors, leading to the expected EVICAB outcomes: common curriculum, common VLE, quality assurance, support for course content production, and a common base for materials and technologies.]
Fig. 1 Traditional and online class elements, support for transferring the material, and expected outcomes of implementing courses in EVICAB

III. SUPPORT FOR VIRTUAL EDUCATION

Technology, learning theories, course instructions, and tutors/instructors are the key elements that support the implementation of online material. Traditional and online class elements, the support, and the expected outcomes of implementing courses in EVICAB are illustrated in Fig. 1. In order to provide and apply the online material, sufficient information and communication technologies (ICT), i.e., computers with internet access and software programs, are required. A virtual learning environment (VLE) is, in general, defined as software that helps teachers manage educational courses for students. Moodle [3] was chosen as the virtual learning environment for EVICAB courses. It serves as the common platform for all courses; the material can be accessed by teachers and students at any time. Moodle also allows its users' activity to be tracked, as every user accesses the environment under an individual password. This VLE is not the only one that can be used in EVICAB. If other course
providers have already implemented their materials in another VLE, Moodle can serve as a link to it. Open source tools and open access learning materials are applied in preparing EVICAB courses, for example when producing audio/video lectures, accessing the material (e.g., flash players), or communicating and collaborating (e.g., Skype). So that material prepared by different authors and with different tools is compatible and usable within the VLE, the SCORM [4] standard will be applied. Lecture materials in EVICAB courses are divided into segments; students can navigate through the material and choose certain parts to study. The material can also be reused and modified by adding extra information or implementing quizzes or self-assessment tests.

Online materials can have a disadvantage: passive online reading. To avoid this, online education should be based on learning theories and sound pedagogy, such as constructivism. This approach gives students the opportunity to construct their own meaning from the information presented during virtual sessions. Learning based on constructivism is seen as active, goal-oriented, self-regulated, and dependent on prior knowledge and experience. Every EVICAB course will have instructions so that students know what prior knowledge is needed, what the requirements are to pass the course, what can be expected after completing the course, and how it relates to other courses. Based on this information, students can plan their further studies with more motivation.

The teacher as a physical person disappears in virtual education, as all information is available online; the role of the tutor/instructor therefore becomes important. Since students will always have questions related to the course material, assignments, practical issues, organizational matters, etc., there is a need for a contact person who can answer their questions promptly.
Direct student-to-student communication is limited in online education. Students are therefore strongly encouraged to communicate, collaborate, and solve common problems using any online communication technology, such as discussion groups, forums, or wikis.

IV. OUTCOMES

This section outlines the outcomes that have already been achieved in EVICAB and those expected in the long term; they are also illustrated in Fig. 1.

Common curriculum. Since BME is a multidisciplinary field that brings together knowledge from many different sources,
it requires a wide educational background. EVICAB aims to create an open access common curriculum for all cycles of BME education. This is achieved through collaboration between the BME programmes of partner institutions and universities. Currently, five partners are involved in curriculum development: the Ragnar Granit Institute, Tallinn University of Technology, Kaunas University of Technology, Linköping University, and Brno University of Technology.

Virtual Learning Environment. EVICAB uses Moodle as its virtual learning environment. It is also the platform for tutoring and for communication between students and teachers. In the future, the interface will improve as more courses become available and more teachers and students use the VLE.

High quality of online education. Standards for preparing and selecting course materials are under development, with the aim that education in EVICAB be of the highest level. This is supported by a quality assurance system built into EVICAB.

Support for course content production. Course designers and providers are encouraged to share their experience and tools for preparing high-level online materials (e.g., experience in producing video lectures or teleconferencing).

Common resources. EVICAB aims to create and maintain repositories of open access lecture materials and of tools (e.g., software programs) used to create online courses. Course designers and providers can modify, improve, comment on, and apply these materials and tools to their needs.
V. CONCLUSIONS

Online education is a relatively new approach to learning and teaching, and it therefore encourages collaboration in promoting BME education; EVICAB will serve as the environment for that. Virtual education is a challenge for both teachers and students. The effective implementation and application of BME education in a virtual environment requires not just transferring the traditional material online, but also the efficient application of technologies, learning theories, pedagogies, and human assistance, and collaboration between teachers, institutions, and students. The main advantage of virtual education is global open access: the global learning community can be at the fingertips of teachers and students. The application of learning technologies, tools, and open access materials can add a new dimension to education and lead to effective learning outcomes.

ACKNOWLEDGMENT

This work has been supported by the eLearning Programme of the European Commission.

REFERENCES

1. Biomedical Engineering Society at http://www.bmes.org
2. Bioelectromagnetism at http://www.rgi.tut.fi
3. Moodle at http://moodle.org
4. Scorm at http://www.adlnet.gov

Corresponding author:
Author: Asta Kybartaite
Institute: Ragnar Granit Institute / Tampere University of Technology
Street: Korkeakoulunkatu 10
City: Tampere, 33720
Country: Finland
Email: [email protected]
Internet Examination – A New Tool in e-Learning

J.A. Malmivuo, K. Lindroos and J.O. Nousiainen
Ragnar Granit Institute, Tampere University of Technology, Tampere, Finland

Abstract— Internet examination is a new innovation in e-learning. It extends virtual mobility from the learning process to the examination of the students' knowledge of the course topic. Internet examination also has pedagogical benefits in the case where the students are at the site of teaching. This paper is based on the experience we have obtained at the Ragnar Granit Institute.

Keywords— e-learning, Internet.
I. INTRODUCTION

The Internet is used more and more frequently in education. Its benefits in distance learning, and as a support for classroom learning, are already widely acknowledged. We have used Internet examination in the ordinary teaching at the Institute and in the courses given by the author at other universities in Finland and abroad [1]. We also apply it in the European EVICAB project [2]. In this paper we introduce the use of the Internet as a platform for course examinations and assess its benefits and drawbacks.

II. FORM OF THE INTERNET EXAMINATION

During the examination the students may use all the material available on the Internet, including the course book. The only thing that is not allowed is communicating with other persons by e-mail or other means. This changes the style of the questions. In ordinary examinations, where the students may not have the material available, it is mainly tested whether the students remember certain details from the course. In an Internet examination, where all material is at hand, the examination tests whether the students have fully understood the concepts and are able to combine various issues and give rationales for their conclusions. The latter method corresponds more closely to the professional skills the students will need when they move into working life.

Depending on whether the course is part of degree studies or of supplementary education, the students participate in the examination in different ways. In a degree-studies examination the students take the examination in a computer class. Their identity is checked, and the supervising assistant controls that students log on to the
examination with their own names. It is also important to have a list of the participating students so that no student outside the classroom may take part in the controlled examination. If the students are from several universities, the examination may be arranged at their home universities at the same time, provided that the aforementioned conditions are ensured.

In a supplementary-education examination the students may take the exam anywhere, because there is no need to control their identity. This is one important feature of the Internet examination. In supplementary-education courses, arranged for instance in connection with international scientific congresses, the students may be from several countries and from different cities and universities. Because the examination is usually arranged a couple of weeks after the course, arranging it at the course site would be impossible for the students.

III. MAKING THE EXAMINATION

A. Examination style

The examination may take any form: it may include multiple-choice questions, calculation tasks, or essays. A multiple-choice examination is more suitable for tests performed during the course; in the final exam, calculation tasks or essays are more suitable. For calculation tasks the Internet examination has the problem that writing equations on the computer is time-consuming and difficult; therefore only questions for which the correct final result is sufficient may be used, as it is practically impossible for the student to derive equations in the answer. Writing essays is the most practical from the student's point of view, although essays are more time-consuming for the teacher/assistant to check.

B. Examination classroom

The students take the examination in a computer classroom. They sign in to the educational platform and open the examination question page. For the Internet examination we have used the Moodle program [3].
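Checking calculation tasks where only the final result is collected, as described above, amounts to comparing the submitted number against the correct value with some tolerance. This sketch is an illustration, not part of the authors' Moodle setup; the 1% tolerance is an arbitrary choice.

```python
# Illustrative numeric grading for calculation tasks where only the final
# result is submitted. The relative tolerance (1% here) is an assumption,
# not taken from the paper.

import math

def grade_numeric(submitted: float, correct: float, rel_tol: float = 1e-2) -> bool:
    """Accept an answer within rel_tol (relative) of the correct value."""
    return math.isclose(submitted, correct, rel_tol=rel_tol)

print(grade_numeric(3.14, math.pi))  # → True  (within 1% of pi)
print(grade_numeric(2.9, math.pi))   # → False
```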
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 336–337, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
For their answers the students open a Word file and write into it their personal data and a password given by the assistant supervising the examination. The password is important for controlling that the examination is attended only by the students in the classroom. The students are free to use all material available on the Internet. This is good, because they then do not need to remember every detail of the topic; more important is that they understand it and are able to draw conclusions. We first tested the Internet examination on our course on Bioelectromagnetism, whose textbook is available on the Internet [4]. This form of examination better tests the students' ability to do their job successfully in working life. The only thing that is not allowed is communicating with any other person by e-mail or any other method.

An important feature of the Internet examination is that it may be performed simultaneously in more than one classroom, located at different universities in different cities or even different countries. This is important because the students do not need to travel for the examination. At the end of the examination the students upload their Word file with their answers to the Moodle system.

C. Operations of the teacher

Because the students have the Internet available, the questions should not be of the style "What is … ?" but rather "Why … ?" or "For what purpose … ?". Such questions measure the students' understanding of the topic and their ability to draw conclusions. After the examination the teacher may download the students' answers from Moodle and print them out. It is easier for the teacher to review the answers because they are typed rather than written in unclear handwriting. A further benefit is that all the documentation from the examination is archived on the computer.
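The advice above, preferring "Why … ?" and "For what purpose … ?" over recall-style "What is … ?" questions, can be turned into a simple style check for a question bank. The heuristic and the sample questions are illustrative only.

```python
# Illustrative style check for exam question banks: flag questions that open
# with recall-style phrases, which the text argues are unsuitable when all
# material is available online. The opener list is a rough heuristic.

RECALL_OPENERS = ("what is", "who ", "when ", "list ")

def is_recall_style(question: str) -> bool:
    return question.strip().lower().startswith(RECALL_OPENERS)

bank = [
    "What is the resting membrane potential?",
    "Why does the measured signal amplitude depend on electrode placement?",
]
flagged = [q for q in bank if is_recall_style(q)]
print(flagged)  # → ['What is the resting membrane potential?']
```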
After correcting the answers and assigning the grades, the teacher may upload the results to Moodle or to an Internet page for the students to see.
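Because the answers arrive as uploaded files, archiving an examination, as mentioned above, is straightforward. This sketch zips a folder of downloaded answer files; the directory layout, file extension, and file names are hypothetical, not the authors' actual setup.

```python
# Hypothetical archiving step for downloaded answer files: collect every
# answer document in a folder into a single zip archive for safekeeping.

import tempfile
import zipfile
from pathlib import Path

def archive_answers(answer_dir: Path, archive_path: Path) -> int:
    """Store every .doc answer file in a zip archive; return the file count."""
    with zipfile.ZipFile(archive_path, "w") as zf:
        files = sorted(answer_dir.glob("*.doc"))
        for f in files:
            zf.write(f, arcname=f.name)
    return len(files)

# Demonstration with throwaway files:
with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "student_a.doc").write_text("answers A")
    (d / "student_b.doc").write_text("answers B")
    print(archive_answers(d, d / "exam_archive.zip"))  # → 2
```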
Because the students' answers are uploaded to the server, the teacher may easily archive them. This is an additional safety factor for the student and the teacher in case the student is not satisfied with the given grade. Importantly, the teacher does not need to be at the examination location during the examination: all administration of the examination may be performed from any location in the world where an Internet connection is available.

IV. CONCLUSIONS

The Internet examination is a modern way to perform an examination. Its main benefit is that it is not tied to one location but may be arranged in several different locations at the same time. The students apparently appreciate this kind of examination for several reasons; one, but not the only one, is that all information for finding small details on the topics of the questions is available on the Internet.
ACKNOWLEDGMENT This work has been supported by the European Commission and the Ragnar Granit Foundation.
REFERENCES

1. www.rgi.tut.fi/edu/bem/
2. www.evicab.eu
3. www.moodle.fi/evicab/moodle/
4. www.tut.fi/~malmivuo/bem/bembook/

Author: Jaakko Malmivuo
Institute: Ragnar Granit Institute
Street: Korkeakoulunkatu 3
City: Tampere
Country: Finland
Email: [email protected]
Presentation of Cochlear Implant to Deaf People

J. Vrhovec1,2, A. Macek Lebar2, D. Miklavcic2, M. Eljon2 and J. Bester2

1 MKS Electronic Systems, Rozna dolina C. XVII/22b, 1000 Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia
Abstract— We designed an Internet site, in the Slovenian language, for deaf people and for the people who meet with them on an everyday basis. As the native language of the deaf is sign language, the basic information about the cochlear implant is interpreted in sign language as well. In order to understand how the cochlear implant works, one first has to become familiar with the sense of hearing – the ear. Hearing impairments differ, and the appropriate aid depends on the type of damage. The cochlear implant can help people who still have at least 10% of their hearing nerve preserved. After successful implantation of the cochlear implant, rehabilitation is very important: it is crucial for learning to distinguish useful sound from noise and, later on, words from noise. On the designed Internet site we also included a presentation of how well deaf people with cochlear implants can hear. This part of the presentation is intended for hearing people, so that they can imagine what a deaf person with a cochlear implant can hear.

Keywords— cochlear implant, ear, internet site
I. INTRODUCTION

In the world of silence, deaf people communicate with each other using sign language; sign language is their native language. When they communicate with the hearing world around them, they usually help themselves with devices such as hearing aids and cochlear implants. In this article we discuss profoundly deaf people and one of these devices, the cochlear implant. The cochlear implant can help most deaf people for whom hearing aids and drugs are not enough, provided the hearing nerve is not completely damaged. The most important precondition, however, is that the person who receives the cochlear implant is willing to use it.

Nowadays computers are available to more or less everyone, so the World Wide Web could be an efficient way of introducing the cochlear implant to deaf people, their families, and friends. People who are interested in the cochlear implant can sit at their home computer and read the site in peace. But in which language? The native language of Slovenian deaf people is Slovenian sign language. Deaf people in Slovenia can understand written Slovenian, their second language, but expecting them to understand English is not realistic.
II. METHODS A. The internet site The internet site presents the information with multimedia technology. Many pictures help deaf people understand the technical terms. Because language is a very important issue for the deaf, the basic and crucial explanations are also given in sign language; the rest is written in their second language, Slovenian. A sound presentation lets hearing people experience how well an average profoundly deaf person with a cochlear implant can actually hear. The site allows the user to read only the sections of interest and to turn to the rest when background knowledge is lacking. The internet site was designed in Dreamweaver MX. It is divided into six different sections, as shown in figure 1. A special part of the site lets the user test his/her knowledge; after receiving the test results, the user can follow links back to the relevant sections. The site contains sections about sound and the ear. After the user learns about natural hearing, the subsequent sections about hearing loss and one of the aids, the cochlear implant, are easier to understand. At the end of the content there is a very important section on rehabilitation. A user who wants to know more about the cochlear implant can follow links to other internet sites with similar content.
Fig. 1 The internet site is divided into six different sections
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 332–335, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Presentation of Cochlear Implant to Deaf People
III. DESCRIPTION OF CONTENT A. Section 1: Sound Sound is a disturbance of mechanical energy that propagates through matter as a wave. Sound is characterized by the properties of sound waves: frequency, wavelength, period, amplitude and velocity or speed. How we hear sound is described in the next section [9]. B. Section 2: How natural hearing works The section about natural hearing covers the physiology of the human ear. The ear consists of three basic parts - the outer ear, the middle ear and the inner ear. Each part serves a specific purpose in the task of detecting and interpreting sound. The outer ear collects and channels sound to the middle ear. The middle ear transforms the energy of a sound wave into internal vibrations of its bone structure and ultimately into a compression wave in the inner ear. The inner ear transforms the energy of a compression wave within the inner ear fluid into nerve impulses which can be transmitted to the brain. The three parts of the ear are shown in figure 2. The middle ear is an air-filled cavity which consists of an eardrum and three tiny, interconnected bones - the hammer, anvil, and stirrup. The eardrum is a durable and tightly stretched membrane which vibrates as the incoming pressure waves reach it. The stirrup is connected to the inner ear, and thus its vibrations are transmitted to the fluid of the inner ear, creating a compression wave within the fluid. The three tiny bones of the middle ear act as levers to amplify the vibrations of the sound wave. Due
to a mechanical advantage, the displacements of the stirrup are greater than those of the hammer. The inner ear consists of the cochlea, the semicircular canals, and the auditory nerve. The cochlea and the semicircular canals are filled with a water-like fluid. The cochlea is a snail-shaped organ which, uncoiled, would stretch to approximately 3 cm. In addition to being filled with fluid, the inner surface of the cochlea, in the organ of Corti, is lined with over 20000 hair cells. These cells differ in length by minuscule amounts and have different degrees of resiliency to the fluid which passes over them. They perform one of the most critical roles in our ability to hear. As a compression wave moves from the interface between the stirrup of the middle ear and the oval window of the inner ear through the cochlea, the small hair cells are set in motion. Each hair cell has a natural sensitivity to a particular frequency of vibration. The different frequencies are "heard" by different sections of the organ of Corti, with the parts nearest the ossicles sensitive to high tones and the parts farthest from the ossicles sensitive to low tones [2]. C. Section 3: Hearing loss Hearing loss can be categorized by where or what part of the auditory system is damaged. There are two basic types of hearing loss: conductive hearing loss and nerve impairment. Conductive hearing loss occurs when sound is not conducted efficiently through the outer ear canal to the eardrum and the tiny bones, or ossicles, of the middle ear. It usually involves a reduction in sound level, or the ability to hear faint sounds. This type of hearing loss can often be corrected medically or surgically, and the use of a hearing aid is very successful. Nerve impairment occurs when there is damage to the inner ear (cochlea). Medication and hearing aids may help; the use of a cochlear implant is very successful.
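The tonotopic organisation described above, with the base of the cochlea sensitive to high tones and the apex to low tones, is often modelled with Greenwood's frequency-position function. The short sketch below (Python; the human-cochlea constants A = 165.4 Hz, a = 2.1 and k = 0.88 are the commonly quoted literature values, not parameters taken from this paper) illustrates the mapping.

```python
def greenwood_frequency(x):
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane, where x = 0 is the apex and x = 1 is the base.

    Uses Greenwood's frequency-position function,
        f(x) = A * (10**(a * x) - k),
    with the commonly quoted human constants A = 165.4 Hz, a = 2.1, k = 0.88.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low frequencies map to the apex, high frequencies to the base,
# matching the tonotopic map of the organ of Corti described in the text.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}  ->  {greenwood_frequency(x):8.0f} Hz")
```

With these constants the apex (x = 0) maps to roughly 20 Hz and the base (x = 1) to roughly 20 kHz, spanning the range of human hearing.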
Nerve impairment not only involves a reduction in sound level, or the ability to hear faint sounds, but also affects speech understanding, or the ability to hear clearly. Hearing loss may also occur on the nerve pathways from the inner ear to the brain (retrocochlear). That kind of hearing loss cannot be corrected medically or surgically; it is a permanent loss [9]. D. Section 4: Cochlear implant
Fig. 2 The ear consists of three basic parts.
A cochlear implant is an electronic device that provides useful hearing and improved communication ability to individuals who are profoundly hearing impaired and unable to achieve speech understanding with other hearing aids. For individuals with a profound hearing loss, even the most powerful
J. Vrhovec, A. Macek Lebar , D. Miklavcic, M. Eljon and J. Bester
Fig. 3 Cochlear implant. Numbers 1 and 2 mark the earhook and the processing unit; numbers 3 and 4 mark the sound processor with controller option and the connecting cable; the magnet is marked with number 5 and the implant body with number 6; the most important parts of the cochlear implant, the electrode array and the electrodes, are marked with numbers 7 and 8 [8]. Fig. 4 An insert of Slovenian sign language.
hearing aids may provide little or no benefit. A profoundly deaf ear is typically one in which the majority of sensory receptors in the inner ear, called hair cells, are damaged or diminished [1, 3, 4, 5, 6, 8, 9]. A cochlear implant is shown in figure 3. Cochlear implants bypass damaged hair cells and directly stimulate the residual auditory nerve endings by transforming the sound signal into electrical pulses, allowing profoundly or totally deaf individuals to perceive sound [1]. The most appropriate time for children to receive a cochlear implant is between the ages of 1 and 2 years, because at that age speech, language, and thinking skills are developing. The target audience for the developed internet site are deaf people and the people who meet them on an everyday basis, so the site has to be understandable and appealing to them. Deaf people talk to each other in sign language, so to present them the contents in Slovenian sign language we recorded a short movie about the main contents. An insert of Slovenian sign language is shown in figure 4. All the contents are written in Slovenian and corroborated with pictures. The challenges that deaf people face in everyday communication and during as well as after rehabilitation are presented. For hearing people we prepared a presentation of how well a profoundly deaf person with a cochlear implant can hear according to the results of a routine hearing test. People who can hear usually think that a deaf person with a cochlear implant hears as well as they do. Unfortunately, most profoundly deaf people with a cochlear implant cannot hear anywhere near that well. Therefore the sound presentation was made. A well-known song was filtered according to the specifications that the cochlear implant has due to its construction. The obtained sound was filtered once more; the results of a routine hearing test of a deaf person using a cochlear implant were incorporated into the processing. All sound processing was done in Matlab [7]. E. Section 5: Rehabilitation After the user receives the cochlear implant, rehabilitation is very important. For young children rehabilitation proceeds along with the child's development, while older persons learn to hear anew or get used to the new sense. How well the person receiving the cochlear implant will develop speech and hear the surrounding sounds depends on rehabilitation. The deaf person receiving the cochlear implant can hear the sounds around him/her; he/she then has to learn to separate useful sounds from the noise, and only then can he/she learn to separate words from the rest of the sounds. IV. CONCLUSIONS The target audience for the internet site we developed are deaf people and the people who meet them on an everyday basis. The purpose of the site is to inform them how the cochlear implant works; therefore a presentation of the cochlear implant in Slovenian sign language is included. The problems that deaf people have in everyday communication and during as well as after rehabilitation are also presented. For hearing people we included a presentation of how well a profoundly deaf person with a cochlear implant can hear according to the results of a hearing test. People who can hear usually think that a deaf person with a cochlear implant hears as well as they do, but this is not true; the hearing ability mostly depends on the rehabilitation process.
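The paper reports that the two-stage sound presentation was produced in Matlab [7] but does not give the processing details. A common way to approximate what a cochlear-implant user hears is a noise-band vocoder: the signal is split into a few analysis bands, only the slowly varying envelope of each band is kept, and the envelopes modulate band-limited noise; a second stage can then attenuate each band according to an individual listener's hearing test. The sketch below (Python with NumPy/SciPy; the channel count, frequency range and example attenuations are illustrative assumptions, not the authors' actual parameters) shows the idea.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocoder(x, fs, n_channels=8, f_lo=200.0, f_hi=7000.0,
                  band_loss_db=None):
    """Crude simulation of cochlear-implant hearing (assumed parameters).

    Stage 1 (implant construction): split the signal into n_channels
    logarithmically spaced bands, keep only each band's envelope and
    use it to modulate band-limited noise.
    Stage 2 (individual hearing test): attenuate each band by
    band_loss_db (dB), standing in for the audiogram of a particular
    implant user.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    if band_loss_db is None:
        band_loss_db = np.zeros(n_channels)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    # Low-pass filter for envelope extraction (~300 Hz cutoff is typical).
    env_sos = butter(2, 300.0, btype="low", fs=fs, output="sos")
    for ch in range(n_channels):
        band_sos = butter(4, [edges[ch], edges[ch + 1]],
                          btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        envelope = sosfiltfilt(env_sos, np.abs(band))   # slowly varying envelope
        carrier = sosfilt(band_sos, rng.standard_normal(len(x)))  # band noise
        gain = 10.0 ** (-band_loss_db[ch] / 20.0)       # audiogram attenuation
        out += gain * envelope * carrier
    return out

# Example: process one second of a 440 Hz tone for a hypothetical user
# who has lost 30 dB in the upper half of the bands.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440.0 * t)
loss = np.array([0, 0, 0, 0, 30, 30, 30, 30], dtype=float)
simulated = noise_vocoder(tone, fs, band_loss_db=loss)
```

Played back to a normal-hearing listener, such a signal conveys the rough spectral and temporal shape of the original song while discarding fine structure, which is why vocoder demonstrations of this kind are often used to let hearing people experience implant-mediated sound.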
V. ACKNOWLEDGMENT
The study was supported by the Slovenian Research Agency and the Ministry of Higher Education, Science and Technology.

REFERENCES

1. Hamida AB, Samet M, Lakhouda N, Drira M, Mouine J (1998) Sound spectral processing based on fast Fourier transform applied to cochlear implant for the conception of a graphical spectrogram and for the generation of stimulating pulses, Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society, IECON '98, pp. 1388-1393
2. Von Bekesy G (1961) Concerning the pleasures of observing, and the mechanics of the inner ear, Nobel Lecture
3. Loizou PC (1999) Signal-processing techniques for cochlear implants, IEEE Engineering in Medicine and Biology, Vol. 18, pp. 34-45
4. Spelman FA (1999) The past, present, and future of cochlear prostheses, IEEE Engineering in Medicine and Biology, Vol. 18, pp. 27-33
5. Zwolan TA, Kileny PR (1993) Cochlear implants for the profoundly deaf, Proceedings of the Sixth Annual IEEE Symposium on Computer-Based Medical Systems, pp. 241-246
6. McDermott HJ (1998) How cochlear implants have expanded our understanding of speech perception, Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vol. 5, pp. 2251-2256
7. Hamida AB (2000) On the rehabilitation of cochlear implant patients using a flexible and versatile speech processing technique with a spectral approach, IEEE EMBS International Conference on Information Technology Applications in Biomedicine, pp. 359-364
8. Cochlear implant at http://www.cochlear.com/
9. Silva C (2005) Cochlear implants, Electrical Insulation Conference and Electrical Manufacturing Expo, pp. 442-447

Author: Jerneja Vrhovec
Institute: Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email:
[email protected]
New courses in medical engineering, medical physics and bio/physics for clinical engineers, medicine and veterinary medicine specialists in Serbia
V. M. Spasic-Jokic1,4, D. Lj. Popovic2, S. Stankovic3 and I. Z. Zupunski4
1 VINCA Institute of Nuclear Sciences/Laboratory of Physics, Belgrade, Serbia
2 University of Belgrade, Faculty of Veterinary Medicine, Department of Physics and Biophysics, Belgrade, Serbia
3 University of Novi Sad, Faculty of Science, Department of Physics, Novi Sad, Serbia
4 University of Novi Sad, Faculty of Technical Sciences, Novi Sad, Serbia
Abstract— The paper presents new courses in medical physics, medical engineering and bio/physics for clinical engineers, medicine and veterinary medicine specialists introduced at two universities in Serbia (the University of Novi Sad and the University of Belgrade) since 2004. The courses aim to educate well-trained specialists in medical physics, medical engineering and veterinary medicine and to establish new specialist study programs incorporated in CPD programs for professional licensing. Keywords— clinical engineering, medical physics, veterinary medicine, specialist studies.
I. INTRODUCTION Following the Bologna process of transforming the European educational area, new curricula in medical physics, medical engineering and bio/physics for medicine and veterinary medicine specialists in Serbia were introduced at the Universities of Belgrade and Novi Sad starting in 2004. The new courses were aimed at filling the existing gap in the education of physicists and of medicine and veterinary medicine specialists, since according to EFOMP recommendations a university Master's Degree in Medical Physics alone is not a sufficient qualification to work as a medical physicist or a medical engineer in a hospital environment. To manage patients without supervision, the Recommendations consider it necessary to have a basic university degree, a master's degree and hospital practice. Therefore, in 2004 the postgraduate program for Medical Physicists was initiated at the Association of Centers for Interdisciplinary and Multidisciplinary Studies and Research (ACIMSI) at the University of Novi Sad, and in 2005 another Program of Specialist education for Medical Engineers started at the Faculty of Technical Sciences, University of Novi Sad. In 2006, a new program for Specialist Study in Medical Physics and Medical Engineering was established at ACIMSI, integrating other programs in the area. [1,2,3] At the same time, new courses on Bio/Physics/Medical Physics within the Veterinary Medicine curricula were introduced at the Faculty of Veterinary Medicine, University of
Belgrade in 2004: the Core Course in Biophysics/Medical Physics and an Elective Course on Physical Methods, Instrumentation and Techniques. II. MEDICAL ENGINEERING AND MEDICAL PHYSICS A. General A Medical Physicist/Engineer is generally involved in three basic hospital activities: health care services and consulting, development and research, and training. [2,3] A Medical Physicist is a professional with clearly defined competences and responsibilities in health care service and consulting, patient dosimetry, development and implementation of complex equipment, as well as in optimization, QA, and radiation protection. This requires adequate theoretical and practical knowledge, as well as permanent education and training. A Medical (Clinical) Engineer is a professional with competences and responsibilities in health care service and consulting, patient dosimetry, technical support, development and implementation of complex equipment, as well as in optimization, QA and QC procedures, and radiation protection. This also requires adequate theoretical and practical knowledge, as well as permanent education and training. B. Education and Licensing of Medical Physicists and Medical Engineers According to the international recommendations of the EU, EFOMP (European Federation of Organisations for Medical Physics), IFMBE (International Federation for Medical and Biological Engineering) and EAMBES (European Alliance for Medical and Biological Engineering & Science), we recognize the following professional categories in Medical Physics and Medical (Clinical) Engineering: [2,3]
1. Qualified Medical Physicist/Medical (Clinical) Engineer (QMP/QME) should have a bachelor's and a master's university degree and at least 2 years' training experience on the job, which is essential to achieve the competencies to work as a QMP.
2. Specialist Medical Physicist/Engineer (SMP/SME) must fulfill all requirements defined in point 1 and must successfully complete a 5-year CPD cycle.
3. Expert in Medical Physics/Engineering (EMP/EME), besides the requirements defined in point 2, should complete a Specialist study program and have at least 3 years of working experience, or should have a PhD degree in Medical Physics.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 310–312, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Certification is the responsibility of the National Certification Board within the Ministry of Health, which maintains a National Register of Medical Physicists/Engineers. The responsibility could be transferred to the National Professional Society; if that is not legally possible, licensing could fall within the competency of the National Society of Biomedical Engineering and Medical Physics (BIMEF). BIMEF will be able to perform internal accreditation, which could be recognized by EFOMP. The primary condition for staying in the National Register is fulfilling the 5-year CPD cycle. [2,3] Academic (university) education of medical physicists and medical engineers and the corresponding university titles are divided into three basic stages:
I stage - Basic University Study in Medical Physics, duration: 4 years, 240 ECTS + Final thesis. University title: Medical Physicist.
II stage - Master University Study in Medical Physics at the Department of Physics, Faculty of Science, University of Novi Sad, duration: 1 year, 60 ECTS + Master thesis. University title: Graduated Medical Physicist - Master in Medical Physics. Master University Study in Electrical Engineering at Faculties of Electrical Engineering, duration: (4+1) years, 300 (240+60) ECTS + Master thesis. University title: Graduated Engineer - Master in Electrical and Computer Engineering.
III stage - Postgraduate University Study at ACIMSI (Association of Centers for Interdisciplinary and Multidisciplinary Studies and Development Researches), University of Novi Sad. Specialist study in Medical Physics, duration: 1 year, 60 ECTS + Specialist thesis. University degree: Specialist in Medical Physics. Specialist study in Medical Engineering, duration: 1 year, 60 ECTS + Specialist thesis. University degree: Specialist in Medical Engineering. Doctoral study in Medical Physics and Medical Engineering, duration: 3 years, 180 ECTS + Doctoral thesis. University degree: PhD in Medical Physics and PhD in Medical Engineering.
Postgraduate studies for medical physicists and medical engineers are organized at ACIMSI Center for Medical Physics and Clinical engineering and are complementary with similar studies in other European countries. They are
aimed at training professional and research personnel for successful team work in medical institutions, as the rapid development of medical diagnostics and therapy, as well as medical equipment of advanced technology, requires team work of experts who, apart from being competent in their own profession, have to demonstrate knowledge in the fields of medicine, physics and engineering. The concept of such multidisciplinary studies enables physicians, physicists and engineers who decide to work in medical institutions to use the existing equipment successfully, modernize it rationally and continually work on its improvement. The principal carriers of the programs are the Faculty of Technical Sciences and the Faculty of Sciences at the University of Novi Sad. C. Programs for Specialist Study in Medical Physics and Medical Engineering The program of Specialist Study in Medical Physics and Medical Engineering has been designed to incorporate contemporary scientific and professional findings in the fields of medical physics, medical diagnostics and therapy, as well as modern technological achievements regarding medical instruments. The curriculum offers a flexible approach to students' preferences for studying particular specialized areas of medical physics through elective subjects oriented to application in current medical practice. The studies are multidisciplinary, since experts from different fields carry out the teaching: physicists, mathematicians, doctors, biologists, chemists, engineers and others. The educational bases are the Faculty of Technical Sciences, the Faculty of Sciences and the Medical School of the University of Novi Sad, while the clinical ones are the Military Medical Academy, Belgrade and the Institute of Oncology "Sremska Kamenica".
The international aspects of the programs are recognized with regard to the Bologna declaration and the EFOMP (European Federation of Organisations for Medical Physics) and ESOEPE (European Standing Observatory for the Engineering Profession and Education) programs. [1,2,3] The program offers 28 courses divided into two categories: core mandatory subjects and elective subjects. Students are advised to choose a mentor and to select the group of subjects that best corresponds to their job. Core subjects include topics in bio/physics; anatomy, physiology and cell biology; principles of biomedical engineering; medical facilities; principles and safety standards; metrology and QA programs. Elective subjects are divided into 11 groups as follows: Physiology, anatomy and cell biology; Principles of biomedical engineering; Medical facilities in radiological diagnostics and safety aspects; Medical facilities in Nuclear Medicine and safety aspects; Medical facilities in Radiotherapy and safety aspects; Dosimetry and radiation protection in radiological diagnostics, Nuclear Medicine and Radiotherapy; Nonionizing techniques in medicine and safety aspects; Protection against nonionizing radiation; Metrology and measurement uncertainty; Quality Assurance in various branches of medicine; and Information technologies. III. COURSES ON BIO/PHYSICS WITHIN VETERINARY MEDICINE CURRICULA
Within the biomedical studies at Belgrade University, physics and mathematics are incorporated in the curricula of the faculties of biology, medicine, veterinary medicine, stomatology and pharmacology. For decades, physics was considered rather a sort of "cook book" of technical terms than an essential tool for understanding and interpreting the basic laws of nature, and the open question was whether it should be presented as a fundamental part of today's culture or provide only elementary scientific facts and technical data for core biomedical subjects and for the diagnostic and therapeutic methods and instrumentation of the clinics. [5] The new program, introduced in 2004 at the Faculty of Veterinary Medicine, University of Belgrade, offers a Core Course in Biophysics/Medical Physics in the first year of study and an Elective Course dealing with physical methods, instrumentation and techniques applied in veterinary medicine research and practice. The courses are designed to reflect the interests of veterinary medicine specialists and future researchers, as well as to provide information on new physical methods and techniques to be used in biomedical sciences and practice. Their main goal is to present physics on a formal cognitive level and to instigate logical reasoning and abstract thinking, as well as to encourage the future biomedical community to take a closer look into physics/biophysics/medical physics theories and methodology. The courses try to promote a new understanding of living matter, to enable students to observe, categorize and quantify natural phenomena, to read a formula as a phenomenon or a process and not just a sequence of signs, and finally to get a grasp of the totality of nature itself.
[6] The Core Course, titled "Selected Chapters in Physics and Biophysics in Veterinary Medicine", presents basic concepts and issues in biophysics/medical physics relevant to understanding the essential physiological functions of living systems, and presents basic contemporary aspects of the transport and transformation of matter and energy, both within the living and the non-living environment. The Elective Course, titled "Physical Methods in Veterinary Medicine Diagnostics and Therapy", aims to present operative and practical knowledge of the principles and functioning of biomedical instrumentation and techniques used in veterinary medicine practice. [4,5,6]
Both courses are largely founded on practical exercises and experimental work. A mathematics workshop has become an integral part of the laboratory workshops and is focused on examples from clinical praxis. The syllabus is divided into modules, and students can pass the course step by step by passing the test that follows each module. Ex cathedra presentations are avoided as much as possible, and laboratory, video and film presentations complement the theoretical part of the courses. Students are encouraged to search the literature and present seminars as a result of individual work or teamwork. IV. CONCLUSIONS After two years' experience of introducing the new courses in medical physics, medical engineering and bio/physics for medicine and veterinary medicine specialists at the universities of Serbia, the first results are encouraging. There is strong interest among graduate and postgraduate students in applying for the new programs, as well as among professionals already working in the field. The programs, as well as the teaching staff, obtained high evaluations for the content of the courses, the clarity of the presentations and good organization.
REFERENCES

1. Stankovic S, Spasic Jokic V, Veskovic M (2005) Medical Physics Education in Serbia: Current State and Perspectives, Biomedizinische Technik, Berlin, Vol. 50, Suppl. 1/2, pp. 1376-1377
2. Faculty of Technical Sciences at University of Novi Sad at http://www.ftn.ns.ac.yu/studije/specijalisticke/biomedicinska.pdf
3. ACIMSI at www.medfiz.ns.ac.yu
4. Brown H, Smallwood RH, et al. (1999) Medical Physics and Biomedical Engineering. Inst. of Physics Publ., Bristol and Philadelphia
5. Popovic D (2005) Courses on Bio/Physics within Veterinary Medicine Curricula. Biomedizinische Technik, Berlin, Vol. 50, Suppl. 1/1, pp. 40-41
6. Popovic D, Djuric G (1995) Teaching Physics and Biophysics for Veterinary Medicine Students. In: Thinking Science for Teaching. Plenum Publ., New York, pp. 423-438
7. Popovic D, Djuric G (1998) Physics and Mathematics in Veterinary Medicine Studies. Roskilde Univ. Press, Roskilde, Denmark, pp. 194-202

Author: Vesna Spasic Jokic
Institute: VINCA Institute of Nuclear Sciences, Laboratory of Physics (010)
Street: POBox 522
City: 11001 Beograd
Country: Serbia
Email:
[email protected]
The Education and Training of the Medical Physicist in Europe
The European Federation of Organisations for Medical Physics - EFOMP Policy Statements and Efforts
S. Christofides1, T. Eudaldo2, K. J. Olsen3, J. H. Armas4, R. Padovani5, A. Del Guerra6, W. Schlegel7, M. Buchgeister8, P. F. Sharp9
1 Medical Physics Department, Nicosia General Hospital, 215 Old Nicosia Limassol Road, 2029 Nicosia, Cyprus
2 Servei de Radiofisica i Radioproteccio, Hospital de la Santa Creu i Sant Pau, Av. Sant Antoni Maria Claret, 167, 08025 Barcelona, Spain
3 Radiofysisk Afdeling, 54 C3, KAS Herlev, Herlev Ringvej 75, DK-2730 Herlev, Denmark
4 University Hospital of the Canaries, 38320 La Laguna (Tenerife), Spain
5 SO di Fisica Sanitaria, Ospedale S. Maria della Misericordia, I-33100 Udine, Italy
6 Department of Physics "E. Fermi", University of Pisa, Via Buonarroti 2, I-56127 Pisa, Italy
7 DKFZ, Abteilung Medizinische Physik in der Strahlentherapie, Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
8 Medizinische Physik, Universitatsklinik fur Radioonkologie, Hoppe-Seyler-Str. 3, D-72076 Tubingen, Germany
9 Bio-Medical Physics & Bio-Engineering, University of Aberdeen & Grampian Hospitals NHS Trust, Foresterhill, Aberdeen AB25 2ZD, United Kingdom
Abstract— One of the main aims of the European Federation of Organisations for Medical Physics is to propose guidelines for education, training and accreditation programmes. This is achieved through the publication of Policy Statements and the organisation of education and training courses, seminars and conferences, in a continuous effort to harmonise the education and training of the Medical Physicist across Europe. This paper presents an overview of the past, present and future efforts of EFOMP to achieve this aim. Keywords— Education, Training, Medical Physics, Policy Statement.
I. INTRODUCTION The European Federation of Organisations for Medical Physics (EFOMP) was set up in 1980 in London [1]. Among its aims and objectives is to issue guidelines for education, training and accreditation programmes. EFOMP's first Policy Statement [2], "Medical Physics Education and Training: The present European Level and Recommendations for its Future Development", was published in 1984. This Policy Statement is now considered obsolete, and a review of the status of Education and Training activities in the states of its National Member Organisations is currently being undertaken. The new Policy Statement will also take into consideration the European Union Directives relevant to Education and Training. An essential component of professionalism is Continuous Professional Development (CPD). EFOMP has been, and continues to be, very active in this area by publishing policy statements on CPD schemes and by establishing National Accreditation systems for Medical Physicists.
The exchange of information is a very important component of CPD, as well as of the education and training of any professional. Therefore the support, sponsorship and organisation of educational and scientific meetings is another essential component of the efforts of EFOMP in advancing the professionalism of the Medical Physicist in Europe. The purpose of this paper is to give an outline of the relevant policy statements and the future plans of EFOMP in this area. II. EFOMP EDUCATION AND TRAINING ACTIVITIES A. Policy Statements In total EFOMP has published 11 policy statements and is currently preparing another two. All of them are related to the professional status of the Medical Physicist in Europe and so are relevant to the present paper. A brief description of them is given below: Policy Statement 1: "Medical Physics Education and Training: The present European Level and Recommendations for its Future Development" [2]. This policy statement was the result of an inquiry made in 1984 to the EFOMP National Member Organisations. This showed that a formally regulated additional education and training programme for physicists and university graduates in the engineering sciences, with emphasis on Technological Physics, existed in nearly half of all the European countries (9 of the 19) that responded to the inquiry. In the remaining countries, postgraduate on-the-job training in hospitals or Nuclear Physics sections is managed on an individual basis, i.e. not following a generally
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 313–318, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
S. Christofides, T. Eudaldo, K. J. Olsen, J. H. Armas, R. Padovani, A. Del Guerra, W. Schlegel, M. Buchgeister, P. F. Sharp
recognised nationwide scheme. This also applies to final examinations, which may actually be given even in the absence of nationwide formally regulated additional training. To qualify for postgraduate training usually requires a completed university course in physics or the engineering sciences with a certification comparable to a diploma. The length of postgraduate on-the-job training varies between 1 and 4 years. However, where the requirement is for a higher qualification in Medical Physics (e.g. 'Qualified Hospital Physicist') combined with a doctorate (Ph.D.), this period may extend to 7 years. On average, 3 years are required for postgraduate on-the-job training, mostly with an average of 400 course hours. Only in 6 of the 9 countries with formal postgraduate training programmes is the additional qualification officially recognised as being comparable to that of a medical specialist. This policy statement is now considered obsolete. It remains available on the EFOMP website until the new policy statement under development is published. Policy Statement 2: "The Roles, Responsibilities and Status of the Clinical Medical Physicist" [3]. This policy statement was also published in 1984, and it concluded that the need for a clinical medical physics service in each country depends primarily on the standard and scope of medical care. Generally speaking, it can be said that in the radiological field (X-ray diagnostics, radiotherapy, nuclear medicine and radiation protection) there is an obvious need for a clinical medical physics service. This has been proven by its development in those countries where the service has been well established. It is also obvious that the introduction of a medical physics service in general depends a great deal on the appreciation by the medical profession of the ways in which physicists may assist in solving problems of medical diagnosis and treatment.
The number of physicists per million inhabitants in different European countries shows a wide variation. Figures can be used in comparisons between countries only if they have about the same standard of medical care. Countries striving to reach this standard should take the required medical physics service into account in their planning. The number of physicists needed in diagnostic radiology, radiotherapy, nuclear medicine and radiation protection is correlated to the number of institutions and the number of facilities, for example radiotherapy units. As a rule, countries at an early stage of development of medical physics are in fact developing medical radiation physics first, as this is still the largest single aspect of the medical physics service.
EFOMP considers this strategy suitable and that it will form a basis for further development of the physics service in other applications of physics to medical care. The number of clinical medical physicists and supporting staff must be adequate to meet the high standards of service required. In making such provision, health authorities are best guided by the recommendations of the national organisations that are affiliated to EFOMP. Policy Statement 3: "Radiation Protection of the Patient in Europe: The Training of the Medical Physicist as a Qualified Expert in Radiophysics" [4]. This policy statement was published in 1988 in response to Article 5 of the EEC Directive 84/466/Euratom of 3 September 1984, which states: "A Qualified Expert in radiophysics shall be available to sophisticated departments of radiotherapy and nuclear medicine". In this policy statement the Qualified Expert in Radiophysics has been defined as "an experienced Medical Physicist working in a hospital, or in a recognised analogous institution, whose knowledge and training in radiation physics are required in services where the quality of the diagnostic image or the precision of treatment is important and the doses delivered to the patients undergoing these medical examinations or treatments must be strictly controlled". Policy Statement 4: "Criteria for the Number of Physicists in a Medical Physics Department" [5]. Published in 1991. In the opinion of EFOMP, Medical Physics is a Health Care Profession, and the Medical Physicist whose training and function are specifically directed towards Health Care is entitled to official recognition as a specialist. High standards in Medical Physics Services are important, and at a time of increasing demand sufficient resources must be directed towards an appropriate, safe and cost-effective use of the physical sciences in the Health Service for the benefit and safety of the patient.
This policy statement defines the minimum staffing requirements of Medical Physics Departments according to the number and the sophistication of the equipment of the hospital. It has been revised and replaced by policy statement number 7. Policy Statement 5: "Departments of Medical Physics - Advantages, Organisation and Management" [6]. Published in 1993. The recommendations of this policy statement are as follows: 1. The role of Medical Physics Departments is to support the established broad range of applications of physics and engineering in medicine and to be actively involved
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
The Education and Training of the Medical Physicist in Europe Policy Statements and Efforts
in the development, implementation and exploitation of new medical technologies and procedures.
2. A main objective of a Medical Physics Department must be to provide a competent and cost-effective medical physics service to all parts of the national health services that need it. This service includes: safety of patients and hospital staff, maintenance of medical equipment and scientific support.
3. Medical Physics services must be the responsibility of an integrated Department of Medical Physics providing an agreed core of work activities representative of the diverse character of the specialty.
4. These services must be organized or coordinated at the highest practicable level, which can be through a regional or sub-regional structure.
5. The Head of Department must be a physical scientist in medical physics to whom all physical scientists employed on hospital physicists' grades and technical staff must be professionally and officially responsible.
6. The Head of Department should be responsible for the departmental budget.
7. University Departments of Medical Physics have the further tasks of teaching, research and training in this field.
Policy Statement 6: "Recommended guidelines of National Registration Schemes for Medical Physicists" [7]. Published in 1994. When EFOMP was inaugurated in May 1980, its principal objective was to harmonise and promote the best practice of medical physics in Europe. In pursuing this objective, one of the long-term aims of EFOMP is to achieve uniformly high standards of training and performance of medical physicists in the countries of all member organisations. Furthermore, EFOMP wishes to see some form of recognition when these standards are achieved. This has three advantages. First, it demonstrates that patients are receiving the same level of medical physics support, no matter where they are being treated. Second, it greatly facilitates the movement of physicists from one country to another. Finally, EFOMP would be seen as the body competent to decide how the recently established qualification of European Physicist [8] will be applied in the context of medical physics. Within the European Community, the direct application of the Council Directive on "Mutual Recognition of Higher Education Diplomas" 89/48/EEC [9] has not proved a very successful mechanism for ensuring the freedom of movement and the maintenance of appropriate standards for medical physicists. A major reason for this is that the current level of legalised regulation of the profession in Europe is low. However, the European Commission is
clearly sympathetic to self-regulation by the professions, and an enquiry from FEANI (Fédération Européenne d'Associations Nationales d'Ingénieurs) on the Eur Ing qualification received the reply: "The Commission considers that the FEANI scheme is an excellent example of self-regulation by a profession at the European level". In response to the above, in this policy statement EFOMP provides the necessary guidelines that will enable it to take the lead in establishing a mechanism for the proper recognition of medical physics by means of approved National Registration Schemes. Policy Statement 7: "Criteria for the Staffing Levels in a Medical Physics Department" [10]. Published in 1997. This policy statement is a revision of policy statement number 4 [5] and contains more details as to how many Medical Physicists are required for a Medical Physics Department in relation to the equipment available in the hospital. Policy Statement 8: "Continuing Professional Development for the Medical Physicist" [11]. Published in 1998. Modern Health Care Services are met with ever-increasing demands on competence, specialisation and cost-effectiveness. The Medical Physics Service in hospitals faces the same demands, and Continuing Professional Development (CPD) is vital if the Medical Physics profession is to embrace the pace of change occurring in medical practice; it promotes excellence within the profession and protects the profession and the public against incompetence. CPD is the planned acquisition of knowledge, experience and skills required for professional practice throughout one's working life. Therefore EFOMP recommends that: 1. All medical physicists should be involved in CPD after qualification. 2. Formal CPD programmes should be developed to recognise individual effort. 3. Formal CPD programmes should set out clear objective guidance for the extent of CPD to be achieved within a defined timescale. 4.
National organisations should have their CPD scheme included in the EFOMP Directory. 5. Renewal of professional registration should be linked to CPD performance. 6. The resources for CPD should be provided by the individual, the professional body, the employer and public education/training bodies. This policy statement was revised in 2000 and has been replaced by policy statement number 10. Policy Statement 9: "Radiation Protection of the Patient in Europe: The Training of the Medical Physics Expert in Radiation Physics or Radiation Technology" [12]. Published in 1999.
In order to reach harmonisation throughout Europe when implementing EC Directive 97/43/Euratom [13] into national legislation, with regard to the definition and role of the Medical Physics Expert (MPE), EFOMP recommends the following guidelines:
• At the minimum, the MPE must have been recognised as a qualified medical physicist and preferably also have further experience.
• The Education and Training Scheme in Medical Physics aiming at the level of an MPE has to follow the EFOMP guidelines [4].
• A system for recognised Continuing Professional Development is recommended.
• According to the duties defined by the new Directive, the MPE has to be involved in radiological practices in all university and specialised hospitals using ionising radiation on patients, i.e. radiotherapy, nuclear medicine and diagnostic radiology.
• Involvement of the MPE in radiological practices, as demanded in Article 6(3) of the Directive, is recommended by EFOMP in the following way:
EC Directive: In radiotherapy the MPE shall be closely involved.
EFOMP: A daily relationship between the MPE and the patient environment is mandatory. The MPE is to be deeply involved in dosimetry, Quality Assurance and the elaboration of techniques used in the radiotherapy department.
EC Directive: In nuclear medicine the MPE shall be available.
EFOMP: The MPE shall be able to make a meaningful intervention. A daily relationship between the MPE and the patient environment is most appropriate.
EC Directive: In diagnostic radiology the MPE shall be involved as appropriate.
EFOMP: Depending on the spectrum of techniques used, there must be access to a medical physics service; for instance, local or regional networks could be established to provide practitioners and smaller hospitals with an up-to-date medical physics service. When special practices are used, as defined in Article 9 of the Directive, a daily relationship of the MPE with the patient environment shall be standard.
Policy Statement 10: "Recommended Guidelines on National Schemes for Continuing Professional Development of Medical Physicists" [14]. Published in 2000. This is a revision of Policy Statement number 8 that further develops the recommendations for the establishment of CPD Schemes at the National Level.
Policy Statement 11: "Guidelines on Professional Conduct and Procedures to be Implemented in the Event of Alleged Misconduct" [15]. Published in 2003. The role of the medical physicist in health care is diverse. In many areas the medical physicist will take decisions and give advice that have a direct influence on the management of patients, and in all of them medical physicists will interact with individuals from a wide range of professional groups. This policy statement gives guidelines on professional conduct that have been drawn up to enable the National Member Organisations of EFOMP to establish a code of practice that will ensure that medical physicists across Europe conduct themselves at all times in a manner that is appropriate to the profession. Policy Statement 12: "The present Status of Medical Physics Education and Training in Europe. New Perspectives and EFOMP Recommendations" [16]. Expected to be published by the end of 2007. This policy statement is still under development. The EFOMP Council is expected to approve it at its next meeting in September 2007. It will replace policy statement number 1, which is now regarded as obsolete. The organisation of Medical Physics Education and Training in many countries has changed since the publication of the first policy statement, and more recent EFOMP Policy Statements have introduced new concepts and new recommendations that make a thorough revision of this first document necessary. EFOMP is now challenged to make recommendations for education and training in Medical Physics within the context of the current developments in the European Higher Education Area arising from "The Bologna Declaration", and with a view to facilitating the free movement of professionals within Europe, according to the new Directive. A complete revision of the document therefore appears to be essential.
The aim of this document is to provide an updated view of the present level of education and training in Medical Physics in Europe and to make recommendations in view of these new European challenges. Policy Statement 13: "Recommended Guidelines for the Development of Quality Management Systems for Medical Physics Departments" [17]. Expected to be published by the end of 2007. This policy statement is still under development. The EFOMP Council is expected to approve it at its next meeting in September 2007. The rapid and highly sophisticated advancement of equipment and procedures in the medical field increasingly depends on information and communication technology. Although the safety and quality of such technology is vigorously tested before it is placed on the market, it is not always proven to
be of the expected level when used under hospital working conditions. To improve its effectiveness and ensure the safety of patients and users, it is necessary to have additional safeguarding and monitoring mechanisms in place. Furthermore, a large number of accidents and incidents happen every year in hospitals and, as a consequence, a number of patients die or are injured [18, 19, 20, 21]. A contribution to these events could well be attributed to Medical Physicists (malpractice, lack of education and training, etc.). This EFOMP Policy Statement outlines the way in which a Quality Management System can be developed for Medical Physics Departments. Such a system will be instrumental in eliminating, or at least minimising, the contribution of the Medical Physicist to the accidents or incidents that happen to patients while in the hospital environment, and will put in place the mechanisms needed to use new, highly complicated and sophisticated technologies and procedures effectively and efficiently.
B. Education and Training Activities Support for Meetings, Congresses and Courses: The sponsorship of meetings and congresses is instrumental in disseminating and encouraging the adoption of the policy statements. EFOMP organises, co-organises, supports and recognises meetings, congresses and courses with its NMOs. Guidelines that explain these terms and the requirements for interested NMOs to collaborate in such events can be found on EFOMP's website [22]. The purpose of these Guidelines is to help NMOs to obtain EFOMP sponsorship for their events by setting out the steps that they need to take and the conditions that must be fulfilled. The biggest event is the biennial European Congress on Medical Physics. Note that there are detailed guidelines on the requirements for this event [23]. Awards and bursaries: The encouragement of young scientists to pursue the profession of Medical Physics is of enormous importance to EFOMP. For this reason EFOMP gives awards [24] to young scientists at the EFOMP Congress and provides bursaries [25] for young scientists to attend courses and participate in the European School of Medical Physics (ESMP). European School of Medical Physics: This is an annual event organised in collaboration with the European Scientific Institute (ESI) that takes place in Archamps, France. It consists of five weeks of intensive training in Medical Physics [26]: one week each for Medical Imaging with Non-ionising Radiation, Medical Imaging with Ionising Radiation, Medical Computing, Physics of Modern Radiotherapy, and Brachytherapy. EFOMP is considering introducing an additional week on Radiation Protection in the Hospital Environment. European Network of Medical Physics Training Schools: This is a new initiative that is currently under development. Moving a step further from just publishing policy statements, the EFOMP Officers have been discussing among themselves, and recently with other interested parties, the establishment of a process for coordinating Continuing Professional Development courses for Medical Physicists across Europe, so that these are harmonised and offer the same level of education irrespective of the institution delivering them. It is anticipated that the first courses under this network will take place during the first half of 2008. Once the Network is established, details will be available on EFOMP's website. The above activities are mainly organised by the Education, Training and Professional (ETP) and the Scientific (SC) Committees of EFOMP [27, 28].
III. CONCLUSIONS The above brief discussion describes the activities of EFOMP in the area of Education, Training and Professional Development of the Medical Physicist in Europe. These activities can only materialise through the collaboration of all the Medical Physicists of its NMOs. The NMOs must actively adopt and implement the guidelines of the policy statements, as well as participate in the various events organised by EFOMP in collaboration with its NMOs. The contributions of all interested parties are more than welcome in order to further develop the harmonisation of the education, training and professional status of the Medical Physicist in Europe.
ACKNOWLEDGMENT EFOMP acknowledges all those who have contributed to the development of its policy statements, as well as all those who have organised and are organising educational and training events under its auspices.
REFERENCES
1. Constitution of the European Federation of Organisations for Medical Physics, www.efomp.org/federation.html
2. Policy Statement 1: "Medical Physics Education and Training: The present European Level and Recommendations for its Future Development", www.efomp.org
3. Policy Statement 2: "The Roles, Responsibilities and Status of the Clinical Medical Physicist", www.efomp.org
4. Policy Statement 3: "Radiation Protection of the Patient in Europe: The Training of the Medical Physicist as a Qualified Expert in Radiophysics", www.efomp.org
5. Policy Statement 4: "Criteria for the Number of Physicists in a Medical Physics Department", www.efomp.org
6. Policy Statement 5: "Departments of Medical Physics - Advantages, Organisation and Management", www.efomp.org and Physica Medica XI(3), 1995, 126-128
7. Policy Statement 6: "Recommended guidelines of National Registration Schemes for Medical Physicists", www.efomp.org and Physica Medica XI(4), 1995, 157-159
8. Boswell PG. Recognising Fundamentals. Europhys News 1994: 25; 46-47
9. Council Directive 89/48/EEC of 21 December 1988 on a general system for the recognition of higher education diplomas awarded on completion of professional education and training of at least three years' duration. Official Journal of the European Communities 1989: 32; 16-23
10. Policy Statement 7: "Criteria for the Staffing Levels in a Medical Physics Department", www.efomp.org and Physica Medica XIII, 1997, 187-194
11. Policy Statement 8: "Continuing Professional Development for the Medical Physicist", www.efomp.org and Physica Medica XIV, 1998, 81-83
12. Policy Statement 9: "Radiation Protection of the Patient in Europe: The Training of the Medical Physics Expert in Radiation Physics or Radiation Technology", www.efomp.org and Physica Medica XV, 1999, 149-153
13. EC Directive 97/43/Euratom of 30 June 1997 on health protection of individuals against the dangers of ionising radiation in relation to medical exposures. Official Journal of the European Communities No L180, 9.7.1997, p. 22
14. Policy Statement 10: "Recommended Guidelines on National Schemes for Continuing Professional Development of Medical Physicists", www.efomp.org and Physica Medica XVII, 2001, 97-101
15. Policy Statement 11: "Guidelines on Professional Conduct and Procedures to be Implemented in the Event of Alleged Misconduct", www.efomp.org and Physica Medica XIX, 2003, 227-229
16. Policy Statement 12: "The present Status of Medical Physics Education and Training in Europe. New Perspectives and EFOMP Recommendations"
17. Policy Statement 13: "Recommended Guidelines for the Development of Quality Management Systems for Medical Physics Departments"
18. Eurohealth, "Mythbusters, Myth: We can eliminate errors in health care by getting rid of 'bad apples'", Eurohealth, Vol. 12, No. 2, pp. 29-30
19. Gillin M, "Institute of Medicine Report on Medical Errors", AAPM Newsletter, March/April 2000
20. International Atomic Energy Agency (IAEA). Safety Reports Series No. 17, Lessons learned from accidental exposures in radiotherapy. Vienna: IAEA; 2000
21. International Commission on Radiological Protection. Prevention of accidental exposures to patients undergoing radiotherapy. Publication 86. Exeter: Pergamon Press; 2001
22. EFOMP support of meetings, congresses and courses: Guidelines for National Member Organisations, www.efomp.org/federation.html
23. EFOMP Congresses: Policy and General Requirements, www.efomp.org/federation.html
24. EFOMP Congress awards for young physicists, www.efomp.org/federation.html
25. EFOMP ESMP bursaries for young physicists, www.efomp.org/federation.html
26. EFOMP European Schools of Medical Physics, www.efomp.org/federation.html
27. EFOMP Education, Training and Professional Committee composition and terms of reference, www.efomp.org/federation.html
28. EFOMP Scientific Committee composition and terms of reference, www.efomp.org/federation.html

Address of the corresponding author:
Author: Stelios Christofides
Institute: Medical Physics Department, Nicosia General Hospital
Street: 215 Old Nicosia Limassol Road
City: 2029 Nicosia
Country: Cyprus
Email: [email protected]
The value of clinical simulation-based training
Vesna Paver-Erzen1, Matej Cimerman2
1 University Medical Centre / Clinical department of anaesthesiology and intensive therapy, Zaloska 7, Ljubljana, Slovenia
2 University Medical Centre / Clinical department of traumatology, Zaloska 7, Ljubljana, Slovenia
Abstract— Simulators were first used in aviation for flight training of pilots and for inter-staff communication. Regular training in the simulation centre is obligatory for all aircraft staff, whatever their rank or position. Simulation-based training has been introduced in the nuclear power, space flight and petrochemical industries, particularly in settings where there is a high probability of large-scale catastrophic events. The major advantages of learning skills on a simulator are that each procedure can be interrupted, improved and repeated until the required proficiency has been achieved, and that no real harm is done when a mistake, inadmissible in a real clinical setting, is made on a mannequin. This learning modality is therefore less stressful for both the trainee and the teacher, and helps increase the trainee's self-confidence. Keywords— clinical training, simulation.
I. INTRODUCTION Simulators were first used in aviation for flight training of pilots and for inter-staff communication. Regular training in the simulation centre is obligatory for all aircraft staff, whatever their rank or position. Simulation-based training has been introduced in the nuclear power, space flight and petrochemical industries, particularly in settings where there is a high probability of large-scale catastrophic events. Considering that simulator-based training is well-established in aviation and in several other industries, it is hard to imagine that clinical skills for dealing with medical emergency situations and complications are still taught on live patients. It has been shown that technical faults are to blame for only 30% of all clinical complications and that 70% are due to mistakes made by health professionals. Virtual training in resuscitation procedures was first used on unsophisticated patient mannequins. Ten years ago, simulation-based training centres for advanced medical virtual training, equipped with full-scale simulators, began to be set up. These "virtual patients", which can talk and replicate all human physiological and pathophysiological functions, are used as a teaching tool for close replication of approximately 70,000 programmed clinical situations encountered in anaesthesia and intensive care, as well as during medical and surgical interventions, resuscitation and other procedures. Simulation-based clinical training is offered in several European university hospitals in Germany, France, Great Britain, Italy and Denmark. In the U.S.A., advanced-level simulation training is delivered in so-called virtual-reality hospitals, which provide close replication of all clinical situations and procedures for teaching purposes. II. WHAT ARE THE ADVANTAGES OF SIMULATION-BASED TRAINING?
There are some procedures for which standard supervised live-patient training is no longer ethically justified or acceptable to the patient. Health professionals are expected to have mastered a given procedure before performing it on a patient. Another hindrance is the limited number of patients available. The reduction in working hours of doctors in training, required under new European legislation, will result in reduced opportunity for these doctors to acquire clinical skills alongside patients. Simulation-based training has therefore been increasingly used at all levels of medical education. Simulation-based clinical skills training is provided in the following domains:
• psychomotor (acquiring technical skills)
• cognitive (decision-making)
• emotional (team interaction), and
• evaluation of performance, assessment of acquired skills.
The major advantages of learning skills on a simulator are that each procedure can be interrupted, improved and repeated until the required proficiency has been achieved, and that no real harm is done when a mistake, inadmissible in a real clinical setting, is made on a mannequin. This learning modality is therefore less stressful for both the trainee and the teacher, and helps increase the trainee's self-confidence. As reported in the literature, medical students and doctors are positive about learning clinical skills on simulators. They find that in addition to being cost-effective, this form of learning increases their self-confidence and helps them develop a more realistic attitude toward intervening in critical situations. Doctors who acquire their clinical skills through simulation learning demonstrate lower rates of medical errors and perform various procedures more rapidly
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 327–328, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
than their colleagues who receive standard supervised training with real patients. Despite the high costs involved in setting up a simulation centre, the simulation-based clinical training delivered there is relatively cost-effective. The financial costs of a medical error that causes death or permanent invalidity of the patient are considerably higher than those of establishing a clinical skills learning facility. American studies have shown that the costs incurred in purchasing a high-tech simulator are recovered within six months of its use. The University Medical Centre Medical Council gave the green light for the establishment of a simulation centre and set up a task force for preparatory assistance and background activities. The simulation centre location has been chosen by Professor Sasa Markovic, Medical Director of the University Medical Centre, and plans for the reconstruction of existing facilities have been drawn up.
REFERENCES
1. Rall M, Gaba DM (2005) Chapter 84: Patient simulators. In: Miller RD, ed. Miller's Anesthesia. 6th ed. Churchill Livingstone, Philadelphia, pp 3073-3103
2. Gaba DM, Fish KJ, Howard SK (1994) Crisis management in anesthesiology. Churchill Livingstone, Philadelphia
3. Rall M, Monk S, Mather S et al (2003) SESAM - The Society in Europe for Simulation Applied to Medicine. Eur J Anaesthesiol 20:763
4. Schuttler J (2002) Anforderungskatalog zur Durchführung von Simulatortraining-Kursen in der Anästhesie. Anästhesiologie & Intensivmedizin 43:828-830
5. Rall M, Dieckmann P (2005) Crisis resource management to improve patient safety. Refresher course lectures, Euroanaesthesia 2005, Vienna, Austria, pp 107-112
6. Dalley P, Robinson B, Weller J et al (2004) The use of high-fidelity human patient simulation and the introduction of new anesthesia delivery systems. Anesth Analg 99:1737-1741
7. Kuhnigk H, Kuhnigk R, Sefrin P et al (1999) »Full-scale«-Simulation in der präklinischen Notfallmedizin - Konzeption des Würzburger Anästhesie- und Notfallsimulators. Der Notarzt 15:129-133
8. Gaba DM (2005) Improving patient safety by implementing strategies of high reliability organization theory. Refresher course lectures, Euroanaesthesia 2005, Vienna, Austria, pp 243-247
9. Kuhnigk H, Roewer N (2004) Anästhesiesimulation - Innovation für die Zukunft. In: Thiede A, Roewer N, Elert O, Riedmiller H, eds. Chronik und Vision. Zentrum Operative Medizin 2004. Universitätsklinikum Würzburg, Klinikum der Bayerischen Julius-Maximilians-Universität, pp 269-270
10. Columbus Business First at http://www.meti.com/media.html
11. Glavin R (2005) Simulation in anesthesia and acute care settings. Refresher course lectures, Euroanaesthesia 2005, Vienna, Austria, pp 155-161
12. Nargozian CD (2004) Simulation and airway-management training. Current Opinion in Anaesthesiology 17:511-512
13. The Associated Press, July 25th, 2004. Medical Training Goes Virtual, at http://www.meti.com/media.html

Authors: Vesna Paver-Erzen
Institute: Clinical department of anaesthesiology and intensive therapy
Street: Zaloska 7
City: Ljubljana
Country: Slovenia
E-mail: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A Personal Computer as a Universal Controller for Medical-Focused Appliances

Denis Pavliha, Matej Rebersek, Luka Krevs and Damijan Miklavcic

Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia

Abstract— Modern medical-focused appliances are sophisticated devices that can process complex data and control various subsystems. For this reason they require a custom-made, advanced Graphical User Interface able to control all these features. In order to build such a Graphical User Interface we first need to choose a proper hardware platform and then an operating system for which the Graphical User Interface application will be built. As for the hardware, an industrial-targeted Mini-ITX mainboard meets our needs for reliability, stability and speed. The mainboard is based on the Personal Computer x86 platform, and its features can be expanded with peripherals such as a CompactFlash card for data storage, a touchscreen LCD for user interaction and an external board, connected to the USB port of the mainboard, for data interchange. The operating system used is Microsoft Windows Embedded CE in combination with a Dynamically-Linked Library that controls the features of the external USB board. With such a configuration we obtain a fast and compact controller with data interchange capability and a sophisticated Graphical User Interface. Since the Graphical User Interface is custom-made and the operating system loads quickly, the end-user does not feel as if they were using a PC. The main benefit is the system's upgradeability: even with major hardware changes we can reuse all our code, rebuild our Graphical User Interface and transfer it to a new platform without loss.

Keywords— Microprocessor, Personal Computer (PC), Embedded System, Graphical User Interface (GUI), Universal Serial Bus (USB).
I. INTRODUCTION

Modern medical-focused appliances have evolved into powerful machines that are based on complex electronic circuits and can perform demanding tasks at very high speed. In addition to data processing, these devices must also communicate with external peripherals, and they must do so at high speed and with even higher accuracy. Therefore such devices need a sophisticated Graphical User Interface (GUI) that can deal with many parameters and process the data involved in the functioning of the appliance itself. When designing a GUI one first needs to choose a hardware platform on which the GUI will run. After reviewing many possible solutions we arrive at two choices, both based on a computer system. If we classify these computer systems by their sphere of usage, we can divide them into two groups [1]:

• General-purpose computers (Personal Computers),
• Special-purpose computers (Embedded Systems).
The first segment comprises Personal Computers (PCs), which are built as universal machines on which we can install an operating system and thus use applications developed for that operating system. As these machines are built for general-purpose usage, they are not popular as the main components of specific devices such as medical-focused equipment. Computers in the second segment, by contrast, are not intended to be universal. Although operating systems for such devices do exist, they are in most cases on a different level than PC-based operating systems and do not provide all the advanced features that PC-based operating systems do. It is up to the developer to customize the operating system for the embedded system, or even to develop an operating system from scratch, to fully satisfy the needs of the specific device.

Developers of controllers have in the past used PC platforms as the main system components of specific systems predominantly when there was a need to graphically represent the captured data [3]. In most cases specifically designed microprocessor or microcontroller boards were used instead [2]. Such boards are equipped with a microprocessor or microcontroller, external memory and, in most cases, some specific hardware interfaces, as shown in Fig. 1. Software for these solutions must be written with the specific development tools provided by the manufacturer of the microcontroller board and cannot be substituted by other tools. Such hardware is not upgradeable unless we replace the whole board with one that holds a successor of the microcontroller used before. With this kind of replacement only minor changes to the software have to be made. Nevertheless, even minor software changes for a different but similar microprocessor or microcontroller can often represent a serious challenge.
In this paper we suggest using a general-purpose PC as a specific-device controller with a fully featured Graphical User Interface (GUI) and capability of data interchange via an external bus.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 381–384, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 A block diagram of a part of an EKG system that uses a specific microprocessor board as the main system component [2].
II. HARDWARE PLATFORM

If a Personal Computer (PC) is to be used as a controller for specific devices such as medical-focused appliances, the demands on the selected hardware are clear: reliability, stability and speed. Since our system is designed to be a controller with a Graphical User Interface (GUI), a high speed of the hardware platform is needed for real-time control to be implemented. The stability and, consequently, the reliability of the system cannot be neglected, since we can afford neither failures nor errors during the functioning of the system. For these reasons a compact Mini-ITX mainboard is used. The Mini-ITX standard was developed in 2001 by VIA Technologies and specifies a mainboard form factor with dimensions of 170 x 170 millimeters and very low power consumption [6]. Such mainboards are primarily intended for industrial usage, since they are produced with a much longer sales lifetime than consumer boards. The mainboard itself already includes the microprocessor, Random Access Memory (RAM) and some basic peripherals such as the Video Graphics Array (VGA) Controller, Ethernet Controller, Universal Serial Bus (USB) 2.0 Controller and Integrated Drive Electronics (IDE) Controller. With its small dimensions, low power consumption, long sales lifetime and wide selection of integrated peripherals, it represents an optimal choice as the main component of our system.

Although the mainboard includes an IDE Controller, it does not have any on-board storage such as a Hard Disk or Flash Memory. Since the whole system has to be as compact as possible, a Hard Disk would not be an optimal solution: first, Hard Disks are relatively big and would probably not fit into the small system case we plan to use; second, Hard Disks are electro-mechanical components, which makes them more susceptible to damage. Since the mainboard supports IDE devices and avoiding mechanical parts is welcome, a CompactFlash (CF) card proves to be the solution. The CF card provides complete TrueIDE functionality, which means it is fully compatible with ATA/ATAPI-4 (Advanced Technology Attachment - Packet Interface), a computer disk drive standard [7]. We can therefore connect it directly to the IDE controller of the Mini-ITX mainboard, and the CF card then acts as a Hard Disk Drive. Since no mechanical components are involved, there are no mechanical delays, which additionally boosts the speed of our system.

The interaction between the user and the system will be carried out through a touchscreen Liquid Crystal Display (LCD). As the Graphical User Interface (GUI) will be entirely developed by ourselves, we can do without a keyboard or pointing devices and design the whole control to be executed through the touchscreen display. This means fewer external peripheral devices will be needed and
Fig. 2 A block diagram of our solution which includes a Personal Computer x86-based mainboard, some basic peripherals and an external USB board. Debugging is provided via the Ethernet interface.
the case of the whole system will consequently be smaller, since it will not include a keyboard. With the hardware for the GUI part selected, the connective part of the system still needs to be chosen. Although Mini-ITX mainboards in most cases include controllers for special buses such as the Inter-Integrated Circuit (I2C) bus, these controllers do not suit the needs of our appliance. What we need is a programmable General-Purpose Input-Output (GPIO) system, and as this is not included on the mainboard, an external solution is needed. The choice we made for the external bus is USB 2.0, a serial bus that achieves speeds of up to 480 Mbps, which provides enough bandwidth to transfer all the data involved. The external USB board we intend to use is a Cypress CY3684 development board. Its main component is the Cypress FX2LP integrated circuit, which includes a programmable 8051-based microcontroller, a General Purpose Interface (GPIF) and the I2C bus. The GPIF will be used for controlling devices that are not time-critical and the I2C bus for those that are. All the interfaces are controlled by the 8051 microcontroller and are thus programmable through the firmware of the board.
III. SOFTWARE PLATFORM

After selecting the proper hardware, the system needs an operating system to handle the application processes. Even though the selected hardware is a general-purpose PC, we still want an operating system targeted at embedded machines, and considering this demand only two choices appear: Windows Embedded or Linux Embedded. As often pointed out [4], using Windows Embedded has many advantages over Linux Embedded from different aspects, such as costs and the project's time to market. Considering these facts, Windows Embedded seems the choice to be made. In the Windows Embedded family there are two operating systems: Windows Embedded CE (formerly Windows CE .NET) and Windows XP Embedded. Since the system we are building does not need all the features of the Windows XP operating system, we plan to build a Windows Embedded CE appliance.

Windows Embedded CE is a componentized operating system targeting small-footprint devices [8]. The image of the whole operating system can be as small as a few megabytes. For this reason the operating system boots very quickly, and the image can even be loaded by the Basic Input Output System (BIOS) so that the boot time is reduced even further. Besides this, the operating system shell can be customized so that the end-user does not even feel that he or she is dealing with a Personal Computer, because the boot time is as short as on common set-top-boxes1 and the operating system's appearance does not resemble typical Windows operating systems. This gives us enough room to build an application that acts as the main GUI of our system and controls all of its features. In addition, the latest version of the Windows Embedded development environment, named 'Platform Builder', can be used as a Microsoft Visual Studio plug-in. This means most of the development work can be done in Microsoft Visual Studio, which makes it easier for developers who already have experience with this environment. The debugger data is transferred via the Ethernet interface, as shown in Fig. 2.

The hardware is supported by the operating system through a package of device drivers called a 'Board Support Package' or BSP. Should we want to upgrade our hardware without discarding the work done so far, we simply exchange the BSP for one specific to the new hardware and rebuild the Windows Embedded CE operating system image.
Fig. 3 A block diagram of the interaction between the different software layers [5]: the .NET application accesses the DLL driver through the DLLImport interface; the driver's user-mode WDAPI object and .lib module communicate via the WinDriver kernel module with the Cypress EZ-USB FX2LP hardware.
The only missing driver in the Windows Embedded CE operating system is the driver for the Cypress FX2LP device. This driver should not be written as a component of the operating system but as a Dynamically-Linked Library (DLL); the benefit of a DLL is that it can be accessed from the application developed in Microsoft Visual Studio. Since driver writing is a very complex and lasting project, we take advantage of a driver development toolkit named 'Jungo WinDriver USB'. This toolkit helps us build the driver in the desired programming language and creates a diagnostic driver application we can then edit, which lets us adapt the driver to our needs and saves the time we would otherwise spend developing the driver from zero [5]. The resulting driver is a DLL file and, as shown in Fig. 3, can be accessed from Microsoft Visual Studio as a reference via the DLLImport2 interface.

1 Standalone devices that connect to a signal source and display it on a television, like satellite TV receivers.

IV. CONCLUSIONS

With the suggested hardware configuration, a general-purpose PC is used as an embedded system able to control medical-focused appliances. Even with a touchscreen LCD included in the case, the whole system remains small and compact. Since the only mechanical parts in the system are fans, the risk of mechanical damage is reduced to a minimum, and the compact system case can be moved freely wherever and whenever needed. The operating system image is loaded directly by the Basic Input Output System (BIOS) and boots in a few moments, which is a big advantage in comparison to other PC-focused operating systems. The original shell of the Windows Embedded CE operating system is hidden and our custom Graphical User Interface (GUI) application is used instead. Such a configuration resembles industrial set-top-boxes and conceals the fact that a PC is the main component of the system. The system is standalone and able to control external appliances such as medical-focused devices via the General Purpose Interface (GPIF) and Inter-Integrated Circuit (I2C) interfaces that reside on the external USB board. The use of the external board slightly reduces the compactness of the system, but this is actually an achievement, since it makes the system extendable.
The main benefit of such a system is its upgradeability. If we want to replace the mainboard with another one, there are basically no limitations as long as we keep using a platform that is supported by Microsoft Windows Embedded CE and includes a Board Support Package (BSP) for this operating system. With a hardware upgrade the software is not affected at all, and we do not have to redo the work or rewrite either the main application or the custom device drivers from scratch. The operating system simply gets rebuilt for the new hardware platform, and the application developed in .NET is loaded onto it as on the previous system.
ACKNOWLEDGMENT The authors gratefully acknowledge the Jungo Support Team for their technical support.
REFERENCES

1. Aksamovic A, Pasic Z, Imamovic F (2003) Selection of processors as a base for development of special purpose numerical systems, Eurocon Proc. vol. 1, Eurocon 2003, Ljubljana, Slovenia, 2003, pp 118-121
2. Santic A (1995) Biomedicinska elektronika. Skolska knjiga, Zagreb
3. Hinrichs H (2004) Biomedical Technology and Devices Handbook. CRC, Boca Raton
4. Krasner J (2003) Total Cost of Development: A comprehensive cost estimation framework for evaluating embedded development platforms, Embedded Market Forecasters
5. Jungo WinDriver User's Guide at www.jungo.com
6. VIA Technologies at www.via.com.tw
7. The CompactFlash Association at www.compactflash.org
8. Microsoft Windows Embedded at www.microsoft.com/windows/embedded

Address of the corresponding author:
Author: Matej Rebersek
Institute: Faculty of Electrical Engineering, University of Ljubljana
Street: Trzaska c. 25
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
2 A .NET interface used to call unmanaged DLLs from managed application code.
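The DLLImport mechanism described in the footnote binds exported functions of a native DLL into managed code by declaring their signatures before calling them. Other runtimes offer the same pattern; as a hedged illustration (using the standard C math library rather than the paper's actual Cypress driver DLL, whose exports are not listed here), Python's ctypes does the equivalent:

```python
import ctypes
import ctypes.util

# Locate and load a native shared library (a "DLL" on Windows).
# find_library resolves the platform-specific name; the fallback
# name is an assumption for common Linux systems.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the unmanaged function's signature before calling it,
# just as a [DllImport] declaration does in .NET.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # -> 3.0
```

A real driver DLL, such as one generated by WinDriver, would be bound the same way, with each exported function's signature declared before use.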
Accurate On-line Estimation of Delivered Dialysis Dose by Dialysis Adequacy Monitor (DIAMON)

I. Fridolin1, J. Jerotskaja1, K. Lauri1, A. Scherbakov1 and M. Luman2

1 Department of Biomedical Engineering, Technomedicum, Tallinn Technical University, 19086 Tallinn, Estonia
2 Department of Dialysis and Nephrology, North-Estonian Regional Hospital, J. Sutiste Rd 19, 13419 Tallinn, Estonia
Abstract— The aim of this study was to compare equilibrated urea Kt/V (eKt/V) obtained from the slope of the logarithmic on-line UV-absorbance measured by the Dialysis Adequacy Monitor (DIAMON) with eKt/V obtained from a new algorithm that can be implemented in the DIAMON prototype. As the reference, urea eKt/V obtained from blood samples according to the rate adjustment method was used. The mean value of equilibrated Kt/V obtained with UV-absorbance (eKt/Va) was 1.06 ± 0.21, with the new algorithm (eKt/Vn) 1.09 ± 0.18, and from blood urea (eKt/Vb) 1.09 ± 0.20 (N = 21). The mean values of eKt/V did not differ statistically between the methods. However, both the systematic and the random error were diminished by the new algorithm: the systematic error decreased from 2.19% to 0.312%, and the random error from 14.37% to 7.30%. In summary, the DIAMON prototype can accurately estimate the delivered dialysis dose on-line.

Keywords— hemodialysis, dialysis dose, dialysis quality, dialysis monitoring, absorbance.
I. INTRODUCTION

The dialysis dose has been reported to have great significance for the outcome of dialysis treatment [1], [2]. On-line monitoring of the dialysis dose has been suggested as a valuable tool to ensure an adequate dialysis prescription [3]. A new technique for on-line monitoring of solutes in the spent dialysate utilising UV-absorbance has been established, enabling a single hemodialysis session to be followed continuously and deviations in dialysis efficiency to be monitored [4]. A good correlation between UV-absorbance and a small removed waste solute such as urea enables the determination of Kt/V for urea [5]. Recently a new prototype device, the Dialysis Adequacy Monitor (DIAMON), has been designed for continuous, on-line estimation of the delivered dialysis dose. The monitor, which is small and simple to handle, is based on the UV-technique and replaces the research-oriented spectrophotometer in clinical practice. A new algorithm was developed to ensure accurate on-line estimation of the delivered dialysis dose by means of Kt/V; it was applied successfully to data acquired by a spectrophotometer during clinical experiments [6].
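The link between the UV-absorbance slope and the dialysis dose can be sketched as follows: if solute concentration (and hence absorbance) decays roughly mono-exponentially during a session, the slope of ln(absorbance) versus time, multiplied by the session length, approximates a single-pool Kt/V. This is a simplified sketch of the principle behind [5], not DIAMON's actual algorithm, which adds further corrections:

```python
import math

def ktv_from_absorbance(times_h, absorbances):
    """Estimate single-pool Kt/V from the slope of ln(absorbance)
    versus time (hours), assuming mono-exponential solute decay.
    Simplified illustration only; the published models differ."""
    ys = [math.log(a) for a in absorbances]
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_y = sum(ys) / n
    # Ordinary least-squares slope of ln(A) against time.
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times_h, ys))
             / sum((t - mean_t) ** 2 for t in times_h))
    duration = times_h[-1] - times_h[0]
    return -slope * duration  # Kt/V ~ -ln(A_end / A_start)

# Synthetic 4-hour session in which absorbance decays as exp(-0.3 t):
times = [i * 0.25 for i in range(17)]            # every 15 minutes
absorb = [0.8 * math.exp(-0.3 * t) for t in times]
print(round(ktv_from_absorbance(times, absorb), 2))  # -> 1.2
```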
The aim of this study was to compare the DIAMON equilibrated urea Kt/V (eKt/V) obtained from the slope of the logarithmic on-line UV-absorbance measurements, the eKt/V calculated by the new algorithm, and the urea eKt/V obtained from blood samples according to the rate adjustment method [7].

II. PATIENTS

This study was performed after approval of the protocol by the Tallinn Medical Research Ethics Committee at the National Institute for Health Development, Estonia. Informed consent was obtained from all participating patients. Ten uremic patients, three females and seven males, mean age 62.6 ± 18.6 years, on chronic thrice-weekly hemodialysis were included in the study at the Department of Dialysis and Nephrology, North-Estonian Regional Hospital. Three different polysulphone dialysers were used: F8 HPS (N=11), F10 (N=2), and FX80 (N=8) (Fresenius Medical Care, Germany), with effective membrane areas of 1.8 m2, 2.2 m2, and 1.8 m2, respectively. The dialysate flow was 500 mL/min and the blood flow varied between 245 and 350 mL/min. The dialysis machine used was a Fresenius 4008H (Fresenius Medical Care, Germany).

III. MATERIALS AND METHODS
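The blood-side reference (eKt/V from the rate adjustment method [7]) is commonly computed from pre- and post-dialysis urea. The sketch below uses a widely cited form of Daugirdas' second-generation single-pool formula and rate equation; the exact coefficients and the access-type variant are assumptions to be checked against [7] and are not taken from this paper:

```python
import math

def sp_ktv(r, t_h, uf_l, w_kg):
    """Daugirdas second-generation single-pool Kt/V (assumed form).
    r: post-/pre-dialysis urea ratio, t_h: session length (h),
    uf_l: ultrafiltration volume (L), w_kg: post-dialysis weight (kg)."""
    return -math.log(r - 0.008 * t_h) + (4 - 3.5 * r) * uf_l / w_kg

def e_ktv(sp, t_h):
    """Rate adjustment to equilibrated Kt/V (assumed coefficients
    for arteriovenous access)."""
    return sp - 0.6 * sp / t_h + 0.03

# Hypothetical session, not study data:
sp = sp_ktv(r=0.35, t_h=4.0, uf_l=2.0, w_kg=70.0)
print(round(e_ktv(sp, 4.0), 2))
```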
Fig. 1 Schematic clinical set-up: spent dialysate flows from the dialysis machine's dialysate outlet through the DIAMON prototype and on to the drainage.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 350–353, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
eKt/V from the three methods was finally compared with respect to mean values and SD. The random error was also calculated for the different methods as the SD of Accuracy over the sessions. For a single session, Accuracy in percent was defined as

Accuracy = (eKt/Vb − eKt/Va) / eKt/Vb · 100%        (1)
The mean value of equilibrated Kt/V obtained with UVabsorbance (eKt/Va) was 1.06 ± 0.21, using the new algorithm (eKt/Vn) was 1.09 ± 0.18, and eKt/V from blood-urea (eKt/Vb) 1.09 ± 0.20 (N = 21 for all methods) (Fig. 1). The mean values of eKt/Va, eKt/Vn and eKt/Vb were not statistically different (P ≥ 0.27). The SD-s were not significantly different (P ≥ 0.38) for any methods. Fig. 4 shows the difference for single dialysis treatments between observed eKt/Vb using the Present model and the New model respectively. The difference is obviously decreased using the New model.
1,5
1,0
eKt/V
The clinical set-up of the experiments is shown on Fig. 1. An optical dialysis adequacy sensor was connected to the fluid outlet of the dialysis machine with all spent dialysate passing through the optical cuvette. The optical dialysis adequacy sensor consisted of a Dialysis Adequacy Monitor (DIAMON) (AS Ldiamon, Estonia) that was used for the determination of delivered dialysis dose (Fig. 2). DIAMON incorporated a light source (280nm UV LED), a detector (GaNi UV-photodiode), an electronic circuit board, and an optical cuvette. The monitor was connected to the fluid outlet of the dialysis machine with all spent dialysate passing through during the on-line experiments. The transimitted light intensity of the spent dialysate was measured. The sampling frequency was set to 20 samples per minute. The obtained intensity values were processed to obtain UV-absorbance and presented on the computer screen by a PC using Ldiamon’s software (AS Ldiamon, Estonia, for Windows). The results from measurements during 21 hemodialysis treatments using a LED with a peak emission wavelength of 280±5 nm are presented in this paper. The algorithm to calculate eKt/V as described earlier was used [5] (referred as “Present model”). The new algorithm (“New model”) to calculate eKt/V was obtained using regression analysis including several dependent parameters like slope of the logarithmic on-line UV-absorbance, ultrafiltration volume, dialysis length, blood flow rate, dialyzer’s urea clearance in-vitro, patient’s dry body weight, and two dummy variables gender and indication for diabetes [8].
IV. RESULTS
0,5
0,0
Present
New
Blood
Fig. 3 Predicted eKt/Vb using the Present model (Present), the New model (New), and the observed eKt/Vb (Blood).
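The error statistics reported in this paper follow directly from Eq. (1): per-session Accuracy values are averaged to give the systematic error, and their SD gives the random error. A minimal sketch, using invented example sessions rather than the study's data:

```python
def accuracy_percent(ektv_b, ektv_est):
    """Per-session Accuracy as in Eq. (1): relative deviation of the
    estimated eKt/V from the blood-urea reference, in percent."""
    return (ektv_b - ektv_est) / ektv_b * 100.0

def error_stats(blood, estimated):
    """Systematic error = mean Accuracy; random error = SD of Accuracy."""
    acc = [accuracy_percent(b, e) for b, e in zip(blood, estimated)]
    n = len(acc)
    mean = sum(acc) / n
    sd = (sum((a - mean) ** 2 for a in acc) / (n - 1)) ** 0.5
    return mean, sd

# Invented example sessions (not the study's 21 treatments):
blood = [1.10, 0.95, 1.25, 1.00]
estimated = [1.05, 0.98, 1.20, 1.02]

systematic, random_ = error_stats(blood, estimated)
print(round(systematic, 2), round(random_, 2))
```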
Fig. 4 Differences between the observed eKt/Vb and the predicted eKt/Vb using the Present model (eKt/Va_Diamon) and the New model, respectively

Fig. 5 shows the systematic and the random error for the Present and the New model using blood urea eKt/V as a reference. The systematic error was 2.19% for eKt/Va and 0.312% for eKt/Vn; relatively small already before the new algorithm was applied, it decreases even further for the New model. The random error was 14.37% for eKt/Va and 7.30% for eKt/Vn; it is substantially decreased, the difference between the Present and the New model being significant (P < 0.05). As these results show, eKt/Vb can be predicted much more precisely by applying the New model.

Fig. 5 The systematic and the random error for the Present and the New model using blood urea eKt/V as a reference

V. DISCUSSION

The mean values of eKt/Va, eKt/Vn and eKt/Vb were in principle all the same (Fig. 3). This means that the new prototype device, the Dialysis Adequacy Monitor (DIAMON), based on the UV-technique, can reliably estimate the delivered dialysis dose by means of eKt/V. Fig. 4 shows, for single dialysis treatments, the difference between the observed eKt/Vb and the predictions of the Present and the New model; the difference is clearly decreased using the New model, with the possibility of applying a correction in order to achieve even higher accuracy. Fig. 5 shows that both the systematic and the random error, with blood urea eKt/V as a reference, decrease for the New model. These results are supported by the fact that the new algorithm has already been applied successfully to eKt/V estimated by a commercial spectrophotometer [6]. This means that, utilizing the new algorithm, eKt/Vb can be predicted with good results in terms of the systematic and the random error. The parameter values are comparable with the specifications given for other available dialysis adequacy monitors [9]. The DIAMON prototype is small, does not interfere with the dialysis machine's operation, and is based on the UV-method, which needs no blood samples, disposables or chemicals, is fast, and allows a single hemodialysis session to be followed continuously so that deviations in dialysis efficiency can be monitored.

The new algorithm should be applied to data not included in the model build-up in order to further prove the validity of the model. This can be done by creating a model using only part of the data material and validating the obtained model on the rest of the material. Preferably, the material used to validate the model should include new values of the model parameters that did not exist during the model build-up (e.g. new patients, dialysis filters, etc.). Including new patients should be the most sensitive test, because of the possibly different composition of the UV-absorbing compounds filtered from the blood into the dialysate during dialysis. In summary, utilizing eKt/V from the DIAMON prototype, the overall dialysis dose can be estimated with satisfactory accuracy and precision. Validating the algorithm with data material not included in the model build-up will be the subject of further studies.

VI. CONCLUSIONS
The presented results show that urea eKt/V can be estimated with high accuracy utilizing the new algorithm, based on on-line UV-absorption measurements in the spent
dialysate with the DIAMON prototype. A more general validation of eKt/V estimation using the UV-technique should follow in further studies.
ACKNOWLEDGMENT The authors wish to thank Galina Velikodneva for assistance during clinical experiments, Aleksander Frorip and Rain Kattai for skilful technical assistance and also those dialysis patients who so kindly participated in the experiments. The study was supported by the Estonian Science Foundation Grant No 5871 and 6936, by the NATO Reintegration Grant EAP.RIG 981201, and by the LDI Inc. Enterprise Estonia project.
REFERENCES

1. NKF K/DOQI guidelines. Clinical practice guidelines for hemodialysis adequacy, update 2006, at http://www.kidney.org/professionals/KDOQI/guideline_upHD_PD_VA/index.htm
2. Port F K, Ashby V B, Dhingra R K, Roys E C, and Wolfe R A (2002) Dialysis dose and body mass index are strongly associated with survival in hemodialysis patients. Journal of the American Society of Nephrology 13:1061-1066
3. Locatelli F, Buoncristiani U, Canaud B, Köhler H, Petitclerc T, and Zucchelli P (2005) Haemodialysis with on-line monitoring equipment: tools or toys? Nephrology Dialysis Transplantation 20:22-33
4. Fridolin I, Magnusson M, and Lindberg L-G (2002) On-line monitoring of solutes in dialysate using absorption of ultraviolet radiation: technique description. The International Journal of Artificial Organs 25:748-761
5. Uhlin F, Fridolin I, Lindberg L-G, and Magnusson M (2003) Estimation of delivered dialysis dose by on-line monitoring of the UV-absorbance in the spent dialysate. American Journal of Kidney Diseases 41:1026-1036
6. Fridolin I, Uhlin F, Magnusson M, and Lindberg L-G (2006) Accurate estimation of delivered dialysis dose by on-line ultra violet absorbance in the spent dialysate. Nephrol Dial Transplant Vol 21: ERA/EDTA XLIII Congress (abstract), Glasgow
7. Daugirdas J T (1995) Simplified equations for monitoring Kt/V, PCRn, eKt/V, and ePCRn. Advances in Renal Replacement Therapy 2:295-304
8. Fridolin I and Uhlin F (2006) Device for dialysis quality parameters, Utility Model nr. EE 00620 U1, Estonia
9. Fresenius OCM Online Clearance Monitor. Operating instructions for 4008 H7S Dialysis Machines, Fresenius Medical Care, 2000

Author: Ivo Fridolin
Institute: Department of Biomedical Engineering, Technomedicum, Tallinn University of Technology
Street: Ehitajate tee 5
City: 19086 Tallinn
Country: Estonia
Email: [email protected]
Ambulatory blood pressure monitoring is highly sensitive for detection of early cardiovascular risk factors in young adults

Maja Benca, Ales Zemva, Primoz Dolenc

Division of Hypertension, University Medical Centre, Ljubljana, Slovenia

Abstract— We evaluated the appropriateness of 24-h ambulatory blood pressure (BP) monitoring for detecting prehypertensive conditions in apparently healthy offspring of patients with premature cardiovascular disease (CVD). We performed office blood pressure measurements and 24-hour ambulatory blood pressure monitoring in 30 young adults (mean age 26 ± 3 years) whose parents had experienced premature CVD, and in 30 control subjects (mean age 26 ± 3 years) with a negative family history of CVD. The group with a positive parental CVD history had significantly higher mean values of 24-h systolic BP (123 ± 10 mm Hg vs. 118 ± 6 mm Hg; p = 0.044), daytime systolic BP (127 ± 12 mm Hg vs. 121 ± 7 mm Hg; p = 0.041) and daytime diastolic BP (77 ± 8 mm Hg vs. 73 ± 4 mm Hg; p = 0.045), as well as higher 24-h heart rate (71 ± 8 beats/min vs. 67 ± 8 beats/min; p = 0.05) and systolic BP load (21 ± 20% vs. 10 ± 11%; p = 0.02), compared to controls. There was no significant inter-group difference in blood pressure obtained by the conventional office method. In addition, the study group had a considerably higher diurnal variability of blood pressure and heart rate, which is believed to contribute to their overall CVD risk. In conclusion, slightly higher levels of blood pressure, blood pressure variability and heart rate are early determinants of higher CVD risk and can be detected in individuals by using 24-h ambulatory blood pressure monitoring.

Keywords— ambulatory blood pressure monitoring, blood pressure variability, young adults, office blood pressure.
I. INTRODUCTION

Accurate measurement of blood pressure is essential to classify individuals, to ascertain their blood pressure-related risk, and to guide treatment of hypertensive patients. It is known that 24-hour ambulatory blood pressure monitoring gives a better prediction of risk than office blood pressure measurements and is useful for diagnosing certain specific clinical conditions, such as white-coat hypertension [1,2]. We investigated whether 24-hour ambulatory blood pressure monitoring is also a suitable method for detecting individuals at higher risk among apparently healthy young adults. In a cross-sectional study we compared a group of 30 young adults with a positive parental history of premature cardiovascular disease (CVD) to a sex- and age-matched group of controls. We
hypothesized that the positive parental CVD group would have on average higher arterial blood pressure and higher blood pressure load.

II. METHODS

A. Subjects

The study group consisted of 30 young adults, aged 18-31 years, at least one of whose parents had experienced some form of premature CVD (myocardial infarction, stroke or venous thrombosis). Prematurity of CVD meant manifestation of disease before the age of 55 years for male parents and 65 years for female parents. The control group comprised 30 sex- and age-matched young adults with a negative parental CVD history; their fathers and mothers were at least 55 and 65 years old, respectively. The basic characteristics of both groups are presented in Table 1.

Table 1 General characteristics of the study and control groups

Variable                   Study Group   Control Group   p-value
Number (males/females)     19/11         19/11           1.00
Age (years)                26.2 ± 3.2    26.2 ± 3.0      0.97
Body mass index (kg/m²)    24.6 ± 5.0    22.8 ± 3.6      0.13
Written informed consent was obtained from all the participants. The study had been approved by the Medical Ethics Committee of the Ministry of Health of Slovenia.

B. Basic measurements

All the participants attended the laboratory in the morning, between 7.30 and 8.30 a.m., after an overnight fast. Body weight was measured to the nearest 0.1 kg and body height was measured barefoot to the nearest 0.01 m. Body mass index was calculated as body weight in kg divided by the square of height in m.
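The body mass index computation just described can be written as a one-line function (a trivial Python sketch; the function name and example values are illustrative, not taken from the study):

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI = body weight in kg divided by the square of height in m."""
    return weight_kg / height_m ** 2

# Illustrative subject: 70 kg, 1.75 m
print(round(body_mass_index(70.0, 1.75), 1))  # 22.9
```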
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 357–360, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
C. Office blood pressure measurements

Office systolic and diastolic blood pressures were measured by the conventional auscultatory method, performed by a trained and experienced person using a calibrated mercury sphygmomanometer. The procedure followed well-known recommendations: subjects were instructed not to consume alcohol, caffeine or nicotine prior to the test and were comfortably seated at room temperature, with legs uncrossed and the cuff placed on a relaxed and supported arm, such that the middle of the cuff on the upper arm was at the level of the right atrium [1].

D. Ambulatory blood pressure monitoring

Ambulatory 24-hour blood pressure and heart rate were measured with Spacelabs Medical Inc. 90207 ambulatory blood pressure monitors (Redmond, WA, USA), which passed the validation testing recommended by the British Hypertension Society and the US Association for the Advancement of Medical Instrumentation [3]. The monitor uses the oscillometric technique, which is less susceptible to changes of transducer position over the brachial artery and to external noise, but still requires the subject to be at rest while a measurement is in progress. The device, consisting of an appropriate cuff placed on the non-dominant upper arm and connected by a tube to a small monitor attached to the subject's belt, was prepared and activated by a trained technician. All the participants received oral and written instructions on how to behave during the monitoring, as well as how to react in case of discomfort. Measurements were taken every 20 minutes during daytime and every 30 minutes during night-time over a 24-hour period, preferably on a workday. Data were later analyzed using a software package developed by our institution.
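The measurement schedule above yields on the order of 60-70 readings per day; a small Python sketch of the count (the 16 h daytime / 8 h night-time split is an assumption for illustration, not stated in the paper):

```python
# Expected number of ambulatory readings over 24 h, given the schedule
# in the text: every 20 min during daytime, every 30 min at night.
# The 16 h / 8 h day-night split is an illustrative assumption.
DAYTIME_HOURS, NIGHT_HOURS = 16, 8
daytime_readings = DAYTIME_HOURS * 60 // 20   # one reading per 20 min
night_readings = NIGHT_HOURS * 60 // 30       # one reading per 30 min
total = daytime_readings + night_readings
print(daytime_readings, night_readings, total)  # 48 16 64
```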
Following the exclusion of artefactual readings, we analyzed the following parameters: daytime systolic and diastolic pressure and heart rate, night-time systolic and diastolic pressure and heart rate, 24-hour systolic and diastolic pressure and heart rate, as well as systolic and diastolic blood pressure load [13]. Blood pressure load was defined as the percentage of total correctly recorded measurements over 24 hours that were >140/90 mmHg during awake hours and >120/80 mmHg during asleep hours [2]. The program also provided a plot of the data.

E. Statistical analysis

Results were statistically analyzed using the software package SPSS for Windows, version 11.0. First, descriptive statistics were used for comparison
between the groups. We then compared the parameters of interest (the 24-hour BP monitoring results) using independent t-tests. A p-value of ≤ 0.05 was considered statistically significant.

III. RESULTS

Ambulatory monitoring showed that the positive parental CVD history group had significantly higher mean 24-hour systolic and diastolic, mean daytime systolic and diastolic, and mean systolic and diastolic blood pressure load. Daytime and 24-hour heart rates were also significantly higher in this group. However, mean values of systolic and diastolic pressure measured by the conventional technique did not differ significantly between the groups. Data are presented in Table 2. Individuals with a positive parental CVD history (study group) also had a significantly higher degree of variability of several blood pressure and heart rate parameters. Variability is given as the standard deviation in Table 2.

Table 2 Blood pressure and heart rate parameters, expressed as mean value ± standard deviation. Legend: BP = blood pressure, SBP = systolic blood pressure, DBP = diastolic blood pressure, HR = heart rate.

Variable                        Study Group   Control Group   p-value
Office BP (mm Hg)
  SBP                           125 ± 13      123 ± 12        0.653
  DBP                           78 ± 12       76 ± 9          0.410
24h ambulatory BP monitoring
  24h SBP (mmHg)                123 ± 10      118 ± 6         0.044
  24h DBP (mmHg)                73 ± 7        70 ± 4          0.076
  Daytime SBP (mmHg)            127 ± 12      122 ± 7         0.041
  Daytime DBP (mmHg)            77 ± 8        73 ± 4          0.045
  Night-time SBP (mmHg)         113 ± 9       110 ± 7         0.147
  Night-time DBP (mmHg)         65 ± 6        63 ± 6          0.252
  SBP load (%)                  21 ± 20       10 ± 11         0.020
  DBP load (%)                  11 ± 15       6 ± 6           0.077
  24h HR (beats/minute)         71 ± 8        67 ± 8          0.050
  Daytime HR (beats/minute)     76 ± 9        70 ± 9          0.020
  Night-time HR (beats/minute)  62 ± 8        61 ± 6          0.597
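The blood pressure load defined in the Methods and the independent t-tests used for the group comparisons can be sketched in standalone Python (the study itself used SPSS; the function names and sample data below are illustrative, not the study's recordings):

```python
import math

def systolic_bp_load(readings):
    """Percentage of valid readings exceeding the systolic threshold:
    >140 mmHg while awake, >120 mmHg while asleep.
    `readings` is a list of (sbp_mmHg, awake_bool) tuples."""
    above = sum(1 for sbp, awake in readings
                if sbp > (140 if awake else 120))
    return 100.0 * above / len(readings)

def t_statistic(a, b):
    """Pooled-variance independent two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Illustrative data: 2 of 4 readings exceed their threshold
readings = [(150, True), (135, True), (125, False), (118, False)]
print(systolic_bp_load(readings))  # 50.0
```

The resulting t statistic would then be referred to a t-distribution with na + nb - 2 degrees of freedom to obtain the p-values reported in Table 2.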
IV. DISCUSSION

Our study confirmed that a parental history of premature cardiovascular disease in young adults is indeed related to significantly higher values of arterial blood pressure and blood pressure load, compared to control subjects of the same age. This is an early indicator of accelerated atherosclerosis, which in this case can be explained by the interplay of genetic make-up and the individual's environment; both factors are known to pass commonly from parents to offspring. The heritability of blood pressure has already been demonstrated in twin pairs as well as in the general population [4,5]. The young adulthood of our participants was chosen in order to detect very early changes in the cardiovascular system, which appear in the prehypertensive period, when blood pressure is just beginning to rise but is still within the normal range. Thus, we presume it is less likely that target organs in these subjects had already been damaged. In older subjects, long-lasting blood pressure elevations eventually cause several secondary changes, like established atherosclerosis or nephroangiosclerosis, that tend to raise blood pressure further. In this way, our study highlighted another important clinical application of ambulatory blood pressure monitoring: evaluation of pressure-related risk in young, apparently healthy people whose history or other data reveal an enhanced probability of premature cardiovascular events. Until now, the most common applications of ambulatory blood pressure monitoring have been diagnostic, such as identifying individuals with white-coat hypertension, a non-dipping blood pressure pattern, patients with refractory hypertension, suspected autonomic neuropathy, and patients in whom there was a large discrepancy between clinic and home blood pressure measurements [1,2]. Several prospective studies have documented that the average level of ambulatory blood pressure predicts the risk of morbid events better than office blood pressure [1,6].
However, such research is rarely performed on young and healthy adults. The majority of studies have been performed on patients treated for hypertension or other chronic diseases [7,8]. Most of the important prospective or cross-sectional studies of CVD risk factors in children or young adults used conventional office blood pressure measurements, which are less reliable than 24-hour ambulatory blood pressure measurements [9-12]. Thus, the use of ambulatory blood pressure monitoring is the main advantage of our study. Finally, we detected a higher degree of blood pressure and heart rate variability among the subjects of the study group. The occurrence of blood pressure fluctuations over time has been documented since the 18th century, but the clinical importance of this phenomenon is only now being
recognized [13]. Reports of several studies indicate that diurnal blood pressure variation, in addition to high blood pressure per se, is related to target organ damage and the incidence of cardiovascular events [14]. One of the first important studies to demonstrate a significant increase in cardiovascular mortality with increasing blood pressure and heart rate variability in the general population was the Ohasama study [15]. In the PAMELA study, which investigated a sample of 3200 individuals randomly selected from the general population, the investigators found a significant positive relationship between left ventricular mass index and 24-hour average blood pressure values. They also provided the first demonstration of a positive independent association between left ventricular mass index and blood pressure variability [16]. Some reports have suggested that vascular hypertrophy is the first damage to appear as a consequence of increased blood pressure variability [14]. Early vascular hypertrophy is best revealed by ultrasound measurement of the intima-media thickness of large arteries, preferably the carotid arteries. The well-known European ELSA study showed that not only average 24-h pulse pressure and systolic BP values, but also 24-h BP fluctuations are associated with, and possibly determinants of, the alterations of large artery structure in hypertension [17]. These findings have opened new fields for research. In the future, it will be necessary to answer questions about the prognostic relevance of blood pressure variability [18] and about treatment possibilities for modulation of 24-h blood pressure profiles [19,20]. Ambulatory blood pressure monitoring will undoubtedly be the main investigative method in the increasing number of studies that will try to answer these questions. Whether ambulatory blood pressure monitoring will remain predominantly a scientific method, or whether it will be accepted as a diagnostic tool for a wider range of clinical indications, remains to be seen.
Although it is a noninvasive and usually painless method, ambulatory blood pressure monitoring demands fully compliant behavior of patients or healthy volunteers over the entire 24-hour period, which makes it less convenient as a routine screening test. Another important limiting factor is the financial aspect, since the cost of the procedure is substantially higher than that of office blood pressure measurements.

V. CONCLUSION

Ambulatory blood pressure monitoring is more reliable than office blood pressure measurement in the detection of early, preclinical elevations of blood pressure. 24-h blood pressure levels, as well as the degree and pattern of their diurnal variability, have important implications for long-term
morbidity and mortality. Devices for ambulatory blood pressure measurement are usually sold with software packages that present the data in a variety of ways. It would facilitate practice if the graphic presentation of the data were standardized, as is the case for electrocardiograms.
ACKNOWLEDGEMENT

The study was supported by a grant from the Slovenian Research Agency.

REFERENCES

1. Pickering TG, Hall JE, Appel LJ et al. (2005) Recommendations for blood pressure measurements in humans and experimental animals; Part 1: Blood pressure measurement in humans; A statement for professionals from the Subcommittee of Professional and Public Education of the American Heart Association Council on high blood pressure research. Hypertension 45: 142-161.
2. Dolenc P (2004) Neinvazivno 24-urno merjenje krvnega tlaka [Noninvasive 24-hour blood pressure measurement]. In: Dobovisek J, Accetto R. Arterijska hipertenzija. 5th edition, Lek, Ljubljana, 75-97.
3. O'Brien E, Coats A, Owens P et al. (2000) Use and interpretation of ambulatory blood pressure monitoring: recommendations of the British Hypertension Society. BMJ 320: 1128-1134.
4. Snieder H, Harshfield GA, Treiber FA (2003) Heritability of blood pressure and hemodynamics in African- and European-American youth. Hypertension 41: 1196-1201.
5. Kupper N, Willemsen G, Riese H (2005) Heritability of daytime ambulatory blood pressure in an extended twin design. Hypertension 45: 80-85.
6. Weber MA (2002) The 24-hour blood pressure pattern: does it have implications for morbidity and mortality? Am J Cardiol 89(suppl): 27A-33A.
7. Kennedy BP, Farag NH, Ziegler MG et al. (2003) Relationship of systolic blood pressure with plasma homocysteine: importance of smoking status. J Hypertens 21: 1307-1312.
8. Sundstrom J, Sullivan L, D'Agostino RB et al. (2003) Plasma homocysteine, hypertension incidence, and blood pressure tracking; The Framingham Heart Study. Hypertension 42: 1100-1105.
9. Primatesta P, Falascheti E, Poulter NR (2005) Birth weight and blood pressure in childhood; Results from the Health Survey for England. Hypertension 45: 75-79.
10. Alper AB, Chen W, Yau L et al. (2005) Childhood uric acid predicts adult blood pressure; The Bogalusa Heart Study. Hypertension 45: 34-38.
11. Thomas NE, Baker JS, Davies B (2003) Established and recently identified coronary heart disease risk factors in young people. The influence of physical activity and fitness. Sports Med 33(9): 633-650.
12. Pall D, Katona E, Fulesdi B et al. (2003) Blood pressure distribution in a Hungarian adolescent population: comparison with normal values in the USA. J Hypertens 21: 41-47.
13. Parati G (2005) Blood pressure variability: its measurement and significance in hypertension. J Hypertens 23 (suppl 1): S19-S25.
14. Parati G, Lantelme P (2002) Blood pressure variability, target organ damage and cardiovascular events. J Hypertens 20: 1725-1729.
15. Kikuya M, Hozawa A, Ohokubo T et al. (2000) Prognostic significance of blood pressure and heart rate variabilities. The Ohasama Study. Hypertension 36: 901-906.
16. Sega R, Corrao G, Bombelli M et al. (2002) Blood pressure variability and organ damage in a general population. Results from the PAMELA Study. Hypertension 39: 710-714.
17. Mancia G, Parati G, Hennig M (2001) Relation between blood pressure variability and carotid artery damage in hypertension: baseline data from the European Lacidipine Study on Atherosclerosis (ELSA). J Hypertens 19: 1981-1989.
18. Parati G, Valentini M (2006) Prognostic relevance of blood pressure variability. Hypertension 47: 137-138.
19. Palatini P, Parati G (2005) Modulation of 24-h blood pressure profiles: a new target for treatment? J Hypertens 23: 1799-1801.
20. Kario K (2005) Morning surge and variability in blood pressure: a new therapeutic target? Hypertension 45: 485-486.

Address of the corresponding author:
Ales Zemva
Division of Hypertension, University Medical Centre
Bolnisnica dr. Petra Drzaja
Vodnikova 62
1525 Ljubljana
Slovenia
E-mail:
[email protected]
Application of a time-gated, intensified CCD camera for imaging of absorption changes in a non-homogeneous medium

P. Sawosz, M. Kacprzak, A. Liebert, R. Maniewski
Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Trojdena 4, 02-109 Warsaw, Poland
Abstract— The paper presents the application of a time-gated, intensified CCD camera for imaging of local changes of absorption in a non-homogeneous liquid phantom. The surface of the phantom was illuminated sequentially at 25 points (forming a 5×5 array) by a laser beam at a wavelength of 780 nm generated by a picosecond near-infrared diode laser. The spatial distribution of diffusely reflected photons was measured in reflectance geometry at null source-detector separation. For each position of the laser beam the reflectance was measured for two different time windows, distinctly delayed with respect to the laser pulse. The observation of late photons, which penetrated deeply into the optically turbid medium, made it possible to image an absorbing inclusion (a 10 mm diameter black ball) located at a depth of 15 mm. For each of the two time windows, the single images for all scanned points were summed. The resulting final images allowed localization of the non-homogeneity in the phantom. The study shows that the presented method, based on imaging at null source-detector separation for late time windows, may be applied in tissue absorption measurements, especially in brain oxygenation imaging.

Keywords— time-gated intensified CCD camera, time-resolved imaging, non-homogeneous medium
I. INTRODUCTION

In recent years, optical techniques based on near-infrared spectroscopy have been rapidly developed for medical diagnostics, especially in brain studies [1]. These techniques are non-invasive and could potentially be applied easily in clinical conditions at the bedside. A large number of emission and detection points positioned on the surface of the head makes it possible to image changes of brain oxygenation and/or perfusion and, finally, to localize ischemic areas. Several technical solutions of optical systems for imaging of brain oxygenation changes have been proposed: continuous-wave [2-6], frequency-domain [7, 8] and time-domain [9-14] systems have been reported. Recently, a time-gated CCD camera was applied as a multichannel detector in the construction of a NIR imager [15,16]. Imaging on the CCD array with an intensified, time-gated camera has been proposed for positioning of absorbing and scattering inclusions [17] as well as fluorescent objects [18] in a turbid medium. This imaging technique could potentially increase the spatial resolution of optical methods by combining information on the spatial and temporal distribution of photons remitted from an object of interest. Such a measurement technique can also be used for imaging at null source-detector separation, improving spatial resolution and contrast [19,20]. In the present paper we report on an experiment in which the temporal and spatial distribution of photons re-emitted from a highly scattering turbid medium simulating human tissue was imaged at null source-detector separation. This experiment is a first step in the planned development of a brain oxygenation imaging system based on the time-gated ICCD.

II. EXPERIMENTAL SETUP

The experimental setup for the measurement of the time-resolved spatial distribution of diffuse reflectance from a turbid medium simulating human tissue consisted of the ICCD camera, a diode laser, a delay line and the non-homogeneous liquid phantom. The phantom was a fish tank filled with a milk and water solution, with an absorbing inclusion, a black ball 10 mm in diameter, immersed at a depth of 15 mm. The position of the ball is marked in Fig. 1. A near-infrared picosecond diode laser BHL-600 (Becker&Hickl, Germany) operating at a wavelength of 780 nm and a repetition frequency of 50 MHz was applied; the pulse width was about 100 ps. To acquire pictures at different times with respect to the laser pulse we used the time-gated intensified CCD camera Picostar HR (LaVision, Germany).
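As a sanity check on the timing, the 50 MHz repetition rate leaves 20 ns between consecutive pulses, so the 300 ps gates delayed by 1100 ps and 1400 ps used in the experiment fit comfortably within one laser period. A minimal Python sketch of the arithmetic:

```python
# Timing check for the time-gated acquisition described in the text:
# a 50 MHz pulse repetition rate gives a 20 ns (20000 ps) period,
# so the 300 ps gates delayed by 1100 ps and 1400 ps fall well
# within a single laser period.
REP_RATE_HZ = 50e6
GATE_WIDTH_PS = 300
period_ps = 1e12 / REP_RATE_HZ          # 20000 ps between pulses
for delay_ps in (1100, 1400):
    gate_end_ps = delay_ps + GATE_WIDTH_PS
    assert gate_end_ps < period_ps       # gate closes before next pulse
print(period_ps)  # 20000.0
```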
Fig. 1. Setup for measuring the time-resolved spatial distribution of diffuse reflectance (1. scanning grid, x – position of the black ball, 2. non-homogeneity (black ball), 3. Nikon standard 50 mm objective, 4. ICCD camera, 5. Becker&Hickl near-infrared picosecond diode laser, 6. trigger line).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 410–412, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The phantom was illuminated sequentially at 25 different points, as shown in Fig. 1 (scanning grid); the optical path length was constant for all positions. For each position of the laser beam, the camera grabbed images for two different time windows. The time windows, 300 ps long, were delayed with respect to the laser pulse by 1100 ps and 1400 ps. The CCD data collection time was 1280 ms. The data from the camera were acquired with a PC-class computer (Pentium IV, 3.30 GHz) and the DaVis software v.7.0 provided by the ICCD manufacturer (LaVision, Germany). 12-bit images of 320×240 pixel resolution were recorded. The time between the laser pulse and the opening of the camera shutter was changed using a delay line (Kentech Instruments, UK). The change of delay time was controlled over an RS232 communication line and took about 350 ms.

III. RESULTS

The spatial distribution of diffusely reflected light after its penetration of the highly scattering phantom was measured. The time-resolved experiment allowed us to distinguish photons with respect to their path length; we focused on time windows that were significantly delayed with respect to the laser pulse. We summed the 25 images corresponding to each of the two time windows. The resulting images are presented in Fig. 2 and Fig. 3, for the earlier and later time window respectively.
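Summing the single images over the 25 scanned points, as described above, amounts to an element-wise addition of the recorded frames. A minimal pure-Python sketch (the real frames were 12-bit, 320×240; the tiny 2×2 arrays below are illustrative):

```python
def sum_frames(frames):
    """Element-wise sum of equally sized 2-D frames (lists of rows)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    total = [[0] * cols for _ in range(rows)]
    for frame in frames:
        for r in range(rows):
            for c in range(cols):
                total[r][c] += frame[r][c]
    return total

# Two toy 2x2 "frames" standing in for the 25 per-position images
frames = [[[1, 2], [3, 4]], [[10, 20], [30, 40]]]
print(sum_frames(frames))  # [[11, 22], [33, 44]]
```

In practice one such sum is accumulated separately for each of the two time windows, yielding the two final images.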
IV. DISCUSSION

The presented experiment showed that the measurement of spatially and temporally resolved diffuse reflectance from a highly scattering turbid medium is feasible. By observing only late photons, we could increase the sensitivity of the camera to absorption changes located deep in the medium, and we successfully imaged the non-homogeneity immersed at a depth of 15 mm. For the earlier time window, the black spot corresponding to the position of the ball is more focused; however, for the later time window the contrast between the homogeneous and non-homogeneous areas is higher. This technique also has disadvantages which need to be taken into consideration: primarily, the numerical aperture of such a system is limited, and the switching time between delays is significant compared with the time scale of hemodynamic changes. We conclude that the method based on imaging at null source-detector separation for late time windows may be applied in the development of a brain oxygenation imaging system.
REFERENCES
Fig. 2 Resulting image for the time window delayed by 1100 ps with respect to the laser pulse.
Fig. 3 Resulting image for the time window delayed by 1400 ps with respect to the laser pulse.
1. Litscher, G. and G. Schwarz (1997) Transcranial cerebral oximetry. Pabst Sci. Pub., Lengerich.
2. Siegel, A.M., J.J.A. Marota, and D.A. Boas (1999) Design and evaluation of a continuous-wave diffuse optical tomography system. Optics Express. 4(8): p. 287-298.
3. Boas, D.A., et al. (2001) The accuracy of near infrared spectroscopy and imaging during focal changes in cerebral hemodynamics. Neuroimage. 13(1): p. 76-90.
4. Franceschini, M.A., et al. (2003) Hemodynamic evoked response of the sensorimotor cortex measured noninvasively with near-infrared optical imaging. Psychophysiology. 40(4): p. 548-60.
5. Yamashita, Y., A. Maki, and H. Koizumi (1999) Measurement system for noninvasive dynamic optical topography. Journal Of Biomedical Optics. 4(4): p. 414-417.
6. Kohl-Bareis, M., et al. (2002) Near-Infrared Spectroscopic Topographic Imaging of Cortical Activation. Lecture Notes of ICB Seminar on Laser Doppler Flowmetry and Near Infrared Spectroscopy in Medical Diagnosis, Warsaw.
7. Chance, B., et al. (1998) A novel method for fast imaging of brain function, non-invasively, with light. Optics Express. 2(10): p. 411-423.
8. Danen, R.M., et al. (1998) Regional Imager for Low-Resolution Functional Imaging of the Brain with Diffusing Near-Infrared Light. Photochemistry and Photobiology. 67(1): p. 33-40.
9. Eda, H., et al. (1999) Multichannel time-resolved optical tomographic imaging system. Review Of Scientific Instruments. 70(9): p. 3595-3602.
10. Miyai, I., et al. (2001) Cortical mapping of gait in humans: a near-infrared spectroscopic topography study. Neuroimage. 14(5): p. 1186-92.
11. Selb, J., et al. (2005) Improved sensitivity to cerebral hemodynamics during brain activation with a time-gated optical system: analytical model and experimental validation. J Biomed Opt. 10(1): p. 11013.
12. Kacprzak, M., A. Liebert, and R. Maniewski (2005) A time-resolved NIR topography system for two hemispheres of the brain, in European Conferences on Biomedical Optics. Munich, Germany.
13. Wabnitz, H., et al. (2006) Depth-selective analysis of responses to functional stimulation recorded with a time-domain NIR brain imager, in Biomedical Optics 2006 Technical Digest (Optical Society of America, Washington, DC). p. ME34.
14. Contini, D., et al. (2006) Design and characterization of a two-wavelength multichannel time-resolved system for optical topography. Biomedical Optics Technical Digest (Optical Society of America, Washington, DC).
15. Selb, J., et al. (2006) Time-gated optical system for depth-resolved functional brain imaging. Journal of Biomedical Optics 11(4), 044008 (July/August).
16. Selb, J., et al. (2005) Improved sensitivity to cerebral hemodynamics during brain activation with a time-gated optical system: analytical model and experimental validation. Journal of Biomedical Optics 10(1), 011013 (January/February).
17. D'Andrea, C., et al. (2003) Time-resolved optical imaging through turbid media using a fast data acquisition system based on a gated CCD camera. Journal Of Physics D-Applied Physics. 36(14): p. 1675-1681.
18. Laidevant, A., et al. (2006) Time-Resolved Imaging of a Fluorescent Inclusion in a Turbid Medium Using a Gated CCD Camera, in Biomedical Optics 2006 Technical Digest (Optical Society of America, Washington, DC). Fort Lauderdale, Florida, USA.
19. Sase, I., et al. (2006) Noncontact backscatter-mode near-infrared time-resolved imaging system: preliminary study for functional brain mapping. Journal of Biomedical Optics 11(5), 054006 (September/October).
20. Torricelli, A., et al. (2005) Time-resolved reflectance at null source-detector separation: improving contrast and resolution in diffuse optical imaging. Phys Rev Lett 95, 078101.
Author: Piotr Sawosz
Institute: Institute of Biocybernetics and Biomedical Engineering PAS
Street: Ks. Trojdena 4
City: Warsaw
Country: Poland
Email:
[email protected]
Bluetooth Portable Device for Continuous ECG and Patient Motion Monitoring During Daily Life

P. Bifulco¹, G. Gargiulo², M. Romano¹, A. Fratini¹ and M. Cesarelli¹
¹ Biomedical Engineering Unit – Dept. of Electronic and Telecommunication Engineering, University "Federico II", Naples, Italy
² The University of Sydney, Electrical and Information Engineering School, Sydney, Australia
Abstract— Continuous patient monitoring during daily life can provide valuable information to different medical specialties. Indeed, long recordings of cardiac-related signals such as ECG and respiration, together with other information such as body motion, can improve diagnosis and allow monitoring of the evolution of many widespread diseases. Key issues for portable or even wearable biomedical devices are power consumption, long-term sensors, comfortable wearing, and easy, wireless connectivity. Within this scenario, it is valuable to realize prototypes that make use of novel electronic technologies and commonly available communication technologies, in order to assess the practical use of long-term personal monitoring and to foster new ways of providing healthcare services. We realized a small, battery-powered, portable monitor capable of recording the ECG and three-axis body acceleration and of continuously transmitting them wirelessly to any Bluetooth device, including PDAs and mobile phones. The ECG front end offers ultra-high input impedance, allowing the use of dry, long-lasting electrodes such as conductive rubbers or novel textile electrodes that can be embedded in clothes. A small-size MEMS three-axis accelerometer was also integrated. The patient monitor incorporates a microprocessor that controls 12-bit A/D conversion of the signals at programmable sampling frequencies (e.g. 100 Hz) and drives a Bluetooth module capable of reliably transmitting real-time signals within a 10 m range. All circuitry can be powered by a standard mobile-phone Ni-MH 3.6 V battery, which can sustain more than seven days of continuous operation by using the Bluetooth Sniff mode to reduce TX power. At the moment we are developing dedicated software to process the data and to extract concise parameters valuable for medical studies.

Keywords— Personal monitoring device, ECG, 3-axes accelerometer, Bluetooth, biomedical instrumentation.
I. INTRODUCTION

For the management of various pathologies it can be very important to monitor the patient for long periods during normal daily activities [1-3]. For example, continuous personal monitoring of chronic patients can reduce hospitalizations and improve patients' quality of life; long-term cardiac monitoring (e.g. ECG) can help in the diagnosis and identification of syncope and other paroxysmal arrhythmias; long-term monitoring of a patient's activities can help in the management of elderly people; and by combining cardiac activity (e.g. heart rate)
and body motion, a patient's physical activity and energy expenditure can be estimated [4], and human performance in particular conditions and/or environments (e.g. athletes, divers) can be evaluated. It is also worth mentioning that continuous monitoring can help drive and regulate therapies and treatments (e.g. blood glucose monitoring and insulin injection control). To accomplish these tasks, personal patient monitoring equipment has to comply with specific requirements: reduced dimensions, portability and/or wearability (light weight, specific sensors, body compatibility, etc.), long-term monitoring of signals or parameters (battery consumption, long-term electrodes, etc.), continuous signal acquisition with real-time processing and feature extraction (A/D conversion, microprocessors, software, etc.), transmission capability (band, range, wireless, etc.), data integrity and security (communication protocols, identification, encryption, etc.), and compliance with medical device regulations (electrical safety, electromagnetic compatibility, etc.) [7]. Wireless monitoring devices are becoming more and more available on the market: hospital patient monitors, ambulance or portable equipment, some homecare devices and, more generally, devices for everyday life, which often use available telecommunication channels to communicate with the external environment. Besides, the spread of personal computational devices such as mobile phones and PDAs, embedding wireless communication technology, offers a great advantage in making patient monitoring devices truly personal and truly wearable. In particular, the Bluetooth standard [8-10] offers important advantages: operation in the ISM (Industrial, Scientific and Medical) band, low cost, low EM interference [11], reduced power consumption, confidentiality of data, small transmitter dimensions, and the capability of generating small piconets of several devices.
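To illustrate why such signals fit easily into a Bluetooth link, a back-of-envelope payload estimate follows. The four-channel count (one ECG lead plus three acceleration axes at 12 bits and 100 Hz) matches the prototype described in the abstract; the ~700 kbit/s Bluetooth application throughput figure and the neglect of protocol overhead are simplifying assumptions:

```python
# Rough raw-payload estimate for the monitor: one ECG lead plus a
# 3-axis accelerometer, 12-bit samples at 100 Hz. Protocol overhead
# is ignored; the Bluetooth throughput figure is an assumption.
CHANNELS = 4            # 1 ECG + 3 acceleration axes
BITS_PER_SAMPLE = 12
SAMPLE_RATE_HZ = 100
BLUETOOTH_THROUGHPUT_BPS = 700_000   # assumed Bluetooth 1.x figure

payload_bps = CHANNELS * BITS_PER_SAMPLE * SAMPLE_RATE_HZ
print(payload_bps)  # 4800 bit/s, a small fraction of the link capacity
```

The large margin is what makes duty-cycled, power-saving transmission modes such as Sniff practical for this application.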
Also it is embedded in most of portable, palm computers and mobile phones and already used in a great number of wearable devices (e.g. mobile phones wireless headsets). The emerging Zig-Bee standard [12] offers enhanced capabilities especially in term of power consumption, number of connected devices, etc. but, currently, it is not so widespread as Bluetooth. Taking into account the mentioned requirements, a small prototype personal monitor, capable to record one or more ECG leads, body 3-axes acceleration, and an optional photo-plethismograph (PPG)
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 369–372, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
370
P. Bifulco, G. Gargiulo, M. Romano, A. Fratini and M. Cesarelli
has been realized and tested in different environments. C++ software provides signal processing and plotting; this software can be ported to PDAs or mobile phones.

II. MATERIALS AND METHODS

A. ECG monitoring

Long-term body-potential monitoring requires specific solutions. In particular, electrode stability over time is crucial [5]: wet electrodes (e.g. the common Ag/AgCl disposable electrodes also employed in Holter ECG recording) should be avoided because of the progressive drying of the conductive gel. An alternative is the use of polarizable electrodes (e.g. platinum) or dry electrodes such as conductive rubbers, which are easily tolerated by the skin, flexible and therefore often utilized in sports medicine and long-term recording. The latter, however, offer a higher impedance than the others, which implies a higher input impedance for the amplification stage. Furthermore, novel designs have made available textile electrodes that can be embedded in clothes, significantly improving daily usability; this solution may again result in high electrode impedance. In addition, it is worth remembering that the electrode current generates overpotentials. These considerations suggest keeping the amplifier input impedance as high as possible [6]. Another significant problem, especially in daily-life recordings, is electrode motion artifact. It is well known that an unstable connection between electrode and skin, but also skin stretch and motion in general, causes relatively large electrode potential variations, which degrade bio-potential recording quality. Obviously, as mentioned above, another key issue is circuit power consumption: it has to be kept as low as possible to extend battery life. We decided to supply our circuit with a common Ni-MH 3.6 V battery (like those commonly used in mobile phones), for which a capacity of 1000 mAh is usual.
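The remark above that high electrode impedance demands a high amplifier input impedance can be illustrated with a simple voltage-divider model. This is an illustrative sketch, not part of the paper; the electrode impedance values are assumed orders of magnitude, while the 10^14 Ω figure matches the front-end input impedance reported in the Results.

```python
# Illustrative voltage-divider model (not from the paper): the skin potential
# is attenuated by the ratio of amplifier input impedance Z_in to the total
# series impedance Z_in + Z_el of the electrode.
def recorded_fraction(z_electrode_ohm, z_input_ohm):
    """Fraction of the source signal that appears at the amplifier input."""
    return z_input_ohm / (z_input_ohm + z_electrode_ohm)

# Assumed orders of magnitude for wet, dry-rubber and textile electrodes;
# 1e14 ohm is the front-end input impedance reported in the Results.
for z_el in (10e3, 1e6, 100e6):
    print(f"Z_el = {z_el:.0e} ohm -> fraction = {recorded_fraction(z_el, 1e14):.10f}")
```

With such a large input impedance, even a 100 MΩ textile electrode attenuates the signal by only about one part per million, which is the "independence from the electrode used" mentioned later.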
In accordance with these considerations, an ECG front end was designed to provide a very high input impedance (also obtaining general independence from the electrodes used), a gain of about 1000 V/V, a current consumption < 1 mA, a single 3.6 V power supply, over-voltage protection, a large dynamic range and an appropriate monitoring-ECG frequency band.

B. Body motion

To obtain concise information about patient motion and estimate physical activity, a novel MEMS (Micro-Electro-Mechanical Systems) 3-axis accelerometer was employed. MEMS technology is based upon micromachined sense elements, usually silicon, used to create moving
structures. The mechanical properties of silicon (stronger than steel at only a third of the weight), combined with microelectronics, allow electrical signals to be generated by the moving structures. Typically, a MEMS accelerometer consists of interlocking fingers that are alternately moving and fixed. Acceleration is sensed by measuring the capacitance of the structure, which varies in proportion to changes in acceleration. A capacitive approach offers several benefits compared to the piezoresistive sensors used in many other accelerometers. In general, gaseous-dielectric capacitors are relatively insensitive to temperature. Although the spacing changes with temperature due to thermal expansion, the low thermal expansion coefficient of many materials can produce a thermal coefficient of capacitance about two orders of magnitude smaller than the thermal resistivity coefficient of doped silicon. Capacitance sensing therefore has the potential to provide a wider temperature range of operation, without compensation, than piezoresistive sensing. Moreover, most available capacitive sensors respond to DC accelerations as well as dynamic vibration. These characteristics of MEMS capacitive accelerometers, combined with their extremely small dimensions (a few mm), light weight (a few grams) and low power consumption, make such sensors a convenient choice for the design of personal biomedical devices.

C. Photo-plethysmograph and temperature sensor

Photoplethysmographic (PPG) and SpO2 sensors are nowadays integrated in most patient monitors. The use of light (red and infrared) through the patient's skin to estimate changes in artery diameter, which in turn depend on blood pressure, provides a non-invasive way to gather information such as heart rate, percentage of oxygenated haemoglobin and qualitative data about blood pressure. However, such sensors are very sensitive to motion and usually require relatively high power consumption.
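The capacitive sensing principle described in Section B above can be illustrated with an idealized parallel-plate model; the plate area and gap below are assumptions chosen for illustration, not data for the actual sensor.

```python
# Idealized parallel-plate model of capacitive sensing: C = eps0 * A / d.
# Plate area and gap are assumptions for illustration, not MMA7260Q data.
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    return EPS0 * area_m2 / gap_m

c_rest = plate_capacitance(1e-6, 2.0e-6)    # 1 mm^2 plates, 2 um gap at rest
c_moved = plate_capacitance(1e-6, 1.9e-6)   # proof mass displaced by 0.1 um
print(f"capacitance change: {(c_moved - c_rest) * 1e15:.0f} fF")
```

A sub-micrometre displacement of the proof mass produces a capacitance change of only hundreds of femtofarads, which is why the sensing electronics are integrated on the same die.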
Integrated temperature sensors provide a reliable, intrinsically linear voltage (or current) proportional to temperature while absorbing extremely low currents; uncalibrated sensors usually offer an accuracy of a few tenths of a degree. Such sensors (a few mm in size) can easily be integrated with other circuitry and can provide information about the patient's skin temperature, which in turn depends on local peripheral circulation, muscle activity, etc.

D. Signal acquisition and transmission

Modern microcontrollers can easily perform channel multiplexing, analog-to-digital conversion, data packet and transmission protocol stack formation, and can also drive RF circuitry. In order to resolve small signal variations, a 12-bit AD converter was employed.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Bluetooth Portable Device for Continuous ECG and Patient Motion Monitoring During Daily Life
371
Fig. 1 Personal monitor device: block schematic

An integrated, commercially available Bluetooth transmitter was utilized: it operates at a frequency of 2.4 GHz (ISM band), the antenna is integrated on the circuit board, and it allows a 10 m operative range. However, in normal functioning mode the Bluetooth transmitter draws a relatively high current (about 50 mA) from the batteries, reducing their life to a few hours of continuous transmission. A solution is temporary storage of data in the internal memory of the personal device and intermittent data transfer using a Bluetooth power-saving modality, obtaining a good compromise with near-real-time data transfer. Bluetooth offers three different current-saving modes: Hold, Sniff and Park (see the Bluetooth specification). In Sniff mode, a slave is only active periodically; during the active phase, the slave can receive and send data as usual. The master, knowing the active-phase interval, only addresses the slave during this period, which explains the lower current consumption. Of course, the achievable bit rate is also reduced by the pause time. The global architecture of the realized device is depicted in the block schematic of Fig. 1.
Fig. 2 Picture of the realized circuit of the patient monitoring device

The power supply of all circuitry was regulated to 3.3 V. The Bluetooth module current consumption was about 40 mA, which could be lowered to 6 mA using the Sniff-mode power saving, still allowing a continuous transmission of 10 kbps. Fig. 2 shows the entire circuit realized (PPG excluded) compared with one-cent coins. The circuit size is 20 by 43 by 5 mm. Software was designed to continuously receive data from the Bluetooth personal monitor. Figs. 3 and 4 show the raw signals received (at 5 m distance); the color code is: ECG (blue); latero-lateral acceleration (cyan); antero-posterior acceleration (green); caudo-cranial acceleration (red); PPG (purple), when available. Dry silicone rubber electrodes were used in most of the recordings. Due to the extremely high input impedance, some recordings were successfully performed with the patient immersed in water (not the transmitter!), using rough elastic bands with thin embedded copper wires as electrodes.
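Using the figures reported in the text (a 1000 mAh cell, about 40 mA during continuous transmission, about 6 mA in Sniff mode), a back-of-the-envelope battery-life estimate can be sketched; self-discharge and the consumption of the rest of the circuitry are ignored for simplicity.

```python
# Back-of-the-envelope battery-life estimate using the figures in the text:
# 1000 mAh cell, ~40 mA during continuous Bluetooth transmission, ~6 mA in
# Sniff mode. Self-discharge and the rest of the circuitry are ignored.
def battery_life_hours(capacity_mah, current_ma):
    return capacity_mah / current_ma

print(f"continuous: {battery_life_hours(1000, 40):.1f} h")   # 25.0 h
print(f"sniff mode: {battery_life_hours(1000, 6):.1f} h")    # ~166.7 h
```

Even this rough estimate shows why Sniff mode matters: the transmitter's share of the battery budget stretches from roughly a day to roughly a week.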
III. RESULTS

The features of the different modules of the realized prototype device are reported here. The ECG front end shows the following characteristics: input impedance > 10^14 Ω; CMRR: 90 dB; gain: 730 V/V; bandwidth: 0.38-43 Hz; noise (0.1-10 Hz): 5 µVpp; power supply: 3.3 V; supply current: 1 mA. An MMA7260Q Freescale MEMS accelerometer was used to measure 3-axis acceleration (range ±1.5 g), with a frequency response from DC (gravity is measured) to 200 Hz, a 3.3 V power supply and a supply current of 0.5 mA max. A photoplethysmograph and an IC temperature sensor were also assembled and optionally added to the realized devices. The signal ADC and the Bluetooth transmission module were acquired on the market.
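Combining the 12-bit converter, the 3.3 V supply (taken here as the ADC reference, an assumption) and the 730 V/V front-end gain reported above gives a rough idea of the input-referred resolution of the acquisition chain.

```python
# Input-referred resolution of the acquisition chain, assuming the ADC
# reference equals the 3.3 V supply (an assumption) and using the ~730 V/V
# front-end gain given in the Results.
vref = 3.3        # V
nbits = 12
gain = 730.0      # V/V

lsb_at_adc = vref / 2**nbits          # smallest step at the ADC input
lsb_at_input_uv = lsb_at_adc / gain * 1e6
print(f"LSB: {lsb_at_adc*1e3:.3f} mV at the ADC, "
      f"{lsb_at_input_uv:.2f} uV referred to the electrodes")
```

The resulting input-referred step of roughly 1 µV is well matched to the 5 µVpp front-end noise quoted above, so the converter does not limit the recording quality.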
Fig. 3 Example of raw signals received from a standing patient
Fig. 4 Example of raw signals received from a patient performing a series of squatting physical exercises (3 cycles are evident on the accelerations)

IV. DISCUSSION AND CONCLUSIONS

A small, battery-powered (standard mobile phone cell) portable monitor, capable of recording the ECG and three-axis body acceleration and of continuously transmitting wirelessly (10 m range) to any Bluetooth device, including PDAs and cellular phones, was realized. The ECG front end offers a considerably high input impedance, allowing a sort of independence from the electrode type (dry, long-lasting electrodes such as conductive rubbers were used). A small-size MEMS 3-axis accelerometer was also integrated. At present we are adapting the software for execution on PDAs and mobile phones. New software features are also being included to process the data and to extract concise parameters valuable for medical studies (e.g. HRV, respiration rate extraction from the ECG, physical activity, body position, etc.). In particular, the estimation of standard body positions (upright, supine, prone, etc.) and activities (walking, running, sleeping) is being developed. Trials employing textile sensors and device modifications for integration into wearable systems are scheduled. Further studies are being started to adapt the device to monitor patients during sleep, to monitor athletes and to record the fetal ECG. Other research will concentrate on evaluating different sensors (such as blood pressure, glucose, etc.), in order to design personal monitors for specific pathologies or for targeted studies.

ACKNOWLEDGMENT

The authors gratefully thank Analog Devices, Burr-Brown, Freescale Semiconductor and Maxim, which kindly provided IC samples. Special thanks go to Prof. A. Luciano for helpful discussions and suggestions.

REFERENCES

1. Fratini A, Bifulco P, Bracale M, Cesarelli M (2005) A prototype wireless personal ECG monitoring device connected via Bluetooth. IFMBE Proc. vol. 11, 3rd European Medical & Biological Engineering Conference, Prague, Czech Republic, Nov. 20-25, 2005
2. Fratini A (2005) Degree thesis n. 355: 'Bluetooth patient wireless telemetry'. Biomedical Eng. Unit - D.I.E.T., University "Federico II" of Naples
3. Gargiulo G (2006) Degree thesis n. 378: 'Design and development of a prototype for continuous acquisition and processing of biomedical signals'. Biomedical Eng. Unit - D.I.E.T., University "Federico II" of Naples
4. Strath SJ, Brage S, Ekelund U (2005) Integration of physiological and accelerometer data to improve physical activity assessment. Med Sci Sports Exerc 37(11 Suppl):S563-71
5. Searle A, Kirkup L (2000) A direct comparison of wet, dry and insulating bioelectric recording electrodes. Physiol Meas 21:271-283
6. Scheer HJ, Sander T, Trahms L (2006) The influence of amplifier, interface and biological noise on signal quality in high-resolution EEG recordings. Physiol Meas 27:109-117
7. Lin YH, Jan IC, Ko PC, Chen YY, Wong JM, Jan GJ (2004) A wireless PDA-based physiological monitoring system for patient transport. IEEE Trans Inf Technol Biomed 8(4):439-47
8. SIG Bluetooth (2001) Specification of the Bluetooth System - Core, version 1.1, February 2001
9. Bray J, Sturman C. Bluetooth: Connect without cables. Prentice Hall
10. Bluetooth, the official Bluetooth website: http://www.bluetooth.com
11. COMAR Technical Information Statement (2000) Human exposure to radio frequency and microwave radiation from portable and mobile telephones and other wireless communication devices
12. ZigBee Alliance at http://www.zigbee.org

Address of the corresponding author:
Author: Paolo Bifulco
Institute: University 'Federico II' of Naples
Street: Via Claudio, 21
City: (I-80125) Napoli
Country: Italy
Email: [email protected]
Clinical implication of pulse wave analysis

R. Accetto1, K. Rener1, J. Brguljan-Hitij1, B. Salobir1
1 University Medical Center Ljubljana, Division of Hypertension, Ljubljana, Slovenia

Abstract— Conventional blood pressure measurement cannot explain the link between hypertension and cardiovascular diseases. The missing link is arterial stiffness, which can be measured by noninvasive applanation tonometry. Although a well-known phenomenon, for technological reasons it was not used clinically for diagnostic purposes. With computer and other technology we are able to detect and analyse the peripheral pulse wave and the central aortic pulse wave. The central aortic pulse wave is a function of arterial stiffness. The process by which the arterial system interacts with the left ventricle and the coronary arteries can be demonstrated by analysing the aortic root pressure waveform. In the young it is common to see no or small augmentation, in contrast to older persons. Examples are presented.
Keywords— Pulse wave, applanation tonometry.

I. RISK FACTORS

Cardiovascular diseases are one of the main causes of death in western industrialised countries, as in Slovenia. The exact etiology is not known, but we know that a growing number of risk factors, including hypertension, diabetes, smoking, dyslipidemia, etc., lead to heart attacks, heart failure and stroke. Blood pressure is usually measured by the noninvasive auscultatory method introduced by Riva-Rocci and Korotkoff more than 100 years ago, and by the newer oscillometric method. With these methods we measure the arterial pressure in the brachial artery, since we use an upper-arm cuff. Registration of the arterial pulse was used for clinical diagnosis in the mid-to-late nineteenth century, when the first descriptions of changes in the shape of the pulse with age were given (1). The link between risk factors and cardiovascular disease is arterial stiffness. It can be increased by three mechanisms:
1. A breakdown of elastin fibres
2. Damage to the endothelium/smooth muscle mechanism
3. An increase in mean arterial pressure
The process by which the arterial system interacts with the left ventricle and the coronary arteries can be demonstrated by analysing the aortic root pressure waveform.

II. APPLANATION TONOMETRY

The development of the hand-held tonometry probe means a revival of pulse wave analysis in clinical practice. It is simple to use, noninvasive and accurate. The principle of applanation tonometry is partial compression of an artery against a hard structure. A small sensor detects the force on the artery wall (2,3). We use an applanation tonometer produced by SphygmoCor (Fig. 1).
Fig. 1. Applanation tonometer
The SphygmoCor system combines the actual pulse recorded at the radial artery with the properties of the transfer function between the aorta and the radial artery to estimate central aortic pressure. The radial waveform is calibrated using systolic and diastolic pressure values from conventional cuff measurements. An average waveform is calculated from the ensemble average of a series of contiguous pulses (4,5).

III. AORTIC PRESSURE WAVEFORM

The shape of the aortic pressure pulse is a result of ventricular ejection and the physical properties of the arterial system. Normally, there is wave reflection. In the absence of wave reflection, the shape of the pressure wave during systole is determined by the ejection wave and the elastic and geometric properties of the ascending aorta. If wave reflection occurs during systole, it increases the pressure against which the ventricle has to eject blood. Knowledge of the pressure waveform facilitates analysis of the coupling between the ejecting heart and the pressure load (Fig. 2).
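The ensemble-averaging step mentioned above can be sketched in a few lines; this is a simplified illustration (the actual SphygmoCor processing, including the aorta-radial transfer function, is proprietary and not reproduced here), and it assumes the pulses have already been segmented and resampled to a common length.

```python
# Simplified ensemble averaging of contiguous pulses: equal-length beats are
# averaged sample-by-sample into one representative waveform. The actual
# SphygmoCor processing (including the aorta-radial transfer function) is
# proprietary and not reproduced here.
def ensemble_average(pulses):
    n = len(pulses)
    length = len(pulses[0])
    assert all(len(p) == length for p in pulses), "resample beats first"
    return [sum(p[i] for p in pulses) / n for i in range(length)]

beats = [[0, 1, 3, 2, 1],
         [0, 1, 5, 2, 1],
         [0, 1, 4, 2, 1]]
print(ensemble_average(beats))   # -> [0.0, 1.0, 4.0, 2.0, 1.0]
```

Averaging over a series of beats suppresses beat-to-beat noise, which is why a representative waveform rather than a single pulse is used for the analysis.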
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 354–356, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 4. Pulse wave analysis in an older woman (RT, 74 years) (Klinični oddelek za hipertenzijo, 2007)
Fig. 2. Pulse wave characteristics. P1: first systolic ejection peak; P2: the systolic peak; ΔP: augmentation pressure; LVET: left ventricular ejection time
The difference between P1 and P2 is the absolute augmentation, from which an augmentation index can be calculated, related either to P1 or to the pulse pressure (systolic blood pressure - diastolic blood pressure). Arterial stiffness has a major effect on the aortic pulse wave. In the young it is common to see no or small augmentation, as seen in Fig. 3, in contrast to older persons (Fig. 4). In a younger person the radial peak is narrow and the late systolic shoulder in the aortic pulse is lower than the early systolic peak. In an older person there is an increased late systolic shoulder in the radial pulse and increased late systolic augmentation in the aortic pulse. Augmentation pressure during systole produces a different loading pattern on the myocardium, even if peak systolic values are identical.
IV. CONCLUSIONS

Applanation tonometry is a noninvasive method for detecting and analysing the pulse wave. By detecting the peripheral (radial) pulse wave, the central aortic pulse wave can be calculated. Despite an identical radial pulse wave, the central aortic pulse wave can be different, influenced by age, hypertension, diabetes and other risk factors which influence arterial stiffness.
REFERENCES
Fig. 3. Pulse wave analysis in a young woman (KR, 30 years) (Klinični oddelek za hipertenzijo, 2007)
1. Mohamed FA. The physiology and clinical use of the sphygmograph. Medical Times and Gazette 1872;1:62. In: A clinical guide. Pulse wave analysis. SphygmoCor, Sydney, Australia, 2006
2. Nichols WW, O'Rourke MF. McDonald's Blood flow in arteries. Theoretical, experimental and clinical principles. 4th edition. Arnold, London, 1998
3. Kelly R, Hayward C, Avolio A et al. Non-invasive determination of age-related changes in the human arterial pulse. Circulation 1989;80:1652-1659
4. Karamanoglu M, O'Rourke MF, Avolio AP et al. An analysis of the relationship between central aortic and peripheral upper limb pressure waves in man. Eur Heart J 1993;14:160-7
5. Chen CH, Fetics B, Nevo E et al. Estimation of central aortic pressure waveform by mathematical transformation of radial tonometry pressure. Validation of generalized transfer function. Circulation 1997;95(7):1827-36
Address of the corresponding author:
Author: doc. dr. Rok Accetto, dr. med.
Institute: University Medical Centre Ljubljana, Division of Hypertension
Street: Vodnikova 62
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
Control Abilities of Power and Precision Grasping in Children of Different Ages

B. Bajd and L. Praprotnik

University of Ljubljana, Faculty of Education, Ljubljana, Slovenia

Abstract— The aim of our study was to assess grip force control under visual feedback in children of three age groups. We designed a tracking-based assessment system. The grip-measuring device developed was used as an input to a tracking task in which the children applied the grip force according to the visual feedback from the computer screen. The evaluation was performed in groups of healthy 6-, 9- and 14-year-old children. In our investigation we used three different target signals: randomized ramp, sinusoidal and rectangular signals. The children performed the tasks using the lateral precision grip and the cylindrical power grip of their dominant hand. The results show that the relative root mean square error (rrmse) between the target and the measured response noticeably decreased with increasing age. The results showed no significant differences in performance between the precision and power grip.

Keywords— human hand, power grip, precision grip, motor development
I. INTRODUCTION

Grasping is defined as the application of functionally effective force by the hand to an object to accomplish a task within given constraints [1]. When an object is grasped, the fingers have to apply forces that satisfy the functional constraints of the task and the physical constraints of the object. The key goal in most grasping tasks is to maintain a stable grip by adapting the contact forces of the fingers and the hand. Based on the functional properties of the task, grip types can be divided into precision and power grips [2]. When the emphasis of the task is on strength and stability of the object, power grips are used (e.g. holding a hammer). The object is grasped between the fingers and palm to achieve high stability and to prevent slippage. Precision grips are used when high dexterity and manipulability of the grasped object are required (e.g. grasping a pencil). In a precision grip the object is grasped between the tips of the thumb and the opposing fingers, providing high compliance and tactile feedback during manipulation. An important factor affecting the grasping of an object is tactile sensing of the force applied by the fingertips and other parts of the hand. During grasping, finger forces are controlled by the central nervous system, which regulates the activity of the hand and arm muscles to act in synergy. The
central nervous system receives dynamic feedback information from the visual sensors and from other exteroceptive and proprioceptive body sensors while regulating the motor output. The development of the sensory-motor functions shaping hand skills begins in humans at nursery age. Voluntary grasping develops at 4 months of age and the first precision grasping appears at the age of 10 months [3,4]. Grasping and manipulative skills develop further in subsequent years. The sensory and motor functions are enhanced during childhood until they become fully developed. In our previous research we assessed grip force control in a group of 10-year-old children and a group of adults. The results of the children showed much larger variability among subjects as compared to the adults. The children produced more than twice as large tracking errors, suggesting less developed grip force control in dynamic isometric tasks. In both groups, no significant difference was found in force control between the dominant and non-dominant hand [5]. In another study our aim was to assess how grip force control under visual feedback is affected in children with Down syndrome. The results showed that the healthy children were able to quickly understand the tracking task and performed all tasks with good accuracy. The children with Down syndrome required more time to adjust to the tasks [6]. The aim of the present study was to evaluate the differences in control abilities of power and precision grasping in three age groups: 6-, 9- and 14-year-old children.

II. METHODS AND MEASUREMENTS

Tracking tasks were applied when assessing the grasping abilities in the three groups of children. Tracking tasks are visually guided motor tasks which require a person to track a presented target by applying grasping force. The force output was presented on a computer screen simultaneously with the target.
The tracking task focused on both spatial accuracy, where accuracy of the position relative to the target was emphasized, and temporal accuracy, where the rate of tracking was important. The dynamic targets moved according to randomized sinusoidal, ramp and rectangular signals. The selection of the target signal depends on the purpose of the assessment. The sinusoidal targets are aimed at assessing tracking accuracy and endurance.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 365–368, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

The ramp targets are used to evaluate motor activity with a constant output rate and also muscle fatigue. The rectangular targets are aimed at assessing the performance of predictive behavior and temporal parameters of the sensory-motor system (e.g. response time). The accuracy of tracking was assessed by the relative root mean square error (rrmse) between the target and the measured response. Figure 1 shows the basic scheme of the grip force tracking system used in our studies [7]. For the assessment of grip force control, the child was presented with a target signal and the measured response on a computer screen. The target signal was shown in blue and the force response in red. The vertical position of a blue ring, located in the center of the screen, corresponded to the current value of the target, and the position of a red spot corresponded to the applied grip force in real time. The red spot moved upwards when force was applied and returned to its initial position when the grip was released. The aim of the tracking task was to track the target as accurately as possible by adapting the force on the grip-measuring device. The complexity of the task was adjusted by selecting the shape of the target signal (e.g. ramp, sinusoidal or rectangular), setting the level of the target force and changing the dynamic parameters (e.g. frequency, force rate). The measuring system (Fig. 2) was based on a compact device connected to a computer through a standard parallel port. The system consists of two force-measuring units of different shapes (a cylinder and a thin plate), which are connected to a personal computer through an interface box. Each device consists of a single-point load
cell (PW6KRC3 or PW2F-2, HBM GmbH, Darmstadt, Germany) mounted in a metal construction. The shape and size of the measuring units are similar to objects used in daily activities (e.g. a cup and a key). The cylindrical unit allows the assessment of power grasp forces up to 300 N with an accuracy of 0.02% over the entire measuring range. The second unit (precision grasp) is made up of two metal parts which shape into a thin plate at the front end, resembling a flat object (e.g. a key). The load cell used can measure forces up to 360 N with an accuracy of 0.1%. The electronic circuit of the interface box consists of an amplifier with a supply voltage stabilizer and an integrated 12-bit A/D converter (MAX197, Maxim Integrated Products, Inc., Sunnyvale, CA, USA) capable of sending data to the parallel port of a personal computer. The maximal supported sampling frequency of the force measurement is 1 kHz. Grip force control was evaluated while tracking three different targets: ramp, rectangular and sinusoidal. The amplitude of the rectangular and ramp signals changed randomly during the test. When applying the sinusoidal target, the frequency increased randomly with time. During the test the subject was seated in front of the computer screen on a chair with adjustable height. The grip-measuring device was positioned at the edge of the table in the proximity of the subject's hand. The peak forces of the target signals were set at about 10% of the average maximal grip force. The duration of each tracking task was 60 seconds. The preliminary study was performed in three groups of randomly selected children of both sexes. In the first group there were five 6-year-old children, while in the second and third groups the ages were 9 and 14 years respectively.
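The error measure described above, RMS error between target and response normalized by the peak of the target, can be sketched as a small pure-Python function; the exact normalization used in the study may differ.

```python
import math

# Sketch of the tracking-error measure: RMS error between target and
# response, normalized by the peak absolute value of the target. The exact
# normalization used in the study may differ.
def rrmse(target, response):
    rmse = math.sqrt(sum((t - r) ** 2 for t, r in zip(target, response))
                     / len(target))
    return rmse / max(abs(t) for t in target)

target = [0.0, 5.0, 10.0, 5.0, 0.0]     # illustrative ramp target (N)
response = [0.0, 4.0, 11.0, 5.5, 0.5]   # illustrative tracking response
print(f"rrmse = {rrmse(target, response):.3f}")
```

A perfect tracking run yields an rrmse of zero; larger values indicate larger normalized deviations from the target, which is the quantity compared across the age groups in the Results.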
Fig. 1 Grip force tracking system used for the assessment of grip force control
Fig. 2 Power and precision grip measuring environment
Control Abilities of Power and Precision Grasping in Children of Different Ages
367
III. RESULTS

Figure 3 shows the average tracking errors and appertaining standard deviations obtained while assessing the cylindrical power grasp in the three age groups. The tracking error was normalized by the peak value of the target. The results show that the oldest children performed the task with the lowest deviations. The largest tracking errors were found in the group of 6-year-old children, suggesting that in this group grip force control in dynamic tasks is not yet fully developed. The somewhat smaller differences among the age groups obtained when using the randomized rectangular signal show that temporal accuracy develops earlier than spatial accuracy.
Fig. 4 The average rrmse obtained while assessing the precision grip in three age groups of children during sinusoidal (above), ramp (middle) and rectangular (below) target signals

Similar results were obtained when studying the precision grip. Larger average tracking errors were found in 6-year-old children during precision as compared to power grasping, suggesting that precision grasps may develop later with age.

IV. CONCLUSION
Fig. 3 The average rrmse obtained while assessing the power grip in three age groups of children during sinusoidal (above), ramp (middle) and rectangular (below) target signals
In the paper we have presented a novel tracking method for the evaluation of grasping using biofeedback on the grip force. Simple computer-assisted tests using biofeedback can provide quantitative and reproducible measurements of physical activity which reflect the subject's sensory-motor performance. The main focus of this work has been on the assessment of grip force control and its coordination in three different age groups of children. The results of the assessment in healthy children showed considerable differences in grip force control among the three age groups. The results clearly demonstrate that grip force control, as well as overall sensory-motor function, improves with age. A noticeable difference in performance between the precision and power grip was found only in the 6-year-old children. A future study should compare grip force control in still younger children to further investigate the changes of motor control with age and to evaluate the sensitivity of the tracking method for possible use of the system in the assessment of young children with sensory-motor impairments.
ACKNOWLEDGMENT

The authors are grateful to the Laboratory of Biomedical Engineering and Robotics at the Faculty of Electrical Engineering, University of Ljubljana, for lending the grip-measuring device.

REFERENCES

1. MacKenzie CL, Iberall T (1994) Advances in Psychology, 104: The grasping hand. Elsevier Science B.V., Amsterdam
2. Kurillo G, Bajd T, Munih M (2007) Assessment and rehabilitation of hand function by the grip force tracking method. In: New Research on Biofeedback, Ed. HL Puckhaber, Nova Science Publishers
3. Forssberg H, Eliasson AC, Kinoshita H, Johansson RS, Westling G (1991) Development of human precision grip I: Basic coordination of force. Exp Brain Res 85:451-457
4. Gordon AM, Forssberg H, Johansson RS, Eliasson AC, Westling G (1992) Development of human precision grip III: Integration of visual size cues during the programming of isometric forces. Exp Brain Res 90:399-403
5. Kurillo G, Bajd B, Pikl V (2004) Grip force control of lateral grip in 10-year old children and adults. Medicon and Health Telematics 2004: health in the information society (IFMBE Proceedings, vol. 6), Ischia, Italy
6. Bajd B, Kurillo G (2006) Assessment of grip force control in healthy children and children with Down syndrome. World Congress on Medical Physics and Biomedical Engineering: Imaging the Future Medicine (IFMBE Proceedings, vol. 14), Seoul, Korea
7. Kurillo G, Gregorič M, Goljar N, Bajd T (2005) Grip force tracking system for assessment and rehabilitation of hand function. Technology and Health Care 13:137-149
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Development of a calibration bath for clinical thermometers I. Pusnik, J. Bojkovski and J. Drnovsek University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Metrology and Quality, Ljubljana, Slovenia
Abstract— In Europe the Medical Device Directive (Council Directive 93/42/EEC of 14 June 1993 concerning medical devices) requires conformity of the actual characteristics of medical devices with the manufacturers' specifications. We therefore developed a water bath for simultaneous calibration of clinical non-contact (tympanic, ear, forehead) and contact (liquid-in-glass, digital) thermometers by comparison with a traceable reference thermometer. The bath is intended for use in hospitals, health and veterinary institutes, and calibration laboratories. The developed bath also fulfills the requirements of other world standards and of the future ISO standard on general requirements for clinical non-contact and contact thermometers.

Keywords— clinical thermometer, calibration bath, accuracy, uncertainty

I. INTRODUCTION

In recent years infrared ear thermometers (IRETs) have become very popular in clinical practice for measuring the temperature of the human body. Many different thermometer models have been introduced and become commercially available to common users. All IRETs are advertised and specified by their manufacturers as accurate and reliable measuring devices. To verify their performance, an appropriate calibration set-up is essential, which in principle consists of a blackbody radiator (BBR) with a reference thermometer. Several standards underlying this requirement have been issued, which specify the configuration to be used. All of them mention the use of an appropriate BBR based on a specially designed cavity immersed in a temperature-regulated stirred-liquid bath. For calibration of IRETs and clinical contact thermometers an accuracy of 0,2 °C is required in the range from 35,5 °C to 42 °C, which sets requirements on the employed BBR or calibration bath of better than 0,1 °C. The IRETs shall be accurate within ±0,2 °C in their operating range from 35,5 °C to 42 °C, or within ±0,3 °C outside the given range. The corresponding requirement in the ASTM standard applies below 36 °C and above 39 °C. Experts in radiation thermometry can help manufacturers and medical staff to check IRETs' compliance with the requirements of the related standards. The experts can also provide the most important service for appropriate daily use, namely traceability. Without traceability to national standards it is impossible to estimate the level of quality of measurements with IRETs. Although metrologists are familiar with the terms of metrology, other people involved in measurements are often confused by the meaning and use of some terms. To explain them, we should use the vocabulary of metrology terms. For easier understanding, some terms of metrology are presented in Figure 1, covering the traceability chain (primary-level realisation of units at the metrological laboratories of national institutes, secondary-level dissemination of units via accredited calibration laboratories, third-level dissemination via internal calibration laboratories, e.g. in hospitals, and the user level of instruments, e.g. doctors and patients) and the relations between measured values, the "true" or conventional true value, correction, random and systematic errors, uncertainty, and the maximum permissible (measurement) error (MPE).
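The tolerance scheme quoted above can be captured in a small lookup. The sketch below is illustrative only; the function name and the exact encoding of the ASTM variant (the wider tolerance applying below 36 °C and above 39 °C, as stated in the text) are our assumptions, not text from the standards themselves:

```python
def iret_mpe(t_display_c, standard="EN"):
    """Maximum permissible error (degC) for an IRET reading, per the quoted tolerances.

    EN 12470-5: +/-0.2 degC in the range 35.5-42 degC, +/-0.3 degC outside it.
    ASTM E 1965-98 (assumption): the narrower +/-0.2 degC band applies
    between 36 degC and 39 degC, the wider tolerance outside that band.
    """
    if standard == "EN":
        return 0.2 if 35.5 <= t_display_c <= 42.0 else 0.3
    return 0.2 if 36.0 <= t_display_c <= 39.0 else 0.3

# In-range vs out-of-range readings under both schemes
print(iret_mpe(37.0), iret_mpe(35.0), iret_mpe(37.0, "ASTM"), iret_mpe(39.5, "ASTM"))
```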
Fig. 1 Presentation of metrological terms

II. DEVELOPMENT OF THE BATH

A. Prototype

To compare the metrological characteristics of the proposed blackbody shapes, we developed a specially designed prototype of a stirred-water bath, in which cavities of different shapes were mounted. The bath was evaluated in terms of temperature stability and homogeneity in the range from 35 °C to 42 °C. The dimensions and emissivity of the cavities were measured and calculated. With the developed calibration bath we were able to perform such a calibration of an IRET in which the uncertainty of the calibrated IRET itself is the prevailing contribution to the total uncertainty budget. The prototype bath was made of stainless steel and had the shape of an octagonal prism. The water level was at least 15 cm above the cavities, a total of approximately 90 liters. The flow of water was regulated with the help of a cylinder placed around the motor shaft with the propeller. It forced water to circulate downwards through the cylinder and back up through specially designed plates with manually adjustable openings, Figure 2. One plate was positioned 5 cm above the bottom of the bath. Another plate was positioned 5 cm below the surface of the water. The openings in
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 338–341, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
both plates were placed in three concentric circles. With appropriate adjustment of the openings we achieved a stability of 2 mK/hour (2s) and a homogeneity of 18 mK (2s) inside the whole bath in the worst case, which was at 42 °C. For the purpose of the bath evaluation, eight small sealed platinum resistance thermometers were manufactured. They were calibrated by comparison in our laboratory with an uncertainty of 4 mK in the range from 0 °C to 50 °C. The thermometers were connected to the ASL F700B a.c. resistance bridge and arranged at different extreme positions inside the bath. To exclude the influence of the uncertainty of the thermometers, their positions were exchanged several times. Between the measurements their short-term stability was also regularly checked at the ice point.

Fig. 2 Plates with manually adjustable openings for regulation of water flow and arrangement of the cavities in the prototype bath

B. Blackbody cavities

Inside the prototype bath different blackbody shapes (Figure 3) were mounted, according to the shapes suggested in the following documents:
- European standard EN 12470-5 [1],
- ASTM standard, Designation E 1965 – 98 [2],
- Japanese Industrial Standard JIS, Infrared ear thermometers – draft standard [3],
- LMK design (elliptical shape).

Fig. 3 Shapes of the blackbody cavities: a) EN, b) ASTM, c) JIS, d) LMK

The cavity wall material was identical for all cavities: copper, coated with three layers of a high-emissivity black paint (Pyromark 800). We performed the measurement of emissivity with a FTIR spectrometer [4] in the spectral range from 8 μm to 16 μm and at the temperature 50 °C, Figure 4. The measurements of emissivity were performed on a copper disc, which was made and painted in the same way as the cavities; the surface treatment, painting and drying procedures were also identical.

Fig. 4 Results of emissivity measurement (spectral emissivity between 0,8 and 1 over the wavelength range 8 μm to 16 μm, measured at 50 °C and 36 °C and at viewing angles of 90°, 60° and 45°)

The emissivity of each cavity was modeled based on its configuration, the measured emissivity of the copper disc, and the temperature conditions in the bath. The effective emissivity was calculated by the software program STEEP, which was used in the TRIRAT project [5]. Table 1 presents the results of the temperature stability and homogeneity, and of the calculated directional emissivity, for all cavities under investigation. It was taken into account that the opening angle of a reference IRET was 7 degrees and that its front was placed at the aperture of the blackbody cavity. It was assumed that in the worst case the field of view was twice as large; therefore the directional emissivity was calculated at an angle of 14 degrees. The emissivity value was stated as the worst case, taking into account the temperature gradients. The uncertainty of emissivity was considered as a rectangular distribution, where the emissivity of an isothermal cavity, temperature gradients at high temperature (42 °C), and temperature gradients at low temperature (35 °C) were taken into account. The emissivity value for the EN shape of cavity was not calculated because it is not symmetrical around the horizontal axis. The value of its emissivity and the associated uncertainty were estimated as lying between the values of the ASTM and JIS shapes (higher emissivity) and the value of the LMK shape (lower emissivity). The blackbody radiator with the different cavities, together with a reference IRET, was compared with another blackbody and a reference IRET of the National Physical Laboratory of the UK. Results of the comparison were presented in detail in [6].

Table 1 Temperature stability, temperature homogeneity, and effective emissivity of the ASTM, EN, JIS and LMK cavities

Type of blackbody cavity | Temperature stability at 42 °C | Temperature homogeneity at 42 °C | Emissivity (50 °C; 8 μm - 14 μm), angle 14°
ASTM | 2 mK (2s) | 8 mK (2s) | 0,99980 ± 0,00003
EN* | 1,4 mK (2s) | 6 mK (2s) | 0,998 ± 0,001* (estimated)
JIS | 2 mK (2s) | 17 mK (2s) | 0,99974 ± 0,00005
LMK | 1,2 mK (2s) | 6 mK (2s) | 0,99734 ± 0,00002
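The text states that the emissivity uncertainty was treated as a rectangular distribution; converting the half-width a of a rectangular distribution into a standard uncertainty uses the usual a/sqrt(3) rule. The sketch below is ours (the paper does not show this calculation), applied to the estimated EN-cavity value from Table 1 as an example:

```python
import math

def rect_std_uncertainty(half_width):
    """Standard uncertainty of a rectangular (uniform) distribution: a / sqrt(3)."""
    return half_width / math.sqrt(3)

# Example: the estimated EN-cavity emissivity 0,998 +/- 0,001 from Table 1
u_en = rect_std_uncertainty(0.001)
print(f"standard uncertainty of the EN emissivity estimate: {u_en:.6f}")
```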
C. Bath for calibration of clinical non-contact and contact thermometers

Following the results obtained with the prototype bath, we developed a smaller portable stirred-liquid bath with an approximate volume of 15 liters. The bath has a special design with two tubes to achieve a laminar flow and hence a very high temperature stability and homogeneity, both better than ±0,02 °C. In the main tube, an equalizing block for calibration of contact thermometers and a copper blackbody for calibration of non-contact thermometers are mounted. The shape of the blackbody cavity may be chosen by the customer among the suggested standard shapes. A patent application was filed for the developed bath. The bath is presented in Figure 5.

III. CONCLUSIONS

Based on several experiments, some results of which were presented in [7], the questions of what the most commonly used IRETs actually measure and whether they meet the requirements of the standards cannot be answered by relying on manufacturers' specifications alone. Compliance with the standards therefore can and must be verified, either by the manufacturers or by other relevant institutions which are competent for radiation thermometric calibrations and are traceable to a national measurement standard. Traceability must be assured via the BBR, which is evaluated in terms of emissivity and of temperature homogeneity as well as stability. It is not appropriate to use quasi-calibrators with undetermined emissivity and temperature characteristics. Calibration of clinical contact thermometers is less problematic. Using the developed calibration bath, all requirements related to the calibration of non-contact and contact thermometers in the different standards can be fulfilled at the same time.
ACKNOWLEDGMENT

The authors wish to express their sincere thanks for support in the development of the prototype and the calibration bath to Mr. Anton Kambic, managing director, and Mr. Gorazd Kambic, technical director, of the company Kambic Laboratorijska oprema. The development of the bath was partially supported by the European Fifth Framework Programme project INCOLAB under the specific programme "Promoting competitive and sustainable growth", generic activity "Measurement and testing", and by the Ministry of Economics of the Republic of Slovenia.
Fig. 5 Developed bath for calibration of clinical non-contact and contact thermometers

REFERENCES

1. EN 12470-5, Clinical thermometers – Part 5: Performance of infra-red ear thermometers (with maximum device), 2003, CEN, Brussels
2. ASTM, Designation E 1965 – 98: Standard Specification for Infrared Thermometers for Intermittent Determination of Patient Temperature, Annual Book of ASTM Standards, 1998, West Conshohocken, PA 19428, USA
3. Draft of JIS, Infrared ear thermometer, Japan Measuring Instruments Federation, 2001
4. Clausen S., Measurement of spectral emissivity by a FTIR spectrometer, Proceedings 8th International Symposium on Temperature and Thermal Measurements in Industry and Science (TEMPMEKO 2001), Berlin, Germany, 19-21 June 2001, Vol. 1, pp. 259-264
5. Bosma R., van der Ham E. W. M., Schrama C. A., Test results on workpackage 5, 6 and 7 for the project TRaceability In RAdiation Thermometry (TRIRAT) – NMi/VSL contribution, February 1999, NMi/VSL, Delft, The Netherlands
6. Pusnik I., Simpson R., Drnovsek J., Bilateral comparison of blackbody cavities for calibration of infra-red ear thermometers between NPL and FE/LMK, IOP Physiol. Meas. 25 (2004) pp. 1239–1247
7. Pusnik I., van der Ham E., Drnovsek J., IR ear thermometers – what do they measure and how do they comply with the EU technical regulation, IOP Physiol. Meas. 25 (2004) pp. 699–708

Address of the corresponding author:
Author: Igor Pusnik
Institute: University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Metrology and Quality
Street: Trzaska 25
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
Development of Implantable SAW Probe for Epilepsy Prediction

N. Gopalsami1, I. Osorio2, S. Kulikov3, S. Buyko3, A. Martynov3 and A.C. Raptis1

1 Argonne National Laboratory, Argonne, IL
2 Flint Hills Scientific, Lawrence, KS and University of Kansas Medical Center, Kansas City, KS
3 Biofil Ltd., Sarov, Russia
Abstract— An implantable surface acoustic wave (SAW) microsensor has been developed for early detection and monitoring of seizures, based on the local temperature changes in the brain's epileptogenic zones that occur prior to and during an epileptic event. Three SAW sensors were designed and fabricated: a 172 MHz filter, a 434 MHz filter, and a 434 MHz delay line. Their temperature sensitivities were tested by measuring the phase change between the input and output waveforms as a function of temperature. We achieved a phase sensitivity of 144 phase degrees per °C and a minimum detectable temperature change of 5 mK for the 434-MHz, 10.2-µs delay line. Based on the sensitivity tests, a prototype 434 MHz SAW sensor was fabricated to a size of 11 x 1 x 1.1 mm, which is commensurate with existing brain-implantable probes. Because of possible damping of the surface waves by the surrounding tissue or fluid, a glass housing with dry air was built on top of the SAW substrate. Test and reference sensors were used in the prototype system to minimize the effect of source instabilities and to amplify the temperature effect. The phase change between the output waveforms of the sensors was measured with phase-detector electronics after they were converted to a lower frequency (10.7 MHz) by standard mixers. The complete prototype sensor was tested in a saline water bath and found to detect temperature changes as small as 3 mK caused by the addition of hot water. Operation of the system in its wireless variant was also demonstrated.

Keywords— Implantable, SAW, epilepsy, temperature sensor.
I. INTRODUCTION

Epilepsy is a neurological disorder that affects at least 2.7 million Americans of all ages. This figure amounts to about 1% of all Americans, which is equivalent to that for all industrialized countries. The percentage presumably rises to as high as 10% for underdeveloped countries [1]. The fact that up to 40,000 Americans die each year directly from seizures in a country where medical care is the most advanced in the world underscores the malignancy of this disease. Most people with epilepsy, although of normal intelligence, are either unemployed or sub-employed due primarily to the unpredictability of seizures. Despite current advances in drug therapy, only 15% of those treated have neither seizures nor side effects. The negative impact of
epilepsy on the lives of those who suffer from it, and on their families and communities, can be considerably lessened if a means for early detection of seizures is found and innovative therapies, such as non-pharmacological treatments, are developed. Early detection of the onset of seizures is critical for implementing appropriate prevention measures, such as electrical stimulation, cryogenic cooling or drug delivery [2]. Although monitoring the electrical potentials with electrodes implanted in the brain is commonly used for the detection and prediction of seizures, changes in temperature associated with epileptic neuronal activity could provide additional relevant information. In addition, the neuronal activity could be tracked while electrical stimulation is being delivered to the area generating the activity of interest. Neuronal activity, particularly if intense, as in epilepsy, causes local temperature changes in the brain tissue. Cortical temperature variations on the order of ±0.2 °C have been observed in animal tests with visual and other forms of stimulation [3]. Sensing local changes in the brain temperature associated with neuronal disorders requires accurate and implantable microsensors that can operate in the brain cortical zones for long periods of time and, ideally, can work without a power source. Furthermore, a microsensor implant must be compact (<2-mm OD, 5-15 mm long), sensitive to small changes in temperature (0.01 °C), fast responding (<1 s), and biocompatible with the brain. A useful dynamic range of the sensor, if it is also used for monitoring the efficacy of possible cooling therapy, is 15-42 °C. While solid-state temperature devices, such as thermistors, are small and inexpensive, they require a power source for their operation; their accuracy is low (±0.1 °C), partly due to aging (drift) and self-heating from the excitation current; and the response time of a coated thermistor is on the order of seconds [4].
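The implant requirements listed above can be expressed as a simple screening check. The structure and names below are our illustration; only the numeric limits come from the text:

```python
# Implant requirements stated in the text; names and structure are illustrative
REQUIREMENTS = {
    "max_od_mm": 2.0,          # outer diameter < 2 mm
    "length_mm": (5.0, 15.0),  # 5-15 mm long
    "resolution_c": 0.01,      # must resolve 0.01 degC temperature changes
    "max_response_s": 1.0,     # response time < 1 s
    "range_c": (15.0, 42.0),   # dynamic range needed for cooling-therapy monitoring
}

def meets_requirements(od_mm, length_mm, resolution_c, response_s, range_c):
    """Return True if a candidate sensor satisfies every stated requirement."""
    lo, hi = REQUIREMENTS["length_mm"]
    rlo, rhi = REQUIREMENTS["range_c"]
    return (od_mm < REQUIREMENTS["max_od_mm"]
            and lo <= length_mm <= hi
            and resolution_c <= REQUIREMENTS["resolution_c"]
            and response_s < REQUIREMENTS["max_response_s"]
            and range_c[0] <= rlo and range_c[1] >= rhi)

# A coated thermistor (~0.1 degC accuracy, seconds-scale response) fails the screen
print(meets_requirements(1.0, 10.0, 0.1, 3.0, (0.0, 50.0)))
```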
Because surface acoustic wave technology can be used in a wireless mode and can provide better temperature performance in small sizes, this paper examines its applicability as a brain implant. We will discuss a SAW microsensor prototype that can be implanted in the epileptogenic or any other brain regions to indirectly but accurately monitor neuronal activity by measuring local changes in the tissue’s temperature and use its output to turn on a cooling or an electrical device
or both, to abate seizures. Cooling of epileptogenic brain tissue has been shown to suppress seizures [5]. While we are developing both wired and wireless versions of brain-implantable SAW sensors, we first present the wired version in this paper.
II. PROBE DESIGN

Surface acoustic wave sensors are used in many applications, including filters, delay lines, resonators, and frequency control in telecommunication systems; chemical detection; and sensing of physical variables such as temperature and pressure in harsh environments [6]-[8]. Recently, they have also been used in the passive mode for remote monitoring of temperature and pressure in hard-to-reach places [9]-[11]. An attractive feature of passive SAW sensors is that they do not require a power supply and can be interrogated remotely, as in identification tags. They are small, inexpensive, and sensitive to a variety of measurement variables. Depending on the material, cut, and propagation angle, the sensor can be tailored to respond only to the measurement variable of interest. For example, lithium niobate is sensitive to temperature while quartz is not. Generally, a SAW sensor consists of a piezoelectric single crystal, such as lithium niobate, on which transmit/receive metallic interdigital (IDT) electrodes are imprinted to generate and receive surface acoustic waves. The gap between the electrodes forms the sensing region; a change of temperature in the substrate affects the velocity of sound and, in turn, the travel time between the transmit and receive electrodes. The ability of lithium niobate SAW sensors to measure temperature has been demonstrated in the laboratory for both the active and passive modes [12], [13]; however, their application to biosensing has been limited.

A. Analysis of Temperature Sensitivity

A change in substrate temperature will affect the SAW propagation signal parameters, viz., delay time, phase, and resonance frequency. The delay time (Δt) depends on the SAW velocity (v) and the distance between the transmitter and receiver electrodes (L):

Δt = L/v.    (1)

The change of Δt due to a change of substrate temperature ΔT equals:

Δt ≈ α t0 ΔT,    (2)

where α is the linear temperature coefficient and t0 is the nominal delay time of the SAW probe. Using a typical delay time t0 of 1 µs for commercial SAW devices [7], the expected change in delay time for detecting a ΔT of 0.01 °C is 10^-12 s. For such a minute change in time delay, measurement of the phase change offers a more sensitive means of detection. The phase sensitivity of a SAW sensor is defined as:

S_T^Δφ = ∂(Δφ)/∂T.    (3)

From Eq. (3), the phase sensitivity increases with the operating or excitation frequency f, the nominal delay time t0 (or gap length of the sensor), and the temperature coefficient α. Suppose we choose an excitation frequency of 430 MHz. Then the expected phase change for 0.01 °C is given by:

Δφ ≈ f Δt · 360° ≈ 0.15°.    (4)

While a direct measurement of such fractional changes in phase between the input and SAW-delayed waveforms is feasible at these high frequencies (e.g., using the Analog Devices AD8302 chip), the use of a heterodyne system with IF amplification allows higher amplification of the generally weak signals caused by the high insertion loss of SAW sensors, and a more accurate phase measurement. This is particularly true in passive systems, where the signal from the generator to antenna to SAW to antenna is attenuated by a factor of 10^4. Hence, we will use a heterodyne system in our phase detection scheme.

B. Fabrication of SAW Sensors

Three SAW sensors (two filters and one delay line) were fabricated by the Etalon plant in Russia according to the specifications in Table 1. To test the effect of design parameters on the temperature sensitivity, we used two operating frequencies (172 and 434 MHz) and three delay times (0.25, 0.1, and 10.2 µs). The SAW filter is based on an internal longitudinally bounded resonator structure with reflection gratings. A standing wave is formed in the cavity between the reflector gratings. The input IDT excites the cavity and the output IDT couples out the resonance signal.

Table 1 Temperature sensor specifications for SAW

Sensor | Operating frequency (MHz) | Dimensions (mm)
Etalon filter | 172 | 9 x 7 x 2
Etalon filter | 434 | 9 x 7 x 2
Etalon delay line | 434 | 14 x 8 x 2.5
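The numbers in this derivation can be reproduced directly from Eqs. (2) and (4). Note that the temperature coefficient α below is not quoted in the text; it is inferred from the text's own example (t0 = 1 µs and ΔT = 0.01 °C giving Δt = 10^-12 s):

```python
# Worked example for Eqs. (2) and (4); alpha is inferred, not quoted in the paper
alpha = 1e-4   # linear temperature coefficient (1/degC), implied by the text's numbers
t0 = 1e-6      # nominal delay time (s)
dT = 0.01      # temperature change to be detected (degC)
f = 430e6      # excitation frequency (Hz)

dt = alpha * t0 * dT       # Eq. (2): change in delay time (expected: 1e-12 s)
dphi = f * dt * 360.0      # Eq. (4): phase change in degrees (expected: ~0.15 deg)
print(f"dt = {dt:.1e} s, dphi = {dphi:.2f} deg")
```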
III. TEMPERATURE SENSITIVITY

We evaluated the phase sensitivity of the fabricated SAW devices to temperature using a specialized test bench. The SAW sensor body was heated up by a thermal fan or cooled down by ice and liquid nitrogen to the given temperature. Temperatures of the SAW sensor body were measured by standard thermocouples and by a diode device, fabricated by BIOFIL, with an accuracy of ±0.05 °C. A sinusoidal signal at the operating frequency was applied to the SAW sensor input. The input and output signals were recorded by a digital oscilloscope. From the time shift Δs of the output signal relative to the input, measured by the oscilloscope, and the input signal period S, the phase change Δφ was calculated as:

Δφ = 360 · Δs / S.    (5)

Computer processing of the waveforms obtained with the digital oscilloscope allows phase measurements with an error of ≈0.7 phase degree.

Table 2 Measured values of SAW temperature characteristics

Operating frequency (MHz) | Delay time (µs) | Phase sensitivity (degree/°C) | Temperature resolution δT (°C)
172.2 ± 0.1 (filter) | 0.25 ± 0.025 | 3.0 ± 0.2 | ± 0.07
434 ± 0.1 (filter) | ~ 0.1 | ~ 1.5 | 0.14
434 ± 0.1 (delay line) | 10.2 ± 0.2 | 144 ± 5 | ± 0.0015
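Equation (5) and the resulting temperature resolution can be sketched as follows; the ≈0.7-degree phase error and the 144 degree/°C sensitivity are the values quoted in the text, and the helper name is ours:

```python
def phase_change(ds, S):
    """Eq. (5): phase change (degrees) from time shift ds and input signal period S."""
    return 360.0 * ds / S

# 434 MHz delay line: a quarter-period shift corresponds to 90 degrees
S_434 = 1.0 / 434e6
print(phase_change(S_434 / 4, S_434))

# Minimum detectable temperature = phase measurement error / phase sensitivity
phase_error_deg = 0.7     # quoted waveform-processing error
sensitivity = 144.0       # degrees per degC for the 434 MHz delay line
print(phase_error_deg / sensitivity * 1e3)  # in mK; about 5 mK, as quoted below
```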
The phase sensitivities of the three sensors, calculated from Eq. (5), are given in Table 2. The measured phase sensitivity of the 434-MHz delay line is a 144-degree phase change per °C, resulting in a minimum detectable change of 5 mK, which far exceeds the resolution needed for the epilepsy application.

IV. PROTOTYPE SENSOR

Based on the sensitivity tests and the brain-implant size requirements, a prototype SAW sensor was built to the following specifications:
- Size: 11 mm long, 1 mm wide, and 1.1 mm deep
- Operating frequency: 434 MHz
- Delay time: 10.2 µs

Fig. 1 Prototype SAW sensor: schematic drawing

A glass housing was built on the top of the substrate (Fig. 1) so that the Rayleigh waves traveling over the surface would not be damped by the brain tissue or fluid. The housing is biocompatible and filled with dry air. Because the substrate is thin, heat transfer from the bottom side of the substrate allows a fast thermal response. The overall dimensions are commensurate with the size of existing implants used for electrical stimulation.

For measurement of the phase change, we have devised a differential sensing scheme with two identical sensors. The test sensor is located in an epileptogenic region of the brain, and the reference sensor is placed in a non-epileptogenic region of the brain or outside the head in a known temperature environment. The phase change between the reference and test sensor outputs is measured by mixing down the respective output frequencies to 10.7 MHz with a local oscillator at 423.3 MHz. The dual-sensor system was tested in a simulated environment in which the test sensor was dipped into a beaker with 0.5 L of saline water, and the reference sensor was kept in a thermally isolated box outside the beaker. The sensitivity of the sensor was tested by adding small amounts of hot water into the beaker. Based on the previously measured sensitivity of 144 phase degrees per °C, the measured detection sensitivity was 1.7-3.4 mK.

V. ANTENNA TESTING

The sensor was also tested in wireless operation, using the Etalon delay line sensors. Stable operation of the temperature sensor was demonstrated at a distance between the antennas of about 10 cm.
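As a numerical illustration of this differential heterodyne scheme (our simulation, not the authors' electronics; the sample rate and observation window are arbitrary choices), mixing each 434 MHz sensor output with the 423.3 MHz local oscillator preserves the sensors' phase difference at the 10.7 MHz IF, where it can be measured and converted back to a temperature difference:

```python
import numpy as np

F_RF, F_LO = 434e6, 423.3e6        # sensor output and local-oscillator frequencies (Hz)
F_IF = F_RF - F_LO                 # 10.7 MHz intermediate frequency
FS, T_WIN = 5e9, 20e-6             # simulation sample rate and window (assumed values)
t = np.arange(int(FS * T_WIN)) / FS
SENS = 144.0                       # measured phase sensitivity, degrees per degC

def if_phase(rf_phase_deg):
    """Mix one sensor output with the LO and recover its phase at the 10.7 MHz IF."""
    mixed = np.cos(2 * np.pi * F_RF * t + np.deg2rad(rf_phase_deg)) \
            * np.cos(2 * np.pi * F_LO * t)
    z = np.mean(mixed * np.exp(-2j * np.pi * F_IF * t))  # complex demodulation at the IF
    return np.degrees(np.angle(z))

dT = 0.01                                     # temperature difference between zones (degC)
dphi = if_phase(SENS * dT) - if_phase(0.0)    # test minus reference sensor
print(f"differential IF phase {dphi:.2f} deg -> {dphi / SENS * 1e3:.1f} mK")
```

The window length is chosen to contain an integer number of IF cycles, so the sum-frequency mixing products average out of the demodulated mean.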
VI. CONCLUSIONS

We have described the design and development of a SAW microsensor implant for measuring local temperature changes in brain tissue as a means of detecting and monitoring epileptic seizures. An analysis of the temperature sensitivity of SAW sensors led to the design of a Y/Z-cut piezoelectric substrate made of lithium niobate. The measurement scheme uses the phase change between the input and output waveforms. An operating frequency of 434 MHz was chosen based on considerations of sensitivity and the approved medical frequency band for telemetry. Testing of a 434-MHz, 10.2-µs delay line manufactured at the Etalon plant in Russia showed a minimum detectable temperature of 5 mK, which exceeds the goal of 0.01 K needed for the application. An implantable prototype SAW sensor was built with dimensions of 11 x 1 x 1.1 mm and a glass housing on the wave-propagation side; the size of this prototype is commensurate with existing brain implants. To minimize the effect of source frequency instabilities and to amplify the temperature change, we devised a differential measurement scheme with two sensors, located in a test (epileptogenic) and a reference (non-epileptogenic) zone. A pair of mixers lowers the sensor output frequencies to 10.7 MHz for accurate detection of the phase change. The prototype sensor was tested in a saline water bath and found to have a temperature sensitivity of ~3 mK. Operation of the system in its wireless variant was also demonstrated. The sensor can be used to predict an epileptic onset, as well as to monitor the efficacy of mitigation therapies such as electrical stimulation or cryocooling, by measuring the temporal changes in the local brain temperature. Plans are under way to test the sensor in animals.
ACKNOWLEDGMENT The authors wish to thank the Office of the Initiatives for Proliferation Prevention (IPP) of the Department of Energy (DOE) for financial support. The support and encouragement of Dr. Dave Ehst of Argonne National Laboratory, Dr. Jim Noble of DOE-IPP, and Dr. Sergei Garanin of Biofil are gratefully acknowledged.
REFERENCES

1. W. A. Hauser and D. C. Hesdorffer, Epilepsy: Frequency, Causes, and Consequences, New York: Demos, 1990
2. I. Osorio, M. G. Frei, B. F. J. Manly, S. Sunderam, N. C. Bhavaraju, and S. B. Wilkinson, "An introduction to contingent (closed-loop) brain electrical stimulation for seizure blockage, to ultra-short-term clinical trials, and to multidimensional statistical analysis of therapeutic efficacy," J. Clin. Neurophysiol., vol. 18, pp. 533-544, 2001
3. D. A. Yablonskiy, J. J. H. Ackerman, and M. E. Raichle, "Coupling between changes in human brain temperature and oxidative metabolism during prolonged visual stimulation," PNAS, vol. 97, pp. 7603-7608, 2000
4. J. Fraden, AIP Handbook of Modern Sensors, New York: American Institute of Physics, pp. 497-531, 1993
5. X. F. Yang, J. H. Chang, and S. M. Rothman, "Long-lasting anticonvulsant effect of focal cooling on experimental neocortical seizures," Epilepsia, vol. 44, pp. 1500-1505, 2003
6. C. Campbell, Surface Acoustic Wave Devices and Their Signal Processing Applications, Boston: Academic Press, 1989
7. U. Wolff, F. L. Dickert, G. K. Fischerauer, W. Greibel, and C. C. W. Ruppel, "SAW sensors for harsh environments," IEEE Sensors J., vol. 1, pp. 4-13, 2001
8. D. W. Galipeau, P. R. Story, K. A. Vatelino, and R. D. Mileham, "Surface acoustic wave microsensors and applications," Smart Mater. Struct., vol. 6, pp. 658-667, 1997
9. A. Pohl, "A review of wireless SAW sensors," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 47, pp. 317-332, 2000
10. V. K. Varadan, P. T. Teo, K. A. Jose, and V. V. Varadan, "Design and development of a smart wireless system for passive temperature sensors," Smart Mater. Struct., vol. 9, pp. 379-388, 2000
11. W. Buff, S. Klett, M. Rusko, J. Ehrenpfordt, and M. Goroll, "Passive remote sensing for temperature and pressure using SAW resonators," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 45, pp. 1388-1392, 1998
12. R. B. Ward, "Temperature coefficients of SAW delay and velocity for Y-cut and rotated LiNbO3," IEEE Trans. on Ultrasonics, Ferroelectrics and Frequency Control, vol. 37, pp. 481-483, 1990
13. J. Neumeister, R. Thum, and E. Luder, "A SAW delay line oscillator as a high resolution temperature sensor," Sensors and Actuators, pp. 670-672, 1990

Address of the corresponding author:
Author: S. Kulikov
Institute: Biofil Ltd.
Street: Mira, 37
City: Sarov
Country: Russia
Email: [email protected]
Development of the ISO standard for clinical thermometers
I. Pusnik
University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Metrology and Quality, Ljubljana, Slovenia
Abstract— Clinical thermometers are widely used for measuring the temperature of the human body. Requirements for their performance are laid down in several standards (ASTM, EN, JIS, etc.). Some requirements are similar, while others differ from each other. A few years ago the International Organization for Standardization (ISO) and the International Organization of Legal Metrology (OIML) agreed that it was necessary to unify the requirements for clinical thermometers from the several standards. They formed a joint working group under the leadership of the ISO with the task of developing a new standard for clinical thermometers that would cover all present types of clinical thermometers. The OIML would later adopt the ISO standard as an OIML standard.
Keywords— clinical thermometer, standardization
I. INTRODUCTION
The ultimate goal of a clinical thermometer is to assess the true body-site temperature. Thus, the accuracy of such a thermometer can be verified only by comparing its output with that of a reference thermometer having a specified uncertainty for measuring the true body temperature. For an equilibrium clinical thermometer, this can be sufficiently accomplished under laboratory conditions that create an equilibrium state between the two thermometers. For an adjusted clinical thermometer, laboratory verification alone is not sufficient, as the adjustment algorithm relates to the subjects and the environment. Thus, the accuracy of an adjusted clinical thermometer shall additionally be verified by statistical methods, comparing its output with that of a reference clinical thermometer having a specified uncertainty in representing a specified body-site temperature. Therefore, the accuracy of an adjusted clinical thermometer shall be verified separately for its temperature sensor in a direct mode (laboratory accuracy) and for the entire thermometer with a sufficiently large group of human subjects (clinical accuracy), while the thermometer is in the adjusted operating mode. For verification of both laboratory and clinical accuracy, similar and yet different requirements were laid down in several standards. In the USA a series of ASTM standards was developed [1], [2], [3], [4]. In Europe a series of EN standards was developed [5], [6], [7], [8], [9]. In Japan a series of JIS standards was developed, but they relied mainly on the EN standards and partly on the ASTM standards. Some special standards were also developed for particular thermometers
[10], [11]. A few years ago an initiative was raised for the development of one standard under the leadership of the ISO. The standard should cover all types of clinical thermometers. The joint working group ISO/TC 121/SC 3 JWG 8 Clinical thermometers was formed with the task of developing a new standard for clinical thermometers. As an expert with extensive recent experience in non-contact clinical thermometers and with some important publications [12], [13], [14], I was invited to the joint working group from both the OIML and the ISO side. In spite of the intention to develop one standard, it seems that for some regulatory or statutory reasons a few standards will be developed. For example, ISO/IEC could not include liquid-in-glass thermometers because they are non-electrical thermometers; therefore a separate ISO standard for liquid-in-glass thermometers should be developed. Another problem is that OIML members are entitled to receive OIML standards without charge, while ISO standards are not available free of charge. That would lead to the development of a separate OIML standard. Fortunately, even if a few standards are developed, their requirements would probably be the same.
II. DEVELOPMENT OF THE STANDARD
A. ISO rules
Every standard is developed according to procedures defined by a standardization body. Every ISO member body (full or P-member) has the right to take part in the development of any standard which it judges to be important to its country's economy. No matter what the size or strength of that economy, each member body in ISO has one vote. ISO's activities are thus carried out in a democratic framework where each country is on an equal footing to influence the direction of ISO's work at the strategic level, as well as the technical content of its individual standards. A correspondent member (O-member) is usually an organization in a country which does not yet have a fully developed national standards activity. Correspondent members do not take an active part in the technical and policy development work, but are entitled to be kept fully informed about the work of interest to them. Subscriber membership has been established for countries with very
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 401–404, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
small economies. Subscriber members pay reduced membership fees that nevertheless allow them to maintain contact with international standardization. ISO standards are developed by technical committees comprising experts from the industrial, technical and business sectors which have asked for the standards and which subsequently put them to use. These experts may be joined by others with relevant knowledge, such as representatives of government agencies, testing or calibration laboratories, consumer associations, environmentalists, academic circles and so on. The experts participate as national delegations, chosen by the ISO national member institute for the country concerned. These delegations are required to represent not just the views of the organizations in which their participating experts work, but those of other stakeholders too. According to ISO rules, the member institute is expected to take account of the views of the range of parties interested in the standard under development and to present a consolidated, national consensus position to the technical committee [15]. An International Standard is the result of an agreement between the member bodies of ISO. It may be used as such, or may be implemented through incorporation in the national standards of different countries. International Standards are developed by ISO technical committees (TC) and subcommittees (SC) in a six-step process:
* Stage 1: Proposal stage
* Stage 2: Preparatory stage
* Stage 3: Committee stage
* Stage 4: Enquiry stage
* Stage 5: Approval stage
* Stage 6: Publication stage
The following is a summary of each of the six stages.
Stage 1: Proposal stage
The first step in the development of an International Standard is to confirm that a particular International Standard is needed. A new work item proposal (NP) is submitted for vote by the members of the relevant TC/SC to determine the inclusion of the work item in the programme of work. The proposal is accepted if a majority of the P-members of the TC/SC votes in favour and at least five P-members declare their commitment to participate actively in the project. At this stage a project leader responsible for the work item is normally appointed.
Stage 2: Preparatory stage
Usually, a working group of experts, the chairman (convener) of which is the project leader, is set up by the TC/SC for the preparation of a working draft (WD). Successive
working drafts may be considered until the working group is satisfied that it has developed the best technical solution to the problem being addressed. At this stage, the draft is forwarded to the working group's parent committee for the consensus-building phase.
Stage 3: Committee stage
As soon as a first committee draft (CD) is available, it is registered by the ISO Central Secretariat. It is distributed for comments and, if required, voting, by the P-members of the TC/SC. Successive committee drafts may be considered until consensus is reached on the technical content. Once consensus has been attained, the text is finalized for submission as a draft International Standard (DIS).
Stage 4: Enquiry stage
The draft International Standard (DIS) is circulated to all ISO member bodies by the ISO Central Secretariat for voting and comment within a period of five months. It is approved for submission as a final draft International Standard (FDIS) if a two-thirds majority of the P-members of the TC/SC are in favour and not more than one-quarter of the total number of votes cast are negative. If the approval criteria are not met, the text is returned to the originating TC/SC for further study and a revised document will again be circulated for voting and comment as a draft International Standard.
Stage 5: Approval stage
The final draft International Standard (FDIS) is circulated to all ISO member bodies by the ISO Central Secretariat for a final Yes/No vote within a period of two months. If technical comments are received during this period, they are no longer considered at this stage, but are registered for consideration during a future revision of the International Standard. The text is approved as an International Standard if a two-thirds majority of the P-members of the TC/SC are in favour and not more than one-quarter of the total number of votes cast are negative.
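The DIS/FDIS approval criteria just described can be sketched as a small check. The function and argument names below are illustrative, not taken from the ISO directives:

```python
def iso_ballot_approved(p_members_in_favour: int,
                        p_members_voting: int,
                        negative_votes: int,
                        total_votes_cast: int) -> bool:
    """Check the two ISO approval criteria for a DIS/FDIS ballot:
    (1) at least two-thirds of the voting P-members are in favour, and
    (2) not more than one-quarter of all votes cast are negative."""
    two_thirds_in_favour = 3 * p_members_in_favour >= 2 * p_members_voting
    quarter_negative_ok = 4 * negative_votes <= total_votes_cast
    return two_thirds_in_favour and quarter_negative_ok

# 20 of 30 P-members in favour (exactly two-thirds), 5 negatives out of 40 votes cast
print(iso_ballot_approved(20, 30, 5, 40))   # True
# 19 of 30 in favour fails the two-thirds criterion
print(iso_ballot_approved(19, 30, 5, 40))   # False
```

Integer arithmetic avoids floating-point edge cases at the exact two-thirds and one-quarter boundaries.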
If these approval criteria are not met, the standard is referred back to the originating TC/SC for reconsideration in the light of the technical reasons submitted in support of the negative votes received.
Stage 6: Publication stage
Once a final draft International Standard has been approved, only minor editorial changes, if and where necessary, are introduced into the final text. The final text is sent to the ISO Central Secretariat, which publishes the International Standard. If a document with a certain degree of maturity is available at the start of a standardization project, for example a standard developed by another organization, it is possible to omit certain stages. In the so-called "fast-track procedure", a document is submitted directly for approval as a draft International Standard (DIS) to the ISO member bodies (stage 4) or, if the document has been developed by an international standardizing body recognized by the ISO Council, as a final draft International Standard (FDIS, stage 5), without passing through the previous stages.
B. Standard for clinical thermometers
A new work item proposal was approved by the ISO members in July 2005, when a working draft was also approved. By December 2006 three meetings of the joint working group ISO/TC 121/SC 3 JWG 8 Clinical thermometers had been held. We were able to reach the Committee stage (stage 3), that is, the issue of the committee draft (CD). It is expected that the comments received from the ISO members will be discussed during the next meeting in April 2007 and that we can proceed to the Enquiry stage (stage 4) with the issue of the draft International Standard (DIS). Although the target date for publication of the International Standard is March 2009, the members of the JWG believe we could complete the standard by March 2008. We therefore hold two meetings per year and maintain frequent contact via e-mail and the Internet. The most demanding stage was the second, in which we had to develop the framework of the standard. A problem was that the new work item proposal mixed together the presently developed standards for several types of thermometers. It took a lot of effort to build a new structure which was clear, correct and executable from the technical perspective. It is often easier to develop a new standard than to develop a standard based on several old ones. Due to the mixed, ambiguous and duplicated requirements from the several existing standards, the JWG performed a risk assessment first. Based on the results of the risk assessment, we confirmed old requirements or set new requirements for clinical thermometers.
A basis for decisions in the risk assessment was the risk classification of accidents (Table 1) and the interpretation of risk level (Table 2). The most important consensus achieved so far in the development of the standard is that the requirements were divided into laboratory performance requirements and clinical accuracy evaluation. For equilibrium clinical thermometers, testing against the laboratory performance requirements is sufficient to determine their quality of performance, while for adjusted clinical thermometers laboratory testing is not sufficient. Such thermometers shall also be tested for clinical accuracy under the normal operating conditions that occur in everyday use.
Table 1: Risk classification of accidents

Occurrence         Consequence (severity)
                   Catastrophic (CAT)   Critical (CR)   Marginal (M)   Negligible (N)
Frequent (F)       I                    I               I              II
Probable (P)       I                    I               II             II
Occasional (O)     I                    II              II             II
Remote (R)         II                   III             III            IV
Improbable (IM)    III                  III             IV             IV
Incredible (IN)    IV                   IV              IV             IV
Table 2: Interpretation of risk level

Risk level   Interpretation
I            Intolerable risk
II           Undesirable risk, tolerable only if reduction is impractical or if the costs are grossly disproportionate to the improvement gained
III          Tolerable risk if the cost of the risk reduction would exceed the improvement made
IV           Negligible risk
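The two risk tables combine into a simple lookup. The sketch below encodes them as dictionaries; the names `RISK_MATRIX`, `INTERPRETATION` and `risk_level` are illustrative, not from the draft standard:

```python
# Table 1 encoded as nested lookup tables.
# Occurrence rows: F, P, O, R, IM, IN; severity columns: CAT, CR, M, N.
RISK_MATRIX = {
    "F":  {"CAT": "I",   "CR": "I",   "M": "I",   "N": "II"},
    "P":  {"CAT": "I",   "CR": "I",   "M": "II",  "N": "II"},
    "O":  {"CAT": "I",   "CR": "II",  "M": "II",  "N": "II"},
    "R":  {"CAT": "II",  "CR": "III", "M": "III", "N": "IV"},
    "IM": {"CAT": "III", "CR": "III", "M": "IV",  "N": "IV"},
    "IN": {"CAT": "IV",  "CR": "IV",  "M": "IV",  "N": "IV"},
}

# Table 2: meaning of each risk level.
INTERPRETATION = {
    "I":   "Intolerable risk",
    "II":  "Undesirable risk, tolerable only if reduction is impractical "
           "or if the costs are grossly disproportionate to the improvement gained",
    "III": "Tolerable risk if the cost of the risk reduction would exceed "
           "the improvement made",
    "IV":  "Negligible risk",
}

def risk_level(occurrence: str, severity: str) -> str:
    """Return the risk level (I..IV) for an occurrence/severity pair."""
    return RISK_MATRIX[occurrence][severity]

print(risk_level("R", "CR"))                  # III
print(INTERPRETATION[risk_level("F", "N")])   # the level-II interpretation
```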
The standard will systematically determine laboratory performance requirements in terms of the unit of measure, reference environmental conditions, minimum and maximum rated output range, maximum permissible error under reference and changing environmental conditions, maximum permissible error within the minimum and maximum rated output range, time response for continuous clinical thermometers, effects of storage, thermal shock and humidity, electromagnetic compatibility, mechanical strength and safety, warning signals, additional requirements (material, electrical performance and safety requirements), and cleaning and/or disinfection. The novelty of the new standard will be the requirement for manufacturers to reveal the results of the clinical accuracy evaluation. There could probably also be a requirement for clinical accuracy as a value. Namely, in the present standards clinical accuracy evaluation was required, but neither was a value of clinical accuracy required nor did the manufacturer have to reveal the results of the clinical accuracy evaluation.
III. CONCLUSIONS
The need for a standard is usually expressed by an industry sector, which communicates this need to a national member body. In the case of clinical thermometers this happened in Germany, but also on the basis of positive initiatives from the USA and Japan. The German standardization body DIN proposed the new work item to ISO as a whole. Once
the need for an International Standard had been recognized and formally agreed, the first phase involved definition of the technical scope of the future standard. This phase was carried out in a working group comprising technical experts from countries interested in the subject matter. Once agreement had been reached on which technical aspects were to be covered in the standard, a second phase was entered, during which countries negotiated the detailed specifications within the standard. This was the consensus-building phase, which resulted in a committee draft. A few more meetings will be held before the ISO standard for clinical thermometers is published, which is expected in 2008. The interested community expects that this standard will cover all clinical thermometers with state-of-the-art requirements. Due to some formal requirements this standard will not include non-electrical clinical thermometers, for which a new standard will probably be developed if there is enough interest. It is also not clear at the moment how the OIML could fully adopt the ISO standard as an OIML standard and make it available free of charge to OIML members.
ACKNOWLEDGMENT
I would like to express sincere thanks to the convenor and members of the working group ISO/TC 121/SC 3/JWG 8 "Clinical thermometers" for their valuable contributions in the course of the development of the standard.
REFERENCES
1. ASTM E825, 1998, Standard specification for phase change-type disposable fever thermometer for intermittent determination of patient temperature. Annual book of ASTM Standards (West Conshohocken, PA, USA: ASTM).
2. ASTM E1299, 1996, Standard specification for reusable phase change-type fever thermometer for intermittent determination of patient temperature. Annual book of ASTM Standards (West Conshohocken, PA, USA: ASTM).
3. ASTM E1112, 2000, Standard specification for electronic thermometer for intermittent determination of patient temperature. Annual book of ASTM Standards (West Conshohocken, PA, USA: ASTM).
4. ASTM E1965, 1998, Standard specification for infrared thermometers for intermittent determination of patient temperature. Annual book of ASTM Standards (West Conshohocken, PA, USA: ASTM).
5. EN 12470-1, 2000, Clinical thermometers—Part 1: Metallic liquid-in-glass thermometers with maximum device. CEN.
6. EN 12470-2, 2001, Clinical thermometers—Part 2: Phase change type (dot matrix) thermometers. CEN.
7. EN 12470-3, 2000, Clinical thermometers—Part 3: Performance of compact electrical thermometers (non-predictive and predictive) with maximum device. CEN.
8. EN 12470-4, 2001, Clinical thermometers—Part 4: Performance of electrical thermometers for continuous measurement. CEN.
9. EN 12470-5, 2003, Clinical thermometers—Part 5: Performance of infra-red ear thermometers (with maximum device). CEN.
10. BS 691, 1987, Subnormal range, ovulation and dual scale clinical maximum thermometers (mercury-in-glass, solid stem). BSi.
11. BS 6985, 1989, Dual-scale and ovulation clinical maximum thermometers (mercury-in-glass, enclosed scale). BSi.
12. Pusnik I., van der Ham E., Drnovsek J., IR ear thermometers—what do they measure and how they comply with the EU technical regulation, IOP Physiol. Meas., Vol. 25, pp. 699-708, 2004.
13. Pusnik I., Simpson R., Drnovsek J., Bilateral comparison of blackbody cavities for calibration of infra-red ear thermometers between NPL and FE/LMK, IOP Physiol. Meas. 25 (2004), pp. 1239-1247.
14. Pusnik I., Drnovsek J., Infrared ear thermometers—parameters influencing their reading and accuracy, IOP Physiol. Meas. 26 (2005), pp. 1075-1084.
15. www.iso.org

Address of the corresponding author:
Author: Igor Pusnik
Institute: University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Metrology and Quality
Street: Trzaska 25
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
Evaluation of muscle dynamic response measured before and after treatment of spastic muscle with a BTX-A − A case study
D. Krizaj1, K. Grabljevec2, B. Simunic3,4
1 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
2 Institute for Rehabilitation, Ljubljana, Slovenia
3 University of Primorska, Koper, Slovenia
4 TMG-BMC Ltd., Ljubljana, Slovenia
Abstract— Contraction properties of spastic muscles have been evaluated using the tensiomyographic (TMG) method before and after treatment of the spastic muscles with BTX-A. Significant differences are observed in the TMG responses of a spastic muscle of cerebral origin before and after treatment with BTX-A. Typically, the TMG parameter Dm increases, while the time-related TMG parameters Tr and Ts decrease after treatment of a muscle with BTX-A. The ratio Tr/Dm has been found to be the most sensitive to changes in the muscle's contractile properties. It is expected that the method can be used to guide muscle selection, and thus enable more effective use of an expensive medicine, and to evaluate the efficiency of the treatment.
Keywords— tensiomyography, TMG, spasticity, botulinum toxin, Dysport.
I. INTRODUCTION
Spasticity is a disorder of the sensorimotor system characterized by a velocity-dependent increase in muscle tone with exaggerated tendon jerks. This hyperexcitability is hypothesized to occur through a variety of mechanisms, not all of which have yet been demonstrated in humans. In the cerebral model of spasticity, the most likely background is enhanced excitability of monosynaptic pathways with overactivity in the antigravity muscles [1]. Chronic spasticity can further lead to changes in the rheologic properties of the involved and neighboring muscles. Stiffness, contracture, atrophy and fibrosis may interact with pathologic regulatory mechanisms to prevent normal control of limb position and movement. Focal spasticity can seriously interfere with function, presumably with gait, ADLs, comfort and caregiving in patients after central nervous system injury [1]. BTX-A affects the neuromuscular junction through binding, internalization, and inhibition of acetylcholine release. It must enter the nerve endings to exert its chemodenervating effect. Once inside the cholinergic nerve terminal, BTX-A inhibits the docking and fusion of acetylcholine vesicles at the pre-synaptic membrane. The effect can clinically be seen in four to seven days, and the duration of effect is usually 3 to 4 months, but can be longer or shorter. Gradually, muscle function returns through the regeneration or sprouting of blocked nerves, forming new neuromuscular junctions. The effect of BTX-A is dose-dependent and reversible secondary to the regeneration process. Local injections of BTX-A are particularly valuable in relieving focal spasticity around a joint or a series of joints. In most clinical situations, an acceptable dose may be administered into muscles around one or two joints per treatment session. Therefore, when planning treatment, muscle selection is crucial. Focal and segmental spasticity in the pre- and post-treatment phase can be evaluated with EMG analysis (demonstration of co-contraction of antagonists during attempted agonist contraction), but EMG analysis is invasive and painful and depends on rather expensive equipment as well as specially trained personnel. Another approach, described in this paper, verifies a method based on evaluation of the transversal response of skeletal muscles subjected to an electrical stimulation pulse. The technique can be regarded as an MMG (mechanomyographic) technique, since it analyzes a mechanical muscle response. However, due to some specific operation principles (described in more detail in the Methods section), it has been named tensiomyography, with the abbreviation TMG [3]. The parameters of the average TMG response measured on a spastic muscle were compared with responses obtained on the non-affected contralateral side (NACLS). In a previous pilot study we noticed changes in the time parameters (Ts, Tr) on the fourth day after BTX-A injection, well before clinical signs of reduced muscle tone [2]. In this work we strived to evaluate the TMG parameters as a measure of the efficacy of the treatment of spastic muscles with botulinum toxin.
II. METHODS
Two female and one male subject with spastic hemiplegia after traumatic brain injury (average age 40.3 ± 13 yrs) were included in this evaluation study.
Muscles were chosen for BTX-A injection after clinical examination and with regard to the functional goals of the patient. Only muscles lying
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 393–396, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
superficially were then evaluated with the TMG method, and measurements were performed on spastic as well as NACLS muscles. In the case of:
• subject A, we measured biceps brachii and brachioradialis muscles;
• subject B, we measured gastrocnemius medialis and gastrocnemius lateralis;
• subject C, we measured biceps brachii and brachioradialis muscles.
The measurements were performed before BTX-A (Dysport®) injection and again after one week. BTX-A was injected into the chosen muscles following the standard protocol. Passive range of motion in adjacent joints and muscle tone were evaluated, and the position of the limb was described so that the second measurement could be performed in the same position. The stimulation pulses were 1 ms wide, with supramaximal amplitude. The measuring conditions were isometric. Two supramaximal responses were detected and their average was further analyzed. A typical difference between the TMG responses of a spastic and a NACLS muscle, together with the definitions of the extracted TMG parameters, is presented in Figure 1. Four time-related parameters: delay time Td, contraction time Tc, sustain time Ts and relaxation time Tr, and one amplitude-related parameter: maximal displacement Dm, are extracted from a TMG response.
Fig. 1. TMG parameters and TMG responses of a non-spastic and a spastic muscle brachioradialis (measured on the same person). Delay time (Td) is the time between the electrical stimulus and 0.1 of the maximal amplitude of the response, the maximal displacement (Dm). Contraction time (Tc) is the time between 0.1 and 0.9 Dm. Sustain time (Ts) is the time between 0.5 Dm on both sides of the response. Relaxation time (Tr) is the time between 0.9 and 0.5 Dm.
III. RESULTS
Figure 2 presents the TMG response before BTX-A treatment and after one week for subject A (muscle brachioradialis). A clear increase of Dm and a decrease of Ts and Tr can be observed in the TMG response of the spastic muscle one week after the injection of BTX-A. A comparison has been made with the TMG results on the NACLS muscle on the other side of the body, presented in Figure 2b. It is evident that for this subject the treatment with BTX-A significantly increased the maximal amplitude of the response as well as shortened the sustain and relaxation times. Subject B had a spastic gastrocnemius lateralis (GL) muscle. The TMG response of the muscle treated with BTX-A is presented in Figure 3a and can be compared with the TMG response of the non-spastic GL muscle in Figure 3b.
Fig. 2. Comparison of TMG responses before and after treatment of the spastic muscle brachioradialis for a) the spastic muscle and b) the NACLS muscle.
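The parameter definitions given in the Fig. 1 caption can be turned into a small extraction routine. The sketch below uses simple sample-level threshold crossings (no interpolation) on a synthetic trace; the function and variable names are illustrative and not from the paper's measurement software:

```python
def tmg_parameters(t_ms, disp_mm):
    """Extract Dm, Td, Tc, Ts and Tr from a TMG displacement trace.
    t_ms: sample times in ms (stimulus at t = 0); disp_mm: displacements in mm.
    Thresholds follow the Fig. 1 definitions: Td = stimulus to 0.1 Dm,
    Tc = 0.1 Dm to 0.9 Dm (rising), Ts = between the two 0.5 Dm crossings,
    Tr = 0.9 Dm to 0.5 Dm on the falling side."""
    dm = max(disp_mm)
    i_peak = disp_mm.index(dm)

    def rising(level):
        # first sample at or above `level` before (or at) the peak
        return next(t for t, d in zip(t_ms[:i_peak + 1], disp_mm[:i_peak + 1])
                    if d >= level)

    def falling(level):
        # first sample at or below `level` after (or at) the peak
        return next(t for t, d in zip(t_ms[i_peak:], disp_mm[i_peak:])
                    if d <= level)

    t10, t50, t90 = rising(0.1 * dm), rising(0.5 * dm), rising(0.9 * dm)
    t90f, t50f = falling(0.9 * dm), falling(0.5 * dm)
    return {"Dm": dm, "Td": t10, "Tc": t90 - t10,
            "Ts": t50f - t50, "Tr": t50f - t90f}

# Synthetic triangular twitch, 1 ms sampling: rise 0→10 mm over 5 ms, then fall back
t = list(range(11))
d = [0, 2, 4, 6, 8, 10, 8, 6, 4, 2, 0]
print(tmg_parameters(t, d))  # {'Dm': 10, 'Td': 1, 'Tc': 4, 'Ts': 5, 'Tr': 2}
```

A real implementation would interpolate between samples for sub-millisecond resolution, but the threshold logic is the same.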
Fig. 3. Comparison of TMG responses before and after treatment of the spastic muscle gastrocnemius lateralis for a) the spastic muscle and b) the NACLS muscle.
Fig. 4. Comparison of TMG responses before and after treatment of the spastic muscle biceps brachii for a) the spastic muscle and b) the NACLS muscle.
IV. DISCUSSION
It is not yet clear how the rheologic properties of spastic muscle (atrophy, fibrosis, fat tissue), which interact with the pathologic regulatory mechanism, influence the results obtained with the method. Nevertheless, we found visible differences between the TMG responses of spastic muscles before and after the injection, compared to less significant changes in the TMG responses of NACLS muscles (healthy muscles measured on the contralateral side). Due to the small number of subjects, we cannot confirm a statistically significant difference in the time parameters of spastic vs. NACLS muscle contraction. The calculated ratio of time parameters between spastic and NACLS muscles is shown to be higher for Ts (sustain period) and Tr (relaxation period) during contraction. In spastic muscles, Ts and Tr seem to be prolonged, which can be explained by enhanced reflex activity of the muscle, by a spread of phasic reflexes during contraction in response to single-twitch stimulation, and by rheologic changes.
In order to quantify the differences in the responses, several different procedures can be implemented; e.g., we can observe differences in certain parameters, relative differences, etc. We decided to compare the ratio of the time parameters to the displacement parameter. These ratios could serve as a measure of the shape of the TMG response. As it is expected that the relaxation time decreases and the maximal displacement increases after the injection, the ratio Tr/Dm could serve as a measure of the efficacy of the treatment. The data in Figure 5 were obtained using the following formula, where the abbreviation rTD is used for the ratio Tr/Dm:

rel Tr/Dm = (rTD(after) − rTD(before)) / rTD(after)
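The relative ratio above is straightforward to compute. A minimal sketch, with a hypothetical function name and the paper's units (Tr in ms, Dm in mm):

```python
def rel_tr_dm(tr_before_ms, dm_before_mm, tr_after_ms, dm_after_mm):
    """Relative change of the ratio rTD = Tr/Dm after BTX-A treatment:
    rel Tr/Dm = (rTD(after) - rTD(before)) / rTD(after).
    Negative values indicate that Tr shortened and/or Dm grew, i.e. the
    response expected of a successfully treated spastic muscle."""
    rtd_before = tr_before_ms / dm_before_mm
    rtd_after = tr_after_ms / dm_after_mm
    return (rtd_after - rtd_before) / rtd_after

# Tr drops from 200 ms to 100 ms while Dm rises from 2 mm to 4 mm
print(rel_tr_dm(200, 2, 100, 4))  # -3.0
```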
Figure 5 shows the average values of the relative ratios for all subjects and all muscles (spastic and NACLS), measured before and one week after the injection. The ratio is negative for spastic muscles due to the decrease of the relaxation time after the treatment. The relative differences are large (from 30 % for Tc/Dm to almost 100 % for Tr/Dm). It is interesting to note that the NACLS muscles have
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
396
D. Krizaj, K. Grabljevec, B. Simunic,
Fig. 5. Average relative ratios Ts/Dm, Tr/Dm and Tc/Dm determined from TMG parameters, compared for spastic and NACLS muscles.
all positive average relative values, although it was expected that the values would on average be close to zero, as no (clinical) treatment of these muscles was performed.
V. CONCLUSIONS
Measurements of the radial muscle displacement evoked by an electrical stimulus, described as the tensiomyographic method, have been used to evaluate the differences in the muscle response before and after BTX-A injection into spastic muscles of subjects with spastic hemiplegia after traumatic brain injury. Objective evaluation of the efficiency of the treatment is of paramount importance for effective use as well as for the economy of the treatment. Three subjects were treated and altogether 6 different muscles were measured on the spastic as well as the healthy contralateral side. In the case of successful treatment, significant changes in the TMG response could be observed. In particular, the relaxation time reduces and the contraction time increases together with an increase of the maximal displacement. In order to evaluate these changes, a relative difference between the relaxation time and the maximal displacement is suggested as a measure of the success of the treatment. In conclusion, tensiomyography has been found to be a promising method for quantitative evaluation of the efficiency of the treatment of spastic muscles with botulinum toxin. Further studies should aim to increase the number of evaluated subjects (muscles) and statistically confirm the findings of this investigation.
ACKNOWLEDGMENT
The research was supported by the Slovenian Agency for Research – ARRS.
REFERENCES
1. Mayer NH. Clinicophysiologic concepts of spasticity and motor dysfunction in adults with an upper motoneuron lesion. In: Spasticity: Etiology, Evaluation, Management and the Role of Botulinum Toxin Type A, MF Brin, ed. Muscle Nerve 1997; 20 (suppl 6): S1-S13.
2. K. Grabljevec, B. Šimunič, K. Kerševan, D. Križaj, M. Gregorič. Detecting contractile parameters of spastic muscles with tensiomyography (TMG). In: Spasticity evidence based measurement and treatment, Centre for Life, Newcastle upon Tyne, 9th to 11th December, 2004: abstracts. pp. 122-123.
3. V. Valenčič, N. Knez. Measuring of skeletal muscle's dynamic properties. Artif. Organs, 1997, vol. 33, no. 3, pp. 240-242.

Address of the corresponding author:
Author: Dejan Krizaj
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Evaluation of non-invasive blood pressure simulators
G. Gersak and J. Drnovsek
University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— Non-invasive blood pressure (NIBP) simulators are electro-mechanical devices used for testing and evaluating oscillometric non-invasive blood pressure monitors. Simulators are used mainly in the clinical environment to assist with routine and after-repair testing of NIBP monitors. In this paper we suggest basic procedures for evaluating a NIBP simulator, i.e. for assessing its suitability and quality. The proposed evaluation procedure consists of a static calibration and a dynamic evaluation. In the static calibration the simulator is calibrated as a common indicating barometer. In the dynamic evaluation the output waveforms are investigated (repeatability of the output at different static pressures and heart rates, repeatability of the output at a constant blood pressure magnitude). The proposed evaluation procedure represents a minimal set of tests to ensure the simulator can be used for testing NIBP monitors. A commercial simulator, the SmartArm (by Clinical Dynamics, USA), was evaluated according to this procedure and the results are presented. Keywords— blood pressure measurement, NIBP, oscillometry, simulator
I. INTRODUCTION
The oscillometric method for measuring blood pressure, dating from the late 1800s when it was first described by Marey [1] and thus predating today's classical Korotkoff auscultation method by several decades, is widely used nowadays. It has been in clinical use since the 1980s and its usage is increasing with the decreasing cost and increasing computational power of electronics. Oscillometric NIBP monitors are increasingly popular because of their simplicity of measurement and the consequent substantial decrease of observer error. Another reason in favor of oscillometry is the problems of its main non-invasive alternative, measuring blood pressure with classic mercury sphygmomanometers. The latter raise environmental and neurotoxic concerns, require user training, and make sequential measurements difficult, which is important for the repeatability of the measurement.
The oscillometric method is based on observing pressure pulses in the bladder of a non-invasive cuff wrapped around the subject's limb over an artery. Arterial pulse waves are transmitted to the inflated bladder, and the pressure in the bladder is measured by a pressure transducer. The amplitude and shape of these pulses vary as the static pressure in the bladder is reduced from above systolic (SYS) to below diastolic (DIA) blood pressure. Oscillometric NIBP monitors measure the shape of the pressure pulse envelope as a function of the bladder pressure. By applying a (proprietary) algorithm they estimate the values of SYS and DIA, and some also the mean arterial pressure (MAP).
Manufacturers of oscillometric NIBP monitors usually use empirical algorithms and are not required to disclose them, or the patient population the devices were tested on. Since algorithms differ (strongly) from manufacturer to manufacturer and from model to model, a clinical validation has to be performed on each monitor in order to confirm its accuracy. Validation includes comparison of NIBP monitor readings with reference instrument readings; reference measurements can be either invasive or non-invasive auscultatory measurements made by trained observers. The main organizations issuing validation procedures for clinical tests of NIBP monitors are the European standards committee CEN, the Association for the Advancement of Medical Instrumentation (AAMI), USA, the British Hypertension Society (BHS), the European Society of Hypertension (ESH), the Deutsche Hochdruckliga and the Deutsches Institut für Normung (DIN), Germany, etc. [2, 3, 4, 5]. All clinical validation tests require a large and complex study group of human volunteers (of different age, sex, specific blood pressure level, etc.), a suitable clinical environment and highly trained medical staff. As a consequence, many devices on the EU market have not been clinically validated, or have been validated only for a restricted group of subjects.
The logistical difficulties, time-consuming process, and high costs involved in clinical validations were the main reasons that commercial NIBP test simulators were introduced in the early 1990s. Two types of simulators were developed: limb simulators and waveform simulators. The first type models an artificial limb; the latter generates a waveform of pressure pulses according to the static cuff pressure.
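The envelope-based estimation just described can be illustrated with a small sketch of a maximum-amplitude oscillometric algorithm. Everything here is illustrative: real monitors use proprietary, empirically tuned algorithms, and the characteristic ratios `sys_ratio` and `dia_ratio` below are assumed values, not any manufacturer's.

```python
import numpy as np

def estimate_bp(p, a, sys_ratio=0.55, dia_ratio=0.85):
    """Maximum-amplitude estimate: MAP at the cuff pressure of maximal
    oscillation; SYS/DIA where the envelope crosses fixed fractions of
    the maximum on the high- and low-pressure sides.
    p: deflating cuff pressures (mmHg, descending); a: pulse envelope."""
    i = int(np.argmax(a))
    mean_ap = float(p[i])                 # MAP ~ pressure of max oscillation
    # high-pressure side: envelope grows from p[0] down to p[i]
    sys_p = float(np.interp(sys_ratio * a[i], a[:i + 1], p[:i + 1]))
    # low-pressure side: reversed so the envelope is increasing for interp
    dia_p = float(np.interp(dia_ratio * a[i], a[i:][::-1], p[i:][::-1]))
    return sys_p, dia_p, mean_ap

# synthetic envelope peaking at 100 mmHg, in place of a measured one
p = np.arange(180.0, 39.0, -1.0)
a = np.exp(-((p - 100.0) / 25.0) ** 2)
sys_p, dia_p, map_p = estimate_bp(p, a)
```

The synthetic Gaussian envelope stands in for a measured pulse-amplitude envelope; with real data the envelope would first be extracted from the cuff pressure signal.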
Limb simulators consist of an artificial limb, usually an arm, incorporating an artificial artery with pulsating fluid; the cuff of the validated NIBP monitor is wrapped around the artificial arm. Waveform simulators generate an oscillometric waveform which is fed into the pneumatic hose of the validated monitor. This type of simulator is predominantly used today. Currently the most successful models on the market are the BP Pump 2 and Cufflink by Fluke Biomedical, the SmartArm by Clinical Dynamics, and the QA-1290 by Metron.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 342–345, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
However, the simulators themselves generate pressure pulses of a certain shape and thus cannot be used as substitutes for clinical validations to measure the accuracy of NIBP monitors. On the other hand, being basically electro-mechanical pressure generators, their output is very stable and repeatable. Compared to the extreme physiological variability of natural human blood pressure, influenced by emotional state, pulse rate, breathing etc., the simulators can be very useful in assessing the repeatability of NIBP monitors. Many studies have been reported in which NIBP monitors were calibrated using a simulator [7, 8, 9]. Calibration is defined as a set of operations which, by employing a reference value, enables the measuring error and the measuring accuracy to be determined [10]. Commercial simulators generate different pressure pulse envelopes [6], so a simulator's output cannot always serve as a reference for calibration. Most studies assumed that simulators generate the "correct" shape of pressure pulses. But the fact remains that an NIBP monitor can only be calibrated (in terms of determining its accuracy) using a certain, suitable and prescribed simulator; otherwise the process is called evaluation of a simulator. Simulators are used more and more in clinical environments for quick tests of NIBP devices, but their performance and function are seldom critically questioned, and only a few papers have reported on differences among simulators [6, 11, 12]. In this paper a set of calibration procedures and evaluation methods is proposed which can be used to evaluate a waveform simulator, to compare different commercial waveform simulators, and to assess their suitability.
II. EVALUATION OF WAVEFORM NIBP SIMULATOR
The main function of a NIBP simulator is to generate controllable and repeatable pressure pulses that resemble oscillometric pulses and conform to a predefined oscillometric pulse envelope.
Waveform NIBP simulators are electro-mechanical devices capable of generating pressure pulses of different amplitudes and different envelope shapes. They generate pressure pulses by means of a voice coil, a stepper-motor linear actuator, a DC motor or similar. One of their basic elements is the pressure transducer, which measures the pressure. Evaluation of NIBP simulators can be divided into two parts. The first part is the static calibration of the device, which includes static calibration of the transducer, repeatability and reproducibility of the sensor, environmental dependence of the device, etc. The second part is dynamic testing of the simulator, which tests the stability of the simulator's output, the dependence of the envelope on the generated pressure magnitude or heart rate, etc.
A. Static calibration of simulator
The simulator is calibrated as a precision indicating barometer [13]. In this way the whole measuring system, comprising several parts – sensor, conditioning and auxiliary circuits, and display – can be calibrated as a whole, without the need to know the properties of the individual parts. Commonly, simulators have a leak-testing mode, intended for testing the air-tightness of cuffs, hoses and other tubing of the NIBP monitor; static calibration is usually performed in this mode. Static calibration of a simulator is usually limited to the range from 0 mmHg to 400 mmHg gauge pressure, which is wide enough for the majority of NIBP monitors. Comprehensive calibration procedures should be employed, i.e. the sensor should be preloaded and then tested stepwise at different pressures. Since pressure sensors are prone to hysteresis error, calibration usually consists of series of increasing and decreasing pressures. According to the Guide to the Expression of Uncertainty in Measurement [14], every measured value should be accompanied by an explicit statement of the measurement uncertainty and of the conditions under which the calibration was performed. It is common practice, and most advisable, for the reference pressure gauge to be traceable to higher pressure standards; this is the right way to achieve reliable measurements. An important ingredient of the reference gauge's traceability chain is the measurement uncertainty of the reference instrument. The combined uncertainty of calibration is composed of several contributions, which are added geometrically [14].
These are:
- repeatability uncertainty, due to the standard deviation of a series of successive measurements,
- uncertainty due to the effective resolution of the simulator's display,
- uncertainty due to the repeatability of readings at the same pressure points in different measurement series,
- the standard uncertainty of the reference instrument as stated in its calibration certificate,
- uncertainty due to the repeatability of the reference reading,
- the correction of the reference pressure gauge,
- uncertainty due to the short-term and long-term drift of the reference gauge,
- uncertainty due to the measuring set-up, e.g. the hydrostatic head correction because of the difference in height of the simulator and reference sensors.
Commonly, the reference gauge uncertainties are quite negligible and the largest contribution in the uncertainty budget is the uncertainty due to the repeatability of readings. Although the simulator is a very stable electro-mechanical device, its parts are still subject to environmental conditions and time dependence. A commercial NIBP simulator (SmartArm, Clinical Dynamics, USA) with a specified accuracy of 0.5 mmHg was
tested. The results of the static calibration showed an expanded combined uncertainty of less than 0.20 mmHg; the uncertainty due to the repeatability of readings was approximately 0.14 mmHg in this case. The measuring error over the entire calibration range was smaller than the specified 0.5 mmHg (Fig. 1).
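The geometric (root-sum-square) addition of uncertainty contributions described above can be sketched as follows. The numerical values are illustrative placeholders, not the actual budget of the calibration reported here:

```python
import math

def combined_standard_uncertainty(contributions):
    """Root-sum-square (geometric) combination of standard
    uncertainty contributions, per the GUM [14]."""
    return math.sqrt(sum(u * u for u in contributions))

# Illustrative standard uncertainties in mmHg (assumed values):
u_contribs = {
    "repeatability of readings": 0.070,
    "display resolution":        0.029,  # ~0.1 mmHg resolution / sqrt(12)
    "reference instrument":      0.020,
    "reference drift":           0.010,
    "hydrostatic head":          0.005,
}
u_c = combined_standard_uncertainty(u_contribs.values())
U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2
```

Note how the largest term (repeatability) dominates the root-sum-square, mirroring the observation above that the other contributions are usually negligible.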
Fig. 1 Measuring error of a simulator as a function of the measured (cuff) pressure. Six curves represent three series of measurements for both increasing and decreasing pressure. The solid line is the mean trend line. Error bars represent expanded uncertainties of measurement. The simulator under test was a SmartArm by Clinical Dynamics, USA.
The environmental dependence of the simulator can be evaluated by exposing it to different temperature and relative humidity conditions. Changes in atmospheric pressure are monitored during all measurements, but usually do not contribute significantly to the simulator's performance, since it measures gauge pressure and not absolute pressure. Temperature and relative humidity dependence is evaluated by means of a climatic chamber (Fig. 2). NIBP monitors on the European market should conform to the requirements of the European standard EN 1060-2:1995, which states the limits of the permissible error of the cuff pressure indication. The NIBP monitor's error of measurement should not exceed ±3 mmHg at any single condition within the ambient range from 15 °C to 25 °C and 15 % to 85 % (non-condensing) relative humidity [2]. Therefore it is recommended to test simulators at least over the same range.
B. Dynamic evaluation of simulator
Fig. 2 Environmental testing of the NIBP simulator in a climatic chamber. Temperature (T) and relative humidity (rH) were measured at 9 points of the chamber's inner space. Barometric pressure (patm) was monitored. Pressure waveform acquisition was performed by means of a data acquisition system (PC).
In static conditions, the performance of commercial simulators is rarely insufficient for blood pressure measurements; only rarely are simulators unsuitable for testing NIBP monitors. In dynamic conditions, on the other hand, there are differences among simulators. Dynamic evaluation is a series of tests and measurements to evaluate the dynamic, time-dependent functionality of the simulator. Since the main purpose of a NIBP simulator is the generation of dynamic inputs for the tested NIBP monitors, the dynamic test of a simulator is usually more important than the static one. In general, dynamic evaluation is performed by acquiring the pressure output of the simulator. A sufficiently fast pressure transducer and an acquisition system with a sufficiently high sampling frequency should be used. The frequency of the pressure pulses is that of the human heart beat, i.e. on the order of 1 Hz, so finding a suitable transducer is not a difficult task. In our case, the pressure waveform acquisition system was composed of a pressure transducer (Fujikura, XFPM 050KPG-P1), a GPIB-interfaced digital voltmeter (Agilent, 3458A) and a software environment on a personal computer (National Instruments, LabVIEW). The initial test is a test of the simulator's ability to generate a repeatable signal at a given static pressure. The simulator was set to a hypertensive patient with SYS/DIA/MAP of 150/170/117 mmHg. A constant pressure was maintained at the MAP level (117 mmHg) in order to generate pressure pulses of maximal amplitude. The static pressure was measured by a reference manometer (General Electric, Druck DPI 515), so the repeatability of the output could be evaluated. The waveform was acquired by the pressure transducer, sampled with the digital voltmeter, and conditioned and stored in LabVIEW. The mean peak value of the oscillometric pressure pulses was 1.9797 mmHg with a standard deviation of 0.0023 mmHg. The next test we propose to include in the dynamic evaluation determines the repeatability of the pulse pressure envelope shape.
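Statistics such as the mean peak value and standard deviation quoted above can be computed from an acquired waveform along these lines. The synthetic waveform and the simple peak detector below are illustrative sketches, not the actual LabVIEW processing used:

```python
import numpy as np

def peak_amplitudes(x, fs, min_period_s=0.5):
    """Amplitudes of local maxima separated by at least min_period_s.
    A minimal detector; a real set-up would low-pass filter first."""
    d = int(min_period_s * fs)
    # candidate local maxima (strictly above left neighbour)
    idx = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])) + 1
    peaks = []
    for i in idx:
        if not peaks or i - peaks[-1] >= d:
            peaks.append(i)
        elif x[i] > x[peaks[-1]]:
            peaks[-1] = i  # keep the larger of two close candidates
    return x[np.array(peaks)]

# synthetic 1 Hz oscillometric-like pulses on a 117 mmHg static level
fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
wave = 117.0 + 1.98 * np.maximum(np.sin(2 * np.pi * 1.0 * t), 0.0) ** 3
amps = peak_amplitudes(wave - 117.0, fs)
mean_peak, std_peak = amps.mean(), amps.std()
```

With a real acquisition, `wave` would be the sampled transducer output and the same mean/standard-deviation summary would quantify the repeatability of the pulse amplitude.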
Commercial simulators generate approximately the same shape of envelope for all target blood pressure levels. Usually, simulators offer pre-programmed target values (e.g. hypertension, hypotension). In most cases the envelope shape is not related to the NIBP monitor model-type to be tested [6]. Moreover, the envelope shapes of commercial simulators are usually not adjustable
for a certain NIBP monitor. There are some exceptions on the market (AccuPulse by Clinical Dynamics, USA). A test of the repeatability of the pulse pressure envelope shape was performed. The cuff pressure was kept constant by a pressure controller (Druck DPI 515 by General Electric) while the simulator was operating. The simulator detected the applied static pressure and superimposed suitable pressure pulses according to the detected pressure. The combined pressure was measured by our waveform acquisition system. The pulse pressure envelope was obtained by plotting the peak amplitudes of the oscillometric pressure pulses against the cuff pressure. The stability of the simulator's output can be evaluated by measuring a series of subsequent envelopes and calculating their correlation coefficients against the average envelope. In our case the minimal correlation coefficient was 0.938.
III. CONCLUSIONS
Simulators are devices capable of simulating the oscillometric waveform. A simulator should produce a repeatable waveform over a wide range of pressures and heart pulse rates, and its waveform should at least minimally resemble a true patient waveform. We can conclude that simulators can be used to evaluate the repeatability of NIBP monitors in determining blood pressure from the same or nearly the same set of simulated waveforms. They can be employed to assess the variation of the repeatability of the NIBP monitor when measuring different magnitudes of blood pressure. A monitor's response to different sets of waveforms can also be determined, and simulators can be used to compare the responses of different monitors to the same set of waveforms [6]. The simulator is not capable of taking into account the transfer function between the arterial pressure in the limb and the air pressure in the cuff, the effects of cuff bladder size, the placement of the cuff, or the material of the cuff [15].
Until now these could only be assessed by performing a clinical validation on human subjects, because currently available simulators are not able to replay realistic human signals. However, a new generation of simulators with real physiological signals is emerging [16]. In this paper we proposed a basic evaluation procedure for waveform NIBP simulators, composed of a static calibration and a dynamic evaluation, and performed such an evaluation on a commercial simulator. Additionally, we believe evaluations should be repeated regularly at a suitable time interval; we propose a period of 6 months, so that time drifts of the device are also assessed.
REFERENCES
1. Baker P D, Westenskow D R, Kück K (1997) Theoretical analysis of non-invasive oscillometric maximum amplitude algorithm for estimating mean blood pressure. Med Biol Eng Comput 35:271–278
2. CEN (1995) EN 1060 non-invasive sphygmomanometers. EN 1060-3 part 3: supplementary requirements for electromechanical blood pressure measuring systems; EN 1060-4 part 4: test procedures to determine the overall system accuracy of automated non-invasive sphygmomanometers
3. AAMI (1993) American national standard ANSI/AAMI SP10-1992: Electronic or automated sphygmomanometers. Association for the Advancement of Medical Instrumentation, Arlington, USA
4. O'Brien E, Petrie J, Littler R et al. (1993) The British Hypertension Society protocol for the evaluation of blood pressure measuring devices. J Hypertens 11(Suppl 2):S43–S63
5. O'Brien E, Waeber B, Parati G et al. (2001) Blood pressure measuring devices: recommendations of the European Society of Hypertension. BMJ 322:531–536
6. Sims A J, Reay C A, Bousfield D R et al. (2005) Oscillometric blood pressure devices and simulators: measurements of repeatability and differences between models. J Med Eng Technol 29(3):112–118
7. Amoore J N, Geake W B (1997) Evaluation of the Criticon 8100 and Spacelabs 90207 non-invasive blood pressure monitors using a test simulator. J Hum Hypertens 11:163–169
8. Davis P D, Dennis J L, Railton R (2004) Evaluation of the A&D UA-767 and Welch Allyn Spot Vital Signs noninvasive blood pressure monitors using a blood pressure simulator. J Hum Hypertens 1–7
9. Yarrows A S, Brook R D (2000) Measurement variation among 12 electronic home blood pressure monitors. AJH 13:276–282
10. ISO (1993) International vocabulary of basic and general terms in metrology. ISO, Geneva
11. Ng K G, Small C F (1994) Update on methods & simulators for evaluation of noninvasive blood pressure monitors. J Clin Eng 19(2):125–133
12. Amoore J N, Geake W B, Eng B (1997) An evaluation of three oscillometric non-invasive blood pressure simulators. J Clin Eng 22(2):93–99
13. EA (2002) Guidelines on the calibration of electromechanical manometers, EA 10/17, at http://www.european-accreditation.org
14. ISO (1995) Guide to the expression of uncertainty in measurement. ISO, Geneva
15. Ng K G (1996) A basis for the use of long cuff bladders in oscillometric blood pressure measurement. J Clin Eng 21(3):226–244
16. Amoore J N, Vacher E, Murray I C et al. (2006) Can a simulator that regenerates physiological waveforms evaluate oscillometric non-invasive blood pressure devices? Blood Press Monit 11:63–67
Author: Gregor Gersak
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Tržaška 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Experimental Measurements of Potentials Generated by the Electrodes of a Cochlear Implant in a Phantom
G. Tognola1, A. Pesatori2, M. Norgia2, F. Sibella1, S. Burdo3, C. Svelto2, M. Parazzini1, A. Paglialonga1,4, P. Ravazzani1
1 Istituto di Ingegneria Biomedica, Consiglio Nazionale delle Ricerche, Milano, Italy
2 Department of Electronics and Information and CNR-IEIIT, Polytechnic of Milan, Italy
3 Servizio di Audiovestibologia, Ospedale di Circolo di Varese, Italy
4 Dipartimento di Bioingegneria, Politecnico di Milano, Italy
Abstract— The design and development of an experimental setup to measure the in-vitro electric potential distribution delivered by a multichannel cochlear implant under varying conditions and parameters of usage is described, with particular attention to the spatial distribution of the electric potential generated by the implant. This study enables a better comprehension of the relationships between the stimulation parameters and the electric potential delivered by the cochlear implant, which is fundamental to developing a more efficient and spatially localized stimulation of the nervous tissue. Keywords— active implanted medical devices, cochlear implants, potential distribution, electrode configurations
Fig. 1 Detail of the NUCLEUS CI24M cochlear implant. Highlighted in the panel are the transmitter/receiver, the ball electrode, and the electrode array.
I. INTRODUCTION
A multichannel cochlear implant (CI) (Fig. 1) is a prosthesis implanted into the inner ear, i.e., the cochlea, used to electrically stimulate localized populations of the nerve fibers of the spiral ganglion (the inferior root of the acoustic nerve). The neural discharges resulting from such electrical stimuli induce auditory sensations at the level of the brain cortex and can restore partial hearing to severely-to-profoundly deaf people [1-2]. The electrodes implanted into the cochlea stimulate the nervous terminals of the ear by means of a series of bipolar current pulses (in the following, 'biphasic pulses'), whose amplitude, width, and frequency are controlled by a speech processor. It is well known that stimulation parameters, such as the configuration of the electrodes (i.e., monopolar (MP), one activated electrode and a ground electrode not on the array; common ground (CG), one activated electrode with the other electrodes grounded; bipolar (BP), two electrodes activated on the array), stimulus intensity, pulse stimulation rate, and pulse duration affect, through the generated electric field, the thresholds for electrical neural stimulation, the number of fibers excited or hyperpolarized, the initiation site of excitation, and the patient's pitch perception [3]. A deeper knowledge of the relationship between the stimulation parameters and the electric field in the physiological tissue is crucial to develop more efficient and spatially focused excitations of cochlear neural tissues [4], [5].
II. PROPOSED APPROACH
The experimental setup for the measurements (see the schematic in Fig. 2) was designed as follows [4]: a NUCLEUS CI24M cochlear implant was placed inside a small plastic tube with dimensions resembling the cochlear volume (~100 mm3), filled with Ringer's solution, a liquid with electrical characteristics (permittivity and conductivity) similar to those of perilymph, the liquid filling the cochlear canal. The electrode array of the implant was connected to a cochlear implant PC programming platform (WinDps by Cochlear, Lane Cove, NSW, Australia), used to deliver through the implant charge-balanced biphasic pulses with 100 μs of pulse width, 25 μs of phase gap (the time interval between the negative and positive phase of the biphasic pulse), and three different amplitudes of 123 c.u., 194 c.u., and 229 c.u., that correspond respectively
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 390–392, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 Diagram of the experimental setup for in-vitro measurements of the electric field potential.
Fig. 3 Photo of the experimental setup for PC-controlled potential measurement.
to 150 μA, 600 μA, and 900 μA of current. MP and BP electrode configurations were tested with three different numbers of inactive electrodes between the active electrodes (namely BP, BP+2, BP+3, where the number indicates the progressively increased distance between activated electrodes), and the CG electrode configuration was also tried. Every electrode configuration was tested with electrode #11 active. The electric potential distribution was measured by an insulated probe (a copper wire of 0.3 mm diameter) mounted on an automated 3D stepper-motor translation stage (8MT173-20-50 by Standa, Vilnius, Lithuania) with a step resolution of 1.25 μm, controlled by the computer (Fig. 3). The ground (indifferent) electrode was a wire dipped into the Ringer-solution-filled tank. The potential difference measured by the probe was detected with a 200 MHz digital oscilloscope (TDS210, Tektronix, Beaverton, Oregon, USA) and stored with 8 bits per sample on a PC. This bandwidth was needed to observe the rise and fall times of the biphasic stimuli, unlike previous work where only a single-frequency tone was recorded. The measurements were performed at 0.5 mm steps along the vertical (z) and horizontal (x) axes. The three-dimensional voltage distribution was deduced from these measurements using the cylindrical symmetry of the simplified system. To increase the measurement accuracy, averaging over 64 samples was implemented. At each position of the stepper motor the system recorded the biphasic potential waveform from the implant.
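The raster scan with 64-sample averaging can be sketched as follows. The `measure` function is a hypothetical stand-in (an ideal point source plus noise) for the real probe/oscilloscope reading, and the geometry is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x_mm, z_mm, n_avg=64):
    """Stand-in for one probe reading: potential of a point source at
    the origin (arbitrary units), averaged over n_avg noisy samples,
    mirroring the 64-sample averaging used in the set-up."""
    r = max(np.hypot(x_mm, z_mm), 0.25)  # clip very close to the source
    samples = 1.0 / r + rng.normal(0.0, 0.01, n_avg)
    return samples.mean()

# 0.5 mm steps along the x and z axes, as in the measurements
xs = np.arange(0.0, 5.0, 0.5)
zs = np.arange(-2.5, 2.5, 0.5)
grid = np.array([[measure(x, z) for x in xs] for z in zs])
# By cylindrical symmetry, this (x, z) half-plane scan determines the
# full 3D potential distribution of the simplified system.
```

In the real system each grid point would additionally store the whole biphasic waveform, not just its averaged amplitude.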
Fig. 4 Interface of the custom made software that controlled the three-axis translation stage
The measurement was completely automated using a custom-made LabVIEW™ Virtual Instrument (Fig. 4), which controlled the stepper motor and the digital oscilloscope. During the measurements the program recorded the position of the probe and the amplitude of the electric potential waveform.
III. RESULTS AND DISCUSSION
A comparison of the potential waveforms measured in the Ringer-solution tank with the array in three different configurations (MP, BP+2, CG) was performed. The potential was measured on the vertical line where the measured electric potential amplitude was maximum, about 0.5 mm from the active electrode. Nominally, the MP configuration shows higher amplitude values compared to BP and CG, which generate lower amplitudes. Further observations concern the shapes of the electric potential distributions of the three configurations. The BP configuration (Fig. 5) produces a more focused stimulation: the potential is concentrated near the active electrode and decreases rapidly away from the electrode array along the transverse direction. In contrast, both MP and CG (Fig. 6) show less focused distributions, but MP (0.3 V) presents a higher maximum amplitude than CG (0.05 V). Moreover, as expected, the maximum and minimum of the potential are found near the active and the indifferent electrodes, respectively. The electric potential rapidly decreases away from the electrode array along the transverse direction. The approach described in this paper can be useful for estimating the effects of various stimulation parameters on the potentials generated by a cochlear implant electrode array, which could inform the optimization of active implanted auditory prostheses. These results might give an important contribution to a deeper understanding of the mechanism of electrical auditory nerve stimulation and to the development of optimized auditory prostheses.
Fig. 6 Comparison of potential distributions for different stimulation amplitudes from 123 c.u. to 229 c.u. with the common ground electrode configuration in the Ringer-solution-filled tank. The potential was measured on the vertical line where the measured electric potential amplitude was maximum.
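The qualitative difference between the MP and BP distributions reported above can be illustrated with an idealized point-source model in a homogeneous conducting medium. This is a deliberate simplification of the phantom, with assumed constants:

```python
import numpy as np

# Monopolar (MP) stimulation ~ a single current source; bipolar (BP) ~
# a close source/sink pair. The dipole-like decay of BP illustrates why
# it gives a more spatially focused potential, as observed in the tank.
k = 1.0   # lumped constant I / (4*pi*sigma), arbitrary units (assumed)
d = 0.75  # source-sink spacing in mm (illustrative)

def v_mp(r):
    return k / r

def v_bp(r):
    return k / r - k / (r + d)

r_near, r_far = 0.5, 8.0                 # distances from the active electrode, mm
ratio_mp = v_mp(r_far) / v_mp(r_near)    # fraction of the near-field value remaining
ratio_bp = v_bp(r_far) / v_bp(r_near)
# ratio_bp < ratio_mp: the bipolar potential falls off faster with distance
```

The model ignores the electrode geometry, the tank boundaries and the tissue inhomogeneity, so it only reproduces the trend, not the measured amplitudes.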
REFERENCES
1. Clark G.M., Tong Y.C., Black R., Forster I.C., Patrick J.F., Dewhurst D.J., "A multiple electrode cochlear implant", J. Laryngology and Otology, vol. 91, pp. 935-945, 1977.
2. Kou B.S., Shipp D.B., Nedzelski J.M., "Subjective benefits reported by adult Nucleus 22-channel cochlear implant users", Journal of Otolaryngology (Canada), vol. 23, pp. 8-14, 1994.
3. Loeb G.E., Byers C.L., Rebscher S.J., Casey D.E., Fong M.M., Schindler R.A., Gray R.F., Merzenich M.M., "Design and fabrication of an experimental cochlear prosthesis", Med. Biol. Eng. Comput., vol. 21, pp. 241-254, 1983.
4. Ranck J.B., "Which elements are excited by electrical stimulation of mammalian central nervous system: a review", Brain Res., vol. 98, pp. 417-440, 1975.
5. Rattay F., "Analysis of models for extracellular fiber stimulation", IEEE Trans. Biomed. Eng., vol. 36, pp. 676-692, 1989.
6. Tognola G., Pesatori A., Norgia M., Parazzini M., Di Rienzo L., Ravazzani P., Burdo S., Grandori F., Svelto C., "Numerical modeling and experimental measurements of the electric potential generated by cochlear implants in physiological tissues", IEEE Trans. Instr. Meas., vol. 56(1), pp. 187-193, 2007.
7. Ifukube T., White L., "A speech processor with lateral inhibition for an eight channel cochlear implant and its evaluation", IEEE Trans. Biomed. Eng., vol. 34, pp. 876-882, 1987.
8. Jolly C.N., Spelman F.A., Clopton B.M., "Quadrupolar stimulation for cochlear prostheses: modelling and experimental data", IEEE Trans. Biomed. Eng., vol. 43, pp. 857-865, 1996.
9. Suesserman M.F., Spelman F.A., Rubinstein J.T., "In vitro measurement and characterization of current density profiles produced by non recessed, simple recessed and radially varying recessed stimulating electrodes", IEEE Trans. Biomed. Eng., vol. 38, pp. 401-408, 1991.
10. Kral A., Hartmann R., Mortazavi D., Klinke R., "Spatial resolution of cochlear implants: the electrical field and excitation of auditory afferents", Hearing Research, vol. 121, pp. 11-28, 1998.
Fig. 5 Comparison of potential distributions for different bipolar configurations in the Ringer-solution-filled tank for a stimulation amplitude of 229 c.u. The potential was measured on the vertical line where the measured electric potential amplitude was maximum.
Hardware Optimization of a Real-Time Telediagnosis System
Muhammad Kamrul Hasan, Md. Nazmus Sayadat, and Md. Atiqur Rahman Sarker
Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
Abstract— Integration of home and healthcare environments and the need for precise and detailed diagnostic data are the driving forces behind recent research in the area of wireless medical devices. The aim of this paper is to make a comparative analysis of various research works done in this sector over the last couple of years and to suggest approaches for the design of a portable, power-efficient, affordable and ergonomic system that assures a quick and reliable response. We focus on a comparative discussion of biosignal analysis methods, circuit design, microcontrollers and wireless communication protocols on the basis of performance, reliability, size and cost for the design of a telediagnosis system. Though this paper is limited to ECG analysis only, the proposed hardware can be extended at any time for monitoring other biosignals. Keywords— Communication Protocol, ECG analyzer, Microcontroller, Personal Area Network.
I. INTRODUCTION The current healthcare system is not structured to adequately service the rising needs of the aging population, and is dominated by infrequent and expensive patient visits to physicians' offices and emergency rooms for the prevention and treatment of illness. The lack of more frequent and regular health monitoring is particularly problematic for the elderly with multiple co-morbidities and rapidly changing health states. Recent technological advances in wireless networking, microelectronics integration and miniaturization, sensors, the Internet and mobile telephony allow us to fundamentally modernize and change the way healthcare services are deployed and delivered. This paper mainly covers the hardware section of the portable real-time telediagnosis system. Section 2 gives a detailed, in-depth account of the modules and circuitry in several subsections; different product modules are compared and particular modules and circuits are suggested for an optimum design. II. HARDWARE ARCHITECTURE While developing a design for a telediagnosis system keeping all the aforementioned criteria in mind, we observed that for
a reliable and efficient system the following challenges have to be overcome first. Firstly, sensors have to be placed on different parts of the body for detecting various biosignals, and a decision has to be made on whether to use dry electrodes or gel-type electrodes. To make the system ergonomic, the dry electrode is the better choice, but dry electrodes make the design of the ECG amplifier more difficult, since the impedance between electrode and skin increases and dry electrodes do not sit as flush with the skin as the gel type. Two major impediments are encountered in the design of the ECG sensor. One: the signals delivered by the sensors are quite small, while many other signals, such as noise, are larger and stronger. Two: the signal swings between positive and negative values at different times. This is a problem because the microcontroller's A/D converter can only read voltages from zero to five volts, and the power supply of the microcontroller only allows positive voltages to be delivered. Apart from this, the amplifier unit, the filters and the microcontroller have to conform to some basic conditions, which raises the question of choosing the right type of filters, amplifiers and microcontroller. Choosing the right communication protocol from the many developed so far, considering power consumption, portability, cost and reliability, is another major decision to be made while designing such a system. Finally, the system should be power efficient and able to work at a stretch for a long time. It should also be portable, cost-effective and extendable; that is to say, several other sensors for detecting various biosignals can be included in this telemonitoring module if desired. A. Sensor The electrocardiogram is the wave representation of the potential difference caused by heart activity. The heart rate sensor consists of an ECG circuit with electrodes placed on the body for measuring the heart activity.
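The second challenge above (a small bipolar signal that must end up inside the microcontroller's 0-5 V ADC window) can be sanity-checked with quick arithmetic. The numbers below are illustrative assumptions, not the paper's measured values:

```python
# Quick arithmetic for the second challenge (illustrative values, not the
# paper's measured ones): a ~1 mV peak-to-peak bipolar ECG must end up
# inside the microcontroller ADC's 0-5 V window.
ECG_PP = 1e-3                     # assumed ECG amplitude, V peak-to-peak
ADC_MIN, ADC_MAX = 0.0, 5.0       # usable ADC input range

GAIN = 4000                       # hypothetical total analog gain
v_pp = ECG_PP * GAIN              # 4 V swing after amplification
offset = (ADC_MAX + ADC_MIN) / 2  # shift the bipolar swing to mid-rail

v_low = offset - v_pp / 2
v_high = offset + v_pp / 2
assert ADC_MIN <= v_low and v_high <= ADC_MAX, "signal would clip the ADC"
print(f"signal occupies {v_low:.1f} V .. {v_high:.1f} V")
```

With a gain of a few thousand and a mid-rail offset, the amplified signal sits comfortably inside the converter's range; in the analog circuit this offset is what the summing-amplifier stage described later provides.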
The actual heart rate is measured by detecting the duration between two consecutive QRS complex peaks on the ECG waveform. First, we have to decide which kind of electrode should be
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 405–409, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
406
Muhammad Kamrul Hasan, Md. Nazmus Sayadat and Md. Atiqur Rahman Sarker
chosen for a portable telediagnosis system. Standard gel-type electrodes for the ECG sensor provide an excellent signal quality and relative immunity to noise and movement artifacts. However, this approach is not suitable for this type of device, particularly during prolonged monitoring, because gel-type electrodes can become dry, irritate the wearer and cause degradation of the ECG waveform. It is also more practical to have electrodes that are reusable [2].
B. QRS Position Detection Algorithm: Many researchers have been working on various QRS detection algorithms, and several others have been comparing those algorithms on the basis of reliability and noise sensitivity [3]. The QRS position can be detected using an adaptation of the algorithm developed by Engelse and Zeelenberg [4]. Extended experiments published in [3] show that this algorithm can effectively detect QRS events in the presence of various types of noise, such as power-line interference, electrode contact noise, motion artifacts, muscle contraction, baseline drift and ECG amplitude modulation with respiration. According to this algorithm, the enhanced ECG signal is passed through a digital high-pass filter, which subtracts its mean value. A low-pass filter follows, which highlights the signal's peak values. The filtered data is then scanned until a point with amplitude greater than a positive threshold is reached. The most important part of threshold detection is the threshold level. It needs to be set so that only a beat surpasses it. The following matters should be taken into consideration when setting the threshold level [5]: (i) different wearers have different QRS complex amplitudes; (ii) different situations will have different noise floors (for example, in an environment with fluorescent lights or in a computer lab, the 50 Hz hum will be much larger than outside this environment); and (iii) because the device is battery powered, the DC bias point will drop as the battery voltage decreases. The number of alternate threshold crossings is used to classify the initial crossing as either a baseline shift, a QRS candidate or noise. If no other threshold crossings appear within the 160 ms search area, the occurrence is classified as a baseline shift. Otherwise, three conditions concerning the signal amplitude are examined to detect the presence of a QRS within a time window of 160 ms. The diagnostic system recognizes significant changes in the QRS wave. The aim of the ECG processing is to implement a number of basic rules for heart rate and pattern-matching methods on the QRS wave. The length of the heart cycle, for its part, enables an assessment of its frequency and of its variations (heart rate variability [6]); irregularity beyond certain preset physiological thresholds indicates the existence of arrhythmias. If the heart rate strays outside preset levels, the sensor node sends the appropriate AT commands to the mobile phone for transmitting an SMS alert indicating a possible abnormality. A data-mode call can then be established from the database system or the physician's end for sending the patient's ECG in real time.
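The core of such a detector (mean removal, threshold crossing, a 160 ms search window, and heart rate from R-R intervals) can be sketched as follows. This is a deliberately simplified illustration, not the authors' exact implementation: the low-pass stage, the baseline-shift classification and the amplitude conditions are omitted, and the sampling rate is an assumed figure:

```python
# Illustrative sketch (simplified) of threshold-based QRS detection in the
# spirit of Engelse-Zeelenberg: subtract the mean (crude high-pass), take
# the strongest point inside a 160 ms window after each threshold crossing,
# and derive heart rate from the R-R intervals.
FS = 250                  # assumed sampling rate, Hz
WINDOW = int(0.160 * FS)  # 160 ms search window, in samples

def detect_qrs(signal, threshold):
    """Return sample indices classified as QRS candidates."""
    mean = sum(signal) / len(signal)
    x = [s - mean for s in signal]   # subtract the mean value
    peaks, i = [], 0
    while i < len(x):
        if x[i] > threshold:
            window = x[i:i + WINDOW]             # 160 ms search area
            peaks.append(i + window.index(max(window)))
            i += WINDOW                          # crude refractory period
        else:
            i += 1
    return peaks

def heart_rate_bpm(peaks):
    """Mean heart rate from consecutive R-R intervals, in beats/min."""
    rr = [(b - a) / FS for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))
```

For example, two synthetic R peaks 250 samples apart at 250 Hz yield one-second R-R intervals, i.e. 60 beats/min; a heart rate straying outside preset levels would then trigger the SMS alert described above.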
C. Amplification and Filtering Instrumentation Amplifier: The ECG signal is usually small (~1 mV peak-to-peak). Before the signal is digitized, it has to be amplified (gain >1000) using a low-noise amplifier and filtered to remove noise. Filters are used to remove the 50 Hz hum and to suppress the T-wave of the ECG signal (not required to detect the heart rate) to help keep a steady DC point. The use of an ordinary op-amp appeals in this situation; however, further consideration reveals it is not suitable: the previously mentioned noise would saturate the op-amp upon amplification, and the signal would be lost. The medical industry uses instrumentation amplifiers in situations like these. Instrumentation amplifiers have the property of passing common-mode signals (like 50 Hz noise from the mains) only as a small fraction of the differential signal. The common-mode rejection ratio (CMRR) is the amplification of the differential signal divided by the amplification of the common-mode input, and a high CMRR is required in this application. A standard single-supply instrumentation amplifier is used for the differential bioelectric amplifier in the ECG sensor. The differential amplifier that can be used is the Burr-Brown INA114 [7], a low-cost, general-purpose instrumentation amplifier offering excellent accuracy. Its versatile three-op-amp design and small size make it ideal for a wide range of applications, and it operates from a low supply voltage. This particular amplifier has a CMRR of 115 dB, which is considered high and meets the electrocardiography specification of the Association for the Advancement of Medical Instrumentation (AAMI). DC wander: Electrodes placed against the skin cause a varying DC offset known as DC wander [8]. To help remove this DC wander, an integrator can be added to the circuit.
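As an aside, the CMRR figure can be put in perspective with a short calculation; the interference level and gain below are assumed, illustrative numbers, not measured values:

```python
import math

# CMRR sanity check (illustrative numbers): how much 50 Hz common-mode
# pickup survives at the output of an in-amp with differential gain Ad
# and a CMRR of 115 dB.
Ad = 1000.0            # assumed differential gain
cmrr_db = 115.0        # datasheet-class CMRR for an INA114-type amplifier
Acm = Ad / 10 ** (cmrr_db / 20)   # implied common-mode gain

v_cm = 1.0             # assumed 1 V of 50 Hz common-mode interference
v_sig = 1e-3           # 1 mV differential ECG signal
out_noise = v_cm * Acm           # residual hum at the output
out_signal = v_sig * Ad          # amplified ECG at the output
print(f"output: signal {out_signal:.3f} V, residual hum {out_noise*1e6:.0f} uV")
```

Even a full volt of common-mode mains pickup leaves only millivolt-level residue next to a one-volt amplified ECG, which is why a high CMRR matters more here than raw gain.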
The integrator takes its input from the output of the differential amplifier, and its output is connected to the reference pin of the same amplifier. This provides a changing DC bias for the first amplifier stage that depends on the output of that stage. This greatly
reduces the DC wander seen at the output of the differential amplifier, which means that the gain of the second stage can be maximized without worrying about saturation due to DC wander. Right Leg Drive (RLD): The function of the Right Leg Drive (RLD) is to eliminate the common-mode noise generated by the body. The two signals entering the differential amplifier from the leads placed on the right and left arm according to Einthoven's triangle [9] are summed, inverted, amplified and fed back into the body through the right leg by a common-mode amplifier. This signal is fed back to the other leads and cancels the noise that would otherwise drown the wanted ECG signals [10]. Voltage gain and filters: Removing the unwanted frequencies in this case requires three filters. A high-pass filter is implemented to remove the baseline wander of a patient. A notch (band-reject) filter is used to reduce 50 Hz noise. Finally, a low-pass filter is used to remove frequencies higher than 100 Hz. It would not be wise to use a single band-pass filter in this case as the pass-band is large; after consideration of the three options, cascaded high-pass and low-pass filters are recommended [11]. After the unwanted frequencies are filtered out, gain can be added. From calibration and simulation it is deemed that this amplification should be by a factor of around seven or eight. The signal is amplified and inverted by an op-amp, using the inverting input as its signal path. This is done because the next stage, the summing amplifier, can only be implemented on the inverting input, and will invert the signal.
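The three corner frequencies above can be realized with first-order RC sections via f_c = 1/(2*pi*R*C). The component values below are illustrative choices of our own, not the authors' design:

```python
import math

# Component-selection sketch for the cascaded filters (values are
# illustrative, not the authors' design). First-order RC cutoff:
#   f_c = 1 / (2 * pi * R * C)
def cutoff_hz(r_ohm, c_farad):
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

# High-pass to remove baseline wander (corner well below the heart rate)
hp = cutoff_hz(330e3, 1e-6)      # 330 kOhm, 1 uF  -> ~0.48 Hz
# Low-pass to reject components above ~100 Hz
lp = cutoff_hz(16e3, 100e-9)     # 16 kOhm, 100 nF -> ~99 Hz
# Notch centred on the 50 Hz mains hum (e.g. a twin-T network)
notch = cutoff_hz(32e3, 100e-9)  # centre frequency -> ~50 Hz
print(f"high-pass {hp:.2f} Hz, low-pass {lp:.1f} Hz, notch {notch:.1f} Hz")
```

Cascading the high-pass and low-pass sections gives the wide pass-band the text asks for without the component sensitivity of a single wide band-pass stage.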
Finally, the signal must be brought into the range of zero to five volts so that the analog-to-digital converter can process it. The output voltage of the summing amplifier is the negative of the sum of its input voltages. Here the gain is one, as all resistors are set to the same value, so the output is the inverted sum of the signal and the offset voltage. In practice, the signal is around four volts peak-to-peak, so the simplest way to make the signal reside only in the positive domain is to add one or two 1.5 V AA batteries with the positive terminals attached to ground. The operational amplifiers recommended for the filter, right-leg driver, integrator and second amplification stage are the National Semiconductor LMV771 and LMV774 (the single amplifier and the quad pack, respectively). These are suggested because of their low supply voltage (2.7 V) and small supply current (550 μA).

D. Microcontroller

Advanced microprocessor devices for the telediagnosis unit: A real-time telediagnosis system needs to be portable, small in size, inexpensive and feature-extendable, so it should be implemented with a one-chip microcontroller that integrates: an analog-to-digital converter (ADC); digital signal processing and temporary storage in working memory (RAM); and a digital interface to the transceiver device. In the following table we compare four microcontrollers which we found prospective and suitable for developing the telediagnosis system: the AT90LS8535 and ATMega128L (produced by Atmel), the PIC16F8X (by Microchip), and the MSP430F149 (by Texas Instruments). Some of the key characteristics important for choosing the most appropriate solution are: (a) physical dimensions; (b) active power; (c) number of ADC channels; (d) RAM. We may need to convert multiple analog signals for the diagnosis of other biosignals along with the ECG, and probably to compress the digital signal. Hence, in order to maximize the number of channels transmitted and reduce the energy needed for transmission, we need more ADC channels and RAM. One could also use analog multiplexers, at the cost of an increased total physical dimension of the system. It is also possible to use implanted amplifiers and multiplexers [12], but this is a matter of further research and commercial availability.

               AT90LS8535   ATMega128L      MSP430F149         PIC16F8X
Vendor         Atmel        Atmel           Texas Instruments  Microchip
Bits           8            8               16                 8
Flash (bytes)  8K           128K            60K                1K
RAM (bytes)    512          4K              2K                 68
ADC            8 x 10-bit   8 x 10-bit      4 x 12-bit         -
Timer          3            3               3                  1
Voltage        4-6 V        2.7-5.5 V       1.8-3.6 V          2-6 V
Active Power   6.4 mA       5.5 mA (4 MHz)  400 μA             2 mA
Idle Power     1.9 mA       2.5 mA (4 MHz)  1.3 μA             -
Choice of microcontroller: Analyzing the characteristics of the microcontrollers (MCUs) shown above, we can conclude that the ATmega128L is the most suitable for the telediagnosis system. This MCU is chosen because it is low cost, has a small footprint and, most importantly, has all the required peripherals built in. It is an 8-bit MCU clocked at up to 8 MHz; when clocked at 4 MHz it consumes only 5.5 mA in active mode. It has 128 KB of flash memory (reprogrammable more than 10,000 times), 4 KB of EEPROM and 4 KB of SRAM. Data can be stored temporarily in the 4 KB on-chip memory, but an external memory of 64 KB can also be added. Control software and supporting data can be stored in the on-board 128 KB flash memory. This MCU has 8 analog input ports on board, which allows sampling from 8 micro-sensors without an external ADC, and the ADC subsystem is extendable to 32 channels using a multiplexer. The analog inputs can be measured at a total rate of up to 500 kS/s with a resolution of 8 bits per channel. There are 2 UARTs on board: one for digital communication with the external Bluetooth module and another for debugging. It also features a low-voltage power-supply detector which we can use for monitoring the state of the battery [13]. E. Communication Protocol In the race to eliminate any form of wiring between products by adopting wireless RF (radio frequency) technologies, designers are faced with an ever-growing number of communication protocols. As communication is a vital part of the development of a real-time telemonitoring device, the design decision regarding the communication technology has far-reaching implications for the scenarios and applications that can be supported and built on this device. Nowadays there are various wireless RF communication technologies, a breadth of choice that makes it difficult to identify the optimum technology for a given application.
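The memory figures above translate directly into a buffering budget. The sketch below uses an assumed ECG sampling rate (not stated in the paper) to estimate how long the ATmega128L can log samples before the radio must be woken to flush memory:

```python
# Buffering arithmetic for the MCU above (the per-channel ECG rate is an
# assumed figure, not taken from the paper): seconds of raw ECG the RAM
# can hold before the Bluetooth link must be woken to flush it.
SRAM_BYTES = 4 * 1024        # on-chip SRAM
EXT_RAM_BYTES = 64 * 1024    # optional external memory
SAMPLE_RATE_HZ = 250         # assumed per-channel ECG sampling rate
BYTES_PER_SAMPLE = 1         # 8-bit samples, as stated for the fast ADC mode

def buffer_seconds(ram_bytes, channels):
    """Seconds of raw samples the given RAM can hold."""
    return ram_bytes / (SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * channels)

print(f"on-chip SRAM, 1 channel : {buffer_seconds(SRAM_BYTES, 1):.1f} s")
print(f"on-chip SRAM, 8 channels: {buffer_seconds(SRAM_BYTES, 8):.2f} s")
print(f"external RAM, 8 channels: {buffer_seconds(EXT_RAM_BYTES, 8):.2f} s")
```

With a single channel the on-chip SRAM holds several seconds of data, so the radio can stay off most of the time; monitoring many channels at once is what motivates the optional 64 KB external memory.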
Short-range wireless (SRW) networks such as Bluetooth, ZigBee, RFID and IR are gradually becoming more widespread in modern information systems. Most of these SRW networks have drawbacks: either the transmission distance is short, or they are prone to disturbance from the outside environment. To keep the system power consumption low and to increase security, a custom short-range communication system could be developed; but, as said earlier, the problem in this case is that the mobile phone or PDA would have to be equipped with the same transceiver. So it is better to go for a protocol for which no change has to be made in devices like PDAs or mobile phones. Again, IrDA technology needs direct visibility [14] between the two nodes and has less advantageous bandwidth and energy consumption than Bluetooth or ZigBee. From the above analysis we can consider that Bluetooth and ZigBee are the most relevant for the specific requirements of a telediagnosis system: a large transmission data rate and low energy consumption.

           Bandwidth (kbps)  Energy consumption (mW)  Range (m)
IrDA       9.6-115           10                       5
Bluetooth  720               150                      10
ZigBee     20-250            1-2                      100
ZigBee is advantageous as a communication protocol for two main reasons: (i) low energy consumption and long battery life, and (ii) wider range; but it has a relatively limited bandwidth (20/250 kbps) [15]. In contrast, Bluetooth has the following points in favor of being the candidate communication technology for the real-time telediagnosis system. As the number of consumer devices such as PDAs, laptops and cellular phones equipped with Bluetooth modules increases rapidly, Bluetooth provides the degree of interoperability needed to easily integrate augmented objects into existing computing environments. Bluetooth targets low-cost, low-power, secure and robust short-range connectivity, and the technology has been designed for ease of use, simultaneous voice and data, and multi-point communication [16]. Studies indicate that Bluetooth technology is electromagnetically compatible with the tested medical devices [17]. The bandwidth of Bluetooth (720 kbps) is about three times that of ZigBee. This is achieved, however, at the cost of increased energy consumption: a Bluetooth-based system is expected to have an autonomy of up to several days. If a rechargeable battery is used at the sensor end, it can be charged by the user every day or every other day. In this application it is acceptable to trade bandwidth to save energy; but a lower bandwidth generally results in longer communication times (longer on-times) that could counter the saving effect. Therefore, in the design the optimum trade-off between bandwidth and energy consumption should be assured. As in our system we want the ECG graphs to be transmitted to a distant client on request, and we want our system to have the option of appending multiple sensors for monitoring
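The bandwidth-versus-energy trade-off can be made concrete with a back-of-the-envelope calculation using the indicative figures from the table above. The payload size is an assumption of ours, and protocol overhead, connection setup and idle listening are ignored, so treat this as a rough comparison only:

```python
# Radio-on energy for a fixed payload, using the indicative figures from
# the table above (transmit power and bandwidth only; protocol overhead,
# connection setup and idle listening are ignored).
def transfer_mj(payload_kbit, bandwidth_kbps, power_mw):
    """Millijoules spent keeping the radio on while the payload is sent."""
    on_time_s = payload_kbit / bandwidth_kbps
    return power_mw * on_time_s          # mW * s = mJ

PAYLOAD_KBIT = 120          # assumed: one minute of ECG at 2 kbit/s
bt_mj = transfer_mj(PAYLOAD_KBIT, 720, 150)  # Bluetooth: short on-time, high power
zb_mj = transfer_mj(PAYLOAD_KBIT, 250, 2)    # ZigBee: longer on-time, low power
print(f"Bluetooth: {bt_mj:.1f} mJ, ZigBee: {zb_mj:.2f} mJ")
```

With these particular numbers ZigBee's lower power dominates its longer on-time; the case for Bluetooth made in the text rests instead on interoperability with consumer devices and on the bandwidth needed for real-time ECG streaming.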
and analyzing various biosignals of the body, we need a wide bandwidth for transferring our data. Compared to communication protocols specifically designed for telemonitoring sensor networks, communication via Bluetooth consumes significantly more energy. However, in applications where the communication modules can be switched off most of the time and the need for communication can be recognized from local sensor readings alone, a Bluetooth-based telemonitoring system is well suited for realizing the typical portable wireless real-time telediagnosis scenario. Bluetooth networks have a more limited range than ZigBee networks (10 m vs. 100 m). However, the person with the telediagnosis system will carry the mobile phone with him, which is within the range of basic low-consumption Bluetooth devices. Finally, we can conclude that in applications where interoperability is paramount and access to commercial user devices is required, a Bluetooth-based solution is preferable. So for the real-time telediagnosis system we suggest Bluetooth as the communication protocol. The Bluetooth communication could be implemented with a one-chip solution by Ericsson: the ROK 101 008, a short-range module that implements full Bluetooth functionality. It operates at 5 V and consumes 26 mA in data-transfer mode. It measures 3.2 x 1.6 x 0.275 cm and weighs less than 3 g [18]. III. CONCLUSIONS This article presents our preliminary investigation into the realization of a real-time telediagnosis module. Throughout our study, our main aim was to make the system more robust and reliable without increasing its weight, size and cost. For this reason we have based the system on various integrated circuit modules. The next stage of the research will focus on the implementation of the sensor node and network coordinator software in the TinyOS environment.
The aim will be to develop and implement a telediagnosis system which satisfies the requirements of minimal weight, a miniature form factor and low power consumption to permit prolonged ubiquitous monitoring, together with seamless integration, standards-based interface protocols, a short development cycle, and patient-specific calibration, tuning and customization.
REFERENCES
1. A. Karilainen, S. Hansen, and J. Müller, "Dry and Capacitive Electrodes for Long-Term ECG-Monitoring," 8th Annual Workshop on Semiconductor Advances, 17 Nov 2005, p. 156.
2. G. M. Friesen, T. C. Jannett, M. A. Jadallah, S. L. Yates, S. R. Quint, and H. T. Nagle, "A comparison of the noise sensitivity of nine QRS detection algorithms," IEEE Trans. on Biomedical Engineering, vol. 37, no. 1, Jan. 1990.
3. W. A. Engelse and C. Zeelenberg, "A single scan algorithm for QRS-detection and feature extraction," Computers in Cardiology, IEEE Computer Society, 1979, pp. 37-42.
4. J. P. Baker, P. J. Bones, and M. A. Lim, "Wireless Health Monitor," Electronics New Zealand Conference, 2006.
5. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, "Heart rate variability: standards of measurement, physiological interpretation, and clinical use," European Heart Journal, vol. 17, Mar. 1996.
6. INA114 Burr-Brown Precision Instrumentation Amplifier specification. Available: http://www.ti.com/productcontent/ina114.html
7. P. Laguna, R. Jane, and P. Caminal, "Adaptive Filtering of ECG Baseline Wander," Engineering in Medicine and Biology Society, vol. 2, pp. 508-509, 1992.
8. http://www.cvphysiology.com/Arrhythmias/A013a.htm
9. E. M. Spinelli, R. Pallas-Areny, and M. A. Mayosky, "AC-Coupled Front-End for Biopotential Measurements," IEEE Transactions on Biomedical Engineering, vol. 50, no. 3, pp. 391-395, 2003.
10. K. Lacanette, "A Basic Introduction to Filters - Active, Passive, and Switched-Capacitor," National Semiconductor Application Note 779, April 1991.
11. J. Beutel and O. Kasten, "A Minimal Bluetooth-Based Computing and Communication Platform," IT Papers, June 2001. Available: http://www.itpapers.com/abstract.aspx?compid=16237&docid=91473
12. Atmel Corporation, ATmega128(L) Datasheet Complete, 11/2004. Available: http://atmel.com/dyn/general/tech_doc.asp?doc_id=7236
13. E. Glaenzer, SIG WirelessOne, "PAN - Personal Area Network," 2004. Available: www.siliconfrench.com/workshops/presentations/PanSiliconFrench.ppt
14. ZigBee Alliance. Available: http://www.zigbee.org/
15. Atmel Corporation, Bluetooth General Information, 2000. Available: http://www.atmel.com/dyn/products/product_card.asp?part_id=2205
16. M. K. E. B. Walling and S. Wajntraub, "Evaluation of Bluetooth as a Replacement for Cables in Intensive Care and Surgery," Critical Care and Trauma, Technical Communication, September 8, 2003.
17. Ericsson Microelectronics AB, Ericsson Bluetooth module ROK 101 008, Datasheet, 2000, pp. 1-12.
Author: Muhammad Kamrul Hasan
Institute: Bangladesh University of Engineering and Technology
Street: 406, Sher-e-Bangla Hall, BUET
City: Dhaka
Country: Bangladesh
Email: [email protected]
Home Care Technologies for Ambient Assisted Living
Ratko Magjarevic
University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
Abstract— Health technology and increased medical knowledge enable accurate diagnostics and effective treatment of a large number of diseases, including those which only a decade ago were not easy to manage and cure. Biomedical research interest in modern society is intensively directed at disease prevention, early diagnostics and improvement of quality of life, as well as at the development of personalized healthcare, especially for the chronically ill, the disabled and the aging population. The aim of the new approach in healthcare is not only to monitor and improve the health of individuals, but also to increase their independence, mobility, safety and social contact through increased communication, inclusion and participation using available technologies. A large number of new medical devices for health monitoring, home care, wellness promotion, gerontotechnology, etc. have to be designed, tested and adapted to meet the special needs and demands of different population groups. These new devices for telemonitoring and telediagnostics create a large amount of health-related information, in most cases from sensors organized into body sensor networks. This information has to be processed and transmitted from the point of care to the healthcare system in a safe way; after the information is managed in an appropriate and intelligent manner, decisions related to the person's health are to be made. This paper gives an overview of some solutions presented in the literature as well as our own development of intelligent mobile monitoring devices. Keywords— Personalized healthcare, ambient assisted living, body area network, telemonitoring, telediagnostics
I. INTRODUCTION The average age of the population is increasing considerably, especially in the developed countries. In order to meet the rising demand for health care and other social services for the elderly, policy makers have decided to encourage a number of research and development projects which combine the enabling possibilities of communications, information technology, sensorics and health education. Due to the ageing of the population, the prevalence of chronic diseases has increased, and at the same time citizens have extended their demands for the best available health care. Lifestyle, increased stress and workload tend to increase disease risks in the middle-aged and younger population, and therefore a need has arisen to develop disease prediction and early detection programs and devices, in order to increase the quality of life and reduce costs by providing as many services as possible at citizens' homes [1, 2]. Both patients and the healthy are prepared to carry different intelligent and networked sensors continuously in order to improve their health and well-being. Such wearable devices must comply with many requirements in addition to their medical functionality and technical specifications: they have to be easy to use, reconfigurable and interoperable [3]. Physiological parameters are measured and processed by body sensor networks, often based on e-textiles [4, 5, 6], and transmitted to the immediate surroundings, such as a wristwatch, PDA or PC [7]. The information may be collected and processed in a medical institution or at home. In some cases telemonitoring is complemented by global GPS or GIS coverage [8]. The ambient itself has been adapted by embedding sensors into smart homes in order to provide health monitoring of individuals, and also to increase their independence, mobility, safety and social contact through increased communication, inclusion and participation using available technologies [5, 9, 10]. Smart homes require the integration of a large number of sensors and monitored devices with a set of processing and decision-making devices, resulting in a large number of different applications [11]. The concept of ambient assisted living was created within EU FP6 as a program for funding research and development with outcomes that enhance the life quality of the elderly and of the old, primarily by using ICT innovations and by providing remote services. This program shall be continued until at least 2013. The citizens' acceptance of e-Health services is rising worldwide; for the EU, the expectation is that by 2010 up to 5% of the health budget will be spent on these services [12]. II. FROM HOME CARE TO PERSONALIZED HEALTH CARE Patients no longer receive care only in medical facilities.
Health care services have spread first to patients' homes and then also to the other spaces they reach, and finally there is a tendency to offer the services universally. The number of users and potential users has also increased, since in addition to the need for monitoring either the vital functions of patients or their general health status, the wish for continuous information on health status and potential health risks has developed in the healthy population as well. Health care technology research and development have followed these
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 397–400, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
398
Ratko Magjarevic
needs, but further steps are still needed in order to cover all applications, from disease and health management to disease prevention and possibly prediction. These efforts are intended for a citizen who is not static but moves across borders at any time, under a non-uniform legal framework, an unclear reimbursement policy and high requirements for reliability, security and privacy.

A. Home care

Medical devices and services for home care first developed out of the need to extend patient monitoring after medical interventions, using minimally or non-invasive methods allowing early patient release from clinical facilities. Currently, home care devices can be divided into several groups:
• stationary medical devices used at home to measure particular physiological parameters and transmit them to the center of care on a regular schedule,
• devices embedded into the home in order to raise an alarm in case of a medical need or accident,
• wearable sensors and sensor networks that continuously monitor several physiological parameters.
Stationary medical devices are used to measure physiological parameters which do not need to be monitored continuously or whose measurement cannot be performed continuously. A typical parameter in this category is blood pressure, still considered difficult for users to determine by themselves. New developments have resulted in a more suitable device for home use, which measures the ECG and the photoplethysmographic (PPG) signal as well [13]. Other systems are based on personal computers as base stations for raw data acquisition, where multiple parameters (e.g. HR, RR, ECG, SpO2) are collected and processed to produce decision-supportive information, while the data is stored in the computer or transmitted to a medical center [14]. The health smart home (HSH) was designed to follow elderly and disabled people in order to avoid hospitalization. Several modalities of monitoring have been introduced: automatic measurement of physiological parameters, activity measurement and disease-specific measurements, allowing monitoring of the patient's daily activity within his home [15]. However, the intention is to minimize interventions in the infrastructure; thus, the design of existing devices has to be augmented with medical functionality. A wireless LAN and an environmental sensor network for temperature, humidity and carbon dioxide monitoring are interfaced to devices for physiological measurements [16]. Embedded devices certainly provide more comfort to users than wearable devices. Since development of the home care concept is related to care for elderly people, highly
accurate automatic fall detectors are important for their care. In addition to devices which require the user to wear and activate them, passive and unobtrusive devices based on floor-vibration detectors have been proposed [17]. Sensors are also incorporated into furniture, e.g. into beds, in order to follow parameters during sleep [18]. However, such built-in devices can capture only a very limited range of patient-related information and can therefore be taken only as one part of an intelligent environment. Wearable devices represent the vast majority of devices used for home monitoring. There are several modalities of monitoring devices and concepts:
• Biotelemetry, as a classical form of data acquisition and transmission, where at the side of the moving transmitter (i.e. the patient) only a limited number of parameters is measured and only limited processing is performed, usually in order to compress the data and reduce the power consumption necessary for raw data transmission [19].
• Portable medical devices, personal medical assistants, intended for use at home to facilitate patient-centered care and to enable communication with a medical center through wired services as part of a telematic network.
• Body area networks (BANs), which enable wireless communication between a central data storage device and numerous sensors attached to the (patient's) body. Miniature integrated circuits allow measurement and communication at ultra-low power and low weight [17]. The aim of introducing these networks is to extract, intelligently process and transmit information to devices which communicate with national healthcare information systems. BANs have facilitated research into numerous new miniaturized sensors for physiological data measurement, such as ear-worn sensors [20] or different types of dry electrodes [21]. BANs should be designed so that they do not reduce the mobility of the persons wearing them.
• Intra-body communication networks or personal area networks, which have also been proposed for data communication between sensors within or in the near vicinity of the body surface [22].
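The data-reduction idea behind the biotelemetry and BAN modalities above can be sketched: an intelligent sensor node acquires a raw window of samples, condenses it in its processing (DSP) stage, and transmits only a short summary over RF to save power. The following is a hypothetical illustration; the names, features and numbers are not from the paper.

```python
# Illustrative sketch (not from the paper): an "intelligent sensor" node that
# acquires raw samples, processes them locally (DSP stage), and transmits only
# a compact summary over RF -- the data-reduction idea behind low-power BANs.

def dsp_summary(samples):
    """Reduce a raw signal window to a few descriptive parameters."""
    n = len(samples)
    mean = sum(samples) / n
    peak = max(samples)
    # a crude variability measure standing in for real feature extraction
    var = sum((s - mean) ** 2 for s in samples) / n
    return {"mean": round(mean, 2), "peak": peak, "var": round(var, 2)}

def rf_payload(sensor_id, summary):
    """Pack the summary into a short message for the central BAN unit."""
    return f"{sensor_id}:{summary['mean']}/{summary['peak']}/{summary['var']}"

raw = [72, 75, 71, 90, 74, 73, 76, 72]   # e.g. one heart-rate window
msg = rf_payload("HR1", dsp_summary(raw))
# transmitting one short string instead of the full raw window saves radio power
```

The point of the sketch is only the pipeline shape (Acq → DSP → RF): the radio carries a few bytes per window instead of the raw stream.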
B. System Integration

Body area networks produce a large amount of data which has to be reliably transferred to the base station and/or server. The majority of the signal processing is preferably performed within the wearable unit using embedded intelligence [23, 24, 25]. There are many technical constraints, e.g. limited bandwidth, power consumption, and interference
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Home Care Technologies for Ambient Assisted Living
399
Fig. 2 Results of wheeze detection in one of the recorded respiratory signals: a) spectrogram with marked wheezes (w), b) extraction using connected components, c) detected wheezes in time
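The extraction summarized in Fig. 2 — thresholding the spectrogram and grouping supra-threshold time-frequency bins into connected components — can be sketched as follows. This is a pure-Python toy with hypothetical thresholds; the authors' actual algorithm is described in [24].

```python
# Sketch of wheeze extraction via connected components on a thresholded
# spectrogram (rows = frequency bins, columns = time bins). Toy data only.

def connected_components(mask):
    """Label 4-connected True regions in a 2D boolean grid via flood fill."""
    rows, cols = len(mask), len(mask[0])
    label = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and label[r][c] == 0:
                current += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and label[y][x] == 0:
                        label[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return label, current

def wheeze_times(spectrogram, level, min_len):
    """Time bins covered by components lasting at least min_len bins."""
    mask = [[v > level for v in row] for row in spectrogram]
    label, n = connected_components(mask)
    times = set()
    for k in range(1, n + 1):
        cols = {x for row in label for x, v in enumerate(row) if v == k}
        if len(cols) >= min_len:
            times |= cols
    return sorted(times)

spec = [[1, 1, 1, 1, 1, 1],
        [1, 9, 9, 9, 1, 1],     # the run of 9s mimics a wheeze track
        [1, 1, 1, 1, 9, 1]]     # the lone 9 is too short to count
print(wheeze_times(spec, level=5, min_len=2))   # -> [1, 2, 3]
```

The final sorted list is exactly the one-dimensional "wheeze present at these times" trace of panel c).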
Fig. 1 Body area network for monitoring physiological parameters. Each intelligent sensor comprises an acquisition unit (Acq), a processing unit (DSP) and a radio-frequency transmitter (RF), connected wirelessly into a WBAN. The main unit communicates with the base station in an intelligent ambient.
between different networks and objects. Challenging issues that also have to be resolved are system configuration, customization and integration, standardization of communication protocols, use of off-the-shelf components, as well as security and privacy. A body area network for monitoring physiological parameters typically comprises a number of intelligent sensors, each of which has an acquisition unit (Acq), a processing unit (DSP) and a radio-frequency transmitter (RF), wirelessly connecting all of them into a WBAN. Fig. 1 shows a system being developed at the University of Zagreb. The main unit processes the information obtained from all sensors and communicates with the base station, positioned in this case in a smart home, either directly or through a network of simple auxiliary transceivers. A "traffic light" communication between the main unit and the ambient has three levels: a) alert, in case information processing indicates an unhealthy condition of a patient in need of immediate attention, b) warning, in case the processing reveals suspicious information or trends, and c) normal condition, when only short messages are exchanged in order to maintain the communication and acquire the position of the patient. The signal and information processing algorithms enable data compression suitable for communication. Fig. 2 shows how, through complex signal processing, the critical parameter — wheezing — is extracted (b) from a spectrogram obtained from an asthmatic patient (a) and reduced to one-dimensional information, i.e. the presence of wheezing in a continuously monitored patient record (c) [24]. The messages from the main mobile patient unit to the base station contain an identification part and a set of physiological parameters set individually for the monitoring needs of each patient. E-textiles are structures which integrate electronic components with textiles, while the term i-textiles designates interactive structures that go beyond passive incorporation of electronics into textiles [6]. Smart shirts, also called "wearable motherboards", form a wearable infrastructure consisting of a number of sensors integrated into a textile which is as comfortable as traditional clothes [4]. Cellular phones are often used as a platform for communication with the base station. Physiological data is processed and summarized to enable transmission to a remote server at regular time intervals by SMS [26], or the system is configured as an alert system targeted at receiving alert messages from high-risk patients. The alert system includes continuous collection and evaluation of multiple vital signs and detection of emergencies [27].

C. Personalised Care

Innovative systems based on wearable and portable devices will soon include personalized health status monitoring, enabling early diagnosis of disease based on monitoring and analysis of physiological parameters, together with guidelines for appropriate treatment. The alerting systems will incorporate new algorithms for prediction, detection of symptoms and extraction of adverse events [28, 29, 30]. The knowledge will incorporate results of data mining of existing medical
and scientific databases, and blind separation methods for handling large amounts of clinical data. Our environment will become, consciously or unconsciously, a part of ambient assisted living, enabling predictive, personalized health care based on patient-specific modelling and simulation.

III. DISCUSSION AND CONCLUSIONS

Health monitoring will in the future include monitoring of both patients and healthy persons, and it will consist dominantly of body sensor networks, with either implanted or surface sensors, and a personalized, powerful computational unit with embedded intelligence, designed to recognize changes in a person's health and context. This personalized unit will enable ubiquitous presence within the health care system. Accordingly, it will be reconfigurable, it will communicate with the ambient to assist with changes in living conditions, and it will also activate devices, e.g. within a rehabilitation program or for drug delivery.
ACKNOWLEDGMENT

This study was supported by the Ministry of Science, Education and Sport of the Republic of Croatia under grant no. 0361554.
REFERENCES

1. Scanaill C et al. (2006) A Review of Approaches to Mobility Telemonitoring of the Elderly in Their Living Environment. Ann. Biomed. Eng. 34: 547-563
2. Koch S (2005) Home telehealth – Current state and future trends. Int. J. Med. Inform. 75: 565-576
3. Paradiso R et al. (2005) A wearable point-of-care system for home use that incorporates plug-and-play and wireless standards. IEEE Trans. Inform. Technol. Biomed. 9: 337-344
4. Boger J et al. (2006) A Planning System Based on Markov Decision Processes to Guide People with Dementia through Activities of Daily Living. IEEE Trans. Inform. Technol. Biomed. 10: 323-333
5. Axisa F et al. (2005) Flexible technologies and smart clothing for citizen medicine, home healthcare, and disease prevention. IEEE Trans. Inform. Technol. Biomed. 9: 325-336
6. Park S et al. (2007) Performance Analysis of 802.15.4 and 802.11e for Body Sensor Network Applications. IFMBE Proc. vol. 13, 4th Int. Workshop on Wearable & Implantable BSN, 9-14
7. Parkka J et al. (2006) Activity Classification Using Realistic Data From Wearable Sensors. IEEE Trans. Inform. Technol. Biomed. 10: 119-128
8. Chung-Chih Lin et al. (2006) Wireless Health Care Service System for Elderly With Dementia. IEEE Trans. Inform. Technol. Biomed. 10: 696-704
9. Adlam T et al. (2004) The installation and support of internationally distributed equipment for people with dementia. IEEE Trans. Inform. Technol. Biomed. 8: 253-257
10. Mihailidis A et al. (2004) The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home. IEEE Trans. Inform. Technol. Biomed. 8: 238-247
11. Schmitt L et al. (2007) Towards Plug-and-Play Interoperability for Wireless Personal Telehealth Systems. IFMBE Proc. vol. 13, 4th Int. Workshop on Wearable & Implantable BSN, 257-263
12. http://cordis.europa.eu/fp7/ict/programme/challenge5_en.html
13. Jobbagy A et al. (2006) Blood Pressure Measurement at Home. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., 3319-3322
14. Spyropoulos B et al. (2006) Development of low-cost Hardware supporting Mobile Home-Care. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., 372-375
15. Demongeot J et al. (2002) Multi-sensors acquisition, data fusion, knowledge mining and alarm triggering in health smart homes for elderly people. C. R. Biologies 325: 673-682
16. Tamura T et al. (2006) The ad-hoc Network System for Home Health Care. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., 3856-3858
17. Jovanov E et al. (2005) A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation. J. NeuroEng. Rehabil. 2:6, doi:10.1186/1743-0003-2-6
18. Chen W et al. (2005) Unconstrained detection of respiration rhythm and pulse rate with one under-pillow sensor during sleep. Med. Biol. Eng. Comput. 43: 306-312
19. Lackovic I et al. (2000) Measurement of gait parameters from free moving subjects. Measurement 27(2): 121-131
20. Pansiot J et al. (2007) Ambient and Wearable Sensor Fusion for Activity Recognition in Healthcare Monitoring Systems. IFMBE Proc. vol. 13, 4th Int. Workshop on Wearable & Implantable BSN, 208-212
21. Chetelat O et al. (2006) Continuous multi-parameter health monitoring system. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., 585-588
22. Wegmueller MS et al. (2006) Digital Data Communication through the Human Body for Biomedical Monitoring Sensor. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., 608-612
23. Karantonis DM et al. (2006) Implementation of a Real-Time Human Movement Classifier Using a Triaxial Accelerometer for Ambulatory Monitoring. IEEE Trans. Inform. Technol. Biomed. 10: 156-167
24. Alic A et al. (2006) A Novel Approach to Wheeze Detection. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., 963-966
25. Chuang-Chien Chiu et al. (2006) A Wearable e-Health System with Multi-functional Physiological Measurement. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., 419-422
26. Scanaill CN et al. (2006) Long-term telemonitoring of mobility trends of elderly people using SMS messaging. IEEE Trans. Inform. Technol. Biomed. 10: 412-413
27. Anliker U et al. (2004) AMON: a wearable multiparameter medical monitoring and alert system. IEEE Trans. Inform. Technol. Biomed. 8: 415-427
28. Ordonez C (2006) Association rule discovery with the train and test approach for heart disease prediction. IEEE Trans. Inform. Technol. Biomed. 10: 334-343
29. Keogh E et al. (2006) Finding Unusual Medical Time-Series Subsequences: Algorithms and Applications. IEEE Trans. Inform. Technol. Biomed. 10: 429-439
30. Sovilj S et al. (2005) Continuous Multiparameter Monitoring of P Wave Parameters after CABG Using Wavelet Detector. Proc. Computers in Cardiology, 489-492
Author: Ratko Magjarevic
Institute: University of Zagreb, Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Modelling and Simulation of Ultrasound Non Linearities Measurement for Biological Mediums R. Guelaz, D. Kourtiche and M. Nadi University of Henri Poincaré, Laboratoire d’Instrumentation Electronique de Nancy, Nancy, France
Abstract— We present an implementation of nonlinear propagation in an ultrasonic measurement model written in the VHDL-AMS (IEEE 1076.1-1999) language. The system is dedicated to the characterization of nonlinear media by a comparative measurement method. Usual models of ultrasonic transducers are based on an electrical analogy and are not simulated within the global measurement environment. The proposed ultrasonic transducer model is simulated together with the nonlinear acoustic load and the electronic excitation. The nonlinear B/A parameter is used to characterize the medium with a comparative method. The measurement cell is composed of two piezoelectric ceramic transducers, implemented with Redwood's electric scheme. The analysed medium is placed between the transducers and modelled to take into account nonlinear propagation through the B/A parameter. The usual transmission-line model has been modified to account for the nonlinear propagation of a one-dimensional wave. Results obtained from the simulated characterization of several media (blood, milk, liver and human fat tissue) showed good agreement between modelling and experimental measurement, with a maximum error of about 12.5%.

Keywords— Modelling, ultrasound, nonlinear, simulation, propagation, VHDL-AMS.
I. INTRODUCTION

Ultrasonic imaging calls upon very advanced technologies in terms of high-speed signal processing and the design of ultrasonic transducer matrices using nanotechnology methods. The useful signal is generally the fundamental resonance frequency of the transducers. However, recent studies show the interest of analysing the harmonic frequencies generated as ultrasonic waves cross biological media. Published work shows an improvement of image quality, as in the case of second harmonic imaging [1]. In addition to image quality, measurement of the nonlinear parameter B/A makes it possible to envisage medium characterization. The integration of this parameter in ultrasonic system modelling is an essential stage in the design study of a measurement system for the analysis of multi-layer biological media. Two types of methods make it possible to evaluate the nonlinearity parameter: thermodynamic methods [2] and finite amplitude methods [3]. The first is not practical for in vivo measurement because of the precise variation needed in environmental parameters such as the temperature or the pressure in tissues. The second method is more realistic and consists in analysing the harmonic propagation of the ultrasonic signal. The coupling of parts of different physical natures (electronic and acoustic) requires the use of a mixed language such as VHDL-AMS (IEEE 1076.1-1999) to determine precisely the influence of each part in the measurement system. Usual models of ultrasonic systems assimilate the acoustic medium to an electric propagation line without losses [4] and without nonlinearity. Our model, based on this same propagation-line theory, integrates the B/A nonlinearity parameter through a recurrent formulation of the solution of Burgers' equation [5]. The measurement system model is based on the excitation of a nonlinear medium by an ultrasonic transducer vibrating at a fixed frequency fo. An identical transducer is placed at the end of the measurement cell and makes it possible to analyse the acoustic wave along the propagation axis of the transmitting source. The electric signal of the receiver transducer is analysed to identify the acoustic signal at its fundamental frequency and its second harmonic by a fast Fourier transform. The B/A parameter is estimated by a comparative method [6] with water as the reference medium and ethanol as the analysed medium. The B/A estimation shows that the measurement system model is well suited to the study of ultrasonic medium characterization.

II. THE MEASUREMENT SYSTEM MODELLING

A. The measurement principle of a comparative method

The measurement system principle for the study of ultrasonic characterization is presented in Figure 1. A transducer emits an acoustic wave at a frequency fo through the medium. In a first case, a receiving transducer vibrating at the same frequency fo is placed at the end of the measuring cell. In a second case, we study the response obtained with a transducer vibrating at 2fo in order to improve the measurement sensitivity. The transducers are assembled with air as backing medium (Rback in Figure 1), with an acoustic impedance Z = 425 Rayl, and are embedded in a Plexiglas structure.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 377–380, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Principle of the measurement cell. [Schematic: an emitting transducer driven by the electric excitation Ve(t) launches an acoustic wave through the nonlinear medium (ρ, c0, B/A) toward a receiving transducer delivering Vs(t); the spectra |Ve(f)| and |Vs(f)| show the fundamental f and the second harmonic 2f of the reception signal.]

Fig. 2 Modelling of the global ultrasound measurement system. [Schematic: emitter and receiver piezoceramic layers (Redwood models with backing impedance Zback and capacitance Co) coupled through the nonlinear-medium block with delay Td; the receiver output is observed across RScope and CScope.]

The modelling of the global measurement system is presented in Figure 2. The transducers are based on Redwood's behavioural temporal model [7]. The nonlinear medium is represented by the "non_linear_Medium" component, which corresponds to the model of a nonlinear acoustic layer.

B. Theoretical aspects of nonlinear propagation

The propagation equation in a nonlinear acoustic medium is based on the Burgers formulation. This equation holds if attenuation and diffraction are neglected. Plane wave propagation in a nondissipative medium (without losses) is then described by:

    ∂u/∂z − (β/c₀²)·u·∂u/∂τ = 0                                  (1)

where c₀ is the acoustic celerity of the medium, u is the particle velocity, τ = t − z/c₀, and z is the propagation axis; β = 1 + 0.5·B/A is the nonlinear parameter in liquid media. In the case of a sinusoidal incident wave, the solution of the equation is given implicitly by:

    u(z,t) = sin( ω·(t − z/(c₀ + β·u(z,t))) )                    (2)

The appearance of a shock front in the wave form is characterized by a coefficient σ (0 < σ < 1), so that relation (2) becomes:

    u(z,t) = sin( ω·t − ω·σ·l/(c₀ + β·u(z,t)) )                  (3)

with l = 1/(β·k·M), where k = ω/c₀, ω is the angular frequency of the initial wave, M = U₀/c₀ is the Mach number, U₀ is the amplitude of the ultrasonic source, and σ = β·ω·U₀·z/c₀² is the shock formation coefficient.

C. Implementation of nonlinearities in a linear propagation line

Implementing the medium nonlinearity consists in modifying the linear propagation-line model, which is based on the Branin model [8] presented in Figure 3. The subscripts i and t refer to the incident and transmitted acoustic waves; u designates the velocity and F the force, assimilated to pressure; Zo is the characteristic impedance of the medium. Formulation (3) is included recursively in Ft, which represents the acoustic wave delayed by the medium propagation time. A recurrent equation represents formulation (3) as a function of the simulation time step dt; f corresponds to the excitation frequency of the emitter transducer, c₀ is the celerity specific to the medium, and Td

Fig. 3 Equivalent electric diagram of linear acoustic propagation in a medium. [Two ports with characteristic impedance Zo carry the incident wave (ui, Fi) and the transmitted wave (ut, Ft).]
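The recurrent evaluation performed by the VHDL-AMS model (using the previous time step for u) amounts to solving the implicit relation u = U₀·sin(ω·(t − z/(c₀ + β·u))) by fixed-point iteration. A minimal numerical sketch, with hypothetical water-like parameters not taken from the paper:

```python
import math

# Numerical sketch (not the paper's VHDL-AMS code): the implicit nonlinear
# solution u = U0*sin(w*(t - z/(c0 + beta*u))) solved by fixed-point
# iteration, mirroring the model's recurrent formulation.
# All parameter values below are hypothetical, water-like choices.

c0 = 1500.0                 # small-signal sound speed (m/s)
beta = 1.0 + 5.0 / 2.0      # beta = 1 + 0.5*(B/A), with B/A = 5 (water-like)
f = 2.25e6                  # excitation frequency (Hz)
w = 2.0 * math.pi * f       # angular frequency
U0 = 0.5                    # source velocity amplitude (m/s), exaggerated

l_shock = c0 ** 2 / (beta * w * U0)   # shock distance, l = 1/(beta*k*M)

def u_implicit(z, t, iters=200):
    """Fixed-point iteration for u(z, t); converges for z below l_shock."""
    u = 0.0
    for _ in range(iters):
        u = U0 * math.sin(w * (t - z / (c0 + beta * u)))
    return u

# At z = 0 the wave is the undistorted sinusoid: u(0, T/4) = U0.
print(u_implicit(0.0, 1.0 / (4.0 * f)))   # -> 0.5
```

For z approaching l_shock the iteration's contraction factor approaches 1, which is the numerical signature of shock formation.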
corresponds to the flight time of the acoustic wave. A starting condition is necessary to initialise the ultrasonic wave in order to avoid discontinuity problems in the acoustic wave form. The nonlinear acoustic medium model in the VHDL-AMS language is:

ENTITY Nonlinearlayer IS
  GENERIC (Zo, Td, f, co, BsurA, sig, l, dt : REAL);
  PORT (TERMINAL p1, m1, p2, m2 : Kinematic_v);
END Nonlinearlayer;

ARCHITECTURE simple OF Nonlinearlayer IS
  TERMINAL t11, t22, t7, t8, m7, m8 : Kinematic_v;
  QUANTITY F1 ACROSS p1 TO m1;
  QUANTITY F2 ACROSS p2 TO m2;
  QUANTITY F11z ACROSS u1i THROUGH t11 TO m1;
  QUANTITY F1z ACROSS u11i THROUGH t11 TO p1;
  QUANTITY F2z ACROSS u22t THROUGH t22 TO p2;
  QUANTITY F22z ACROSS u2t THROUGH t22 TO m2;
  QUANTITY Fbelow ACROSS ubelow THROUGH t7 TO m7;
  QUANTITY F ACROSS u THROUGH t8 TO m8;
BEGIN
  IF now < dt USE
    Fbelow == F;
    F == sin(2.0*math_pi*f*dt - f*2.0*math_pi*sig*l/co);
  ELSE
    Fbelow == F'DELAYED(dt);
    F == sin(2.0*math_pi*f*now
             - f*2.0*math_pi*sig*l/(co*(1.0 + (1.0 + BsurA/2.0)*Fbelow/co)));
  END USE;
  IF now < Td USE
    F22z == 0.0;
    F11z == -F1z;
    F1z == u11i*Zo/2.0;
    F2z == u22t*Zo/2.0;
  ELSE
    F22z == F'DELAYED(Td) - F2z;
    F11z == F2'DELAYED(Td) - F1z;
    F1z == (u11i + u22t'DELAYED(Td))*Zo/2.0;
    F2z == (u22t + u11i'DELAYED(Td))*Zo/2.0;
  END USE;
END simple;

D. B/A calculation with a comparative method

The B/A parameter is obtained from the simulation results by a comparative method [6]. This method requires analysing the frequency spectrum of a medium taken as reference, such as water, whose B/A parameter is supposed to be known, and carrying out the following calculation:

    (B/A)x = ((B/A)r + 2) · (Vs2x/Vs2r) · (Vs1r/Vs1x)² · (ρx·cx³)/(ρr·cr³) − 2     (4)

The subscripts r and x refer respectively to the reference medium parameters and the analysed medium parameters; Vs1 and Vs2 are the fundamental amplitude and the second harmonic amplitude, ρ is the volumic density and c is the acoustic celerity. In this formulation, diffraction and attenuation effects are neglected.

III. SIMULATION RESULTS

The studied transducers are produced with PZT ceramic of P188 type (Quartz et Silice®), whose characteristics are recalled in Table 1.

Table 1 Transducer acoustic characteristics

  Parameter  Quantity                                Type A      Type B
  F0         Resonance frequency (MHz)               2,25        4,5
  A          Area (mm²)                              132,73      132,73
  e          Thickness (mm)                          1           0,5
  Zt         Acoustic impedance (MRayl)              34,9        34,9
  c0         Acoustic velocity (m/s)                 4530        4530
  Co         Capacitance of the ceramic disc (pF)    1109,8      2910
  ε33        Dielectric constant                     650,0       650,0
  kt         Thickness coupling factor               0,49        0,49
  h33        Piezoelectric constant                  1,49·10⁹    1,49·10⁹

The software used for VHDL-AMS simulation is ADMS v3.0.2.1 from Mentor Graphics. The global measurement cell model of Figure 2, written in VHDL-AMS, is simulated. The amplitude of the emitter transducer excitation is fixed at 1 volt with a frequency of 2,25 MHz. The electric response of the receiver transducer is analysed by a fast Fourier transform with a rectangular window over 10 µs. Biological media analysed in simulation are compared with measurement cell results for liquid media such as water, blood and milk. The B/A parameters of human fat tissue and liver are considered well known from previous works [2, 3]. Table 2 gives the acoustic characteristics of the simulated media.

Table 2 Acoustic characteristics of the biological media

  Medium             Acoustic impedance (MRayl)   Acoustic speed (m/s)   B/A
  Water              1.5                          1509                   5.0
  Blood              1.678                        1586                   6.0
  Human fat tissue   1.376                        1445                   10.9
  Liver              7.6                          1573                   6.54
  Milk               1.569                        1531                   5.9

The fundamental and second harmonic amplitudes obtained with the fast Fourier transform for the analysed biological media are presented in Table 3.
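The comparative calculation can be applied directly to such spectral amplitudes. The sketch below assumes the standard comparative form (B/A)x = ((B/A)r + 2)·(V2x/V2r)·(V1r/V1x)²·(ρx·cx³)/(ρr·cr³) − 2, since relation (4) is reconstructed here; the exact factor arrangement should be checked against [6]. The density used is a hypothetical value derived as Z/c.

```python
def ba_comparative(ba_ref, v1_ref, v2_ref, v1_x, v2_x,
                   rho_ref, c_ref, rho_x, c_x):
    """Comparative B/A estimate of medium x against reference medium r.

    Assumed form of relation (4); transducer sensitivities, diffraction
    and attenuation are neglected, as in the simulation model.
    """
    amp = (v2_x / v2_ref) * (v1_ref / v1_x) ** 2
    media = (rho_x * c_x ** 3) / (rho_ref * c_ref ** 3)
    return (ba_ref + 2.0) * amp * media - 2.0

# Sanity check: comparing the reference medium with itself must return (B/A)r.
rho_w = 1.5e6 / 1509.0   # water density from Z/c (hypothetical, ~994 kg/m3)
ba = ba_comparative(5.0, 0.579, 0.5, 0.579, 0.5, rho_w, 1509.0, rho_w, 1509.0)
print(ba)   # -> 5.0
```

The self-comparison check is a useful invariant of any comparative formulation: when x = r, all ratios collapse to 1 and the reference B/A is recovered exactly.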
Table 3 Amplitudes of the fundamentals and second harmonics obtained in simulation (V)

  Medium             Fundamental at 2.25 MHz   Second harmonic at 4.5 MHz
  Water              0,579 V                   0,5 V
  Blood              0,569 V                   0,499 V
  Human fat tissue   0,545 V                   0,787 V
  Liver              0,572 V                   0,528 V
  Milk               0,578 V                   0,536 V

These values are then used to estimate the B/A parameter with relation (4) and are compared to the B/A parameters obtained with in vitro measurements (Table 2). Figure 4 shows the results obtained.

Fig. 4 B/A parameter obtained in simulation compared to in vitro measurement values. [Bar chart; simulation vs. in vitro: blood 6,087 vs. 6; human fat tissue 9,54 vs. 10,9; liver 6,33 vs. 6,54; milk 6,271 vs. 5,9.]

The B/A estimation in simulation shows that we can characterize biological media with sufficient precision to distinguish different media. For milk and blood we must also take into account the measurement of the acoustic celerity to differentiate the two media. For biological tissues like human fat and liver we obtain a high sensitivity in the B/A estimation, so we can easily predict the biological nature of the medium in these two cases. The maximum relative error occurs for human fat tissue, with an error of 12,5%.

IV. CONCLUSIONS

The utilization of the VHDL-AMS language shows the advantage of combining several physical disciplines. The integration of nonlinear ultrasound behaviour in simulation is also a new approach here. Usual medium models are based on transmission-line theory; our medium model implements the nonlinear propagation phenomenon and permits the analysis of harmonic generation for a sinusoidal excitation. The characterization of several biological media offers a serious opportunity for improving ultrasonic imaging systems. The B/A obtained in simulation is in good agreement with thermodynamic experimental results. The B/A parameters used to simulate the media were obtained with a precision which depends on the measurement methodology; for example, for blood we find a B/A of 6,0 with the thermodynamic method [2] and 7,3 with the finite amplitude method [3].

REFERENCES

1. Christopher T (1998) Experimental investigation of finite amplitude distortion-based second harmonic pulse echo ultrasonic imaging. IEEE Trans. Ultrason. Ferroelect. Freq. Contr. 45: 158-162
2. Zhang J et al. (1991) Influences of structural factors of biological media on the acoustic nonlinearity parameter B/A. J. Acoust. Soc. Am. 89: 80-91
3. Lu Z et al. (1998) A phase-comparison method for measurement of the acoustic nonlinearity parameter B/A. Meas. Sci. Technol. 9: 1699-1705
4. Ghorayeb SR et al. (2001) Modelling of ultrasonic wave propagation in teeth using PSPICE: a comparison with finite element models. IEEE Trans. Ultrason. Ferroelect. Freq. Contr. 48(4): 1124-1131
5. Burgers JM. A mathematical model illustrating the theory of turbulence. Advances in Applied Mechanics 1: 171-199
6. Kourtiche D, Allies L, Chitnalah A, Nadi M (2001) Harmonic propagation of finite amplitude sound beams: comparative method in pulse echo measurement of nonlinear B/A parameter. Meas. Sci. Technol. 12: 1990-1995
7. Guelaz R, Kourtiche D, Nadi M (2003) A behavioral description with VHDL-AMS of a piezo-ceramic ultrasound transducer based on the Redwood's model. Proc. FDL'03: Forum on Specification and Design Languages, pp 32-43
8. Branin F (1967) Transient analysis of lossless transmission lines. Proceedings of the IEEE 55: 2012-2013

Author: Guelaz Rachid
Institute: Université Henri Poincaré Nancy 1, L.I.E.N, Faculté des Sciences
Street: Laboratoire du LIEN
City: Vandoeuvre-lès-Nancy
Country: France
Simple verification of infrared ear thermometers by use of fixed-point

J. Bojkovski

University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Metrology and Quality, Ljubljana, Slovenia

Abstract— Due to a number of reasons, infrared ear thermometers (IRETs) are becoming very popular in many medical applications where temperature measurements are important. However, one of the drawbacks of using IRETs is the lack of information about their accuracy and stability in time. In order to be able to verify correct operation of the thermometer simply, a special portable temperature fixed-point blackbody source has been developed. The reference fixed point operates near body temperature (36.3 °C) with a repeatability and reproducibility of ±100 mK. The fixed point can be used for testing other, contact, medical thermometers as well, such as mercury-in-glass thermometers, thermistor probes, …
Fixed point enclosures are used to define the temperature scale by realizing the equilibrium state of triple points and the phase transition of certain substances which occur at highly reproducible temperatures. The triple point of a material is the temperature at which the solid, liquid and the vapor phase coexist in thermal equilibrium at the vapor pressure of the material. The freezing and the melting points of a material occur at constant temperatures at which a substance undergoes a phase transition from liquid to solid and from solid to liquid respectively.
Keywords— clinical thermometer, fixed-point, accuracy, uncertainty
II. DEVELOPMENT OF THE BLACKBODY

A. Design
I. INTRODUCTION

In recent years, infrared ear thermometers (IRETs) have become very popular in clinical practice for measuring the temperature of the human body. Furthermore, many different thermometer models have been introduced and become commercially available to common users. All IRETs are advertised and specified by manufacturers as accurate and reliable measuring devices. In order to verify their performance, an appropriate calibration set-up is essential, which in principle consists of a blackbody radiator (BBR) either with a reference thermometer or as a fixed point. For calibration of IRETs and clinical contact thermometers an accuracy of 0,2 °C is required in the range of 35,5 °C to 42 °C, setting requirements for the employed BBRs of better than 0,1 °C [1]. In this paper we concentrate on a simple verification method, which consists only of a fixed-point blackbody. In physics, a black body is an object that absorbs all electromagnetic radiation that falls onto it. No radiation passes through it and none is reflected. In the laboratory, the closest thing to black-body radiation is the radiation from a small hole entrance to a larger cavity. Any light entering the hole has to reflect off the walls of the cavity multiple times before it escapes, and it is almost certain to be absorbed by the walls in the process, regardless of what they are made of or of the wavelength of the radiation (as long as it is small compared to the hole). The hole, then, is a close approximation of a theoretical black body [2].
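The "many reflections" argument can be made quantitative. For a diffuse, isothermal spherical cavity, a standard approximation for the effective emissivity seen at the aperture is ε_eff = ε/(ε + f·(1 − ε)), where f is the ratio of aperture area to total cavity surface area. A sketch with the PTFE wall emissivity of about 0,8 mentioned later in the paper and a hypothetical cavity radius:

```python
import math

# Effective emissivity of a diffuse spherical blackbody cavity:
#   eps_eff = eps / (eps + f*(1 - eps)),  f = aperture area / cavity area.
# eps_wall = 0.8 is the PTFE value quoted in the text; the 25 mm sphere
# radius is a hypothetical figure, not taken from the paper.

def spherical_cavity_emissivity(eps_wall, r_sphere, r_aperture):
    f = (math.pi * r_aperture ** 2) / (4.0 * math.pi * r_sphere ** 2)
    return eps_wall / (eps_wall + f * (1.0 - eps_wall))

eps_eff = spherical_cavity_emissivity(0.8, 0.025, 0.005)
print(round(eps_eff, 5))   # -> 0.99751
```

Even a modest wall emissivity thus yields a near-ideal cavity once the aperture is a small fraction of the surface; the diffuse approximation here is cruder than the STEEP calculation used by the author, but of the same order.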
As discussed in [3], three key elements of the system were identified: versatility (covering all the different types of IRETs available), robustness (robust and user-friendly) and independence of any additional equipment that would support verification of the IRETs. It is necessary to emphasize that the system is used only for quick testing and verification of IRETs, not for complete calibration. For such purposes there are a number of products on the market, as described in [4]. One of the key requirements in designing the verification system is its versatility. There are a number of different IRETs on the market, and the developed system should be suitable for verification of any type, independent of the working principle. The devices do, however, have many similar characteristics, such as aperture size and exterior dimensions. All the devices need to fit comfortably in the ear canal, so the outer aperture dimensions are very similar: a maximum exterior defining aperture of less than 10 mm diameter. Some devices have been produced with wide-angle field-of-view (FOV) characteristics; this is the main difference between systems. The fixed-point device therefore had to be able to accept devices having a defining aperture size of 10 mm or greater, and has to be able to validate devices that have wide-angle FOV features. The complete system can be used without any additional equipment (no power supply, reference thermometer, …). Also, a typical freezing plateau under room-temperature conditions can last a couple of hours, which enables verification of a number of IRETs.
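Combined with the 0,2 °C clinical accuracy requirement quoted in the introduction and the 36,3 °C fixed point described later, the quick verification the device enables amounts to a simple pass/fail comparison. The following is a hypothetical sketch, not the author's actual protocol:

```python
# Hypothetical pass/fail check of an IRET against the fixed-point plateau.
# 36.3 degC is the ethylene-carbonate freezing temperature and 0.2 degC the
# clinical accuracy requirement quoted in the text; the procedure itself is
# only a sketch.

T_FIXED_POINT = 36.3   # degC
TOLERANCE = 0.2        # degC

def verify_iret(readings):
    """Return (passed, worst_error) for readings taken on the plateau."""
    worst = max(abs(r - T_FIXED_POINT) for r in readings)
    return worst <= TOLERANCE, worst

ok, worst = verify_iret([36.2, 36.4, 36.35])
print(ok)   # -> True (worst error ~0.1 degC)
```

Because the plateau lasts hours, many thermometers can be screened this way in one freeze, with no reference thermometer attached.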
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 361–364, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
J. Bojkovski
Fig. 2 Teflon blackbody for verification of IRET
Fig. 1 A simple representation of the blackbody cavity and IRET

There are a number of shapes that can be used for a suitable blackbody, mainly cylindrical [3]. In our case, we have chosen a spherical shape. A simple representation, together with an IRET inside it, is shown in Fig. 1. The final verification set-up is presented in Fig. 2. The spherical shape is closest to the ideal blackbody. Also, it is relatively easy to calculate the effective emissivity of such a blackbody. Teflon (PTFE) was chosen as a suitable construction material for the blackbody. It is chemically inert, robust and lightweight. It is also rather easy to machine. Reference texts suggest that the emissivity value of PTFE is around 0,8. As a consequence, no additional coating of the inside of the blackbody with a high-emissivity paint would be needed. In order to be able to evaluate the blackbody cavity's emissivity value
information about the regular (specular) and diffuse reflectivity of PTFE over 5-50 μm is needed [3]. The spectral emissivity of a sample in the infrared is related to the regular (specular) and diffuse components of reflectivity. By measuring these components one can calculate the spectral emittance of the surface of any sample. From this surface emissivity value and the geometry of the blackbody, its effective emissivity can be accurately calculated assuming isothermal conditions. An effective emissivity of 0,99734 was calculated with the software program STEEP, which was used in the TRIRAT project [6]. The fixed-point material selected was 1,3-dioxolan-2-one (ethylene carbonate, EC). This is already a proven reference material for temperature standards [5]. It is also relatively inexpensive to obtain in purities high enough to be acceptable for fixed-points. The EC used in this device was supplied (by the manufacturer) as 99,995 % pure. The freezing point of such material is estimated to be 36,3 °C at a pressure of 1 atm. This temperature is inside the measuring range of a typical IRET. All the components of the blackbody were cleaned prior to assembly. In the first step, a mild detergent and distilled water were used. Then everything was rinsed with distilled water. At the end, ethanol was used for the final cleaning of all parts. With this procedure, the influence of any impurities has been limited to a minimum. The ethylene carbonate was melted and poured into the container. Finally, the blackbody was placed and screwed onto the O-ring seal.

B. Measurements and results

The fixed-point system has been cycled a couple of times between melting and freezing. In order to confirm
IFMBE Proceedings Vol. 16
Simple verification of infrared ear thermometers by use of fixed-point
proper working of the fixed-point verification system, a Minolta Cyclops Land 300 AF pyrometer was used (Fig. 3). The thermometer has a correction of 1 °C and an uncertainty of 3 °C at that temperature. It can therefore be used only as an indication of the freezing plateau and not for measuring its exact value. The distance between the fixed-point blackbody and the pyrometer was 50 cm. The pyrometer was connected to the computer over an RS-232 bus. A special computer program, written in LabVIEW, was used for data acquisition. In order to fully melt the ethylene carbonate, the fixed-point was placed inside a furnace at a temperature of 70 °C for two hours. After that, it was measured under laboratory conditions of 23 °C. When the thermometer indicates that the fixed-point has supercooled to room temperature, freezing is initiated either with a little dry ice or by strongly shaking the fixed-point, thus stimulating crystallization (freezing) of the ethylene carbonate. The standard deviation over one hour of the freezing plateau is 0,03 °C. The reproducibility of different realizations over
Fig. 4 Typical freezing plateau (temperature in °C vs. date and time, 24.1.2007 to 25.1.2007)

a couple of days is 0,05 °C. The plateau itself can be prolonged by placing the fixed-point inside a thermally controlled system (bath or furnace) kept at a temperature close to that of the fixed-point. However, even without such a system, the fixed-point can be used for a couple of hours.

III. CONCLUSIONS

Based on several experiments, it has been shown that the developed fixed-point blackbody system can be used for verification of IRETs. The short-term stability of the system, without any additional equipment, is of the order of 0,03 °C, while the reproducibility of the system is of the order of 0,05 °C. The results presented here are limited by the measuring system used, not by the performance of the blackbody: it is very hard to find a pyrometer with characteristics appropriate for evaluating the performance of the fixed-point at such a low temperature. The performance of the system can be improved by using additional equipment such as a bath or a furnace, which would enable prolongation of the freezing plateau. Further progress can be made by using a contact thermometer inside the blackbody, which would enable better evaluation of the temperature of the fixed-point. For calibration of IRETs, and thus providing traceability, a blackbody with the possibility of changing its temperature in the range from 35 °C up to 42 °C has to be used.
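For orientation, the effective emissivity quoted above can be cross-checked with the classic closed-form expression for a diffuse, isothermal spherical cavity with a circular aperture. This is a simplification of what STEEP computes, and the cavity radius used below is an assumed value for illustration, not a dimension given in the paper:

```python
# Effective emissivity of a diffuse, isothermal spherical cavity with a
# circular aperture: eps_eff = eps / (1 - (1 - eps) * (1 - f)),
# where f is the ratio of aperture area to total sphere area.
def effective_emissivity(eps_wall: float, f_aperture: float) -> float:
    return eps_wall / (1.0 - (1.0 - eps_wall) * (1.0 - f_aperture))

# 10 mm diameter aperture (r = 5 mm) in a sphere of assumed radius R = 24.2 mm;
# for a small flat hole, f = pi*r**2 / (4*pi*R**2) = r**2 / (4*R**2)
r, R = 5.0, 24.2
f = r**2 / (4.0 * R**2)
print(round(effective_emissivity(0.8, f), 5))   # close to the quoted 0,99734
```

The formula reproduces the expected limits: with f = 1 (no cavity, bare wall) it returns the wall emissivity itself, and as f approaches 0 the effective emissivity approaches 1.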
ACKNOWLEDGMENT
Fig. 3 Measuring set-up
The development of the fixed-point blackbody was partially supported by the Metrology Institute of the Republic of Slovenia, under the contract of National standard for thermodynamic temperature.
REFERENCES

1. Pušnik I, van der Ham E, Drnovšek J (2004) IR ear thermometers - what do they measure and how do they comply with the EU technical regulation. IOP Physiol. Meas. 25:699-708
2. Wikipedia, http://www.wikipedia.org
3. Machin G, Simpson R (2003) Tympanic thermometer performance validation by use of a body temperature fixed-point blackbody. Proc. SPIE Int. Soc. Opt. Eng. 5073:51-57
4. Pušnik I, Bojkovski J, Drnovšek J, Development of a calibration bath for clinical thermometers. MEDICON 2007 proceedings (this conference)
5. Cox JD, Mangum BW (1986) Evaluation of the triple point of 1,3-dioxolan-2-one. Metrologia 23:173-178
6. Bosma R, van der Ham EWM, Schrama CA (1999) Test results on work packages 5, 6 and 7 for the project TRaceability In RAdiation Thermometry (TRIRAT) - NMi/VSL contribution, February 1999. NMi/VSL, Delft, The Netherlands

Address of the corresponding author:

Author: Jovan Bojkovski
Institute: University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Metrology and Quality
Street: Trzaska 25
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
System Identification of Integrative Non Invasive Blood Pressure Sensor Based on ARMAX Estimator Algorithm Noaman M. Noaman1, Abbas K. Abbas2 1
Department of Computer Engineering, Computer Man College, Khartoum, Sudan 2 Tuebingen University, Department of Biocybernetics, Germany
Abstract— Varieties of oscillometric non-invasive blood pressure (NIBP) measuring devices are based on recording the arterial pressure pulsation in an inflated cuff wrapped around a limb during cuff deflation. The recorded NIBP data contain the pressure pulses in the cuff, called oscillometric pulses, superimposed on the cuff deflation. Some NIBP devices also have a microphone implanted inside the cuff, which enables measurement of Korotkoff sounds. The objectives of this contribution are, first, to extract the transfer characteristics of the oscillotonometric method of NIBP deflation from the pressure pulses, and second, to extract the parametric coefficients of the NIBP system along a regression path with the ARMAX algorithm. Keywords— NIBP, system dynamics, ARMAX, estimator, identification.
I. INTRODUCTION Routine methods for noninvasive arterial pressure evaluation are based on a simple idea, which has remained almost unchanged in its essence for more than one century: an arterial vessel, usually placed in a limb, is compressed by an external load, which causes the artery to rhythmically collapse or reopen at each heart beat. Some effects of the arterial collapse (either alterations in pressure pulse amplitude, blood volume changes, blood velocity perturbations, or audio-frequency sounds) are then detected by means of an external noninvasive transducer. The belief is that artery obliteration and the consequent phenomena measured by the transducer are closely related to the incoming arterial pressure waveform, particularly to the diastolic, mean, and systolic arterial pressure values. [1, 2]
II. THEORETICAL BASIS OF BLOOD PRESSURE MODELLING AND ESTIMATION

A. Noninvasive estimation of blood pressure characteristics based on the ARMAX system

There are two ways to validate noninvasive blood pressure estimation techniques and to improve their performance. The first consists of comparing the values provided by the indirect technique with those obtained simultaneously by a catheter inserted into an artery. The most important results, however, can be summarized as follows. The auscultatory method seems to underestimate systolic blood pressure by 5-20 mm Hg, while it overestimates diastolic blood pressure by 12-20 mm Hg [7, 3]. In particular subjects, however, such as the elderly, individuals with increased vascular rigidity (such as in advanced atherosclerosis), or individuals with large arms (such as in the obese), the auscultatory method may overestimate the intra-arterial pressure (a phenomenon often called pseudohypertension). Only a few experimental results on the oscillometric technique can be found in the literature. Geddes et al. [5] used the value of the cuff pressure pulse amplitude, normalized to its maximum, to estimate systolic and diastolic pressure. Comparison with intra-arterial pressure in the dog suggests that these ratios are quite imprecise, ranging from 0.45 to 0.57 for the systolic, and from 0.75 to 0.86 for the diastolic [6]. Basically, two main classes of models can be distinguished: lumped parameter models, which neglect wave propagation phenomena along the artery and provide a compartmental description of tissue and artery segments (i.e., they do not consider a spatial coordinate explicitly); and distributed parameter models, which consider the spatial coordinates (i.e., the longitudinal coordinate of the artery and of the arm and, in some instances, the arm radial coordinate, too). Both classes exhibit advantages and shortcomings.
Fig. 1 The oscillometric non-invasive blood pressure waveform and major phases of the NIBP waveform [4].
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 385–389, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
1. First, a simple biomechanical description of the cuff wrapped around the arm is proposed: in this case a lumped parameter model may be sufficient to account for the main biomechanical properties of the cuff + surrounding air system.
2. Subsequently, the analysis of pressure transmission across the elastic tissue of the arm should be carried out. In the case of an ideal cuff, this can be studied as the classic two-dimensional problem in cylindrical coordinates, where the stress distribution is symmetrical about an axis. Hence, the analytical solution can be written independently of the axial coordinate. With suitable simplifications, the distributed parameter model is then simplified into a lumped parameter one. Moreover, a more complex model for stress propagation in the tissue is presented, to account for the cases when the pressure load is not uniformly distributed around the arm. In this event, however, analytical solutions are not available and, thus, a finite-element numerical method is adopted. The effect on the measurement of changes in cuff dimension is analyzed with the finite-element model of the arm, together with the effect of alterations in the elastic parameters of the arm tissue (Young's modulus and Poisson's ratio) [4].
3. The last portion of our mathematical analysis concerns the description of the collapsing artery under the cuff. Most emphasis is given to a lumped parameter description of brachial hemodynamics. A one-dimensional, distributed parameter model of brachial hemodynamics is also presented for the sake of completeness, without entering into specific mathematical details.
B. Techniques for noninvasive blood pressure transfer characteristics estimation

There are two ways to validate NIBP estimation techniques and to improve their performance. The first consists of comparing the values provided by the indirect technique with those obtained simultaneously by a catheter inserted into an artery. The most important results, however, can be summarized as follows. The auscultatory method seems to underestimate systolic blood pressure by 5-20 mm Hg, while it overestimates diastolic blood pressure by 12-20 mm Hg [7, 8]. Geddes et al. [5] used the value of the cuff pressure pulse amplitude, normalized to its maximum, to estimate systolic and diastolic pressure. Comparison with intra-arterial pressure in the dog suggests that these ratios are quite imprecise, ranging from 0.45 to 0.57 for the systolic, and from 0.75 to 0.86 for the diastolic [8].

III. A LUMPED PARAMETER MODEL OF THE CUFF

The overall pressure-volume characteristic of the cuff depends on the elasticity of the internal wall, of the air enclosed in the bladder, and of the external wall. The cuff wall thickness is assumed to be negligible; Ve denotes the volume enclosed within the cuff external wall, Vc is the air volume inside the cuff, and Vi is the volume enclosed within the cuff internal wall. Of course, when the cuff is wrapped around the arm, the latter volume is approximately equal to the arm volume. In the following we shall denote by pc the pressure of the air inside the cuff and by pb the outer pressure acting on the cuff internal wall, both evaluated with respect to the atmosphere. During the measurement, when the cuff is wrapped around the upper arm, pb is equal to the pressure transmitted from the cuff to the arm's outer surface. Moreover, the pressure acting on the cuff external wall is constant and equal to the atmospheric pressure.

A lumped-parameter model of the cuff consists of a relationship linking the cuff volume Vc with the pressures pc and pb. By denoting with Ce the compliance of the cuff external wall, and with Ci the compliance of the internal wall, we have

dVe/dt = Ce(pc)·(dpc/dt)   (1.1)

dVi/dt = Ci(pc − pb)·(dpb/dt − dpc/dt)   (1.2)

where, in writing Eqs. (1.1) and (1.2), it is assumed that both compliances are non-linear functions of the transmural pressure. Finally, by differentiating the volume balance Vc = Ve − Vi, Eq. (1.3) can be written as

dVc/dt = dVe/dt − dVi/dt = Ce(pc)·(dpc/dt) − Ci(pc − pb)·(dpb/dt − dpc/dt)   (1.3)

Equation (1.3) characterizes the pressure-volume behavior of the occluding cuff, provided expressions for the internal and external wall compliances are available. Such expressions can easily be obtained for a given cuff by means of the following experimental procedure. In order to obtain an expression for the compliance of the cuff external wall, the cuff can be wrapped around a rigid cylinder of suitable diameter and progressively inflated with air. Since the inner radius does not change in this condition, we have dVi/dt = 0, and so the pressure-volume curve reflects only the elasticity of the external wall, i.e.,

dVc/dt = dVe/dt = Ce(pc)·(dpc/dt) → Ce(pc) = dVc/dpc   (1.4)

Similarly, in order to characterize the compliance of the cuff internal wall, the cuff can be enclosed within a rigid cage, which prevents any outer expansion, while the internal
cuff is free to expand against the atmospheric pressure. This means that dVe/dt = 0 in Eq. (1.3) and, moreover, pb = 0.
Hence, we can write

dVc/dt = −dVi/dt = Ci(pc − pb)·(dpc/dt) → Ci(pc − pb) = dVc/d(pc − pb)   (1.5)

Fig. 4 A- Step response of the extracted (estimated) transfer function of the ARMAX model (amplitude of the blood pressure, mm Hg, vs. time in seconds; sampling time Ts = 0.7567)
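Equation (1.3) can be integrated numerically to obtain the cuff air volume from the two pressure signals. The sketch below assumes constant wall compliances and arbitrary illustrative pressure waveforms; the paper treats Ce and Ci as nonlinear functions of the transmural pressure, identified by the procedures of Eqs. (1.4) and (1.5):

```python
import numpy as np

# constant wall compliances (illustrative values, mL/mmHg)
Ce, Ci = 0.8, 0.3

t = np.linspace(0.0, 10.0, 10001)
dt = t[1] - t[0]
pc = 100.0 - 4.0 * t                     # cuff deflation ramp, mmHg
pb = 40.0 + 5.0 * np.sin(2 * np.pi * t)  # pulsation on the internal wall, mmHg

# Eq. (1.3): dVc/dt = Ce(pc)*dpc/dt - Ci(pc - pb)*(dpb/dt - dpc/dt)
dVc = Ce * np.gradient(pc, dt) - Ci * (np.gradient(pb, dt) - np.gradient(pc, dt))
Vc = 80.0 + np.cumsum(dVc) * dt          # cuff air volume, mL (80 mL initial fill)
```

With constant compliances the integral has a closed form, which makes the numerical result easy to verify; with identified nonlinear compliances, the same integration applies with Ce and Ci evaluated at the current pressures.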
Fig. 2 Sensor array transducer based on the NIBP oscillometry measurement technique [4]
IV. ESTIMATION OF THE NIBP OSCILLOMETRIC TRANSFER CHARACTERISTICS BASED ON ARMAX ALGORITHM
The blood pressure input paradigm is used as the actuation signal of the system identification process: the response is monitored in order to estimate, and make a robust prediction of, the NIBP transfer function. Here, six paradigms of simulated inputs from eight subjects were tested; Fig. 3 illustrates the signals and the simulated ARMAX estimation kernel schemes. The resulting error rate ranges between 0.0211 and 0.036; this tolerance index indicates that the simulated system is stable and passes the transient region with a rise time tr = 1.3 s, as expected for a physiological index.

A. Simulation results of the NIBP system

The simulation results are illustrated in Fig. 3 and Fig. 4, which show the system dynamics and steady-state characteristics of the oscillometric system proposed as output from the ARMAX method. The platform used is the SIMULINK® software from The MathWorks.

B. Estimation method for NIBP

The oscillometric input signal is simulated and applied to the ARMAX kernel. The duration of a simulation ranges between 120 ms and 3500 ms, according to the number of samples of the parameterized blood pressure waveform. To define the input paradigm of NIBP waveform characteristics, six different paradigms of blood pressure waveforms from 7 subjects were selected and supplied to the ARMAX estimator kernel mask, as in the proposed model of Fig. 2. The sampling frequency of the sample-and-hold (S/H) system must be set so that it is embedded inside the S-function of the kernel, and it is set to (0) in order to inhibit harmonics computation for the input signal and avoid any frequency overload of the system; in other words, to attenuate the harmonics associated with the estimated model and decrease the polynomial orders of the estimated transfer characteristics. The resulting transfer functions of the simulated system are shown in Table 1 with the corresponding calibrating gains.
Fig. 3 Simulated blood pressure signal from the oscillometric unit based on the ARMAX estimator unit (panels: actual output (red line) vs. predicted model output (blue line); error in the predicted model)

Fig. 4 B- Step response of the extracted (estimated) transfer function of the ARMAX model
The duration of the simulation experiment should be at least three times greater than the repetition period of the BP waveform; this adds accuracy to the regression estimation algorithm. For estimating and deriving the polynomial coefficients (α1…αn-1) and (β1…βn-1) of the numerator and denominator, one important aspect should be taken into consideration: the calibrating factor (gain-weighting factor) of the oscillotonometric transducer, in this case a sphygmomanometric system, should be within the range 0.89-1.9 to obtain the optimal simulation result. Trial simulations were carried out by selecting different gains (calibrating factors) G in the path of the NIBP signal. The step response of the estimated transfer characteristics shows enhancement when the calibrating gain is increased to three times the initial value set in the simulation system. The transfer functions of the NIBP system predicted using the ARMAX method are given in the following table, in which 7 trial simulations with two sampling times Ts were performed; they illustrate that reducing the sampling time leads to a decrease of the frequency harmonics associated with the extracted model.

Table 1 Estimated transfer characteristics of the NIBP system simulated with the ARMAX algorithm (coefficients truncated in the original are marked with …)

Trial | Transfer function, Ts = 0.035 s | Transfer function, Ts = 0.0745 s
1 (G=0.89) | M(z)/R(z) = z^3 / (z^3 − 1.6z^2 + 0.4) | M(z)/R(z) = z^3 / (1.977z^3 − 1.582z^2 + 0.403z + …)
2 (G=0.92) | z^3 / (1.835z^3 − 1.581z^2 + 0.4…) | z^3 / (2.09z^3 − 1.5877z^2 + 0.4055z + 0.205)
3 (G=1.04) | z^3 / (2.013z^3 − 1.677z^2 + 0.…) | z^3 / (2.017z^3 − 1.702z^2 + 0.41z + …)
4 (G=1.24) | z^3 / (2.024z^3 − 1.742z^2 + …) | z^3 / (2.0245z^3 − 1.765z^2 + 0.421z…)
5 (G=1.38) | z^3 / (2.0446z^3 − 1.812z^2 + 0…) | z^3 / (2.0246z^3 − 1.8z^2 + 0.421z + …)
6 (G=1.49) | z^3 / (2.21z^3 − 1.932z^2 + 0.441z + …) | z^3 / (2.056z^3 − 1.87z^2 + 0.4301z…)
7 (G=1.64) | z^3 / (2.329z^3 − 2.045z^2 + 0.48z + …) | z^3 / (2.402z^3 − 2.065z^2 + 0.502z + …)

Fig. 5 Step response of the NIBP system based on the ARMAX estimator with different sampling times Ts (amplitude of the BP waveform, mm Hg, vs. time in sec)
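The regression step at the core of such an estimator can be sketched as follows. This is not the authors' SIMULINK implementation: it fits a simplified second-order ARX structure by least squares (a full ARMAX fit also estimates a noise polynomial, typically by iterative pseudo-linear regression), and the coefficients are hypothetical values chosen to be stable, not entries of Table 1:

```python
import numpy as np

def simulate_arx(a, b, u):
    """Simulate y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]."""
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = a[0]*y[k-1] + a[1]*y[k-2] + b[0]*u[k-1] + b[1]*u[k-2]
    return y

def fit_arx(u, y):
    """Least-squares estimate of the four ARX coefficients from input/output data."""
    # regression matrix: one row per sample, columns = delayed outputs and inputs
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    return theta

rng = np.random.default_rng(0)
u = rng.standard_normal(2000)                  # persistently exciting input
y = simulate_arx([1.2, -0.35], [1.0, 0.5], u)  # "true" system response
theta = fit_arx(u, y)                          # recovers ≈ [1.2, -0.35, 1.0, 0.5]
```

With noise-free data plain least squares recovers the coefficients exactly; with output noise it becomes biased, which is precisely what the moving-average (noise) polynomial of an ARMAX model is meant to absorb.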
V. CONCLUSIONS The extraction of the transfer function of the overall oscillometric direct NIBP system in this work is based on the ARMAX estimator algorithm, which reflects only partial dynamics information when compared with IBP measurement. This is due to ARMAX lacking a compensation factor for residual harmonics unless the sampling frequency of the unit's ADC is attenuated; this also adds a systematic error ratio Δerr = 0.0454 to the total measurement error. The method should be compared with models extracted from other IBP and NIBP measurements to establish the significance of the estimation techniques used. In future work, combined estimation methods such as MLE and singular value decomposition will be applied to both IBP and NIBP measurements, with hemodynamic performance comparison and analysis.
ACKNOWLEDGMENT
We would like to thank the Tuebingen University Department of Biocybernetics for providing research tools, and Computer Man College in Khartoum for giving us the opportunity and the scientific laboratory to complete this work in a suitable and productive manner.
REFERENCES

1. Lock I, Jerov M, Scovith S (2003) Future of modeling and simulation. IFMBE Proc. vol. 4, World Congress on Med. Phys. & Biomed. Eng., Sydney, Australia, 2003, pp 789-792
2. O'Brien E, Mee F, Atkins N, O'Malley K (1993) Short report: accuracy of the Dinamap portable monitor, model 8100, determined by the British Hypertension Society protocol. Hypertension, pp 761-763
3. Papadopoulos G, Oldorp B, Mieke S (1994) Die arterielle Blutdruckmessung mit oszillometrischen Geräten bei Neugeborenen und Kleinkindern. Anaesthesist 43:441-446
4. Petrie JC, O'Brien ET, Littler WA, de Swiet M (1986) British Hypertension Society: recommendations on blood pressure measurement. Br. Med. J. 293:611-615
5. Ramsey M III (1979) Noninvasive automatic determination of mean arterial blood pressure. Med. Biol. Eng. Comput. 17:11-18
6. Runcie CJ, Reeve WG, Reidy J, Dougall JR (1990) Blood pressure measurement during transport. A comparison of direct and oscillometric readings in critically ill patients. Anaesthesia 45:65-665
7. Yamakoshi K, Tanaka S (1993) Standard algorithm of blood-pressure measurement by the oscillometric method. Med. Biol. Eng. Comput. 31:204
8. Ursino M, Cristalli C (2001) Cardiovascular system. In: Biomechanical Systems, Techniques and Applications. CRC Press
Author: Noaman M. Noaman
Institute: Department of Computer Engineering, Computer Man College, Khartoum, Sudan
Street:
City: Al Khartoum
Country: Sudan
Email: [email protected]

Author: Abbas Kader Abbas
Institute: Tuebingen University, Biocybernetics Department
Street: Waldhauser Ost, Fichtenweg 29, D-72076 Tuebingen
City: Tuebingen
Country: Germany
Email: [email protected]
The hybrid piston model of lungs M. Kozarski, K. Zielinski, K.J. Palko and M. Darowski Department of Bioflows, Institute of Biocybernetics and Biomedical Engineering, Warsaw, Poland Abstract— A novel hybrid (pneumo-electrical-numerical physical) model of lungs is presented, together with a general procedure for creating this type of model. The procedure consists in the proportional transformation of the electrical impedance of a lumped parameter electrical or numerical model of lungs into a pneumatic impedance obtained at the input pneumatic terminal of the model. The standard Dubois mathematical model of lungs has been applied in model examinations, which proved the assumed concept of hybrid modeling. Keywords— lungs model, lungs mechanics, impedance transformation, physical hybrid lungs models, nonlinear lungs models.
I. INTRODUCTION

Physical models of lungs reproducing their mechanical properties are still a valuable tool in many applications. Models of lungs may be used:
• to develop and evaluate new methods of artificial ventilation applied for different lung disease treatments,
• to develop new measurement methods and instruments for lung investigations,
• to create standards for medical instruments and measurement procedures,
• for testing of respirators and spirometric instruments,
• for education of students and training of medical staff by demonstrating:
• the influence of lung pathology on the mechanical function of lungs,
• different respiratory strategies applied in the treatment of different lung diseases.

Nearly all [2] physical lung models worked out till now have been designed as more or less complex connections of discrete mechanical elements such as springs, pistons, bellows and pneumatic resistors, representing the abilities of lungs to accumulate potential energy (by mechanical elastances) and to dissipate energy (by resistors). Unfortunately this approach exhibits strong limitations as far as the complexity of the model structure is concerned. Additionally, it is very difficult, if possible at all, to reproduce the nonlinear properties of lungs, crucial for testing new methods of assisted ventilation supporting natural breathing.

II. MATERIALS AND METHODS

Mechanical properties of lungs may be described [1, 3] by the impedance, defined as the ratio of air pressure to the corresponding flow at any point of the lung tree. The aim of this paper is to present the general approach leading to the final design of the hybrid lung model. The general procedure leading to this aim contains the following steps (Fig. 1):

Fig. 1 General procedure

The main functional element enabling transformation of the electrical impedance from its electrical side to a pneumatic one is the electropneumatic proportional converter, which can be realized as pure analog or hybrid (numerical-pneumatic). This is presented in Fig. 2a as an analog three-terminal box. The basic element of the circuitry is the voltage controlled flow source (VCFS) delivering flow q independent of pressure p. To complete the circuit it is necessary to have a voltage controlled voltage source (VCVS). In the considered circuit, any electrical impedance connected to the electrical terminals will be transformed into a directly proportional pneumatic impedance measured at the pneumatic terminal. The VCVS may be replaced by a numerical (computer) section of the impedance transformer. The PC (Fig. 2b) solves the differential and algebraic equations describing the mathematical model of lungs. The VCVS is there represented by a numerical voltage value proportional to pressure p, which is the numerical input into the set of equations. Solution of the equations gives the current and, finally, the voltage controlling flow q.
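The role of the numerical section can be illustrated with a much simpler lung model than the Dubois one: given sampled pressure p, solve the model equations for the flow q that the flow source must deliver. The sketch below uses a single-compartment R-C simplification with illustrative parameter values, not the full Dubois structure:

```python
import numpy as np

R = 3.0    # airway resistance, cmH2O/(L/s)  (illustrative)
C = 0.05   # lung compliance, L/cmH2O        (illustrative)

def lung_flow(p, dt):
    """Given airway pressure samples p(t), return the flow q(t) obtained by
    solving p = R*q + V/C with forward-Euler integration of V' = q."""
    V = 0.0
    q = np.zeros_like(p)
    for k in range(len(p)):
        q[k] = (p[k] - V / C) / R   # flow driven by pressure drop across R
        V += q[k] * dt              # accumulated lung volume
    return q

dt = 0.001
t = np.arange(0.0, 10.0, dt)
p = np.full_like(t, 10.0)           # 10 cmH2O pressure step at the airway opening
q = lung_flow(p, dt)                # flow decays as the compartment fills to C*p = 0.5 L
```

In the hybrid model, this computed q is what the piston servo is commanded to deliver, so the pneumatic terminal presents the impedance of the numerical lung model.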
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 416–418, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The novel (patented) hybrid physical model of lungs presented here uses a piston servo system as a voltage controlled flow source (Fig. 3). The flow produced by the piston is proportional to the input voltage uq.

III. RESULTS

The servo system exhibits very good dynamic and static properties (Fig. 4). The time constant of the large-signal step response is 0.8 ms. The static characteristic q(uq) is linear with negligible error (correlation factor R2 = 0,999997). In this paper, test results of the complete model of lungs are presented for the pure analog impedance transformer (Fig. 5) and for its pneumatic-numerical equivalent system. In both cases the same mathematical model of lungs has been applied, i.e., the standard Dubois model with a spontaneous breathing mechanism included (Fig. 6). Exemplary flow and pressure time courses obtained in the model are presented in Fig. 6.
Fig. 4 Static and dynamic characteristics of the VCFS

Fig. 2 Impedance converter (transformer)
Fig. 3 Voltage controlled flow source. A - piston area, x - piston displacement, v - piston velocity, q - flow (q = A·v), n - angular motor shaft velocity, C0 - pneumatic compliance C0 = V0/mPa = Ax0/mPa, m - polytropic exponent, P - pressure in V0, uT - voltage output of tacho T (proportional to n), uq - preset value of q, ux - output voltage of the displacement transducer x/u
Fig. 5 Hybrid pure analog model of lungs
Fig. 6 Standard Dubois model of lungs. Ra – airway resistance, La – airway inertance, Ca – airway compliance, Rt – tissue resistance, Lt – tissue inertance, Ct – tissue compliance, PSV – spontaneous breathing

IV. CONCLUSIONS

In the Institute of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences in Warsaw we have started a program aimed at designing a hybrid physical-numerical lung model suitable for evaluating new methods of mechanical ventilation of lungs applied in many pathological clinical situations. This requires models able to reproduce such phenomena as strong nonlinearities of lung parameters, alveoli recruitment, rheological properties of lung tissues, etc. In our opinion, the only way to cope with these problems is to build hybrid systems, in which there are no practical limits on the complexity of lung models and their structural nonlinearities. The pneumo-electrical impedance transformation is one of the possible ways to solve these problems.

ACKNOWLEDGMENT

The work was supported by the Foundation for Polish Science.

REFERENCES

1. Golczewski T, Kozarski M, Darowski M (2003) The respirator as a user of virtual lungs. Biocybernetics and Biomedical Engineering 23(2):57-66
2. Verbraak A, Rijnbeek P, Beneken J, Bogaard J, Versprille A (2001) A new approach to mechanical simulation of lung behaviour: pressure-controlled and time-related piston movement. Medical & Biological Engineering & Computing 39:82-89
3. Darowski M, Kozarski M, Golczewski T (2000) Model studies on respiratory parameters for different lung structures. Biocybernetics and Biomedical Engineering 20:67-77

Author: Darowski Marek
Institute: Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences
Street: 4 Ks. Trojdena
City: Warsaw
Country: Poland
Email: [email protected]
The impact of the intubation model upon ventilation parameters
B. Stankiewicz1, J. Glapinski1, M. Rawicz2, B. Woloszczuk-Gebicka2, M. Michnikowski1, M. Darowski1
1 Centre of Excellence ARTOG, Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Warsaw, Poland
2 Department of Anaesthesiology and Intensive Care, Warsaw Medical University, Poland
Abstract—The impact of differently shaped endotracheal tubes on ventilation parameters has been preliminarily assessed in this study. Two uncuffed pediatric tubes of different designs, a standard (cylindrical) tube and a new tube of smooth cone shape (3, 3.5 and 4 mm ID), have been examined under IPPV-mode ventilation using an infant lung model. The total inspiratory flow resistance (Ri), peak inspiratory pressure (PIP) and work of breathing (WOB) have been determined for intubation with the standard and the cone-shaped tube, and also for the non-intubated infant lung model. A significant reduction of Ri, PIP and WOB was obtained when the standard tube was replaced with the new cone tube. The results have been confirmed in a clinical case study.
Keywords— endotracheal tube, flow resistance, mechanical ventilation.
I. INTRODUCTION

A novel endotracheal tube (patent pending) was designed to decrease its flow resistance compared to the commonly used cylindrical tube [1-3]. The part of the work of breathing needed to overcome tube resistance, i.e. the ETT-imposed work of breathing, can exceed 50% of the entire work of breathing, especially in neonatal and pediatric patients, who are intubated with tubes of the smallest diameters (1.5-5 mm inner diameter) and thus with the highest gas flow resistances [4]. The decreased flow resistance of our new tube was achieved by a special tube shape: the internal diameter is constant along the tracheal part of the tube and then smoothly increases, reaching a diameter nearly twice as large. It is well known that tube flow resistance depends strongly on the internal diameter. Under laminar flow the resistance of a cylindrical, hydraulically smooth tube is inversely proportional to the 4th power of the internal diameter, and under turbulent flow to nearly the 5th power [5, 6]. Moreover, the original cone shape of our tube was designed to avoid the vocal cord injury during intubation that was often observed with the Cole (shouldered) tube [9-11].
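The diameter scaling mentioned above can be illustrated with the laminar (Hagen-Poiseuille) case; the tube length and gas viscosity below are illustrative assumptions, not values taken from the paper:

```python
import math

# Resistance of a smooth cylindrical tube under fully developed laminar
# flow (Hagen-Poiseuille): R = 8*mu*L / (pi*r^4), i.e. R ∝ 1/d^4.
# Length (0.14 m) and dynamic viscosity of air (1.8e-5 Pa*s) are
# illustrative assumptions for a pediatric tube.
def laminar_resistance(d_m, length_m=0.14, mu=1.8e-5):
    """Pressure/flow resistance in Pa/(m^3/s) of a tube of diameter d_m."""
    r = d_m / 2.0
    return 8.0 * mu * length_m / (math.pi * r ** 4)

r3 = laminar_resistance(3e-3)   # 3 mm ID
r4 = laminar_resistance(4e-3)   # 4 mm ID
# Going from 3 mm to 4 mm ID cuts laminar resistance by (4/3)^4 ≈ 3.16x.
print(r3 / r4)
```

The 1/d^4 dependence is why even the modest widening of the cone-shaped section can substantially lower the overall tube resistance.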
II. MATERIALS AND METHODS

A. Model study

The model of an infant lung (lung/thorax compliance C = 6 ml/cmH2O, total airway resistance R = 30 cmH2O/l/s), intubated with the standard cylindrical endotracheal tube (Portex) or with the novel cone-shaped tube (3, 3.5 and 4 mm ID), was connected to an EVITA respirator and ventilated in IPPV mode with a mixture of air and 30% O2. The parameters total flow resistance of the infant lung model and endotracheal tube (Ri), peak inspiratory pressure (PIP) and work of breathing (WOB) were determined on the basis of measurements made by a COSMOPLUS monitor (Respironics) placed between the upper tip of the endotracheal tube and the Y-piece.

B. Clinical case study

The subject of our study was a stable intensive care unit patient, an infant of 3 kg weight, intubated and ventilated in IPPV mode with an EVITA ventilator with a mixture of air and 30% O2. The study was approved by the Bioethics Committee at Warsaw Medical University. First, the patient was intubated with the standard 3.5 mm ID endotracheal tube (Portex), and then reintubated with the novel cone-shaped tube of 3.5 mm ID (in the tracheal part, with a connector of 6 mm OD) with the help of a laryngoscope. Directly before and after reintubation the patient was ventilated with 100% O2. The COSMOPLUS monitor placed between the upper tip of the endotracheal tube and the Y-piece was used to obtain the parameters Ri, PIP and WOB.

III. RESULTS

The results of the model study carried out using the infant lung model are shown in Fig. 1. The values of Ri, PIP and WOB obtained when the lung model was not intubated, when it was intubated with the standard 3.5 mm ID tube, and when it was intubated with the cone-shaped 3.5 mm ID tube are presented. It is clearly visible that all the parameters were significantly lower with the cone-shaped tube than with the standard tube.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 413–415, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Fig. 1. Inspiratory flow resistance (Ri), peak inspiratory pressure (PIP) and work of breathing (WOB) for the infant lung model non-intubated (left), intubated with the standard 3.5 mm ID tube (middle) and with the cone-shaped 3.5 mm ID tube (right), ventilated in IPPV mode; ventilation frequency f = 40/min, PEEP = 0, inspiratory time Ti = 0.5 s, tidal volume TV = 30 ml.

The results obtained from the clinical case study are presented in Fig. 2, showing Ri, PIP and WOB measured and calculated when the patient was intubated with the standard tube and with the cone-shaped 3.5 mm ID tube.
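For orientation, the work of breathing for a single-compartment model with the lung-model parameters quoted in the text (C = 6 ml/cmH2O, R = 30 cmH2O/l/s) can be sketched as the pressure-volume integral over inspiration. The sinusoidal flow waveform is synthetic, and this computation is illustrative; it is not the COSMOPLUS algorithm:

```python
import numpy as np

# WOB = ∫ P dV over inspiration for a linear R-C lung model.
# Parameters from the text; the flow waveform is synthetic, chosen only
# to deliver TV = 30 ml in Ti = 0.5 s.
C = 0.006            # compliance [l/cmH2O]
R = 30.0             # resistance [cmH2O/(l/s)]
Ti, TV = 0.5, 0.030  # inspiratory time [s], tidal volume [l]

t = np.linspace(0.0, Ti, 501)
q = (TV * np.pi / (2.0 * Ti)) * np.sin(np.pi * t / Ti)  # flow [l/s]
dt = np.diff(t)

# Inspired volume by trapezoidal integration of flow.
vol = np.concatenate(([0.0], np.cumsum((q[1:] + q[:-1]) / 2.0 * dt)))
P = R * q + vol / C                                     # pressure [cmH2O]

# Work of breathing: trapezoidal integration of P*q over time.
wob = float(np.sum((P[1:] * q[1:] + P[:-1] * q[:-1]) / 2.0 * dt))
print(wob)  # work per breath in cmH2O·l
```

The resistive term R·∫q² dt is the part lowered by a lower-resistance tube, while the elastic term TV²/(2C) is fixed by the lung model, which is why Ri reductions translate directly into WOB reductions.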
Fig. 2. Inspiratory flow resistance (Ri), peak inspiratory pressure (PIP) and work of breathing (WOB) for the infant intubated with the standard 3.5 mm ID tube (left) and after replacement with the cone-shaped 3.5 mm ID tube (right), ventilated in IPPV mode; f = 30/min, PEEP = 6 cmH2O, Ti = 0.7 s, TV = 30 ml.
Figure 2 shows that significantly lower values of all three parameters were obtained in the second situation, when the standard tube had been replaced by the cone tube. It is also visible that the results obtained in the model study and in the clinical case study are qualitatively very similar.

IV. CONCLUSIONS

The novel cone-shaped pediatric endotracheal tube shows significantly decreased flow resistance in relation to that of
a standard cylindrical tube. As a result, the total inspiratory resistance of the patient's lung (model) and endotracheal tube (Ri) is lower as well, which positively influences the ventilatory parameters. As shown both by the model study (using the lung model) and by the single clinical case study carried out under IPPV-mode ventilation, the work of breathing (WOB) and peak inspiratory pressure (PIP) were significantly decreased when the standard tube was replaced with the novel cone-shaped tube.
ACKNOWLEDGMENT The study was supported by the Ministry of Science and Higher Education in 2006-2008 as the research project 3T11E02830.
REFERENCES
1. Stankiewicz B, Darowski M, Glapiński J, Rawicz M, Michnikowski M, Rogalski A (2006) In vitro comparison of the standard and the Cole endotracheal tubes with the endotracheal tube of new design. Critical Care 10(1):22
2. Stankiewicz B, Darowski M, Glapiński J, Rawicz M, Michnikowski M, Rogalski A (2006) Diminishing airways resistance and work of breathing by a novel design of the paediatric endotracheal tube. Proc. 5th World Congress of Biomechanics, Munich, Germany, 2006, pp 305-308 (ISBN 88-7587-270-8, CD ISBN 88-7587-271-6)
3. Stankiewicz B, Darowski M, Glapiński J, Rawicz M, Michnikowski M, Rogalski A (2006) Comparison of a standard pediatric tube with a novel tube of unconventional shape. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., Seoul, Korea, 2006, pp 3070-3073
4. Fujino Y, Uchiyama A (2003) Spontaneously breathing lung model comparison of work of breathing between Automatic Tube Compensation and Pressure Support. Respiratory Care 48(1):38-45
5. Manczur T, Greenough A, Nicholson GP, Rafferty GF (2000) Resistance of pediatric and neonatal endotracheal tubes: influence of flow rate, size, and shape. Crit Care Med 28(5):1595-1598
6. Oca MJ, Becker MA, Dechert RE, Donn SM (2002) Relationship of neonatal endotracheal tube size and airway resistance. Respiratory Care 47(9):994-997
7. Cole F (1945) A new endotracheal tube for infants. Anesthesiology 6:87-88
8. Hatch DJ (1978) Tracheal tubes and connectors used in neonates - dimensions and resistance to breathing. Br J Anaesth 50:959-964
9. Mitchell MD, Bailey CM (1990) Dangers of neonatal intubation with the Cole tube. British Medical Journal 301:602-603
10. Brewis C, Pracy JP (1999) Localized tracheomalacia as a complication of the Cole tracheal tube. Pediatric Anesthesia 9:531-533
11. Contencin P, Narcy P (1993) Size of endotracheal tube and neonatal acquired subglottic stenosis. Arch Otolaryngol Head Neck Surg 119:815-819

Author: Marek Darowski
Institute: Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences
Street: 4 Ks. Trojdena
City: Warsaw
Country: Poland
Email:
[email protected]
Wearable Wireless Biopotential Electrode for ECG Monitoring
E.S. Valchinov and N.E. Pallikarakis
Department of Medical Physics, University of Patras, 26 500 Patras, Greece

Abstract— A single-channel wearable wireless ECG biopotential electrode is presented. The device is built from commercial components and utilizes the license-free 434 MHz frequency band. The three-electrode, two-stage biopotential amplifier design and the wireless signal transfer system yielded very low noise and excellent interference rejection. A CMRR of 100 dB at 50 Hz and an equivalent input voltage noise of 0.4 μVrms were measured in tests of the developed prototype. The amplified analog ECG signal is sampled at 500 Hz by a 10-bit Analog-to-Digital Converter (ADC) and transmitted via a UHF Amplitude-Shift Keying transmitter to a dedicated receiver module connected to a PC via USB. Both the ADC and the RF transmitter are embedded in the flash-based 8-bit CMOS microcontroller rfPIC12F675. The receiver module is based on the low-cost single-conversion superheterodyne receiver rfRXD0420, interfaced with an 8-bit CMOS microcontroller PIC16C745 with USB support. The developed electrode is powered by a small lithium coin cell and can perform continuous ECG recording and transmission for more than a week.
Keywords— wireless monitoring, biopotential electrode, amplifier, telemetry, ECG.
I. INTRODUCTION

Although continuous ECG monitoring has proven useful in the early detection of life-threatening events, it has limited application, mainly due to the inconvenience of the medical instrumentation currently available. However, rapid developments in wireless technology — reduced size and power consumption of radio transceivers, increased transfer speeds, and new communication protocols — have enabled its use in many medical applications. Nowadays there are a number of products and projects within mobile ECG recording using Bluetooth technology, GSM/GPRS, WAP-based implementations and wireless local area networks (WLAN) [1-4]. The main advantage of wireless measurement technology is the increased patient mobility, degree of freedom and convenience, since the patient is not restricted by lead wires. The patient can be moved throughout a hospital while being monitored, which facilitates ambulation, transport and positioning. The critical time needed to attach the lead wires, and the false alarms due to their entanglement, are eliminated. Moreover, safety is increased due to complete isolation from the power-line network, and noise properties and interference rejection are better. Additionally, the influence of body movement on wireless biopotential electrodes is much smaller than on conventional ones. Our goal was to develop a light, low-power and low-cost wearable wireless biopotential electrode that provides enhanced comfort and convenience for the patient and can be worn continually during ECG monitoring over longer periods of time. The size and cost of the electrode will allow previously unmonitored patients to be monitored in both acute care and outpatient settings.

II. METHODOLOGY

A general overview of the proposed ECG monitoring device is depicted in Fig. 1. It consists of two units: a wearable on-body measurement electrode that continuously measures, samples and transmits the ECG signal, and a base unit that receives the transmitted signal and directs it to a personal computer.
Fig. 1 Block diagram of the proposed ECG monitoring device
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 373–376, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 Schematic of the biopotential amplifier with balanced DC rejection.

The first electrode section is a very low-power analog front-end comprising a single-channel low-noise biopotential amplifier followed by a low-pass filter stage. This section conditions the ECG signal for its later A/D conversion. The biopotential amplifier used is shown in Fig. 2 and exploits two balanced DC-rejection circuits in the feedback, which do not rely on matched passive components [5]. This approach is a balanced extension of the two-opamp instrumentation amplifier [6] and inherits its good DC input range and low noise, with a Common Mode Rejection Ratio (CMRR) insensitive to passive component mismatches and tolerances. Because of the large differential gain of the first amplifier stage, the CMRR and voltage noise are optimal. It can be shown that the expression for the differential-mode gain Ad(s) is given by
Ad(s) = sτ (1 + 2R2/R1)(1 + 2R4/R3) / (1 + sτ)          (1)

where τ = RiCi = 1 is the integrator time constant of the feedback loops. The high-pass amplifier response has a 1st-order pole at 0.05 Hz and a zero at 0.05 µHz. The equivalent input amplifier noise is mainly due to the noise voltage of opamps A1 and A2 and resistors R1 and R1′, which appears directly at the amplifier's input. However, the noise contribution of R1 and R1′ can be neglected for the selected low resistor values. The noise voltage from opamps A3 and A4 is attenuated by R2/R1, and opamp A5 contributes only common-mode noise. The amplifier bandwidth is limited to 100 Hz by a 4th-order low-pass filter to prevent aliasing. A Bessel filter type is preferred for its excellent transient response and linear phase. Single-supply rail-to-rail input/output opamps are used in order to achieve the maximum differential-mode DC input voltage range and fast recovery from overloads or large artifacts. In the second electrode section, the amplified and low-pass filtered ECG signal is sampled at 500 Hz by a 10-bit Analog-to-Digital Converter (ADC) and transmitted via a high-performance short-range radio transmitter, as shown in Fig. 3. The RF transmitter uses Amplitude-Shift Keying (ASK) modulation and a simplified KeeLoq protocol. Both the ADC and the transmitter are embedded in the flash-based 8-bit CMOS microcontroller rfPIC12F675. The receiver module is based on the low-cost single-conversion superheterodyne receiver rfRXD0420, interfaced with an 8-bit CMOS microcontroller PIC16C745 with USB support. Low-cost monitoring requires small mechanical enclosures and elimination of lead cables. This drove the size and shape of the electrode to a low-profile package with electrodes spaced close together, as shown in Fig. 4.

Fig. 3 Schematic of the digital and radio section.
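Equation (1) can be sanity-checked numerically. The component values below are hypothetical, chosen only so that the mid-band gain matches the 71 dB quoted in the specifications, and τ is set for the 0.05 Hz pole:

```python
import numpy as np

# Numerical check of Eq. (1):
#   Ad(s) = s*tau * (1 + 2*R2/R1) * (1 + 2*R4/R3) / (1 + s*tau)
# Resistor values are hypothetical, picked so the mid-band gain ≈ 71 dB.
R1, R2, R3, R4 = 1e3, 100e3, 1e3, 8.2e3
tau = 1.0 / (2.0 * np.pi * 0.05)  # time constant giving a 0.05 Hz pole

def Ad(f_hz):
    s = 1j * 2.0 * np.pi * f_hz
    return s * tau * (1.0 + 2.0 * R2 / R1) * (1.0 + 2.0 * R4 / R3) / (1.0 + s * tau)

mid_db = 20.0 * np.log10(abs(Ad(10.0)))    # mid-band gain, well above the pole
pole_db = 20.0 * np.log10(abs(Ad(0.05)))   # response 3 dB down at the pole
print(round(mid_db, 1), round(mid_db - pole_db, 2))
```

The sτ/(1 + sτ) factor is what makes the amplifier first-order high-pass: DC and electrode offset are rejected while the in-band differential gain is set by the two resistor ratios.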
Fig. 4 Assembly of the proposed biopotential electrode.

The electrode attachment to the patient's chest is implemented with three small disposable adhesive foam electrodes (PG10S). They utilize Ag/AgCl sensing elements with solid gel (Ø10 mm). The ECG signal is measured bipolarly between sensing elements SE1 and SE2, placed 3 cm apart (center-to-center). The common sensing element (CSE) is placed 2.7 cm away from SE1 and SE2 and provides a path for the input bias currents of opamps A1 and A2. The connection between the amplifier common and the CSE (i.e. the signal source) is implemented by a driven common electrode circuit (DCE), an equivalent of the driven right-leg circuit used in conventional biopotential amplifiers. Thus the common-mode voltage at the output of A5 is reduced by a factor equal to the gain of the DCE, which theoretically should give 50 dB of extra CMRR at 50 Hz.

III. RESULTS

The developed electrode prototype is a compact unit with a size of 58 x 50 x 10 mm and a weight of 25 grams. It accepts input offset voltages up to ±47 mV and has an equivalent input voltage noise of 0.4 μVrms for a -3 dB bandwidth of 0.05–100 Hz. The CMRR of the amplifier measured without the DCE circuit and with imbalanced electrode impedances (ΔZ = 10 kΩ) was 100 dB at 50 Hz. A CMRR of 116 dB was seen when the imbalance was removed. The maximum measured CMRR with the DCE circuit and a common-mode input signal of 2.8 Vp-p was 119 dB at 50 Hz, where the output signal level was approximately equal to the amplifier output voltage noise. The RF-transmitter output power was set to -12 dBm at 434 MHz, which assured a maximum reliable transfer distance of approximately 50 meters in open air with a loop antenna at the transmitter and a quarter-wave monopole antenna at the receiver. No noise due to the digital circuitry and the RF transmitter was observed in the measured signal. The electrode was powered by a small 3 V 1000 mAh lithium battery (CR2477), which will provide continuous ECG recording and transmission for more than a week. All the electrode prototype characteristics are given in Table 1.

Table 1 Electrode specifications

Bandwidth (-3 dB): 0.05-100 Hz
AC mid-band gain: 71 dB
Differential Mode AC input range: 0.82 mVp-p
Differential Mode DC input range: ±47 mV
Common Mode input range: ±1.4 V
Input bias current: 1 pA
Common Mode input impedance: 400 MΩ @ 50 Hz
CMRR without DCE and ΔZ = 10 kΩ: 100 dB @ 50 Hz
Equivalent input voltage noise: 0.4 μVrms @ 0.05-100 Hz
RF-transmitter output power: -12 dBm
Maximum reliable transfer distance: 50 m
Power supply voltage: 3 V
Analog front-end supply current: 0.56 mA
Total supply current: 4.56 mA
Total power consumption: 13.7 mW
Maximum battery life: 219 h
Dimensions: 58 x 50 x 10 mm
Weight: 25 g

The validation of the proposed electrode, and the system as a whole, requires human studies to assess its performance in real-world settings. We have performed only some preliminary tests to estimate the performance of the electrode prototype. A lead II non-filtered ECG waveform captured with the electrode placed on the subject's sternum is shown in Fig. 5.

Fig. 5 ECG signal recorded with the developed wireless electrode
As can be seen, the influence of the 50 Hz power-line interference is greatly reduced thanks to the electrically floating configuration, high CMRR and small size. For these reasons no 50 Hz notch filter was needed. The noise present in the acquired signal is mainly composed of muscle artifacts and noise from the electrode-skin interface due to the subject's motion. However, these artifacts are well minimized thanks to the low profile and low mass of the electrode, which allow comfortable wear.
IV. CONCLUSIONS

The wearable wireless electrode for ECG monitoring presented in this paper is easy to install for both the patient and the physician, with little previous training. Its low-power design allows continuous ECG monitoring over a period of several days. The low cost of the unit, combined with its sufficient functionality, will enable monitoring on a large scale. Applications can include currently unmonitored beds in hospital, patient monitoring during drug trials and while waiting for organ transplants, high-performance sports monitoring, cardiac rehabilitation, and home telemedicine for disease management. The preliminary results indicated that the innovations introduced to the design due to the very low power and size constraints led to improved performance in terms of electromagnetic interference and motion artifacts.

Future plans include on-site hardware processing of the ECG signal for automatic detection of life-threatening events, and reduction of the power consumption by transmitting only the heart rate when the entire ECG signal is not needed.

REFERENCES
1. Andreasson J, Ekström M, Fard A et al. (2002) Remote system for patient monitoring using Bluetooth. IEEE Proc Sensors 1:304-307
2. Istepanian RH, Woodward B, Gorilas E et al. (1998) Design of mobile telemedicine systems using GSM and IS-54 cellular telephone standards. J Telemed Telecare 4:80-82
3. Hung K, Zhang Y (2003) Implementation of a WAP-based telemedicine system for patient monitoring. IEEE Trans Inf Technol Biomed 7:101-107
4. Rollins D, Killingsworth C, Walcott G et al. (2000) A telemetry system for the study of spontaneous cardiac arrhythmias. IEEE Trans Biomed Eng 47:887-892
5. Spinelli EM, Martinez N, Mayosky MA et al. (2004) A novel fully differential biopotential amplifier with dc suppression. IEEE Trans Biomed Eng 51:1444-1448
6. Van Rijn A, Peper A, Grimbergen C (1994) Amplifiers for bioelectric events: a design with a minimal number of parts. Med Biol Eng Comput 32:305-310

Address of the corresponding author:
Author: E.S. Valchinov
Institute: Department of Medical Physics, University of Patras
City: Patras 26500
Country: Greece
Email:
[email protected]
A device for quantitative kinematic analysis of children's handwriting movements
A. Accardo1, A. Chiap1, M. Borean2, L. Bravar2, S. Zoia2, M. Carrozzi2 and A. Scabar2
1 DEEI, University of Trieste, via Valerio, 10, Trieste, Italy
2 S.C. Child Neuropsychiatry, IRCCS Burlo Garofolo, Via dell'Istria, Trieste, Italy
Abstract— Kinematic analysis of handwriting is a promising new frontier towards the characterization of handwriting movements, both in children and adults, with and without difficulties or pathologies that disrupt normal handwriting processes. The challenge, however, is to define and measure parameters that tap and highlight underlying mechanisms and strategies in order to comprehend such disorders, promote prevention programs and provide treatment or remedy, whenever possible. This work represents a practical application, binding neuropsychologic theory and engineering technology in the development of a device that enables on-line process analysis of handwriting, offering ample possibilities of research, both in medical and educational fields. Although employing complex mathematical procedures, the device is user friendly in its interface design and allows the rapid analysis of the parameters reported in literature, as well as some new and interesting variables that may contribute to the understanding of handwriting difficulties. The device has been successfully tested and used in a major Italian institute for childcare and research to evaluate handwriting proficiency in children. Preliminary results indicate that kinematic analysis of handwriting thus performed provides important information for the diagnosis and treatment of dysgraphia. Keywords— Handwriting, children, dysgraphia, kinematic analysis.
I. INTRODUCTION

Handwriting represents a complex motor behavior in which linguistic, psychomotor and biomechanical processes closely interact with maturational, developmental and learning processes. In recent years its analysis, initially carried out on scanned images, has been performed using direct measurement of handwriting movements acquired by digitizing tablets. This technology also allows objective quantitative kinematic analysis of the quality of writing, and it has recently been used to characterize the handwriting process [1], to study the dopaminergic effects on skilled handwriting movements in Parkinson's disease [2] and to assess handwriting in healthy elderly persons [3]. These analyses of handwriting movements have demonstrated the importance of temporal and spatial parameters (letter width and height, pen elevation size, number of pen lifts, letter length and duration) in the study of growing motor ability [1], as well as of kinematic aspects like velocity
and acceleration measures, which are altered in elderly people [3] as well as in Parkinson's disease [2]. Mean values of these measures are generally considered by the user. On the other hand, disorders in handwriting legibility and speed are also found among elementary school children [1]. Thus, a quantitative analysis of handwriting characteristics can be useful to study dysgraphia. The aim of this paper is to present a new device able to analyze handwriting movements produced by children on a digitizing tablet. The instrument was specifically produced to assess the effects of a rehabilitation program for dysgraphia that promotes the acquisition of domain-specific movements, in order to verify how treatment modifies particular kinematic parameters linked to the velocity profiles of strokes. Apart from some mean parameters previously described in the literature, other new interesting features are evaluated using sophisticated analytical tools. Following preliminary studies on dysgraphic handwriting carried out by our group [4-5], temporal and spatial measures of the dynamics of ongoing handwriting, based on signal processing methods, are developed and visually presented by means of user-friendly interfaces. An original automatic stroke identification procedure, starting from the curvilinear velocity curve, is supplied, reducing the operator's intervention to a minimum. The measures of handwriting characteristics are presented in such a way as to help clinicians and educators easily evaluate and understand handwriting disorders and parameter changes.

II. DEVICE DESCRIPTION

A. Basic features

The pen-tip position of an inking digitizing pen is sampled at 100 Hz by a digitizing graphics tablet (Wacom, Inc., Vancouver, WA, Model Intuos 2). Pen displacement across the tablet is recorded in both the horizontal and vertical directions with a spatial resolution of 0.01 mm.
Simple writing exercises mainly involving motor abilities (such as a one-minute handwriting sample of a continuous sequence of 'lelele') are performed. In order to produce a user-friendly device with easy-to-operate interfacing, the GUI (Graphical User Interface) tool of MATLAB (Mathworks) was adopted.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 445–448, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Fig. 1 Main page of the starting window.

Fig. 2 Component identification. Each component is progressively numbered.

The structure of each interface is very simple, so as to make its usage straightforward even for inexpert users. The device allows the analysis of handwriting data in both the spatial and time domains. Main menus in the starting window (Fig. 1) allow the user to select and open a file ('File' menu) containing acquired handwriting data, to execute manual or automatic operations in the spatial domain ('Manual selection' and 'Automatic analysis' menus), and to examine data in relation to time ('Graphs' menu).

B. Automatic procedures & data analysis

Each time a file is opened, all acquired data (X and Y pen positions, pressure, altitude and azimuth angles) are initially segmented into their components. The limits of each component are identified as the written tract between two consecutive pen lifts (Fig. 2). The following analysis is carried out separately on each component. To reduce noise in the acquired data, the X and Y pen positions of the component are first smoothed by a non-parametric regression method (cubic smoothing splines) and subsequently filtered by a second-order low-pass Butterworth filter (20 Hz cut-off frequency) with phase compensation. Generally, analyses focus on movement along the Y axis alone and possibly also along the X axis; however, since our device includes an automatic stroke identification mechanism based on the curvilinear velocity curve, the curvilinear
(C) motion has been reconstructed in the x-y plane from the X and Y components. To examine the dynamic characteristics of handwriting motion, the three displacement functions (X, Y and C) are differentiated twice with a 9-point central finite difference algorithm to yield the velocity and acceleration functions. Finally, to reduce the noise introduced by the differentiation process, each of these curves is low-pass filtered. Since normal movements, movement disorders, and rest and action tremors all occur at frequencies below 10 Hz [6], low-pass second-order Butterworth filters with a 10 Hz cut-off frequency and phase compensation are proposed initially. However, since in children, especially during the acquisition of a fine motor task, a frequency of 5 Hz is more suitable, a frame where the cut-off frequency can be changed has been included in the main window (Fig. 1). A final analysis concerns stroke identification. By means of an automatic segmentation procedure, each component is partitioned into consecutive elements, starting from the curvilinear velocity curve. The procedure detects points of minimal curvilinear velocity (Fig. 3C-D), hypothesizing that each velocity minimum corresponds to a different motor stroke, as claimed by the bell-shaped velocity profile theory [7]. Since handwriting difficulties in children with dysgraphia are consistent with a lack of automaticity in the programming and performing of handwriting movements, handwriting samples of these children often show a velocity minimum even when a current stroke is briefly interrupted and quickly resumed. In this case, however, the identified element does not actually correspond to a different stroke. Considering that the device we developed is especially dedicated to the analysis of dysgraphia, we also included such elements among the 'strokes', since the subsequent analysis of 'stroke' duration easily pinpoints this situation,
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A device for quantitative kinematic analysis of children’s handwriting movements
allowing an evaluation of the degree of automaticity of handwriting movements. As an example in Fig.4, the stroke duration histograms of two handwriting samples, one of a child with efficient production (Fig.3A-C) and one of a child with dysgraphia (Fig.3B-D) are reported. The large
difference in the mean values and the overall low values in the handwriting sample of the child with dysgraphia are consistent with the reduction in stroke duration due to the interruptions.
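The processing chain described earlier (9-point central finite-difference differentiation, zero-phase low-pass Butterworth filtering, and segmentation at curvilinear-velocity minima) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the edge handling and the use of NumPy/SciPy are our assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def deriv9(x, dt):
    """First derivative via the 9-point central finite-difference stencil
    (8th-order accurate); the 4 edge samples fall back to np.gradient."""
    x = np.asarray(x, dtype=float)
    d = np.gradient(x, dt)          # fills the edges
    a = (4/5, -1/5, 4/105, -1/280)  # antisymmetric stencil weights
    n = len(x)
    interior = np.zeros(n - 8)
    for k, ak in enumerate(a, start=1):
        interior += ak * (x[4 + k:n - 4 + k] - x[4 - k:n - 4 - k])
    d[4:-4] = interior / dt
    return d

def lowpass(x, fs, fc=10.0):
    """Zero-phase (forward-backward) 2nd-order Butterworth low-pass filter;
    fc is the cut-off in Hz (10 Hz default, 5 Hz suggested for children)."""
    b, a = butter(2, fc / (fs / 2))
    return filtfilt(b, a, x)        # filtfilt provides the phase compensation

def stroke_boundaries(curvilinear_velocity):
    """Indices of local minima of the curvilinear velocity profile,
    taken as boundaries between successive strokes."""
    minima, _ = find_peaks(-np.asarray(curvilinear_velocity))
    return minima
```

Velocity would be obtained as `deriv9` of the filtered displacement, and acceleration by applying `deriv9` again; the stencil is exact for polynomials up to degree 8, so the residual error comes only from noise, which the filtering step attenuates.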
Fig. 3 Examples of stroke identification as minima of the curvilinear velocity profiles (C-D), also reported in the X-Y spatial domain (A-B), in two children with normal writing (A-C) or with some handwriting difficulties (B-D).
C. Visualization of components and strokes in the spatial domain ('Automatic Analysis' menu)
The visualization of components is accomplished by selecting the 'Component visualization' option from the 'Automatic Analysis' menu. A new window is presented in which each component of the handwriting sample is delimited by a colored rectangle (Fig. 2). All parameters concerning the current component (duration; mean, peak and standard deviation of pressure; X, Y and curvilinear length, velocities and accelerations) are visualized in a frame to the left of the graphic window. At the bottom of the frame, the mean and standard deviation of the above parameters across all components are visualized. A slider bar allows moving across the components. Two buttons allow the visualization of the 'in air' and 'on paper' (component) duration histograms, respectively. The number of 'in air' lifts represents an index of handwriting fluency. The visualization of strokes is achieved by choosing the 'Stroke visualization' option from the 'Automatic Analysis'
menu. A new window appears containing the entire handwriting sample, in which each stroke is identified by a progressive number (Fig. 3). Parameters concerning the current stroke, together with their mean and SD values, are visualized in the same way as for the components. A slider bar allows moving across the strokes. Three buttons allow the visualization of graphs with the distributions of duration (Fig. 4), curvilinear length and curvilinear peak velocity of the strokes.
D. Visualization in the time domain ('Graph' menu)
By means of the 'Time domain visualization' option in the 'Graph' menu, it is possible to visualize the graphs of the X and Y velocities and of the X and Y accelerations, in separate windows, as well as that of the curvilinear velocity in time. By clicking a button in the window, stroke identification numbers (Fig. 3C-D) are superimposed on the relative velocity profiles, allowing comparison of spatial and temporal elements (see Fig. 3 for an example). In this way another interesting parameter (the number of 'strokes' per let-
A. Accardo, A. Chiap, M. Borean, L. Bravar, S. Zoia, M. Carrozzi and A. Scabar
ter) that describes a dynamic characteristic of handwriting can be evaluated. The 'Pressure visualization' option (also in the 'Graph' menu) opens another window in which each sample of handwriting in the spatial domain is coloured according to the amount of pressure exerted. The pressure range (0-1023 levels) is subdivided into seven equally spaced classes, each associated with a different color (black, red, yellow, green, cyan, blue and magenta). In this way it is very easy to identify pressure variations in writers. On the left side of the graph, a frame presents the pressure histogram, together with the mean and standard deviation values. These values are correlated with the mean force exerted by the writer on the sheet.
E. Manual operations ('Manual Selection' menu)
Besides the automatic procedures, some manual operations are provided. The first allows the operator to replay handwriting samples at different velocities (particularly in slow motion) in order to accurately examine handwriting strategies (directions, sequences of movements, etc.), which is often impossible if the samples are examined only off-line. This is obtained by means of the slider bar present at the bottom of the main window (Fig. 1). Two arrows permit fast shifting through the handwriting. Other manual operations concern the selection of areas (single or pairs of points) on the writing track. The selection of a work area ('Area selection' option) is useful for zooming into a specific region of interest in order to operate on this area as a whole. By using the 'pair of points' option, any two points within the handwriting sample can be selected. Some useful parameters, such as the slope of the line passing through the two points or the spatial and temporal distances between the points, can be calculated and visualized.
III. CONCLUSIONS
The device has been successfully used for the examination of normal and dysgraphic handwriting in the Child Neuropsychiatry Unit of the "Burlo Garofolo" Institute for Childcare and Research of Trieste. Preliminary results [4-5] indicate that kinematic analysis of handwriting not only provides important information about the processes and strategies involved in learning and controlling handwriting but also constitutes useful support for monitoring progress during the treatment of dysgraphia.
Fig. 4 Stroke duration histograms in two children with normal writing (top) or presenting some handwriting difficulties (bottom).
REFERENCES
1. Rosenblum S, Chevion D, Weiss PL (2006) Using data visualization and signal processing to characterize the handwriting process. Pediatric Rehabil 4:404-417
2. Tucha O, Mecklinger L, Thome J et al (2006) Kinematic analysis of dopaminergic effects on skilled handwriting movements in Parkinson's disease. J Neural Transm 113(5):609-623
3. Rosenblum S, Werner P (2006) Assessing the handwriting process in healthy elderly persons using a computerized system. Aging Clin Exp Res 18(5):433-439
4. Bravar L, Borean M, Zin R et al (2005) Use of a graphic tablet in the evaluation of handwriting skills, before and after a movement-based treatment, in a group of children with dysgraphia. 6th Int. Conf. on Develop. Coord. Disorder (DCD), Trieste, Italy, May 17-20
5. Borean M, Bravar L, Accardo A et al (2007) Degrees of improvement after a movement-based treatment in Italian children with dysgraphia. 7th Int. Conf. on Develop. Coord. Disorder (DCD), Melbourne, Australia, February 6-9
6. Phillips JG, Gallucci RM, Bradshaw JL (1999) Functional asymmetries in the quality of handwriting movements: a kinematic analysis. Neuropsychol 13:291-297
7. Plamondon R (1991) On the origin of asymmetric bell-shaped velocity profiles in rapid-aimed movements. In: Requin J, Stelmach GE (eds) Tutorials in motor neuroscience, Kluwer Academic Publishers, Dordrecht, pp 283-295

Address of the corresponding author:
Author: Agostino Accardo
Institute: DEEI-Dept of Electronics
Address: Via Valerio, 10
City: Trieste
Country: Italy
Email:
[email protected]
Analysis of foveation duration and repeatability at different gaze positions in patients affected by congenital nystagmus
M. Cesarelli1, P. Bifulco1, M. Romano1, G. Pasquariello1, A. Fratini1, L. Loffredo2, A. Magli2, T. De Berardinis2, D. Boccuzzi2
1 Dept. of Electronic Engineering and Telecommunication, Biomedical Engineering Unit, University "Federico II", Naples, Italy
2 Department of Ophthalmologic Science, University "Federico II", Naples, Italy
Abstract— Congenital nystagmus (CN) is a disturbance of the oculomotor centers which develops at birth or in the first months of life. Nystagmus consists essentially of involuntary, conjugate, horizontal rhythmic movements of the eyes. Its pathogenesis is still unknown. Current therapies for CN aim to increase the patient's visual acuity by means of correction of refraction defects, drug delivery and ocular muscle surgery. Eye movement recording provides support for accurate diagnosis, patient follow-up and therapy evaluation. In general, CN patients show a considerable decrease in visual acuity (image fixation on the retina is hindered by the continuous nystagmus oscillations) and severe postural alterations, such as an anomalous head position adopted by the patient to obtain better fixation of the target image onto the retina. CN often presents 'neutral zones' corresponding to particular gaze angles, in which the nystagmus amplitude is minimized, allowing a longer foveation time and a more stable repositioning of foveations, and thus increasing visual acuity. Selected patients' eye movements were recorded using EOG or infrared oculography devices. Visual stimulation was delivered by means of an arched LED bar covering a visual field of -30 to +30 degrees with respect to the central position. Computation of concise CN parameters allows in-depth analysis of foveations and estimation of visual acuity at different gaze angles. Preliminary results show a maximum of visual acuity at a specific gaze angle; for the analyzed group, this angle is mostly located on the patient's right side.
Keywords— Congenital nystagmus, eye movement, visual acuity, foveation.
I. INTRODUCTION
Congenital nystagmus (CN) is an ocular motor disorder which develops at birth or in the first months of life and persists throughout life. Nystagmus consists essentially of involuntary, conjugate, horizontal (rarely vertical or rotatory) rhythmic movements of the eyes. Nystagmus oscillations can persist even with the eyes closed, although they tend to damp in the absence of visual activity. Nystagmus can be idiopathic or associated with alterations of the central nervous system and/or the ocular system, such as achromatopsia, aniridia and congenital cataract. Both nystagmus and the associated ocular alterations can be genetically transmitted, with different
modalities. According to some authors, the occurrence of idiopathic CN is 1 in 1000 males and 1 in 2800 females; the occurrence of CN associated with total bilateral congenital cataract is 50-75%, less if the cataract is partial or monolateral. CN is present in most cases of albinism. The pathogenesis of congenital nystagmus is still unknown; dysfunctions of at least one of the ocular stabilization systems, such as fixation, smooth pursuit, the vestibulo-ocular reflex and the optokinetic reflex, have been hypothesized. According to the literature, nystagmus can be classified into different categories (pendular, jerk, horizontal unidirectional, bidirectional) depending on the characteristics of the oscillations. Eye movement recording and the estimation of concise parameters, such as amplitude, frequency, direction, foveation periods and waveform shape, are a strong support for accurate diagnosis, patient follow-up and therapy evaluation. Current therapies for CN, still debated, aim to increase the patient's visual acuity by means of correction of refraction defects, drug delivery and ocular muscle surgery. In general, CN patients show a considerable decrease in visual acuity (image fixation on the retina is hindered by the continuous nystagmus oscillations) and severe postural alterations, such as the anomalous head position adopted by the patient to obtain better fixation of the target image onto the retina. Indeed, CN often presents 'neutral zones' corresponding to particular gaze angles, in which a smaller nystagmus amplitude and a longer foveation time can be recorded: this allows better visual acuity. In normal subjects, when the movement of the image on the retina increases by a few degrees per second, visual acuity and contrast sensitivity decrease. Visual acuity depends mainly on the duration of the foveation periods (when the target image intersects the foveal region); cycle-to-cycle foveation repeatability and eye velocity also contribute [1-2].
By analyzing a large database of eye movement recordings, this study focuses on the cycle-to-cycle repeatability of image placement onto the fovea. In a previous work we identified a slow eye movement oscillation superimposed on the nystagmus, which we called Base Line Oscillation (BLO) [3]. The characteristics of this oscillation, concisely approximated by a sinusoid, were extracted and analyzed. The frequency of
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 426–429, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
this slow oscillation is 0.36 ± 0.11 Hz on average, and its amplitude is correlated with that of the nystagmus waveform. The estimated relationship between the mean nystagmus amplitude and the mean BLO amplitude for each eye movement recording shows a high correlation coefficient (R2 = 0.77), suggesting a high level of interdependence between BLO and nystagmus amplitude [4]. The presence of this superimposed oscillation substantially reduces the visus, causing an increase in the standard deviation of position (SDp) during foveation, which in turn may hamper visual acuity. However, the amplitude of the oscillation and the SDp differ at different gaze positions; in particular, the BLO is suppressed at some gaze positions. For this reason, the estimated visus could depend on gaze position. By computing the time duration of the foveation periods and the standard deviation of position in the same periods (an estimate of cycle-to-cycle foveation repeatability), we can estimate visual acuity. The following relationship, which combines the foveation time and the standard deviation of eye position during foveation, was used to investigate the relation between the measured visus and a mean value of the estimated visus [5]:

NAEF = exp(-SDp)[1 - exp(-Tf/33.3)]   (1)

where NAEF (Nystagmus Acuity Evaluation Function) is a predictor of the patient's visual acuity, Tf is the average foveation time and SDp is the standard deviation of eye position during foveation (SDp was used to estimate the cycle-to-cycle foveation repeatability). It is an objective measure of the foveation ability of CN patients and, therefore, of potential visual acuity, which can be used to evaluate the effects of therapies within a patient or across patients. The focus of this study is to investigate the distribution of the estimated visus as a function of gaze position using the NAEF.
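Equation (1) can be sketched directly in code. This is an illustrative implementation, not the authors' software: Tf is expressed in ms (matching the 33.3 ms constant) and SDp in degrees, and the function names and the representation of foveation windows as lists of position samples are our assumptions.

```python
import math
from statistics import pstdev

def naef(tf_ms, sdp_deg):
    """Eq. (1): NAEF = exp(-SDp) * [1 - exp(-Tf/33.3)], Tf in ms, SDp in degrees."""
    return math.exp(-sdp_deg) * (1.0 - math.exp(-tf_ms / 33.3))

def naef_from_foveations(windows, fs=200.0):
    """Estimate NAEF from detected foveation windows, each given as a list of
    eye-position samples (degrees) acquired at fs Hz."""
    mean_len = sum(len(w) for w in windows) / len(windows)
    tf_ms = 1000.0 * mean_len / fs                  # mean foveation time, ms
    sdp = pstdev([s for w in windows for s in w])   # position SD over all windows
    return naef(tf_ms, sdp)
```

Longer foveations and a smaller position scatter both push the NAEF toward 1, its upper bound, which is what makes it usable as a predictor of potential visual acuity.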
II. MATERIAL AND METHODS
Horizontal projections of eye movements from CN patients were analyzed at different gaze positions. Forty CN patients (20 male and 20 female), without sensory defects, ranging in age from 6 to 34 years, participated in the study. They presented different types of nystagmus (pendular, jerk, etc.) according to the classification proposed by Dell'Osso [6]. Binocular, horizontal eye movements at different gaze positions were recorded for each patient. A standard visual acuity measurement was performed for each patient using a classical Landolt C technique. The measured visual acuity values (ranging from 0 to 1 with increments of 0.1) were expressed in tenths.
To record eye movement signals, the patients, seated in a dimly lit room, were instructed to fixate a light stimulus presented on a LED bar at a fixation distance of 1 meter. The device was able to provide stimuli over a 60-degree range of the visual field (±30°). Head movements were minimized using chin and head rests. The stimuli were presented sequentially at different gaze positions (sequence: 0°, 5°, 10°, 20°, 30°, 0°, -5°, -10°, -20°, -30°, 0°) for 10 seconds each. Eye movements were detected using either an infrared limbal reflection apparatus (Oftalmograf, Universal Initram Corporation, El Paso) or an electro-oculography device (Gould ES 2000, Gould Instrument System with bio-signal amplifiers 11-5407-58). The eye position signals were digitized at 200 Hz with 12-bit resolution, using a PCI acquisition board (Data Translation DT 2801-A), and stored on a computer for off-line analysis. Eye movement signals were filtered to reduce high-frequency noise and power line noise. Specific software, developed at our laboratory, was used to recognize nystagmus waveforms and compute nystagmus parameters such as frequency, amplitude, intensity and waveform shape. The nystagmus cycles were detected automatically using a modified version of a previously used algorithm [7]. The algorithm was modified in order to overcome errors in cycle recognition. The new algorithm computes the slope of the eye movement, using a least squares method, returning two different values: +1 for a positive slope and -1 for a negative slope (the output signal is a square wave, as shown in figure 1). Therefore, a change in the value of the square wave corresponds to a change in the slope of the eye movement signal. The minimum and maximum of the corresponding eye movement signal for each cycle are searched in a symmetric
Fig. 1 An example of the output of the new algorithm (red) superimposed on the selected signal (green). The output assumes two different values: +1 corresponding to a positive slope and -1 corresponding to a negative slope.
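The slope-sign step of the cycle-recognition algorithm can be sketched as follows. The sliding-window least-squares slope and its ±1 square-wave output follow the description above, while the window length, the edge handling and the function names are our assumptions.

```python
import numpy as np

def slope_sign(signal, window=11):
    """Return a +1/-1 square wave: the sign of the least-squares slope of
    `signal` in a sliding window centred on each sample (the window length
    here is an assumption, not the paper's value)."""
    x = np.asarray(signal, dtype=float)
    h = window // 2
    t = np.arange(window) - h           # centred abscissa, zero mean
    denom = np.dot(t, t)
    out = np.zeros(len(x), dtype=int)
    for i in range(h, len(x) - h):
        seg = x[i - h:i + h + 1]
        slope = np.dot(t, seg - seg.mean()) / denom  # closed-form LS slope
        out[i] = 1 if slope >= 0 else -1
    out[:h], out[len(x) - h:] = out[h], out[len(x) - h - 1]  # extend edges
    return out

def extrema_from_square_wave(sq):
    """Sample indices where the square wave changes value, i.e. candidate
    minima/maxima of the eye-position signal."""
    sq = np.asarray(sq)
    return np.nonzero(np.diff(sq) != 0)[0] + 1
```

Each transition of the square wave marks a slope change of the eye-position trace, around which the actual minimum or maximum is then searched in a symmetric interval, as the text describes.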
interval around the change. Furthermore, a manual procedure was also implemented to eliminate artifacts or to add undetected cycles. The sequence of the nystagmus waveform minima and maxima corresponds to the sequence of signal tracts in which the eye velocity was close to 0°/sec. In jerk nystagmus, the algorithm automatically recognizes which sequence, minima or maxima, corresponds to the foveation periods, identifying the fast phase with a velocity criterion. However, the physician can also manually select the sequence corresponding to foveations. An example of the results generated by this procedure is shown in figure 2.

Fig. 2 An example of the recognized sequences of maxima (red points) and minima (green points) superimposed on the selected signal (blue line).

The foveation window was computed by considering the time interval in which the eye velocity was lower than 4°/sec [8, 9] and the eye position was contained within 0.5° of the selected maximum (or minimum) point. To minimize the effect of the noise superimposed on the signal, the computation was carried out on a smoothed version of the signal obtained by fitting the signal with a parabolic curve. The duration (time length) of the foveation window was taken as a measure of the foveation time (Tf). The standard deviation of eye position during foveations (SDp) was computed as the standard deviation of all the samples contained within the foveation windows.
III. RESULTS
The nystagmus frequencies ranged from 2.4 to 4.4 Hz (average: 3.3 Hz) and the amplitudes from 0.5 to 12.8 degrees (average: 4.4 deg.); the BLO frequencies ranged from 0.16 to 0.64 Hz (average: 0.36 Hz), while the BLO amplitudes ranged from 0.2 to 7.7 degrees
Fig. 3 Plot of patient C.N. NAEF values during fixation of near targets vs. gaze angle. The presence of a large peak NAEF value indicates the CN 'null', corresponding to an area of highest acuity.
(average: 2.1 deg.). The nystagmus waveforms were of different types, mainly jerk. The forty selected patients had a very low measured visus; fifty percent of them (20 patients) had a visus of one tenth. All these patients had an estimated mean SDp higher than 1 degree. However, fifty percent of the patients with a measured visus higher than one tenth also had an SDp higher than 1 degree. The low measured visus seems to depend mainly on the SDp, whereas the effect of the foveation time on the measured visus is low in this group of patients. The NAEF estimates the visual acuity with continuous values, not quantized in tenths, as shown in figure 3, where the estimated visus is reported versus gaze position for one patient. In figure 3, the maximum value of the visus is about two tenths, reached at a gaze position of 20 degrees. For this patient, the NAEF during eye fixation indicates better acuity at right gaze. For each subject, the NAEF clearly indicates a dependence of the estimated potential visual acuity on gaze position. The distribution of the maximum visus for the forty patients is not symmetric, with a mode at 0 degrees. Only ten percent of the maxima are located at left gaze positions and about sixty-seven percent at right gaze positions (see table 1).
IV. CONCLUSIONS
This study presents an analysis of the characteristics of nystagmus recorded during fixation of a fixed target at different gaze positions in CN patients. The current algorithm calculates the NAEF automatically and with minimal human intervention.
Table 1 Maximum value of NAEF versus gaze position

Gaze angle (degrees)   Number of patients   Percentage
-30° (left)            0                    0.0%
-20° (left)            1                    2.5%
-10° (left)            1                    2.5%
-5° (left)             2                    5.0%
0°                     9                    22.5%
5° (right)             8                    20.0%
10° (right)            6                    15.0%
20° (right)            8                    20.0%
30° (right)            5                    12.5%
The number of errors in cycle recognition was significantly reduced using the newly developed algorithm. It represents a large step forward from the original, both in its applicability to most, if not all, CN subjects and in the reduction of the expertise necessary to use it accurately. The NAEF can be considered an objective measure of foveation ability and, therefore, of potential visual acuity. For individual subjects, the NAEF clearly demonstrates differences in potential visual acuity due to gaze position. Studying the distribution of the gaze positions corresponding to the maximum NAEF value obtained in each patient, we observed a mode at the 0° gaze position and a prevalence of right gaze positions with respect to left gaze positions. The NAEF could be used to determine the amount of broadening of the range of gaze angles of highest acuity produced by surgical treatment, such as tenotomy. This measure is similar to the null-broadening hypothesized to be due to tenotomy, but it is more directly related to potential visual acuity than a measurement of the null region based simply on CN amplitude. It is worth mentioning that our studies were also prompted by some successful botulinum toxin treatments of CN children at our Institute of Ophthalmology, who recovered a remarkably higher visual acuity. Such therapy, by temporarily suppressing eye muscle activity, significantly decreases the nystagmus amplitude and the BLO; consequently, the SDp is also reduced, resulting in an increase in the patient's visual acuity. The NAEF could also be used to measure the effect of this therapy on potential visual acuity.
However, at the moment, there is no clear evidence of the general effectiveness of botulinum toxin therapy.
REFERENCES
1. Abadi RV, Bjerre A (2002) Motor and sensory characteristics of infantile nystagmus. Br J Ophthalmol 86:1152-1160
2. Dell'Osso LF, Jacobs JB (2002) An expanded nystagmus acuity function: intra- and intersubject prediction of best-corrected visual acuity. Doc Ophthalmol 104:249-276
3. Bifulco P, Cesarelli M, Loffredo L, Sansone M, Bracale M (2003) Eye movement baseline oscillation and variability of eye position during foveation in congenital nystagmus. Doc Ophthalmol 107:131-136
4. Cesarelli M, Coppola P, Bifulco P, Romano M, Sansone M (2005) Slow eye movement oscillation in congenital nystagmus. IFMBE Proc. 11(1), The 3rd European Medical and Biological Engineering Conference, Prague, Czech Republic
5. Cesarelli M, Bifulco P, Loffredo L, Bracale M (2000) Relationship between visual acuity and eye position variability during foveation in congenital nystagmus. Doc Ophthalmol 101:59-72
6. Dell'Osso LF, Daroff RB (1975) Congenital nystagmus waveform and foveation strategy. Doc Ophthalmol 39:155-182
7. Cesarelli M, D'Addio G, Loffredo L, Daniele A (1994) A system to automatically analyse nystagmus. Otorino Rossi Award Conference - International Workshop on Eye Movements, pp 262-264
8. Dell'Osso LF, Van Der Steen J, Steinman RM, Collewijn H (1992) Foveation dynamics in congenital nystagmus. I: Fixation. Doc Ophthalmol 79:1-23
9. Ukwade MT, Bedell HE (1992) Variation of congenital nystagmus with viewing distance. Optometry Vision Sci 69:976-985

Address of the corresponding author:
Author: Prof. Mario Cesarelli
Institute: Department of Electronic and Telecommunication
Street: via Claudio, 21
City: Naples
Country: Italy
Email:
[email protected]
Detection of the cancerous tissue sections in the breast optical biopsy dataflow using neural networks
A. Nuzhny1, S. Shumsky1, T. Lyubynskaya2
1 Lebedev's Physical Institute of Russian Academy of Science, Moscow, Russia
2 BioFil (Biophysical Laboratory) and Russian Federal Nuclear Center-VNIIEF, Sarov, Russia
Abstract— The method of artificial neural networks was applied to the analysis of data obtained in the clinical trials of an optical biopsy system. Detection of malignant tissue sections was carried out using a multilayer perceptron. The coefficients of the wavelet decomposition of the optical scattering spectra were given at the perceptron input, and its output gave the malignancy probability for the current spectrum. End-to-end probability calculation throughout the optical biopsy procedure dataset showed reliable detection of the cancer sections in the same places as specified by the experts.
Keywords— breast cancer, optical scattering spectrum, artificial neural networks.
I. INTRODUCTION
Equipment has now been developed, and clinical trials are being carried out, for a new method of breast cancer diagnostics, namely optical biopsy [1]. In parallel with traditional diagnostic procedures, such as mammography and fine or core biopsy, the results of the classification of optical biopsy data provide a basis for decision making in the diagnosis of breast cancer. A specific feature of the optical biopsy method is the large amount of information: the current optical scattering spectrum was recorded at a frequency of 100 Hz, which means 1000 records per cm at the recommended needle speed of ~1 mm/sec. Each measurement produced a spectral curve: the scattered signal power was registered at 184 fixed spectral points. This method of data recording allows analyzing not only the local tissue state but also the tumor structure. By now more than 150 medical experiments have been completed. In the future their number must be increased significantly to provide a representative database for the development of algorithms for the automatic analysis of optical scattering spectra. Changes in tissue type are usually accompanied by jumps of the spectral envelope. Based on that fact and on the comments of the physician conducting the procedure, the sections between two spectral jumps corresponding to needle presence in the tumor were extracted. In the course of visual analysis, several patterns of spectral curves typical of cancer tissue were found. But the spectra of optical
scattering from cancerous tissue are varied and in some cases very similar to those of benign tissue. The complexity and ambiguity of the data required the application of a probabilistic approach. The primary optical biopsy data are quite noisy. The noise is introduced by the measuring equipment and also results from summing the signal intensity over an extended tissue area with changing structure. Random drops in the scattered intensity, caused by optical contact loss in the fiber joints, prevent full use of the current intensity values as a component of the analysis. So, the large amount of information, the fuzziness of the criteria defining the spectrum type and the noisiness of the data make neural networks an attractive tool for optical biopsy result analysis.
II. MATHEMATICAL MODEL
The decision-making mathematical model for the detection of cancerous tissue along the needle trajectory is based on the principles of learning theory. A learning sample was formed containing 12 sections of 'cancer' spectra and 10 'non-cancer' ones from different experiments. The 'non-cancer' sections included data from healthy and benign tissues and were taken from the datasets of 'non-cancer' patients. To analyze the spectra over various scales, a wavelet decomposition procedure was applied [2]. The Daubechies-1 (or Haar) wavelet was used:

ψ(x) = 1 for 0 < x < 1/2;  -1 for -1/2 < x < 0;  0 for |x| > 1/2
The set of functions obtained from the given mother wavelet by scaling and translations together with the scaling function:
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 438–441, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
φ(x) = 1 for |x| < 1/2;  0 for |x| > 1/2
forms a complete orthonormal basis for the set of curves defined on a discrete mesh. The spectral curves were preliminarily averaged over 10 time readings. This averaging decreased the influence of instrument noise, which was significant over small spectral intervals. In the space of wavelet coefficients a cluster analysis was performed:
w(m,n) = Σ_i φ((λ_i - b_n) / a_m)
where λ_i are the wavelength values on which the spectra were defined, a_m is a scale parameter of the wavelet function and b_n is a shift parameter. To preserve information about the location of each cluster in the initial space, the clusterization was done using Kohonen maps [3]. This method is commonly used as a tool for multidimensional space analysis. Each cluster was associated with a probability:

P = N_M / (N_M + α·N_H)
Fig. 1 The Kohonen maps built over the total set of the wavelet coefficients (a) and that (b) after the decimation procedure. The black marks show respective cluster populations with ‘cancer’ spectra and the gray marks are those for the ‘non-cancer’ ones.
where N_M and N_H are the numbers of 'cancer' and 'non-cancer' spectra that fell into the given cluster, and α is a regularization parameter of the model. Each spectrum was then associated with the probability value of the cluster into which it fell:
$$W(t) = P\big(d(t)\big)$$
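The Haar decomposition behind the wavelet portrait can be sketched as follows; the pyramid implementation and the flattening of all scales into a single feature vector are illustrative assumptions, not the authors' code:

```python
import numpy as np

def haar_dwt(signal):
    """Full Haar (Daubechies-1) wavelet decomposition.
    Returns the detail coefficients of every scale plus the final
    scaling (approximation) coefficient. Length must be a power of 2."""
    s = np.asarray(signal, dtype=float)
    details = []
    while len(s) > 1:
        approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # scaling-filter output
        detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # wavelet-filter output
        details.append(detail)
        s = approx
    return details, s  # s holds the single coarsest scaling coefficient

def wavelet_portrait(spectrum):
    """Flatten all scales (small scales first) into one input vector
    for clustering or the perceptron."""
    details, approx = haar_dwt(spectrum)
    return np.concatenate(details + [approx])
```

Because the transform is orthonormal, the portrait preserves the energy of the averaged spectrum, so cluster distances in coefficient space reflect distances between the original curves.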
The Kohonen map for this case is presented in Fig. 1a. The black marks show the respective cluster populations for the 'cancer' spectra and the gray marks those for the 'non-cancer' ones. The map shows that some spectra are typical only of malignant patterns, while some belong to both the 'cancer' and 'non-cancer' families. The 'cancer' regions are disconnected because of the variety of the 'cancer' spectra. A multilayer perceptron with weight decimation was trained to reproduce the malignancy probability of the current spectrum based on its wavelet portrait [4]. The goal of learning was the minimization of the approximation error over the learning sample together with a Laplace regularization term:

$$\sum_t \big(W(t) - Y(t)\big)^2 + \lambda \sum_k |w_k|$$
where Y(t) is the perceptron output for the vector d(t) and w_k are the parameters of the perceptron. The regularization parameter λ was chosen using the Bayesian approach [4]. A two-layer perceptron was used: the first layer included 3 neurons and the second layer had one neuron. The sigmoid activation function was applied.
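A minimal sketch of this architecture and objective is given below; the class and parameter names are assumed, plain gradient descent stands in for whatever optimizer the authors used, and neither the Bayesian choice of λ nor the decimation step is reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyPerceptron:
    """Two-layer perceptron (3 hidden sigmoid units, 1 sigmoid output)
    trained to reduce sum_t (W(t)-Y(t))^2 + lam * sum_k |w_k|
    by gradient descent; the L1 term is the Laplace regularizer."""
    def __init__(self, n_in, n_hidden=3, lam=1e-3, lr=0.1):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, n_hidden)
        self.b2 = 0.0
        self.lam, self.lr = lam, lr

    def forward(self, X):
        self.H = sigmoid(X @ self.W1.T + self.b1)   # hidden layer
        return sigmoid(self.H @ self.W2 + self.b2)  # output probability

    def step(self, X, target):
        y = self.forward(X)
        err = y - target                    # dE/dy up to a factor of 2
        dz2 = err * y * (1 - y)
        gW2 = dz2 @ self.H + self.lam * np.sign(self.W2)
        gb2 = dz2.sum()
        dH = np.outer(dz2, self.W2) * self.H * (1 - self.H)
        gW1 = dH.T @ X + self.lam * np.sign(self.W1)
        gb1 = dH.sum(axis=0)
        for p, g in ((self.W1, gW1), (self.b1, gb1), (self.W2, gW2)):
            p -= self.lr * g
        self.b2 -= self.lr * gb2
        return np.sum(err ** 2)             # loss before this update
```

Driving small L1-penalized weights to zero and pruning them is one way to realize the weight-decimation step mentioned in the text.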
Fig. 2 presents a histogram of the moduli of the perceptron synaptic coefficients after the decimation procedure. Each coefficient in the histogram is placed in the position of the respective wavelet coefficient. The top row gives the coefficients of the smallest scale; the bottom one corresponds to convolution with the scaling functions [2]. Fig. 1b shows the Kohonen map built over the coefficients remaining after decimation. In this space 'cancer' and 'non-cancer' spectra are separated better, because the decimation procedure reduced the model noise. Spectrum separation was mainly provided by the small-scale coefficients, which are probably connected with the absorption bands of biological tissue.

III. END-TO-END CALCULATION OF THE MALIGNANCY PROBABILITY
Fig. 3a presents the temporal dependence of the malignancy probability for an individual investigation of a cancer patient. The time period specified by the experts as needle presence in the tumor is marked. The developed neural network marked it as malignant too, but it also found malignant sections before the tumor. This can be interpreted as tumor invasion, because cancer detection before the tumor was never observed in the cases of benign tumors. Fig. 3b demonstrates the same dependence for a benign tumor case. The false detection at the very beginning of needle insertion is explainable: the spectra of some cancerous tissue are similar to those of skin. The neural network also frequently recognized the spectra as malignant while analyzing the reverse movement, when the channel was filled with blood. That is because the tissue surrounding a cancerous tumor is permeated by a capillary net, which can be observed in the registered spectra as a hemoglobin notch in the central part of the spectrum. These false detections were present in all tests, but they can easily be excluded from consideration based on the indications of the position sensor. A modification of the position sensor of the optical biopsy system is envisaged, after which an automatic data-filtering algorithm will be developed.

Fig. 2 The histogram of the synaptic coefficient moduli, maximal over the neurons of the first layer.

Fig. 3 Temporal dependence of malignancy probability for the investigation of the cancer (a) and non-cancer (b) patients.
IV. CONCLUSIONS

Based on experts' opinions and physicians' comments, a set of record sections corresponding to cancerous and non-cancerous tissue was reliably selected from the dataflow obtained in the clinical studies of the optical biopsy system. Twelve pieces of records relevant to malignant tissue and ten relevant to non-cancerous tissue formed the learning sample for the artificial neural network. A mathematical model for 'cancer' spectra extraction based on a multilayer perceptron was developed. A quantitative assessment of the probability that the current spectrum belongs to the malignant family was built. The set of wavelet decomposition coefficients formed the input vector for the perceptron, and its output gave the required probabilities. The decimation procedure reduced the noise level in the model and improved the separation of 'cancer' and 'non-cancer' areas on the Kohonen map. End-to-end analysis of the dataset of an individual optical biopsy procedure proved the ability of the developed model to detect malignancy. False detections were interpreted as the model's reaction to skin and blood. An automatic filtering algorithm will be built based on the position sensor indications. The nearest plans also include investigation of the influence of the model regularization parameter α on the quality of separation between 'cancer' and 'non-cancer' spectra.

ACKNOWLEDGMENT

This work was supported by funding from the IPP (Initiatives for Proliferation Prevention) Program of the U.S. Department of Energy under contract LLNL-T2-0242-RU and by Project #3075p of the International Science and Technology Center.

REFERENCES
1. Belkov S, Kochemasov G, Kulikov S, et al. (2007) Optical biopsy system for breast cancer diagnostics. Report on the conference
2. Daubechies I (1992) Ten Lectures on Wavelets, SIAM, USA
3. Kohonen T (1982) Self-organized formation of topologically correct feature maps. Biological Cybernetics 43:56-69
4. Williams P (1995) Bayesian regularization and pruning using a Laplace prior. Neural Computation 7:117-143

Addresses of the corresponding authors:

Author: Nuzhny, Anton
Institute: Lebedev Physical Institute of the Russian Academy of Sciences
Street: 53 Leninsky Prospekt
City: Moscow
Country: Russia
Email: [email protected]

Author: Lyubynskaya, Tatiana
Institute: BioFil, Russian Federal Nuclear Center-VNIIEF
Street: 37 Prospekt Mira
City: Sarov, Nizhny Novgorod reg.
Country: Russia
Email: [email protected]
Estimation of Neural Noise Spectrum in a Postural Control Model A.F. Kohn Biomedical Engineering Laboratory, Universidade de São Paulo, EPUSP, Brazil Abstract— A simple linear feedback control model representing a standing human is driven by neural and torque noise sources. A mathematical expression was derived for the neural noise spectrum as a function of spectra computed from two signals: the electromyogram (EMG) and the angle of the subject with respect to the vertical direction. Simulations of a stochastic postural control system were used to generate “EMG” and “angle” signals that were used in the theoretically derived neural noise spectrum. The comparison with the directly estimated neural noise spectrum showed that the mathematical expression yields estimates that have useful information about the spectral shape of the neural noise. In addition, the method also yields useful estimates of the neural noise spectral bandwidth. Keywords— neural noise, postural oscillations, spectral estimation, postural control model.
I. INTRODUCTION

During quiet standing, a human presents random oscillations, which can be quantified by the variations of the angle of the body with respect to the direction of gravity. There are at least two sources of these random oscillations: one due to the neuromuscular system ("neural noise") and the other due to endogenous mechanical phenomena ("torque noise"). An important source of neural noise is the synaptic bombardment received by a motoneuron [1], which may cause variations in the discharge times of the motoneurons and in the recruitment of motor units from the motoneuron pool. These sources of variability of neural origin cause random variations in the force maintained by a given muscle, which in turn cause torque variability around a joint. The sources of direct mechanical variability would be the movement of internal organs such as the heart and the lungs, besides eventual involuntary movements of other parts of the body, such as slight arm movements; the latter should be well controlled during an experiment. Theoretical studies of postural control have included both torque and neural noises [2-4], even though the features of both are practically unknown in humans, due to the inherent difficulties of noninvasive experiments. The overall objective of this work is to develop a methodology to obtain estimates of the power spectral density of the neural noise when the electromyogram and the angle of the subject with respect to the vertical are measured along time. As a first step
towards this goal, a simple model of the postural control system is proposed here, and an expression for the neural noise power spectrum is derived in terms of the assumed known elements. The model deals with oscillations in the anteroposterior direction, which are the most relevant when the subject is standing with the feet apart in a relaxed standing position. The model is linear and time-invariant, which is, in a first approximation, compatible with physiology [2, 3].

II. MODEL

The model adopted for the postural control system is shown in Fig. 1. The human being is represented as an inverted pendulum, with rotations occurring around the ankle. The reference angle will be taken as 0°, but any other value could have been adopted, respecting the physiology, without any loss of generality. The central nervous system (CNS) receives spike trains from the sensory receptors that signal the angle of the ankle joint. The variability of this angle is the same as that between the vertical and the axis of the inverted pendulum, which in Fig. 1 is indicated as θ. The CNS generates commands to the muscle (soleus) that are uncorrelated with the additive noise of neural origin. The neural command is assumed to be approximately equal to the electromyogram (EMG) envelope. The force generated by the muscle results in a torque applied to the inverted pendulum, which is described by a second-order unstable system GL. Two sources of noise are included: noise generated by the nervous system, v, and torque noise, q. A basic hypothesis is that both noises are independent, which is physiologically plausible.
Fig. 1. Model adopted for the postural control system.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 419–422, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The proposition with respect to Fig. 1 is to estimate Svv(jω), the power spectral density of the neural noise v. It is assumed that the following signals and systems are known: the signals u and y and the transfer functions of each block (Gn, Gm, GL and Gr). The expected values of all random signals in the control loop are assumed equal to zero, without loss of generality. The cross-spectrum Sxy(jω) of two random signals x(t) and y(t) will be taken as the Fourier transform of Rxy(τ) = E[X(t+τ)Y(t)]. Keeping consistency with this notation, and to make the mathematics easier to develop, Sxy(jω) will be taken as the limit as T → ∞ of

$$\frac{1}{T}\,\overline{X(j\omega)\,Y^{*}(j\omega)}$$

where X(jω) is the Fourier transform of a segment of x(t) of finite duration T, and the bar over the spectral product indicates the expected value. The derivations will use X instead of X(jω), for example, to ease the notation. The finite Fourier transforms will be indicated by the capital letter corresponding to the random signal. The average and limit operations will be taken only at the end.

III. THE NEURAL NOISE POWER SPECTRUM

From Fig. 1:

$$U = V - G_n G_r Y \qquad (1)$$

and

$$Y = G_L Q + G_L G_m U \qquad (2)$$

Substituting (2) in (1):

$$U = V - G_n G_r G_L Q - G_n G_r G_L G_m U \qquad (3)$$

and hence:

$$U = \frac{V - G_n G_r G_L Q}{1 + G_n G_r G_L G_m}$$

Multiplying equation (1) by Y*:

$$U Y^{*} = V Y^{*} - G_n G_r\, Y Y^{*} \qquad (4)$$

Substituting (2) in (4), computing the averages and then the limit for T → ∞, one has

$$S_{uy} = G_L^{*} S_{vq} + G_L^{*} G_m^{*} S_{vu} - G_n G_r S_{yy} \qquad (5)$$

In equation (5) one should notice that Svq is equal to zero, by hypothesis. For compactness the following definition will be used:

$$G_{MA} = G_n G_r G_L G_m \qquad (6)$$

The product of V with the complex conjugate of equation (3) gives:

$$V U^{*} = \frac{V V^{*} - G_n^{*} G_r^{*} G_L^{*}\, V Q^{*}}{1 + G_{MA}^{*}} \qquad (7)$$

Computing the averages and then the limit for T → ∞ in (7), and remembering that the neural noise is assumed independent from the torque noise, we have

$$S_{vu} = \frac{S_{vv}}{1 + G_{MA}^{*}} \qquad (8)$$

The expression for Svu in (8) will be substituted into equation (5), resulting in:

$$S_{uy} = \frac{G_L^{*} G_m^{*}\, S_{vv}}{1 + G_{MA}^{*}} - G_n G_r S_{yy} \qquad (9)$$

From (9), Svv may be isolated as

$$S_{vv} = \left[ S_{uy} + G_n G_r S_{yy} \right] / G \qquad (10)$$

where

$$G = \frac{G_L^{*} G_m^{*}}{1 + G_{MA}^{*}} \qquad (11)$$

Another expression, given below, is equivalent to (10) but shows every frequency response function of Fig. 1:

$$S_{vv} = \left[ S_{yu} + G_n^{*} G_r^{*} S_{yy} \right] \cdot \left[ \frac{1 + G_n G_r G_L G_m}{G_L G_m} \right] \qquad (12)$$

From (12) it may be seen that an expression for the neural noise power spectrum was obtained as a function of (i) the cross-spectrum of the angle θ and the EMG envelope and (ii) the auto-spectrum of the angle θ. These two spectra may be estimated without difficulty from the angle and EMG measured during a typical experiment of postural control. In an experimental situation with human subjects, the transfer functions of some of the blocks would have to be identified, which is out of the scope of this paper.
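The finite-T, segment-averaged spectral estimator defined above can be sketched numerically; the rectangular-window segmentation that stands in for the expectation and the function name are assumptions, not the author's code:

```python
import numpy as np

def cross_spectrum(x, y, fs=1.0, nseg=50):
    """Estimate S_xy(jw) ~ (1/T) E[ X(jw) Y*(jw) ] by averaging
    finite-duration Fourier transforms over nseg segments
    (rectangular window; the segment average approximates E[.])."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x) // nseg                     # samples per segment
    X = np.fft.rfft(x[:n * nseg].reshape(nseg, n), axis=1)
    Y = np.fft.rfft(y[:n * nseg].reshape(nseg, n), axis=1)
    T = n / fs                             # segment duration
    Sxy = (X * np.conj(Y)).mean(axis=0) / T
    return np.fft.rfftfreq(n, d=1.0 / fs), Sxy
```

With this normalization, the auto-spectrum of unit-variance white noise sampled at fs = 1 is flat at about 1, which is a convenient sanity check before plugging estimates of Suy and Syy into (12).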
IV. SIMULATIONS

Expression (12) shall be used to estimate the neural noise power spectrum from simulations of the system of Fig. 1. The simulations were implemented in Simulink (Mathworks, USA) based on the diagram shown in Fig. 2. Block Gn is a PD (proportional-derivative) controller, as usually assumed for postural control systems [4]. The neural noise v is obtained by applying white noise to the filter given by H1. The torque generation by the soleus muscle was modelled from the experimental curves obtained by Bawa and Stein [5] and is represented by the second-order system Gm. The human subject in the standing position is modelled by an inverted pendulum (GL), with parameters usually employed in the literature [2, 3]. The feedback (Gr) was adopted as a constant gain, in part on the assumption that the sensory receptors respond to a wide frequency range; this is also a simplification usually found in the literature [2]. Finally, the torque noise is also modelled as white noise filtered by a system (H2). The numerical integration used the fifth-order Dormand-Prince method, with a fixed step equal to 0.01 s. Each simulation generated 800 s of random signals from which the spectra needed in expression (12) were computed. As the neural noise signal was available directly from the simulation, it was used to obtain a direct spectral estimate, making it possible to evaluate the quality of the spectrum estimated by (12). In a first phase, only the neural noise was used, i.e., the torque noise was made equal to zero. Fig. 3a shows the spectrum estimated from 800 s of signals u and y based on expression (12). Fig. 3b shows the neural noise spectrum obtained directly from the noise signal v. This noise was generated by filtering white noise with a second-order filter with a resonance peak (H1(s) = 1100/(s² + 0.4s + 100)). The spectra in Fig. 3 were normalized so that the regions around 5 Hz would have similar amplitude.
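The neural-noise generation step (white noise shaped by the resonant filter H1) can be sketched with SciPy; the filter coefficients and step size follow the text, while everything else is an assumed reconstruction of the Simulink setup:

```python
import numpy as np
from scipy import signal

# Resonant second-order shaping filter from the text:
# H1(s) = 1100 / (s^2 + 0.4 s + 100), resonance near sqrt(100) = 10 rad/s
H1 = signal.lti([1100.0], [1.0, 0.4, 100.0])

dt = 0.01                       # fixed integration step from the text (s)
t = np.arange(0.0, 800.0, dt)   # 800 s of simulated signal
rng = np.random.default_rng(0)
white = rng.standard_normal(t.size)

# Filter the white noise to obtain the neural noise v
_, v, _ = signal.lsim(H1, U=white, T=t)

# Deterministic check of the spectral shape: locate the resonance of H1
w_grid = np.linspace(0.1, 30.0, 3000)
_, H = signal.freqresp(H1, w=w_grid)
w_peak = w_grid[np.argmax(np.abs(H))]   # close to 10 rad/s (about 1.6 Hz)
```

Feeding v through the closed loop of Fig. 2 and applying (12) to the resulting u and y would reproduce the comparison shown in Fig. 3.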
There is a good reproducibility of the spectral shape using (12), but there is an error around 3 dB at frequencies below 1 Hz.
Fig. 3. Neural noise spectra: (a) estimated from (12), (b) estimated directly from v. Torque noise was zero.
Modifying the neural noise spectrum by changing the filter to a first-order lowpass system (H1(s) = 252/(100s + 1)), the spectra of Fig. 4 were obtained. Again, there is good reproducibility of the spectral shape, but there is an error of 3 dB at 1 Hz and 3.75 dB at 0.2 Hz. In this simulation the torque noise was made equal to zero.
Fig. 2. Simulation diagram of the postural control system.

Fig. 4. Neural noise spectra: (a) estimated by (12); (b) directly from v. Torque noise was zero.
Fig. 5. Neural noise spectra: (a) estimated by (12); (b) directly from v. Torque noise ≠ 0.

Next, both noise sources were included in the simulation. The neural noise had the same spectrum as in Fig. 4b and the torque noise came from filtering white noise by H2(s) = 10/(s + 1). The standard deviation of the output y was 7.1467e-5 (3.6391e-5) when only the neural (torque) noise was applied. The spectrum estimated from (12) is seen in Fig. 5a and the direct spectral estimate in Fig. 5b. The shape is well reproduced, but the low-frequency errors are larger than in the cases without simultaneous torque noise; for example, the error at 1 Hz was 4 dB and at 0.2 Hz it was 4.12 dB. The filters H1(s) and H2(s) in these simulations, when lowpass, were chosen similarly to those found in the literature [2, 4]. The choice of a bandpass filter was made only to check the sensitivity of (12) to the neural noise spectral shape.

V. DISCUSSION AND CONCLUSION

The neural noise acting on the postural control system has been postulated in theoretical work [2, 3], but no approach to its estimation was found in the literature. Its direct experimental measurement is partially feasible in cats [1, 6] but clearly impossible in humans. Therefore, indirect methods have to be used, based on the measurement of externally available signals, such as the EMG and the sway angle. In spite of the simplicity of the chosen postural control model and of the assumption of known transfer functions of each subsystem, the approach has its validity as a first step towards the goal of estimating the neural noise from real experiments in humans. The simulation results suggest that, under the adopted hypotheses, it is possible to obtain spectral estimates of the neural noise which are potentially useful for the study of postural control. The estimation errors seem acceptably small for the intended applications. Questions such as (i) what is the shape of the neural noise power spectrum, and (ii) in what frequency range does the neural noise have more power, seem feasible to answer with the presented approach. Application to real problems will require additional effort from the signal processing, modeling and system identification points of view. The concepts and initial approach of the present work should serve as a starting point for the much more challenging task ahead.

ACKNOWLEDGMENT

The project was funded by Fapesp, CNPq and Capes.
REFERENCES

1. Calvin WH, Stevens CF (1968) Synaptic noise and other sources of randomness in motoneuron interspike intervals. J Neurophysiol 31:574-587
2. Maurer C, Peterka RJ (2005) A new interpretation of spontaneous sway measures based on a simple model of human postural control. J Neurophysiol 93:189-200
3. Peterka RJ (2000) Postural control model interpretation of stabilogram diffusion analysis. Biol Cybern 82:335-343
4. Masani K et al (2003) Importance of body sway velocity information in controlling ankle extensor activities during quiet stance. J Neurophysiol 90:3774-3782
5. Bawa P, Stein RB (1976) Frequency responses of human soleus muscle. J Neurophysiol 39:788-793
6. Manjarrez E, Hernandez-Paxtian ZJ, Kohn AF (2005) A spinal source for the synchronous fluctuations of bilateral monosynaptic reflexes in cats. J Neurophysiol 94:3199-3210

Author: André Fabio Kohn
Institute: Universidade de Sao Paulo, LEB, EPUSP
Street: Cx.P. 61548, CEP 05424-970
City: São Paulo
Country: Brazil
Email: [email protected]
Frequency characteristics of arterial catheters – an in vitro study

F. T. Molnar and G. Halasz

Budapest University of Technology and Economics (BUTE), Department of Hydrodynamic Systems, Budapest, Hungary
Abstract— Continuous blood pressure recording carries the most information on the cardiovascular state of a person, so accurate instrumentation is of high importance. Nowadays the most accurate continuous blood pressure measuring method is intra-arterial catheterization. However, the accuracy of fluid-filled catheters raises doubts: the elastic wall of the catheter and the transmission tube each have a damping effect, and together these could play a significant role. Furthermore, the intra-arterial part of a cardiac catheter sits in a pulsatile flow, which is assumed to affect the pressure transmission within the measuring line. In this paper the behavior of two different types of fluid-filled catheter (femoral and cardiac) is described. For the in vitro experiments, a pulsatile arterial system model was applied. Simultaneous measurements of the intra-arterial pressure were carried out: directly, with the use of a pressure transmitter, and through the catheter. Thus the accuracy and the frequency response of the catheters could be obtained and a comparison between the two types could be made. A numerical model, based on the method of impedances, was developed to describe the frequency-transmitting ability of the catheters. The numerical results were compared to the measurements. We found that the experimental results of the different catheters show significant similarities; the numerical and experimental results of the femoral catheter were in good accordance, whereas those of the cardiac catheter show discrepancies. Keywords— arterial catheter, frequency response, method of impedance.
I. INTRODUCTION

Arterial catheters are used to obtain accurate and continuous blood pressure data directly from the arteries. The most frequently used type is the fluid-filled catheter. This system is composed of an intra-arterial catheter and a pressure transducer connected by a pressure-transmitting tube (arterial line). The transmitting ability of this system depends on numerous parameters, so distortion of the input signal is unavoidable. Several studies have investigated transfer functions by which the input signal can be properly restored from the output data; these transfer functions can be obtained through series of measurements or from theoretical studies and can be applied in medical devices. The goals of our study are to investigate how accurate the output signal is in the medically important frequency range
and to find an adequate numerical model describing the frequency response of the different catheter systems.

II. MATERIALS AND METHODS

A. Measurements

To obtain the frequency characteristics of the measuring equipment, a signal containing several harmonic components was applied. To measure pressure signals inside tubes, an arterial model built up from artificial vessels was used [1]. A controlled membrane pump modelled the function of the heart; by using it, different pressure wave forms could easily be achieved. Peripheral resistance was modelled with adjustable clips at the ends of the outlet tubes. The insertion of the catheters (femoral: PULSIOCATH Arterial Thermodilution Catheter, Pulsion; cardiac: EXPO, Boston Scientific, Scimed) was made possible by a special part that also prevented leakage. To measure the pressure directly in the tubes, rigid elements with bores were used, through which pressure transducers (Hottinger-Baldwin Messtechnik P3MB/R) recorded the pressure history with a sampling rate of 200 Hz. The catheter tips were in the vicinity of the bores, so identical inlet pressures were applied for the direct recordings and the catheter measurements (Fig. 1). The pressure data were recorded through a data acquisition system (HBM Spider 8) by a personal computer. The catheters were carefully flushed and air bubbles were removed from the pressure transducers. The dimensions of the arterial catheters are given in Table 1.

Table 1 Dimensions of the arterial catheters

Section            Length [mm]   Inner diameter [mm]   Wall thickness [mm]
Femoral catheter
c1                 90            0.55                  0.45
c2                 115           1.1                   0.45
c3                 1500          1.3                   0.75
Cardiac catheter
c4                 1000          1.42                  0.29
c5                 1500          1.3                   0.75

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 430–433, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
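The recorded catheter and direct pressure histories are later compared via the ratios of their Fourier amplitudes at the dominant frequencies; a sketch of that computation, with an assumed function name and synthetic signals at the 200 Hz sampling rate used here:

```python
import numpy as np

def amplitude_ratios(p_catheter, p_direct, fs, freqs):
    """Ratio of Fourier amplitudes (catheter / direct) at the given
    frequencies, assuming both records share the same time base."""
    n = len(p_direct)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    A_c = np.abs(np.fft.rfft(p_catheter))
    A_d = np.abs(np.fft.rfft(p_direct))
    idx = [np.argmin(np.abs(f - fq)) for fq in freqs]  # nearest FFT bins
    return A_c[idx] / A_d[idx]

# Synthetic check: 1 Hz and 5 Hz components, with the "catheter" channel
# amplifying the 5 Hz component by a factor of 1.5
fs = 200.0
t = np.arange(0, 10, 1 / fs)            # 10 s -> integer cycles, no leakage
direct = np.sin(2 * np.pi * 1 * t) + 0.4 * np.sin(2 * np.pi * 5 * t)
cath = np.sin(2 * np.pi * 1 * t) + 0.6 * np.sin(2 * np.pi * 5 * t)
r = amplitude_ratios(cath, direct, fs, [1.0, 5.0])
```

For real records, windowing or restricting the window to an integer number of pump cycles limits spectral leakage at the dominant frequencies.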
Fig. 1 Simultaneous pressure recording with pressure transducers (c1, c2, c3 are the sections of the femoral catheter, c4 and c5 are those of the cardiac catheter system; pa is the real arterial pressure, pm is the pressure measured at the end of the catheter, while 1 and 2 denote the pressure transducers)

We characterized the transmitting ability of the catheter by its frequency response. The Fourier spectrum of both measured pressure histories was therefore calculated, and the ratios of the amplitudes at the dominant frequencies were determined. Discrete Fourier transformation was applied to the measured data using MATLAB ("FFT" function). A very important factor in managing critically ill patients is the accurate measurement of mean arterial pressure; therefore the mean pressures obtained by both instrumentations were compared to the one derived from the direct measurements.

B. Mathematical model

There are numerous studies describing and developing arterial catheters through measurements, but much less has been done to describe their behavior through a mathematical model. The model set up by Webster et al. [2] deals only with the deformation of the membrane of the sensor, neglecting the deformation of the catheter wall. Although the model described in [3] takes wall deformation into consideration, it uses a single elasticity modulus for the whole catheter-sensor system, which is quite complicated. Our model, described in detail in [4], is based on the method of impedance [5] with some modifications to approximate the behavior of the catheter better. The main modifications are as follows: the original method presumes that the mean velocity is bigger than the oscillating velocity component (v > v′), whereas in our measurements v = 0, so we developed the method for this special case. The impedance of the membrane can be calculated as

$$Z_m = \frac{A_c}{A_m K_m}\left(m\,j\omega + \frac{s}{j\omega}\right) \qquad (1)$$

(where j = √−1, Ac is the cross-sectional area of the catheter, Am is that of the membrane, Km is the membrane constant, m is the mass of the liquid in the catheter line, s is the stiffness of the membrane and ω is the angular frequency of the exciting pressure). The elasticity modulus of the catheter line could be obtained from measurements (assuming that for a relatively small pressure range, as in this case, the elasticity modulus of the catheter wall does not change significantly). Thus, by modifying the frequency of the exciting pressure, the frequency response could be computed for a wide frequency range. To determine the reduced elasticity modulus [6] of the catheter-line system, a sharp pressure jump was generated at the measured point. From the time delay between the pressure jump measured by the pressure transducer connected directly (1 in Fig. 1) and the one at the end of the catheter (2), and knowing the dimensions of the catheter and the elasticity modulus of the liquid, the wave velocity and from it the reduced elasticity modulus could be calculated.

III. RESULTS

A. Measurements

When measuring with the femoral catheter, the directly measured pressure and the one measured through the catheter show good accordance. The maximum difference between the two curves is 3 mmHg. No significant distortion of the wave form can be observed (Fig. 2). The frequency spectra of the two pressures show that there is no definite frequency cut in the measured frequency range (Fig. 3).

Fig. 2 A typical result of the femoral measurements (black line: pressure measured through the catheter; grey dashed line: directly measured pressure)

Fig. 3 Frequency spectra of the pressures measured directly (above) and through the femoral catheter (below)

Calculating the ratios of the amplitudes (the amplitude of the Fourier term when measuring through the catheter over the one from the direct measurement, at discrete frequency components), it turned out that in the measured range there is a slight amplification, which reaches its maximum around 14 Hz, i.e. the natural frequency of the catheter-line system is around 14 Hz (Fig. 4). Comparing the mean arterial pressures we found good accordance: the deviation from the mean arterial pressure measured directly was 2.1% ± 0.9%.

Fig. 4 Ratios of the amplitudes at different measurements with the femoral catheter (data from different measurements are denoted with different colors and forms)

Measurements made with the cardiac catheter showed other results: the maximum difference at the peak values is about 8 mmHg, while the overall maximum difference can reach even higher values. A significant amplification could be observed, as the signal measured through the cardiac catheter oscillates around the one recorded directly (Fig. 5). The frequency spectra of the pressure measured with the cardiac catheter and the directly measured one show that with the catheter there is no transmission above 17–18 Hz (Fig. 6). The amplitude ratio (calculated the same way as above) of the different frequency components showed a significant amplification in the measured range. The natural frequency of this catheter system is about 7 Hz (Fig. 7). The mean pressure measured with the catheter turned out to be accurate, as the mean difference was as small as 0.5% ± 0.2%.

Fig. 5 A typical result of the cardiac measurements (black line: pressure measured through the catheter; grey dashed line: directly measured pressure)

Fig. 6 Frequency spectra of the pressures measured directly (above) and through the cardiac catheter (below)

Fig. 7 Ratio of the amplitudes at different measurements with the cardiac catheter (data from different measurements are denoted with different colors and forms)

B. Mathematical model

First the elasticity moduli of the catheters were determined. After repeated measurements it turned out to be 3.5·10^7 Pa for both devices. Applying this and the geometric data to the model, the frequency responses were computed.
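Equation (1) can be evaluated numerically to locate the resonance of the membrane impedance; all parameter values below are illustrative placeholders, not the measured properties of these catheters:

```python
import numpy as np

def membrane_impedance(omega, Ac, Am, Km, m, s):
    """Z_m(w) = (Ac / (Am * Km)) * (m*j*w + s / (j*w))  -- equation (1)."""
    jw = 1j * omega
    return Ac / (Am * Km) * (m * jw + s / jw)

# Placeholder parameters (assumed, for illustration only)
Ac, Am, Km = 1.3e-6, 5.0e-5, 2.0
m, s = 1.0e-3, 4.0e2        # natural angular frequency sqrt(s/m) ~ 632 rad/s

w = np.linspace(1.0, 2000.0, 200000)
Z = membrane_impedance(w, Ac, Am, Km, m, s)
w_nat = w[np.argmin(np.abs(Z))]   # |Z_m| is minimal where m*w = s/w
```

Since the two reactive terms cancel at ω = √(s/m), |Zm| dips to a minimum there, which is one way to read the resonance peaks seen in the measured amplitude ratios.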
Frequency characteristics of arterial catheters – an in vitro study
For the femoral catheter a slight amplification can be seen in the lower frequency range while at about 12 Hz it reaches a maximum – this is the natural frequency computed by our model. The model shows a significant cut at 16 Hz (Fig. 8). Whereas for the cardiac catheter a significant amplification can be observed in the lower frequency range and the modeled natural frequency is at 12 Hz. The numerical model predicts a sharp cut at 13.5 Hz (Fig. 9).
Fig. 8 Frequency response (amplification over 0-20 Hz) of the femoral catheter calculated with the mathematical model

Fig. 9 Frequency response (amplification over 0-20 Hz) of the cardiac catheter calculated with the mathematical model

IV. CONCLUSIONS

The results of the study show that the catheters measure the mean arterial pressure extremely well, though the frequency response is poor. When measuring with a cardiac catheter, signal transmission above 18 Hz is not possible. The loss of information in this frequency range can hinder the development of more sophisticated diagnostic methods in the future, as the real arterial pressure wave can contain components up to 30 Hz (measured with a catheter-tip sensor). The numerical model describes the frequency characteristics of the femoral catheter quite well, whereas for the cardiac one it gives poor numerical results; however, the sharp peak and the cutoff frequency are qualitatively well described. The main reason could be that the described numerical model does not deal with the oscillating pressure surrounding the catheter. Another limitation of the model is that the elasticity of the catheter wall is assumed to be linear. Hence a further developed numerical model is needed to eliminate these limitations. The results of the measurements and the model show that the frequency response of the catheter system is more limited than in the model of Webster et al. [2]. The measurement results are also in very good agreement with the measurements of other studies [7] and [8].

V. ACKNOWLEDGEMENTS

The Department of Hydrodynamic Systems of the BUTE was granted a three-year research sponsorship from the HSRF No. T048529 (Hungarian Scientific Research Fund) to utilize experience in Numerical and Experimental Investigation of Hemodynamic and Hydrodynamic Systems. The research group was granted a three-year research sponsorship from the HSRF No. 46538 (Hungarian Scientific Research Fund) to utilize experience in Analysing the Intravascular Volume Status.

REFERENCES
1. F. T. Molnar, S. Till and G. Halasz: Arterial blood flow and blood pressure measurements on a physical model of human arterial system, EMBEC 2005 Proc. (3rd European Medical & Biological Engineering Conference), 2005, ISSN 1727-1983
2. John G. Webster (editor): Medical Instrumentation, John Wiley & Sons Inc., Third Edition 1998, pp 295-307
3. Wilmer W. Nichols and Michael F. O'Rourke: McDonald's Blood Flow in Arteries, Hodder Arnold, Fifth Edition 2005, pp 107-120
4. F. T. Molnar, G. Halasz: Fluid mechanical investigation of the behaviour of an arterial catheter, ESBME 2006 Proc., 5th European Symposium on Biomedical Engineering
5. Streeter, V. L., Wylie, E. B.: Hydraulic Transients, McGraw-Hill Book Company, 1967, pp 228-238
6. G. Halasz, G. Kristof, L. Kullmann: Flow in Pipe Systems (in Hungarian), 2002
7. C. Promonet et al.: Time-dependent Pressure Distortion in a Catheter-Transducer System, Anesthesiology, V 92, No 1, 2000, pp 208-218
8. E. Wellnhofer et al.: High Fidelity Correction of Pressure Signals from Fluid-Filled Systems by Harmonic Analysis, Journal of Clinical Monitoring and Computing, 15, 1999, pp 307-315
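The modeled responses in Figs. 8 and 9 have the familiar shape of an underdamped resonant system. As a rough illustration (not the paper's distributed-parameter model), a fluid-filled catheter-transducer line is often approximated by a second-order amplitude ratio; the natural frequency of 12 Hz matches the value reported above, while the damping ratio used here is purely illustrative:

```python
import math

def amplitude_ratio(f, f_n=12.0, zeta=0.2):
    """Amplitude ratio of a lumped second-order catheter-transducer model.

    f_n  : natural frequency [Hz] (12 Hz, as computed by the paper's model)
    zeta : damping ratio (illustrative value, not fitted to the paper's data)
    """
    r = f / f_n
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2)

# Near DC the catheter transmits the mean pressure faithfully ...
low = amplitude_ratio(0.5)
# ... near the natural frequency the signal is amplified ...
peak = amplitude_ratio(12.0)
# ... and well above it the response rolls off.
high = amplitude_ratio(24.0)
```

Such a lumped model reproduces the accurate mean (low-frequency) pressure and the resonance peak, but not the sharp cut that the distributed model predicts at 13.5-16 Hz.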
Author: Ferenc Tamas Molnar
Institute: Budapest University of Technology and Economics
Street: Stoczek 2.
City: Budapest
Country: Hungary
Email: [email protected]
On the Occurrence of Phase-locked Pulse Train in the Peripheral Auditory System
T. Matsuoka1, D. Konno2 and M. Ogawa2
1 Information and Control Systems Science, Utsunomiya University, Utsunomiya, Japan
2 The Department of Electrical and Electronic Engineering, Utsunomiya University, Utsunomiya, Japan
Abstract— The missing fundamental is not in the frequency band of one channel of a telephone line (300 Hz to 3400 Hz), yet we can perceive it over the telephone. It is considered that the missing fundamental f0 is produced in the auditory center when we listen to a complex tone of f1=nf0 and f2=(n+k)f0; f0 is known as the missing fundamental. Some phenomena established by psycho-acoustic experiments have no supporting electrophysiological data, and the missing fundamental phenomenon is one of them. By building physiological models we are trying to clarify the mechanism by which the missing fundamental is produced. We have already shown how the information of the missing fundamental f0 appears explicitly on the aggregated autocorrelogram of the output pulse train for input signal f1 to one cochlear model and the output pulse train for input signal f2 to another cochlear model. We have clarified the following characteristics of the models of the two neurons (Primary Auditory Nerve: an integrate-and-fire unit with spontaneous discharge; Anteroventral Cochlear Nucleus: an agreement detector) that play an important role in producing the information of the missing fundamental. The model of the Primary Auditory Nerve generates a quasi-periodic pulse train by adding spontaneous discharge to the periodic pulses produced by a periodic input signal. The model of the Anteroventral Cochlear Nucleus works as an agreement detector of the pulses of the input pulse trains (the above-mentioned quasi-periodic pulse trains; 21 of them in Fig. 1) and generates a periodic pulse train (a phase-locked pulse train) which synchronizes to the periodic signal fed to the cochlear model. The afferent nerves from the right and left Cochlear Nucleus are aggregated at the Superior Olivary Complex, and we suppose that each periodic pulse train from each ear is fed to the Superior Olivary Complex.
Keywords— Integrate-and-Fire unit, Spontaneous discharge, Agreement detector, Aggregated autocorrelogram, Missing fundamental.
I. INTRODUCTION

The missing fundamental is not in the frequency band of one channel of a telephone line (300 Hz to 3400 Hz), yet we can perceive it over the telephone. It is considered that the missing fundamental f0 is produced in the auditory center [1] when we listen to a complex tone of f1=nf0 and f2=(n+k)f0; f0 is known as the missing fundamental. Some phenomena established by psycho-acoustic experiments have no supporting electrophysiological data, and the missing fundamental phenomenon is one of them. By building physiological models we are trying to clarify the mechanism by which the missing fundamental is produced. We showed how the information of the missing fundamental f0 appears explicitly on the aggregated autocorrelogram of output pulse trains from cochlear models for input signals of f1 Hz and f2 Hz [2]. In this report, we describe the models of the two neurons that play an important role in producing the information of the missing fundamental, i.e., (a) the model of the Primary Auditory Nerve (an integrate-and-fire unit with spontaneous discharge) and (b) the model of the Anteroventral Cochlear Nucleus (an agreement detector) shown in Fig. 1, and discuss the occurrence of the phase-locked pulse train in the peripheral auditory system.

II. THE MODEL OF PRIMARY AUDITORY NERVE (INTEGRATE-AND-FIRE UNIT WITH SPONTANEOUS DISCHARGE)
S. Greenberg suggested, on the basis of experimental results, that the auditory center has the function of seeing how the firing pattern of a neuron repeats and of detecting its pitch [3]. We consider how we perceive the missing fundamental by using two cochlear models. When f1 is fed, an
Fig. 1 Cochlear model - Anteroventral Cochlear Nucleus model (AVCN model). A cochlear model consists of a basilar membrane, inner hair cells and primary auditory nerves (21 of them, numbered 1-21 in the figure); a pure tone sin(2πft) drives the basilar membrane, and the AVCN model (agreement detector) produces the output pulse train.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 442–444, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
inner hair cell (which is located at the place on the basilar membrane where the envelope peak of the progressive wave of f1 appears) detects the displacement velocity of the basilar membrane. (There are about 3500 inner hair cells in each ear.) Primary auditory nerves transform the information of the displacement velocity into the information of pulse trains. A primary auditory nerve is an integrate-and-fire unit with a refractory period. For the consideration of the characteristics of the missing fundamental, the 1-for-n phase-lock phenomenon (the phenomenon generating a periodic pulse train of one pulse for every n periods of a periodic input signal) [2] by the integrate-and-fire unit with refractory period is used. An experimental result by the cochlear model is shown in Fig. 2. Fig. 2 is the interspike-interval histogram of the aggregated autocorrelogram of the output pulse trains from two cochlear models. The output pulse train from one cochlear model, to which a 500 Hz pure tone is fed, is a 1-for-1 phase-locked (to 500 Hz) pulse train (a periodic pulse train of one pulse for every period of the periodic input signal). The interspike-interval histogram of the autocorrelogram of this output pulse train contains peaks every 1/500 s (2 ms). The interspike-interval histogram of the autocorrelogram of the other output pulse train, from the other cochlear model, to which a 750 Hz pure tone is fed, contains peaks every 1/750 s. The interspike-interval histogram of the aggregated autocorrelogram obtained by overlapping the two autocorrelograms of the two output pulse trains contains peaks every 1/250 s (250 being the greatest common divisor of 500 and 750). This 1/250 s periodicity (250 Hz) is the missing fundamental f0. In the case that the influence of the spontaneous discharge of the primary auditory nerves can be ignored [4], each of the 21 pathways in Fig. 1 carries the same pulse train.
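The aggregation step described above can be sketched numerically: two perfectly periodic (1-for-1 phase-locked) pulse trains at 500 Hz and 750 Hz are autocorrelated separately, by histogramming all positive inter-spike lags within each train, and the two lag histograms are summed. Lags at multiples of 4 ms (1/250 s) collect counts from both trains, so the 250 Hz periodicity stands out. This is a minimal illustration, not the authors' cochlear-model code:

```python
from collections import Counter

def autocorrelogram(spike_times, resolution=1e-4):
    """Histogram of all positive pairwise lags within one spike train.

    Lags are rounded to `resolution` seconds so exact bins can be compared.
    """
    hist = Counter()
    for i, t_i in enumerate(spike_times):
        for t_j in spike_times[i + 1:]:
            hist[round((t_j - t_i) / resolution)] += 1
    return hist

def periodic_train(freq_hz, duration_s=0.1):
    """Perfectly phase-locked train: one spike per period (1-for-1 locking)."""
    period = 1.0 / freq_hz
    return [k * period for k in range(int(duration_s * freq_hz))]

def bin_of(lag_s, resolution=1e-4):
    return round(lag_s / resolution)

# Aggregate the two autocorrelograms by summing their lag histograms.
agg = autocorrelogram(periodic_train(500)) + autocorrelogram(periodic_train(750))

# 2 ms lag: only the 500 Hz train contributes.
# 4 ms lag (= 1/250 s): both trains contribute, so its count is larger.
count_2ms, count_4ms = agg[bin_of(0.002)], agg[bin_of(0.004)]
```

With 0.1 s trains (50 and 75 spikes), the 4 ms bin collects lag-2 pairs of the 500 Hz train plus lag-3 pairs of the 750 Hz train, clearly exceeding any bin populated by only one train.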
443
In actual fact, the influence of the spontaneous discharge of primary auditory nerves cannot be ignored, and each pulse train in the 21 pathways in Fig. 1 is different.

III. THE MODEL OF ANTEROVENTRAL COCHLEAR NUCLEUS (AGREEMENT DETECTOR)

It has been suggested that an Anteroventral Cochlear Nucleus (AVCN) has the function of an agreement detector [5]. So, we make the AVCN model work as an agreement detector of pulse trains and make it generate one pulse train from the 21 input pulse trains. The values of the parameters of the AVCN model are determined such that the synchronization index and the entrainment index of the pulse train from the model agree with those indexes of the pulse trains (physiological data) recorded from Anteroventral Cochlear Nucleuses [5]. The synchronization index is defined as follows: when output pulses are generated at a particular phase in every cycle of the input signal, the value of the index is 1; when output pulses are generated at random phases of the input signal, the value of the index is 0. The entrainment index is defined as follows: when output pulses are generated in every cycle of the input signal, the value of the index is 1; when output pulses are not generated in any cycle of the input signal, the value of the index is 0. The values of both indexes lie between 0 and 1. In the case of integrate-and-fire units with spontaneous discharge (the models of the Primary Auditory Nerves), we have been able to obtain a pulse train synchronized to the input pure tone in Fig. 1. The synchronized pulse train is a 1-for-1 phase-locked pulse train to the input pure tone.

IV. CONCLUSIONS
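The two indexes defined above can be sketched as code. A common formalization of the synchronization index is the vector strength of the spike phases relative to the input period; the entrainment index is taken here as the fraction of input cycles containing at least one spike. These formulas are a plausible reading of the definitions in the text, not the authors' exact implementation:

```python
import math

def synchronization_index(spike_times, freq_hz):
    """Vector strength: 1 when every spike falls at the same phase of the
    input cycle, near 0 for spikes at random (uniform) phases."""
    phases = [2.0 * math.pi * ((t * freq_hz) % 1.0) for t in spike_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

def entrainment_index(spike_times, freq_hz, duration_s):
    """Fraction of input cycles that contain at least one output spike."""
    n_cycles = int(duration_s * freq_hz)
    occupied = {int(t * freq_hz) for t in spike_times if t < duration_s}
    return len(occupied) / n_cycles

# A perfectly 1-for-1 phase-locked train: one spike per cycle, fixed phase.
locked = [k / 500.0 + 0.0004 for k in range(50)]
si = synchronization_index(locked, 500.0)   # close to 1
ei = entrainment_index(locked, 500.0, 0.1)  # exactly 1
```

Both indexes reach their maximum for the 1-for-1 phase-locked train mentioned in the text, which is the behaviour the AVCN model parameters are tuned to reproduce.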
Fig. 2 The interspike-interval (ISI) histogram of the aggregated autocorrelogram (f1=500 Hz, f2=750 Hz, a1=a2, θ1=θ2=0), showing a peak every 1/250 s (vertical axis: events, 0-400; horizontal axis: ISI [ms], 0-14)
We already showed how the information of the missing fundamental f0 appears explicitly on the aggregated autocorrelogram of the output pulse train for input signal f1 to one cochlear model and the output pulse train for input signal f2 to another cochlear model. By making models, we are trying to clarify the mechanism by which the missing fundamental is produced. We have described the following models of the two neurons that play an important role in producing the information of the missing fundamental, i.e., (a) the model of the Primary Auditory Nerve (an integrate-and-fire unit with spontaneous discharge) and (b) the model of the Anteroventral Cochlear Nucleus (an agreement detector). We have made clear that the two models have the following characteristics.
(a) The model of the Primary Auditory Nerve has the characteristic of spontaneous discharge, so it generates a quasi-periodic pulse train by adding spontaneous discharge to the periodic pulses produced by a periodic input signal. (b) The model of the Anteroventral Cochlear Nucleus works as an agreement detector of the pulses of the input pulse trains (the above-mentioned quasi-periodic pulse trains; 21 of them in Fig. 1) and generates a periodic pulse train which is synchronized to the periodic signal fed to that ear. The afferent nerves from the right and left Cochlear Nucleus are aggregated at the Superior Olivary Complex, and we suppose that each periodic pulse train from each ear is fed to the Superior Olivary Complex.
REFERENCES
1. Kawato M, et al. (1997) Vision and Auditory Sense: Cognitive Science Series No. 3, Tokyo Iwanami Co. Ltd., p. 150 (in Japanese)
2. Matsuoka T, Ono Y (1998) Phase-locking by Integral Pulse Frequency Modulation and Information of Missing Fundamental in Pulse Trains, Proc. 20th Annual Int. Conf. IEEE EMBS, Vol. 20, No. 6, pp 3184-3187
3. Greenberg S, Rhode W.S. In: Yost W.A, Watson C.S, editors (1987) Auditory Processing of Complex Sound, Lawrence Erlbaum Associates, pp 225-236
4. Matsuoka T, Ito K (2003) Perception of Missing Fundamental and Consideration on its Characteristics, Proc. 25th Annual Int. Conf. IEEE EMBS, pp 2059-2062
5. Joris P.X, Smith L.H, Yin T.C (1994) Enhancement of Neural Synchronization in the Anteroventral Cochlear Nucleus. I. Responses to Tones at the Characteristic Frequency, J. Neurophysiology, 3, pp 1038-1051

Address of the corresponding author:
Author: Takahide Matsuoka
Institute: Utsunomiya University
Street: 7-1-2 Yoto
City: Utsunomiya
Country: Japan
Email: [email protected]
Optimized Design of Single-sided Quadratic Phase Outer Volume Suppression Pulses for Magnetic Resonance Imaging
N. Stikov1, A. Mutapcic1 and J.M. Pauly2
1 Stanford University/Department of Electrical Engineering, PhD student, Stanford, USA
2 Stanford University/Department of Electrical Engineering, Associate Professor, Stanford, USA

Abstract— In Magnetic Resonance Imaging, outer volume suppression (OVS) pulses only need to saturate the magnetization on one side of the region of interest (ROI), so there is no need for them to be symmetric. Single-sided quadratic phase OVS pulses make use of this fact to reduce the peak RF excitation pulse power, while maintaining the high selectivity on one side of the passband. Therefore, single-sided pulses are appropriate for single-shot fast spin-echo cardiac imaging, as well as for spectroscopic imaging studies of brain and prostate cancer. The low peak RF value is achieved by designing Shinnar-Le Roux (SLR) polynomials with quadratic phase, which is an FIR filter design problem. Due to the relaxation of the magnitude constraints on one side of the passband (the 'do not care' region) it is possible to reduce the peak RF power by extending the quadratic phase beyond the passband. The polynomials are designed using a weighted least squares algorithm in which significant weighting is given to the frequency bands over which the magnitude profile is fixed, and a lower weight is applied beyond the transition width on the 'do not care' side. We improve the design of single-sided quadratic phase OVS pulses by optimizing the range of frequencies over which the phase of the pulse is quadratic. This is achieved by iteratively increasing the extent of the 'do not care' region, using convex optimization methods at each iteration to find the optimal polynomial with the lowest peak coefficient. Applying the SLR transform to the polynomial with the minimum peak, we get the lowest-peak RF pulse that produces the desired single-sided saturation profile. Using this procedure the peak RF power can be reduced by an additional 5-10 percent, while the profile's ripple is comparable with the ripple of double-sided pulses.

Keywords— MRI, RF saturation pulses, outer volume suppression, quadratic phase, weighted least squares.

I. INTRODUCTION

Fig. 1 (A) Desired magnitude (double-sided and single-sided targets, showing the passband over the ROI, the stopbands, and the 'do not care' region of extent Δ) (B) Desired phase (quadratic) (C) Saturation profile (longitudinal magnetization vs frequency [kHz], TBW = 32) (D) RF pulse magnitude [G] vs time [ms] for the double-sided and single-sided designs

Quadratic phase outer volume suppression (OVS) pulses are used for single-shot fast spin-echo cardiac imaging [1], as well as for clinical spectroscopic imaging studies of brain and prostate cancer [2]. In order to further flatten out the energy spread and to lower the peak RF value, single-sided quadratic OVS pulses were designed [3]. As shown in figure 1, these single-sided pulses retain the sharp transition at the edge of the region of interest (ROI), forcing quadratic phase
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 423–425, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
in the 'do not care' region with extent Δ. Because quadratic phase contributes to an even spread of energy [1], the pulses designed this way reduce the peak RF power. As scanners are limited by the peak RF they can output, reducing this value enables the scan to be performed faster. However, in double-sided pulses the band of frequencies over which the phase is quadratic is limited to the passband. We would like to extend the quadratic phase beyond the transition width and into the stopband, as that will give us greater freedom to spread the energy and reduce the peak RF power.

II. PROCEDURE

Following the weighted least squares algorithm outlined in [1], we design a feasible quadratic phase FIR filter that has a ripple of 1% in the passband and the stopband, selectivity over 10 (ratio of passband over transition band) at one edge, and a low peak value in the time domain. The target of the weighted least squares procedure is, as shown in figure 1, an ideal quadratic phase filter with a magnitude response dictated by the profile magnitude constraints, and a phase response that is quadratic over the passband and the 'do not care' region.
des(ω) = rect( (ω − Δ/2) / (2ωp + Δ) ) · e^(jΦ(ω))     (1)

Here, Δ is the width of the 'do not care' region, ωp is the edge of the passband, and Φ(ω) is the quadratic phase obtained by integrating a feasible group delay for the desired time-bandwidth product [1]. Letting the phase be quadratic in the passband results in a small peak value for the FIR filter [1]. Increasing the band of frequencies over which the phase is quadratic will result in an even lower peak RF value [3].

We then want to pick the coefficients b to design a polynomial

B(ω) = Σ_{i=0}^{n−1} b_i e^(jωi)     (2)

so that it fits the desired magnitude and quadratic phase profile with a prescribed least-squares error, i.e.,

J = Σ_{i=0}^{N−1} w(ω_i) |B(ω_i) − des(ω_i)|² ≤ ε,     (3)

where ε > 0 is the maximum allowed error for the prescribed ripple and the ω_i are sampled (discrete) frequencies with indices i = 0,…,N−1. Here, w(ω) is a weighting vector that controls the ripple by putting more weight on the fixed profile, and relaxing the weighting in the 'do not care' region. As we only care about the high selectivity at one edge of the profile, we relax the constraints in the 'do not care' area beyond the second edge, where the magnitude is irrelevant, hoping to get additional quadratic phase accrual. Given a fixed width of the 'do not care' region Δ and the weights w, we can find an optimal FIR filter so that the magnitude of the peak filter coefficient is minimized, while satisfying the magnitude and phase constraints of the profile, i.e., we can find a global solution to the following convex optimization problem:

minimize   max_i |b_i|
subject to J ≤ ε,     (4)

where the variable is the FIR coefficient vector b. This problem can be formulated and readily solved as a second-order cone program (SOCP), e.g., see book [4]. By spanning Δ from 0 to the end of the stopband, we find the optimal 'do not care' region that gives the smallest peak b-value. Since a small peak b-value implies a low peak RF value, the forward SLR transform is applied to the filter to obtain a low peak RF power B1 pulse [5].

III. RESULTS

We implemented the algorithm in Matlab, solving the optimization problem (4) using CVX [6], which internally calls the SeDuMi solver [7].
Fig. 2 Cross section of a phantom imaged with the optimized (left) and the non-optimized single-sided design (right).
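Equations (1)-(3) can be sketched directly: the snippet below evaluates the desired profile des(ω), the polynomial B(ω), and the weighted error J for a given coefficient vector. It is a minimal numerical illustration of the objective constrained in (4), not the paper's CVX/SOCP design code; the sample grid, weights, and toy target are arbitrary:

```python
import cmath

def des(w, delta, wp, phi):
    """Eq. (1): rect magnitude supported on [-wp, wp + delta], phase phi(w)."""
    inside = abs((w - delta / 2.0) / (2.0 * wp + delta)) <= 0.5
    return cmath.exp(1j * phi(w)) if inside else 0.0

def B(w, b):
    """Eq. (2): frequency response of the FIR coefficient vector b."""
    return sum(bi * cmath.exp(1j * w * i) for i, bi in enumerate(b))

def J(b, freqs, weights, delta, wp, phi):
    """Eq. (3): weighted least-squares error between B and des."""
    return sum(wt * abs(B(w, b) - des(w, delta, wp, phi)) ** 2
               for w, wt in zip(freqs, weights))

# Toy check: with zero phase and a single-tap filter, B matches des exactly
# at in-band frequencies, so the weighted error J vanishes.
zero_phase = lambda w: 0.0
freqs = [0.0, 0.1, 0.2]              # all inside the passband here
weights = [1.0, 1.0, 1.0]
b = [1.0]                            # B(w) = 1 = des(w) for zero phase
err = J(b, freqs, weights, delta=0.0, wp=1.0, phi=zero_phase)
```

In the actual design, a solver searches over b to minimize the peak coefficient max_i |b_i| subject to J ≤ ε, which is what makes the problem a second-order cone program.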
Fig. 3 Plot of the peak RF value in Gauss vs time [ms] for all three designs (double-sided, single-sided, single-sided optimal; TBW = 32). The optimized single-sided design results in the lowest peak RF value, and therefore could be played out in the shortest amount of time.

Figure 2 shows that the optimized pulse gives a flatter saturation profile. The design is much smoother, which lets the quadratic phase extend deep into the 'do not care' region, reducing the peak value of the RF pulse. Figure 3 shows a comparison between the peak RF values of all three designs. We notice that the optimized one-sided design results in the lowest peak RF value. In the case of a time-bandwidth product of 32, the peak value of the double-sided pulse is .202 G, the non-optimized single-sided design gave .174 G, and the optimized design resulted in a peak RF value of .162 G. Thus the optimized RF pulse is lower than the double-sided one by 20%, and is 7% better than the non-optimized single-sided pulse. At the same time, the profile keeps the same selectivity compared to the double-sided pulses, as can be seen in figure 4.

IV. CONCLUSIONS

While the difference in peak RF power between the two single-sided designs is only 7%, the difference in image quality is surprising. With the optimized design the single-sided pulses attain the selectivity and ripple level of the double-sided ones. The optimized design removes the abrupt magnitude changes present in the previous designs, and results in a smooth decrease of the magnetization level from the passband to the stopband. This allows a greater portion of the frequency band to have quadratic phase, and, as a result, the peak RF power goes down. Figure 4 shows that single-sided pulses are comparable with their double-sided counterparts, as the trade-off only happens on the side of the profile which is not in the region of interest.
REFERENCES
1. LeRoux P, Gilles R, McKinnon G, Carlier P (1998) Optimized outer volume suppression for single-shot fast spin-echo cardiac imaging. J Magn Reson 8: 1022-1032
2. Tran T, Vigneron D, Sailasuta N, Tropp J et al. (2000) Very selective suppression pulses for clinical MRSI studies of brain and prostate cancer. Magn Reson Med 43: 23-33
3. Stikov N, Cunningham C, Lustig M, Pauly J (2006) Single-sided quadratic phase outer volume suppression pulses, ISMRM Proc., International Society for Magnetic Resonance in Medicine Scientific Meeting and Exhibition, Seattle, USA, 2006, pp. 3009
4. Boyd S, Vandenberghe L (2004) Convex Optimization. Cambridge University Press
5. Pauly J, Le Roux P, Nishimura D, Macovski A (1991) Parameter relations for the Shinnar-Le Roux selective excitation pulse design algorithm. IEEE Trans Med Imaging 10: 53-65
6. Grant M, Boyd S, Ye Y (2006) CVX: Matlab software for disciplined convex programming, version 1.0RC
7. Sturm J (1999) Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, vol. 11, pp. 625-653

Fig. 4 Comparison between the optimized single-sided (left) and the double-sided profile (right)

Author: Nikola Stikov
Institute: Stanford University Department of Electrical Engineering
Street: 210 Packard Building, Serra Mall 350
City: Stanford CA 94305
Country: United States
Email: [email protected]
Signal Processing Methods for PPG Module to Increase Signal Quality
K. Pilt1, K. Meigas1, J. Lass1 and M. Rosmann2
1 Department of Biomedical Engineering, Tallinn University of Technology, Tallinn, Estonia
2 Tensiotrace OÜ, Majaka 26, 11412, Tallinn, Estonia
Abstract— To estimate blood pressure using the pulse wave transit time method, the PPG and ECG signals have to be measured with high quality. This paper describes a device that improves PPG signal quality using different analogue and digital signal processing methods. The device is developed for a 24-hour ambulatory blood pressure monitoring system. Part of the device is implemented in hardware and part of it is modelled in MATLAB. Experiments with a PPG signal containing noise and DC component drift have been carried out. As a result, the PPG signal quality has been improved with this device.

Keywords— Blood pressure, non-invasive measurement, photoplethysmography, pulse wave transit time, DC component.
I. INTRODUCTION

In recent research, various alternative methods have been found to measure blood pressure in a non-invasive way [1, 2]. One of the possible ways to measure blood pressure beat-to-beat is to use pulse wave velocity or pulse wave transit time (PWTT). One possibility to estimate PWTT is to calculate the time between the electrocardiogram (ECG) R peak and the 50% rising front of the photoplethysmographic (PPG) pulse [3]. To locate the R peak and the 50% rising front accurately enough, both signals have to be of high quality. In this paper, we concentrate on signal processing methods to increase the quality of the PPG signal. The PPG signal consists of two main parts: the DC and the AC component. The AC component amplitude can be on the order of 0,1% of the DC amplitude. The measured PPG signal also includes noise from many sources, mainly ambient light changes, respiration, motion and electromagnetic disturbances. All unwanted components have to be eliminated with different analogue and digital signal processing methods. Most research has concentrated on blood pressure measurement in real time. The signal processing methods described here concentrate on a device meant for 24-hour ambulatory blood pressure monitoring. The PPG signal is first processed in real time and recorded onto a memory card. Signal post-processing with more time-consuming methods is done on a PC afterwards.
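The PWTT estimate described above needs the time at which the PPG pulse crosses 50% of its rising front. A minimal sketch of that step, locating the crossing by linear interpolation between samples (the sampling rate, helper names, and toy waveform are illustrative, not taken from the device):

```python
def rising_front_time(ppg, fs_hz):
    """Time [s] at which a PPG pulse first crosses 50% of its
    foot-to-peak amplitude on the rising front (linear interpolation)."""
    lo, hi = min(ppg), max(ppg)
    level = lo + 0.5 * (hi - lo)
    for i in range(1, len(ppg)):
        if ppg[i - 1] < level <= ppg[i]:
            # interpolate between samples i-1 and i
            frac = (level - ppg[i - 1]) / (ppg[i] - ppg[i - 1])
            return (i - 1 + frac) / fs_hz
    return None

def pwtt(r_peak_time, ppg, fs_hz):
    """PWTT: delay from the ECG R peak to the 50% PPG rising front,
    assuming both signals share one time axis starting at t = 0."""
    return rising_front_time(ppg, fs_hz) - r_peak_time

# Toy pulse: linear rise 0..1 over 10 samples at 100 Hz, then a slow decay.
pulse = [i / 10.0 for i in range(11)] + [1.0 - i / 20.0 for i in range(1, 11)]
t50 = rising_front_time(pulse, 100.0)
```

Interpolating between samples is what makes the estimate sensitive to sampling rate; this is why the text argues for roughly 500 Hz sampling.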
II. MATERIALS AND METHODS

The block diagram of the device that performs the signal processing of the PPG signal is given in figure 1. It consists of two parts. The first part describes the signal processing that is done before recording the signal. The second signal chain describes the processing of the recorded data, which is done on a PC. The input signal is obtained from a PPG sensor, which consists of an infrared light emitting diode (IR LED) and a photosensitive element. The PPG signal is sent into the analogue processing block, where the high frequencies are eliminated for the analogue-to-digital converter. In addition, ambient light elimination is done in this block. In the second block, the DC component is eliminated by using a compensation method. In the next block, the signal is amplified before it is sent to the analogue-to-digital converter (ADC). Finally, the signal is saved on a memory card. After the 24-hour ambulatory blood pressure monitoring, the recorded data is transferred to a PC for post-processing. The post-processing consists of two blocks. In the signal reconstruc-
Fig. 1 Block diagram of the PPG signal processing algorithm
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 434–437, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
tion block, the voltage steps that are caused by the DC component compensation system are eliminated. In the second block, the signal is digitally filtered. In the analogue part of the signal processing the high frequencies are eliminated by using simple RC filters. The RC filter cut-off frequency mainly depends on the analogue-to-digital converter sampling frequency. In addition, the accuracy of the PWTT estimation is closely related to the sampling frequency: if the sampling frequency is too low, the location of the 50% rising front of the PPG pulse is detected inaccurately. As a first approximation, the sampling frequency should be 500 Hz. In our first experiment, a sampling frequency of 100 Hz was chosen. Usually in PPG devices, the DC component elimination is realized by switching a capacitor into the signal chain. Figure 2 shows an RC high-pass filter that removes the DC component. Under normal conditions, the heartbeat frequency is around 1 Hz. To get accurate results from the analogue filtration, the cut-off frequency of the high-pass filter must be about 10 times lower, which reduces the distortion caused by the non-linear phase of the analogue filter. The filter may still cause remarkable phase distortion if artefacts occur in the PPG signal. Figure 3 shows the signals measured at points "A" and "B" of the filter circuit. A signal with an artefact (Fig. 3a) charges the capacitor in the RC filter. The capacitor creates a slope when it discharges, as shown in figure 3b. During the capacitor discharge, the PPG signal is disturbed and the signal of interest is lost. A better way to eliminate the DC component is to use a compensation method: a signal of opposite sign is fed into the signal chain, which compensates the DC signal. Recent studies have revealed that this method needs much computing power in real-time applications [4]. The authors of this paper have designed a method which is simple and needs less computing power. The method is based on step-by-step compensation of the DC signal drift.
As the method and device are still patent pending, it is not possible to describe them in detail here. In the first experiments, the DC component elimination is simulated in MATLAB; it will later be implemented on a microcontroller. One of the possibilities is to use a programmable integrated circuit (PIC) microcontroller. Since the PPG signal has a very small AC component, it is difficult to use direct AD conversion. The signal has to be amplified before the AD conversion. In our work, the amplifier has constant gain: the PPG signal is amplified 10 times after the DC component elimination. In a future study the gain could be changed according to the AC signal amplitude. For later signal processing the data has to be reconstructed: the voltage steps that are caused by the DC signal compensation algorithm have to be eliminated. As there is enough time for post-processing, different and more time-consuming methods can be used. Our first approach is to use a step detector. It measures the differences between
Fig. 2 Basic concept of DC component elimination with filter.
Fig. 3 Capacitor discharging caused by an artefact. Graphs A and B are measured from the corresponding circuit points shown in figure 2.
previous and present samples and decides whether a DC compensation step has occurred or not. The algorithm for data reconstruction is simulated in MATLAB. In the digital filtering section, the final PPG signal processing is done before the pulse transit time is calculated. Electrical noise, such as 50 Hz mains interference, is eliminated by a low-pass or notch finite impulse response (FIR) filter [5]. The order of the
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
K. Pilt, K. Meigas, J. Lass and M. Rosmann
filter can be set high, as there is no time limit on the filtering calculations. In this work, the low-pass filter cut-off frequency is 30 Hz. The DC component is filtered out with a high-pass filter with a cut-off frequency of 0.9 Hz. All filters used in this work are of order 1000.

Artefacts, often caused by sensor movement, can also occur at low frequencies, and their frequencies may overlap with the PPG frequencies. One possibility for reducing such disturbances is to use a high-pass filter whose cut-off frequency changes with the heartbeat frequency [6]. A new filter could be designed for each beat-to-beat cycle, but in some cases this is computationally expensive. An FIR filter is determined by its impulse response; by "stretching" and "compressing" the impulse response, the cut-off frequency is changed.

III. RESULTS

The PPG signal was measured from the forehead without any change in physical load. Figure 4a shows the signal after analogue processing. It consists of a 0.35 V AC component riding on a 3.5 V DC component; in addition, the signal contains various high-frequency noises. To simulate the DC compensation, amplification, signal reconstruction and digital processing methods in MATLAB, several components are added to the signal: a slight DC drift from 3.5 V up to 4.0 V, a 0.5 Hz sinusoidal noise with an amplitude of 0.05 V, and artefacts of +5 V and 0 V from 64 to 65 seconds and from 69 to 70 seconds, respectively. The resulting signal is given in figure 4b. Figure 4c shows the signal after DC compensation: the artefacts are limited to 1 V. In addition, the DC drift of the signal has caused the compensation signal to change by one step at the 58th second. As the signal is in the range −1 V to 1 V, we can amplify it with a constant gain.
In the next stage, the signal is amplified 2.5 times; figure 4d presents the signal after amplification. Because of the DC compensation, the peak-to-peak signal value is always 5 V. In the next step, the signal is reconstructed and the steps caused by the DC compensation are eliminated as described before. The result is shown in figure 4e. The errors caused by the artefacts are clearly visible.

Fig. 4 Outputs from the different stages of the signal processing algorithm
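The reconstruction stage shown in figure 4e — detecting the DC-compensation jumps from sample-to-sample differences and subtracting their cumulative effect — can be sketched as below. This is a minimal illustration under an assumed fixed threshold, not the authors' (patent-pending, MATLAB-simulated) algorithm:

```python
import numpy as np

def remove_compensation_steps(x, threshold):
    """Subtract abrupt DC-compensation steps from the signal.
    A difference between consecutive samples larger than `threshold`
    is treated as a compensation step; the cumulative sum of all
    detected steps is removed from the samples that follow."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    steps = np.where(np.abs(d) > threshold, d, 0.0)   # keep only big jumps
    correction = np.concatenate(([0.0], np.cumsum(steps)))
    return x - correction

# Toy example: a 1 Hz "pulse" with two artificial 1 V compensation steps.
t = np.arange(0.0, 10.0, 0.01)          # 100 Hz sampling, as in the paper
clean = 0.3 * np.sin(2 * np.pi * t)
stepped = clean + np.where(t > 4, 1.0, 0.0) - np.where(t > 7, 1.0, 0.0)
restored = remove_compensation_steps(stepped, threshold=0.5)
```

A real implementation would additionally have to distinguish compensation steps from steep artefact edges, for example by synchronizing the detector with the known compensation instants.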
Signal Processing methods for PPG Module to Increase Signal Quality
With digital filtering, the DC component and the low-frequency noise are eliminated (fig. 4f). Artefactual disturbances are still visible; however, the filtering has not distorted the signal, and with longer artefacts the result remains the same. Because of the filter order, the signal delay is 10 seconds (each order-1000 linear-phase FIR filter delays the signal by 500 samples, i.e. 5 s at the 100 Hz sampling rate, and the low-pass and high-pass filters are cascaded). This has to be taken into account later in the pulse transit time calculations. In this way, the PPG signal is measured and processed with minimal loss of signal quality; interference caused by artefacts is the only signal loss that disturbs the PWTT calculations.

IV. CONCLUSION

A signal processing method has been developed to increase the PPG signal quality for 24-hour ambulatory monitoring. Signal DC compensation, amplification, reconstruction and digital filtering have been modelled in MATLAB. As future work, the DC compensation and amplification sections have to be implemented in hardware. Depending on the ADC selection, different amplification designs are being considered.
ACKNOWLEDGMENT

Estonian Science Foundation Grant G5888.
REFERENCES

1. K. P. Lin, H. D. Lin, B-I. Jan, M. F. Chen, C-H. Yu (2006) Apparatus for evaluating cardiovascular functions. US 2006/0264771 A1
2. B. G. Asmar, R. Asmar, R. P. Moore (2000) Device and method for assessing cardiovascular function. WO 00/76394 A1
3. K. Meigas, J. Lass, D. Karai, R. Kattai, J. Kaik (2005) Method and device for beat-to-beat blood pressure measurements, IFMBE Proc. vol. 11, The European Med. and Biol. Eng. Conference, Prague, Czech Republic, 2005, 6 pages on CD
4. D. Thompson, A. Wareing, D. Day, S. Warren (2006) Pulse oximeter improvement with an ADC-DAC feedback loop and a radial reflectance sensor, IEEE Proc. EMBS Annual Int. Conference, New York City, USA, 2006, pp 815-818
5. A. V. Oppenheim, R. W. Schafer with J. R. Buck (1999) Discrete-time signal processing. 2nd edition, Prentice Hall
6. M. Min, T. Parve, V. Kukk, A. Kuhlberg (2002) An implantable analyzer of bio-impedance dynamics: mixed signal approach, IEEE Transactions on Instrumentation and Measurement, vol. 51 (4), pp 674-678

Author: Kristjan Pilt
Institute: Department of Biomedical Engineering
Street: Ehitajate tee 5
City: Tallinn, 19086
Country: Estonia
Email: [email protected]
Automatic recognition of hemodynamic responses to rare stimuli using functional Near-Infrared Spectroscopy

M. Butti1, A. C. Merzagora2, M. Izzetoglu2, S. Bunce3, A. M. Bianchi1, S. Cerutti1, B. Onaral2

1 Department of Biomedical Engineering, Polytechnic University, Milan, Italy
2 School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, USA
3 Department of Psychiatry, College of Medicine, Drexel University, Philadelphia, PA, USA
Abstract— The attention domain is of crucial importance for goal-directed behaviors, and it has been widely studied through the analysis of responses to rare stimuli using electroencephalography (EEG). More recent research has explored the brain circuitry of attention by applying neuroimaging techniques, such as functional magnetic resonance imaging. This paper investigates for the first time the feasibility of automatic recognition of responses to rare stimuli using functional near-infrared spectroscopy (fNIRS). fNIRS is a portable brain imaging modality that optically measures cortical hemodynamic activation and may prove useful in monitoring localized activity changes in the frontal cortex related to attention processes. In this preliminary study, the Fisher Linear Discriminant (FLD) is used to discriminate between average responses to rare task-relevant stimuli and responses to task-irrelevant stimuli.

Keywords— fNIRS, pattern classification, attention.
I. INTRODUCTION

Attentional processes play a major role in regulating perceptual experiences and goal-directed behaviors. Attention is generally modeled as a mixture of bottom-up and top-down mechanisms: bottom-up mechanisms are responsible for shifting attention towards salient stimuli, while top-down mechanisms allow selective processing of incoming information [1]. Variability in the attention level may be due to mental and neurological disorders (such as Attention Deficit Hyperactivity Disorder), brain injuries or other factors (such as tiredness). The resulting lower attention level greatly affects the subject's performance and may cause dangerous situations. Therefore, given its crucial importance in the execution of everyday tasks, it is of fundamental interest not only to understand the neural correlates of attention but also to develop new objective tools to assess it.

Among traditional methods to investigate the attentional domain, the analysis of electroencephalographic (EEG) responses to rare events is the most widespread. Rare events elicit event-related potentials in which the positive peak (P300) occurring 300 ms after the stimulus is the most prominent component. Yet, EEG recordings provide little information about the spatial location of the neural circuits that originate the signal. Responses to rare events have also been examined
with neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) [2], and the findings consistently support the involvement of the frontal cortex. More specifically, the frontal cortex is suggested to lead sensory areas to favor the processing of task-relevant stimuli over irrelevant stimuli [3]. Recently, differences between responses to task-relevant and task-irrelevant stimuli have also been detected in the frontal cortex using functional near-infrared spectroscopy (fNIRS) [4].

fNIRS is a safe and portable neuroimaging modality that monitors changes in the oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) concentrations in the frontal cortex. This is achieved by shining near-infrared light (with wavelengths in the range between 700 and 900 nm) at the subject's forehead and by measuring the light subsequently emerging from the scalp. Since oxy-Hb and deoxy-Hb are the major light absorbers in living tissue in that wavelength range, variations in the absorbed light provide information about changes of their concentration in the underlying cortical tissue [5]. Changes in the oxy- and deoxy-hemoglobin concentrations, and thus changes in the local blood flow, are tightly related to the brain activation due to cognitive activity [6]. Given its ability to monitor cerebral activity in the frontal cortex, fNIRS has been proven able to detect differences between responses to task-relevant stimuli and responses to task-irrelevant stimuli.

The aim of this study was to explore the use of fNIRS to automatically categorize responses to rare stimuli and responses to irrelevant stimuli. As a first step toward this goal, discrimination between the two classes of responses is performed here on data averaged across multiple trials. The classifier applied in this preliminary study is the Fisher Linear Discriminant (FLD), which seeks maximum separability between two groups based on first- and second-order statistics of the sample data.

II. MATERIALS AND METHODS

A. Data acquisition

Frontal cortex activation was measured from 15 subjects during a computerized task of selective attention. All experi-
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 473–476, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
mental procedures were approved by the Drexel University IRB. Hemodynamic activity was recorded using an fNIRS device [7], comprising a probe holding 4 near-infrared light sources and 10 photodetectors. The light sources are controlled in such a way that oxygenation data can be collected from 16 different locations on the forehead; the sampling rate was 2 Hz. Subjects were comfortably seated in a dimly lit room and the fNIRS probe was securely placed on the subject's forehead in a standardized position.

Visual stimuli were presented on a computer monitor using STIM software (Neuroscan, Inc). The stimuli belonged to two classes of letter strings ("XXXXX" for the target and "OOOOO" for the non-target stimulus), with targets less probable than non-target stimuli. The subjects were asked to press a different button depending on the presented stimulus. A total of 512 stimuli were presented: 480 non-target stimuli and 32 targets. The stimulus duration was 500 ms, with an interstimulus interval of 1500 ms. Target stimuli were presented pseudo-randomly with respect to context stimuli; in order to allow the hemodynamic response to return to the baseline value between target presentations, a minimum of 12 non-target stimuli was required between successive targets.

B. Pre-processing

The raw absorption data acquired by the fNIRS device were low-pass filtered and, using the modified Beer-Lambert law with a global baseline, relative changes of oxy-Hb and deoxy-Hb concentration were calculated [5]. General oxygenation values were obtained as the difference between the oxy-Hb and deoxy-Hb variations and then normalized with respect to the local pre-stimulus baseline. Target epochs were extracted with a target-locked window that included 2 s before the target and 12 s after it. Non-target epochs, instead, were extracted by locking similar windows to non-target stimuli selected in a quasi-random fashion.
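The target-locked epoch extraction just described can be sketched as follows. This is a minimal illustration: the function name and the synthetic data are ours, not from the authors' pipeline:

```python
import numpy as np

FS = 2.0                    # fNIRS sampling rate (Hz)
PRE_S, POST_S = 2.0, 12.0   # window: 2 s before to 12 s after the stimulus

def extract_epochs(oxygenation, stim_samples):
    """Cut a stimulus-locked window around each onset (given in samples).
    Epochs that would run past either end of the recording are skipped."""
    pre, post = int(PRE_S * FS), int(POST_S * FS)
    epochs = [oxygenation[s - pre:s + post]
              for s in stim_samples
              if s - pre >= 0 and s + post <= len(oxygenation)]
    return np.array(epochs)

# Example: 60 s of synthetic oxygenation data, targets at 20 s and 40 s.
sig = np.random.default_rng(1).standard_normal(int(60 * FS))
epochs = extract_epochs(sig, [int(20 * FS), int(40 * FS)])
# Each epoch spans 14 s, i.e. 28 samples at 2 Hz.
```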
The non-target stimuli were selected such that their distance from the next target was random, in order to minimize the effect of a target response appearing in a non-target response. To limit drifts in the signal, a quadratic detrending was applied to each epoch: a second-order polynomial was fitted and then subtracted from the waveform, and the residual was used for feature extraction. Averaged responses were obtained for each subject and each channel in the two stimulation conditions.

C. Feature extraction and statistical analysis

Possible features for the classification of target and non-target responses were identified by examining the time course of the oxygenation changes. The initial undershoot, occurring in the first 1-3 s, and the successive peak, occurring 6-10 s after the stimulus onset, are in agreement with the hemodynamic time courses recorded with fMRI [8] and are easily recognizable. In particular, the differences in amplitude and latency between the two aforementioned peaks were selected as classification features because of their stability with respect to drifts. Repeated measures ANOVAs with Geisser-Greenhouse correction were then performed on the selected features for each channel to investigate possible significant differences between the target and non-target conditions.

D. Classification

The most suitable features were identified on the basis of the statistical analysis results. The classification between target and non-target responses was then performed using the Fisher Linear Discriminant (FLD) [9]. This is a linear method that projects high-dimensional data onto a lower-dimensional (1D) space in such a way as to maximize the separability between groups in a least-squares sense. Let x be a set of n d-dimensional samples belonging to two possible classes. The element y = w^T x is the projection of x along the line specified by w. In the case of the FLD, w maximizes the ratio of between-class scatter to within-class scatter and is given by:
w = S^-1 (m1 − m2)    (1)

where S is the sample covariance matrix and mi is the sample mean of the i-th class. The optimal decision boundary is defined by the equation w^T x + w0 = 0, and the midpoint threshold w0 is computed as:

w0 = −(1/2) (m1 + m2)^T S^-1 (m1 − m2)    (2)
The classifier was validated using the leave-one-out method: each sample was in turn held out of the training set and used for validation. Considering the target response as the positive condition, the classifier performance was evaluated through its accuracy, sensitivity and specificity, calculated as follows:

Accuracy = (TP + TN) / TotalClassifications    (3)

Sensitivity = TP / (TP + FN)    (4)

Specificity = TN / (TN + FP)    (5)

where TP = True Positives, TN = True Negatives, FN = False Negatives, and FP = False Positives.
III. RESULTS AND DISCUSSION

The responses to targets and non-targets in one channel (channel 8) are presented in figure 1, which also depicts the two features selected for statistical investigation: the differences in amplitude and latency between the initial undershoot and the successive peak in the oxygenation waveform. The latency difference was not consistent across channels (Fig. 2.A). The difference in amplitude, instead, proved to be consistently higher in the target than in the non-target condition (Fig. 2.B), in agreement with fMRI studies [2].
Fig. 1: Differences in amplitude and latency
Fig. 2: Latencies and amplitude differences compared for the two stimulation conditions: Target (T) and Non-Target (NT)
Table 1: Results of the repeated measures ANOVA; for both the differences in latency and amplitude, the adjusted p-value and the degrees of freedom (DF) are shown.

CHANNEL   Δ LATENCY p   DF      Δ AMPLITUDE p   DF
1         0.46          (1,7)   0.37            (1,7)
2         0.91          (1,6)   0.62            (1,6)
3         0.94          (1,6)   0.10            (1,6)
4         0.30          (1,5)   0.19            (1,5)
5         0.50          (1,8)   0.49            (1,8)
6         0.62          (1,10)  0.38            (1,10)
7         0.88          (1,5)   0.05            (1,5)
8         0.08          (1,9)   0.07            (1,9)
9         0.50          (1,3)   0.33            (1,3)
10        0.90          (1,8)   0.13            (1,8)
11        0.55          (1,6)   0.78            (1,6)
12        0.65          (1,8)   0.01            (1,8)
13        0.60          (1,4)   0.14            (1,4)
14        1.00          (1,5)   0.14            (1,5)
15        0.52          (1,3)   0.10            (1,3)
16        0.68          (1,6)   0.19            (1,6)
Table 1 presents the results of the repeated measures ANOVAs; for each channel, both the adjusted p-value and the degrees of freedom for the subject factor are given. Given their statistical significance and their fairly high degrees of freedom, the amplitude differences in channels 3, 8, and 12 were chosen for the classification. The cortical areas monitored by these three channels approximately correspond to the left and right middle frontal gyri (channels 3 and 12, respectively) and to the left frontal pole. The distribution of the selected features in the three-dimensional space (Fig. 3) indicates a fairly linear separability between the two groups, justifying the use of a relatively simple discrimination algorithm.
Fig. 3: 3D scatterplot of the selected features (targets are represented by red dots and non-targets by blue dots)
Table 2: Classification results

Accuracy    Sensitivity    Specificity
67%         57%            87%
The results of the classification performed using the FLD and the leave-one-out validation for the chosen features were calculated. Considering the target response as the positive condition, the performance indexes are presented in Table 2. Though the overall performance of the classifier is reasonably satisfactory, the specificity was found to be higher than the sensitivity, meaning that the FLD recognizes non-targets more easily than targets.

IV. CONCLUSION

The study presented in this paper explores for the first time the applicability of fNIRS for the non-invasive and automatic detection of changes in frontal cortex activation elicited by rare stimuli. Although the number of subjects included in the research was limited, the results obtained in discriminating between responses to context stimuli (non-targets) and rare stimuli (targets) are overall encouraging and justify further investigation of the usefulness of the hemodynamic signal for this application. In particular, two different strategies will be implemented in order to improve the classification performance. First, different classifiers will be explored and their performances compared to that of the Fisher Linear Discriminant, the fairly simple classifier used in this preliminary study. Second, alternative features will be chosen for the classification; in particular, oxy-Hb and deoxy-Hb concentration changes will be analyzed and used for classification separately.

Future work will also investigate the suitability of fNIRS signals for the single-trial detection of responses to rare stimuli. This would prove useful in applications where feedback about the brain activation level is needed, such as brain-computer interfaces or settings in which attention lapses and cognitive slowing need to be monitored. To these purposes, in particular, the combination of features extracted from the hemodynamic activation
(fNIRS) and features extracted from the electrophysiological activation (EEG) will be explored; this is expected to yield higher performance and a more robust classification.
REFERENCES

1. Pessoa L, Kastner S, Ungerleider L G (2003) Neuroimaging Studies of Attention: From Modulation of Sensory Processing to Top-Down Control. J. Neurosci., 23:3990-3998.
2. McCarthy G, Luby M, Gore J, Goldman-Rakic P (1997) Infrequent events transiently activate human prefrontal and parietal cortex as measured by functional MRI. Journal of Neurophysiology, 77:1630-1634.
3. Hopfinger J B, Buonocore M H, Mangun G R (2000) The neural mechanisms of top-down attentional control. Nature Neuroscience, 3:284-291.
4. Izzetoglu K, Yurtsever G, Bozkurt A, Yazici B, Bunce S, Pourrezaei K, Onaral B (2003) NIR spectroscopy measurements of cognitive load elicited by GKT and target categorization. 36th Hawaii International Conference on System Sciences, the Augmented Cognition and Human-Robot Interaction: 6.
5. Villringer A, Planck J, Hock C, Schleinkofer L, Dirnagl U (1993) Near infrared spectroscopy (NIRS): a new tool to study hemodynamic changes during activation of brain function in human adults. Neuroscience Letters, 154:101-104.
6. Villringer A, Dirnagl U (1995) Coupling of brain activity and cerebral blood flow: basis of functional neuroimaging. Cerebrovascular and Brain Metabolism Reviews, 7:240-276.
7. Izzetoglu K, Bunce S, Izzetoglu M, Onaral B, Pourrezaei K (2004) Functional near-infrared neuroimaging. 26th Annual International Conference of IEEE EMBS, San Francisco (CA).
8. Logothetis N K, Wandell B A (2004) Interpreting the BOLD Signal. Annual Review of Physiology, 66:735-769.
9. Fisher R A (1936) The use of multiple measurements in taxonomic problems. Ann. Eugenics 7:179-188.

Author: Michele Butti
Institute: Department of Biomedical Engineering, Polytechnic University of Milan
Street: Piazza Leonardo da Vinci 32
City: 20133 Milan
Country: Italy
Email: [email protected]
Comparison of two hypoxic markers: pimonidazole and glucose transporter 1 (Glut-1)

A. Coer1,2, M. Legan1, D. Stiblar-Martincic1, M. Cemazar2,3, G. Sersa3

1 Institute for Histology and Embryology, Medical Faculty, University of Ljubljana, Ljubljana, Slovenia
2 College of HealthCare Isola, University of Primorska, Izola, Slovenia
3 Institute of Oncology, Zaloska 2, Ljubljana, Slovenia
Abstract− Tumour hypoxia occurs as a result of an inadequate supply of blood-borne oxygen due to the disorganized and chaotic vascular network that develops in tumours. There is a recognized need for a method of measuring tumour hypoxia that is suitable for widespread clinical use. This problem may be partially overcome by the use of the bioreductive hypoxia marker pimonidazole; however, because the drug must be administered prospectively, studies on archival material are not possible. The presence of hypoxia in tumours results in the overexpression of certain genes, which are controlled via the hypoxia inducible factor-1 (HIF-1). The HIF-1-regulated protein glucose transporter 1 (Glut-1) has recently been introduced as an intrinsic marker of hypoxia. The aim of the current study was to compare the expression of Glut-1 with the binding of the bioreductive drug hypoxia marker pimonidazole and to elucidate the characteristics and pitfalls when they are used as hypoxic markers. In the study, SA-1 solid subcutaneous tumours in A/J mice were treated with bleomycin given i.v. (1 mg/kg), the application of electric pulses (8 pulses, 1040 V, 100 μs, 1 Hz), or a combination of the two (electrochemotherapy). Pimonidazole was injected 16 hours before tumour excision. The tumours were excised at different time points (0.5, 1 and 2 hours) after therapy. Immunohistochemistry for Glut-1 and pimonidazole adduct was carried out on two consecutive tumour sections and the percentages of positively stained areas were determined. Glut-1 staining was membranous and typically expressed peri-necrotically, whereas pimonidazole, although showing substantial co-localisation with Glut-1, was cytoplasmatic. Our results show that Glut-1 expression significantly correlates with the level of pimonidazole binding.
Our study thus confirms that HIF-1-regulated genes, such as Glut-1, have potential for future use as predictors of the decreased sensitivity of tumours to radio- and chemotherapy mediated by hypoxia.

Keywords− hypoxic marker, glucose transporter-1, immunohistochemistry, pimonidazole.
I. INTRODUCTION
Measurements of oxygen partial pressure in tumours performed with microelectrodes have yielded novel information concerning the contribution of the tumour oxygenation status to the course of malignant growth and
showed the presence of hypoxic areas to be a universal characteristic of solid malignant tumours [1]. The presence of hypoxia in tumours is known to lead to resistance to radiotherapy and chemotherapy and is associated with a more aggressive phenotype with an increased propensity for metastases [2, 3]. Tumour hypoxia has an impact on such fundamental aspects of malignancy as a) cell survival and proliferation, b) angiogenesis, c) cancer cell invasiveness, d) metastasis, e) resistance to apoptosis, and f) genetic instability [4]. This is related to the increased expression of a number of proteins acting through the hypoxia inducible factor-1 (HIF-1) pathway, which allows the tumour to survive the harsh tumour microenvironment.

Since O2 microelectrode measurements are invasive and applicable only to tumour entities accessible to needle electrodes, there is great interest in substitute methods for assessing the oxygenation status. Bioreductive drug markers, such as pimonidazole, provide an alternative approach for assessing the level and extent of tumour hypoxia. Pimonidazole, administered approximately 16 h prior to tumour excision, is reductively activated in an oxygen-dependent manner and covalently binds to thiol-containing proteins in hypoxic cells, forming intracellular adducts that can be detected immunohistochemically [5, 6]. A clinical comparative study between microelectrode measurements of pO2 and the extent of pimonidazole adduct formation in carcinoma of the cervix has recently been done [7]. However, because the drug must be administered prospectively, pimonidazole cannot be used in studies that necessitate archival material. There is thus an increasing need for an endogenous marker of hypoxia. Endogenous markers would have additional advantages, since they do not require the administration of a foreign substance and would allow the oxygenation status to be studied in archival paraffin material [8].
Glucose transporter 1 (Glut-1) is one of the proteins upregulated under hypoxic conditions. In a tumour microenvironment, hypoxia results in an increased transcription of the Glut-1 gene, mediated through HIF-1. Tumours show increased uptake of glucose compared to normal tissue, and Glut-1 is responsible for the passive
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 465–468, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
transport of glucose across the cell membrane [9]. Glut-1 over-expression has been associated with enhanced tumour aggressiveness and an unfavourable clinical outcome in various tumour types [4, 10]. It has been suggested that Glut-1 might represent an intrinsic marker of hypoxia. The aim of our study was to compare the expression of Glut-1 with the binding of the bioreductive drug hypoxia marker pimonidazole and to elucidate the characteristics and pitfalls when each is used as a hypoxic marker.

II. MATERIAL AND METHODS
Murine fibrosarcoma SA-1 cells and A/J mice were used for this study. Solid subcutaneous tumours, located dorsolaterally in the mice, were initiated by an injection of 5x10^5 SA-1 cells in 0.1 ml 0.9% NaCl solution. Six to eight days after implantation, when the tumours reached approximately 40 mm3 in volume, the mice were randomly divided into experimental groups. In the first group, bleomycin at a dose of 1 mg/kg was injected intravenously. In the second group, eight square electric pulses (at a voltage-to-distance ratio of 1300 V/cm) were delivered by two flat electrodes 8 mm apart. In the electrochemotherapy group, the mice were treated with electric pulses 3 minutes after the bleomycin injection. Untreated tumours were used as controls. At different time points (0.5, 1 and 2 hours) after treatment with bleomycin, application of electric pulses or electrochemotherapy, the tumours were excised. Immunohistochemistry for Glut-1 and pimonidazole adduct was carried out on two consecutive sections and the percentage of positively stained area was determined.

Pimonidazole HCl (Hypoxyprobe-1, Natural Pharmacia International Inc.) was administered to the mice at a single dose of 100 mg/kg 16 h before sacrifice and excision. Tumour tissue specimens were formalin-fixed and paraffin-embedded. Immunohistochemical staining was carried out using the avidin-biotin peroxidase complex method. For pimonidazole evaluation, a mouse monoclonal antibody raised against intracellular pimonidazole adducts (Natural Pharmacia International Inc.) was used at a dilution of 1/100. Immunostaining for Glut-1 expression was carried out using a purified rabbit anti-Glut-1 antibody (Alpha Diagnostic International) at a dilution of 1/100. A semi-quantitative scoring system was applied to the Glut-1 and pimonidazole adduct stained sections: each microscopic field was assigned a score of 0-4, representative of the approximate area of immunostaining (0, 0%; 1, 0-5%; 2, 5-15%; 3, 15-30%; 4, >30%).
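The semi-quantitative scoring just described, together with a Spearman rank correlation such as the one used to compare the two markers, can be sketched as below. The helper names and the example values are ours; in practice scipy.stats.spearmanr would also supply the p-value needed for the significance test:

```python
import numpy as np

def area_score(pct):
    """Score for the immunostained area of one microscopic field:
    0: 0%; 1: 0-5%; 2: 5-15%; 3: 15-30%; 4: >30%."""
    if pct <= 0:
        return 0
    if pct <= 5:
        return 1
    if pct <= 15:
        return 2
    if pct <= 30:
        return 3
    return 4

def _ranks(v):
    """1-based average ranks (ties share the mean of their rank positions)."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(v)
    ranks = np.empty(len(v))
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(_ranks(a), _ranks(b))[0, 1])

# Per-field staining percentages -> scores (thresholds from the paper).
scores = [area_score(p) for p in (0, 3, 12, 22, 40)]  # -> [0, 1, 2, 3, 4]

# Hypothetical per-tumour average scores for the two markers:
glut1 = [2.1, 3.0, 1.2, 3.6, 2.4, 0.8]
pimo = [1.8, 3.2, 1.0, 3.9, 2.0, 1.1]
rho = spearman(glut1, pimo)
```

Averaging the per-field scores per tumour and then rank-correlating the two markers' averages mirrors the analysis described in the next paragraph.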
Areas of necrosis, stroma, normal epithelium and distinct edge effects were ignored. The overall score used in our study to summarise pimonidazole binding or Glut-1 expression across the tumour was derived as the average score over all fields. Two-tailed Spearman's rank correlations were used to assess the relationship between the hypoxia markers. A significance level of p=0.05 was used.

III. RESULTS
Glut-1 staining was membranous and typically expressed perinecrotically (Fig. 1), whereas pimonidazole, although showing substantial co-localisation with Glut-1, was cytoplasmatic (Fig. 2). Electrochemotherapy with bleomycin was very effective in the treatment of subcutaneous SA-1 tumours, resulting in substantial tumour growth delay and even a high percentage of tumour cures, compared to untreated tumours and tumours treated with bleomycin or application of electric pulses only (Fig. 3). For hypoxia marker analysis, two consecutive sections from each tumour were analysed for Glut-1 expression and pimonidazole binding.
Fig. 1. Immunohistochemical expression pattern of Glut-1 is membranous.
Fig. 2. Immunohistochemical staining of pimonidazole is cytoplasmatic.
Fig. 3. Glut-1 expression in representative tumour sections two hours after treatment with bleomycin, application of electric pulses or electrochemotherapy.
With both hypoxic markers, the onset of tumour hypoxia was immediate, reaching the highest value 2 h after the application of electric pulses or electrochemotherapy. The total area of staining varied between the two markers. However, there were significant correlations between the Glut-1 and pimonidazole scores (p=0.028).

IV. DISCUSSION
Tumour hypoxia confers a poor prognosis in a wide range of solid tumours, owing to increased malignancy and an increased likelihood of metastasis. Poor availability of molecular oxygen and the metabolic changes occurring under these conditions also confer resistance of tumours to both chemo- and radiotherapy, leading to treatment failure [11]. Chronically hypoxic cells tend to be out of cycle and resistant to cell-cycle-phase-specific drugs. The individualization of therapy depends upon a reliable and clinically feasible means of detecting the heterogeneity that exists among tumours [12]. Currently, the two methods most widely used to measure tumour oxygenation are the Eppendorf polarographic oxygen electrode and the luminescence-based optical sensor OxyLite [13]. However, these are invasive methods, and this assessment of oxygenation does not distinguish between levels of O2 in areas of necrosis and areas of viable tumour tissue. An alternative method is the immunohistochemical assessment of the binding of pimonidazole injected prior to biopsy [14]. The advantage of the immunohistochemical marker approach is that the same or contiguous formalin-fixed tissue sections may be examined for relationships between hypoxia and other physiological parameters for which immunohistochemical assays exist. In addition, histological sections permit study of the geographical distribution of hypoxia and of microenvironmental factors such as blood vessels, areas of cell proliferation, and angiogenesis [15]. However, this approach requires an additional intervention and, as yet, there are only limited data on its relationship with treatment outcome. Further, pimonidazole is administered 16 to 24 hours before tumour biopsy. Taken together, pimonidazole binding is likely to reflect the presence of more long-term or chronic hypoxia than is measured with the oxygen electrode. Put another way, pimonidazole may not indicate the level of acute hypoxia. The discovery that tumours metabolise sugars at an increased rate relative to normal tissue was made over 70 years ago. Glut-1 is one of the glucose transporters located in the cell membrane. We now know that increased expression of Glut-1
A. Coer, M. Legan, D. Stiblar-Martincic, M. Cemazar, G. Sersa
is not only found in a wide variety of tumour types, but invariably predicts a poor prognosis [16, 17]. It has been suggested that Glut-1 might represent an intrinsic marker of hypoxia [12]. Two recent studies have shown that levels of Glut-1 expression in carcinoma of the cervix correlate with the level of tumour hypoxia measured using either polarographic needle electrodes [18] or pimonidazole staining [12]. There is a fundamental drawback, in that Glut-1 expression does not exclusively correspond to the tumour oxygenation status. Rather, its expression in tumour cells might reflect the activation of other oncogenic pathways, independent of hypoxia. In fact, two recent reports comparing oxygen tension in cervical cancers with HIF-1α and Glut-1 expression provided strong evidence for regulation of these pathways independently of hypoxia [9, 19]. However, Kunkel and coworkers recently confirmed the value of Glut-1 expression as a predictive marker for radio-resistant squamous cell carcinoma of the oral cavity. In addition to providing prognostic information, their data suggest that modulation of radiation resistance by inhibition of glucose transport in the tumour may be a novel strategy to improve the effectiveness of radiotherapy [20]. The advantage of using intrinsic markers of hypoxia, such as Glut-1, is that the approach is simple and quick, and could potentially be applied to a wide variety of solid tumour types.
V. CONCLUSIONS
The correlations between the hypoxia markers Glut-1 and pimonidazole used in this study confirm that intrinsic markers of hypoxia such as Glut-1 are a reliable means of evaluating tumour hypoxia, which will continue to be useful in future investigations involving archival material from a range of sources.

REFERENCES

1. Sersa G, Krzic M, Sentjurc M, et al. (2002) Reduced blood flow and oxygenation in SA-1 tumours after electrochemotherapy with cisplatin. Br J Cancer 87:1047-1054
2. Brizel DM, Sibley GS, Prosnitz LR, et al. (1997) Tumor hypoxia adversely affects the prognosis of carcinoma of the head and neck. Int J Radiat Oncol Biol Phys 38:285-289
3. Fyles AW, Milosevic M, Wong R, et al. (1998) Oxygenation predicts radiation response and survival in patients with cervix cancer. Radiother Oncol 48:149-156
4. Mayer A, Höckel M, Wree A, Vaupel P (2004) Lack of correlation between expression of HIF-1α protein and oxygenation status in identical tissue areas of squamous cell carcinomas of the uterine cervix. Cancer Res 64:5876-5881
5. Varia MA, Calkins-Adams DP, Rinker LH (1998) Pimonidazole: a novel hypoxia marker for complementary study of tumor hypoxia and cell proliferation in cervical carcinoma. Gynecol Oncol 71:270-277
6. Raleigh JA, Chou SC, Bono EL, Thrall DE, Varia MA (2001) Semiquantitative immunohistochemical analysis for hypoxia in human tumours. Int J Radiat Oncol Biol Phys 49:569-574
7. Nordsmark M, Loncaster J, Chou SC, et al. (2001) Invasive oxygen measurements and pimonidazole labelling in human cervix carcinoma. Int J Radiat Oncol Biol Phys 49:581-586
8. Sakata K, Someya M, Nagakura H, et al. (2006) A clinical study of hypoxia using endogenous hypoxic markers and polarographic oxygen electrodes. Strahlenther Onkol 182:511-517
9. Chen C, Pore N, Behrooz A, Ismail-Beigi F, Maity A (2001) Regulation of glut-1 mRNA by hypoxia-inducible factor-1. J Biol Chem 276:9519-9525
10. Cooper R, Sarioglu S, Sökmen S, et al. (2003) Glucose transporter-1 (GLUT-1): a potential marker of prognosis in rectal carcinoma. Br J Cancer 89:870-876
11. Airley RE, Philips RM, Evans AE, et al. (2005) Hypoxia-regulated glucose transporter Glut-1 may influence chemosensitivity to some alkylating agents: results of EORTC (first translational award) study of the relevance of tumour hypoxia to the outcome of chemotherapy in human tumour-derived xenografts. Int J Oncol 26:1477-1484
12. Airley RE, Loncaster J, Raleigh JA, et al. (2003) Glut-1 and CAIX as intrinsic markers of hypoxia in carcinoma of the cervix: relationship to pimonidazole binding. Int J Cancer 104:85-91
13. Jarm T, Serša G, Miklavčič D (2002) Oxygenation and blood flow in tumours treated with hydralazine: evaluation with a novel luminescence-based fiber-optic sensor. Technol Health Care 10:363-380
14. Kennedy AS, Raleigh JA, Perez GM, et al. (1997) Proliferation and hypoxia in human squamous cell carcinoma of the cervix: first report of combined immunohistochemical assays. Int J Radiat Oncol Biol Phys 37:897-905
15. Olive PL, Durand RE, Raleigh JA, Luo C, Aquino-Parsons C (2000) Comparison between the comet assay and pimonidazole binding for measuring tumour hypoxia. Br J Cancer 83:1525-1531
16. Heber RS, Rathan A, Weiser KR, et al. (1998) Glut-1 glucose transporter expression in colorectal carcinoma. Cancer 83:34-40
17. Cartiana G, Fagotti G, Megathaes A, et al. (2001) Glut-1 expression in ovarian carcinoma: association with survival and response to chemotherapy. Cancer 92:1144-1150
18. Airley RE, Loncaster J, Davidson S, et al. (2001) Glucose transporter Glut-1 expression correlates with tumour hypoxia and predicts metastasis-free survival in advanced carcinoma of the cervix. Clin Cancer Res 7:928-934
19. Mayer A, Höckel M, Vaupel P (2006) Endogenous hypoxia markers in locally advanced cancers of the uterine cervix: reality or wishful thinking? Strahlenther Onkol 182:501-510
20. Kunkel M, Moergel M, Stockinger M, et al. (2006) Overexpression of Glut-1 is associated with resistance to radiotherapy and adverse prognosis in squamous cell carcinoma of the oral cavity. Oral Oncol, DOI 10.1016/j.oraloncology.2006.10.009

Andrej Coer
Institute for Histology and Embryology
Medical Faculty
Korytkova 2
1000 Ljubljana, Slovenia
[email protected]
Effects of vinblastine on blood flow of solid tumours in mice

S. Kranjc1, T. Jarm2, M. Cemazar1, G. Sersa1, A. Secerov1, M. Auersperg1

1 Institute of Oncology Ljubljana, Ljubljana, Slovenia
2 Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
Abstract— The aim of our study was to determine the effects of low doses of vinblastine (VLB) on tumor growth and the time course of blood flow changes in solid tumors of mice. Two murine tumors were used in the study: EAT carcinoma and SA-1 fibrosarcoma, syngeneic to CBA and A/J mice, respectively. Mice were treated with an intraperitoneal injection of two different VLB doses when the tumors reached 6 mm in diameter. Antitumor effectiveness was determined by a tumor growth delay assay. Tumor blood perfusion changes were assessed by means of laser Doppler flowmetry. The antitumor effectiveness of both doses in SA-1 tumors was minimal, resulting in a tumor growth delay of 2.0 days. The antitumor effectiveness in EAT tumors was also minimal, but dose dependent; the lower dose induced a 1.1-day tumor growth delay, while the higher dose resulted in a 2.1-day tumor growth delay. The higher dose of 50 μg VLB produced a statistically significant and highly reproducible decrease in blood flow. The maximum effect was reached 1.5 hours after the treatment. The lower dose did not significantly affect tumor blood perfusion. Our data demonstrate that VLB, at doses that minimally affect tumor growth, has a significant effect on tumor blood perfusion. Keywords— vinblastine, experimental tumors, tumor blood flow, mice
I. INTRODUCTION

Knowledge of tumor physiology is important for understanding tumor growth as well as for rational planning of tumor treatment [1]. The vascular supply and tumor oxygenation are especially important for tumor growth [2]. A reduction in tumor blood flow can lead to an increase in hypoxia and extracellular acidification [2, 3]. Additionally, if blood flow is chronically impaired, a cascade of tumor cell death will occur, owing to the lack of nutrients and the accumulation of catabolic products [4]. Tumor vasculature has therefore become a potential target for cancer treatment. Two approaches have become feasible: anti-angiogenic therapy, hindering the neovascularization of the tumor tissue [5], and vascular targeted therapy, affecting the existing vascularization of the tumors [6]. Many anticancer agents and therapies in current use have been shown to exert their anti-tumor action, to some extent, as a direct consequence of compromising vascular function [2]. Even if these therapies induce only a transient reduction in tumor blood flow, this can be exploited in combination with bioreductive agents or in optimal combination with other therapies [7]. These agents include hyperthermia, photodynamic therapy, high-energy shock waves, cytokines such as tumor necrosis factor-α (TNF-α) and interleukin-1α (IL-1α), drugs such as hydralazine, serotonin, flavone acetic acid, vinca alkaloids and combretastatin, and the application of high-voltage electric pulses [2, 8]. However, the potential of modifying tumor blood flow in the clinic with many of these agents is limited by several factors, including unacceptable toxicity. Nevertheless, studies in experimental systems with such agents have demonstrated that blood flow effects can be exploited to improve therapeutic outcome. The application of these agents in combination with bioreductive drugs or other treatment approaches that are effective at lower oxygen pressure has already been demonstrated [9, 10]. Therefore, quantitative data on the effects of drugs on the vascular system and on oxygenation under in vivo conditions should be acquired, with the intention of translating their use into the clinical setting. VLB, a vinca alkaloid, is a chemotherapeutic drug used in the combined treatment of testis tumors, Hodgkin's and non-Hodgkin's lymphomas, breast carcinomas, gastric carcinomas, squamous cell carcinomas, and many others [11-13]. The cytotoxic action of VLB is predominantly through interference with the polymerization of tubulin and the induction of mitotic cell death [14]. A cytotoxic effect of VLB in interphase has been described with VLB doses higher than those that induce mitotic arrest [15]. It has been shown that VLB compromises tumor vasculature to some extent, which contributes to its anti-tumor effectiveness [16-18]. In addition to its cytotoxic effects, VLB increases cell membrane fluidity, which can be exploited for increased drug delivery into cells [19, 20].
The aim of our study was to determine the effects of low doses of VLB on tumor growth and time course of blood flow changes in solid tumors of mice. II. MATERIALS AND METHODS A. Drug VLB (Vinblastine sulphate, Lilly France S.A.) was dissolved in sterile water at a concentration of 2 mg/ml. The
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 469–472, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
final VLB solutions, at concentrations of 25 and 50 μg/0.5 ml in 0.9% NaCl, were injected intraperitoneally.

B. Animals, tumor models

Animals: Inbred strains of A/J and CBA mice of both sexes were used in the experiments, purchased from the Institute of Pathology, Medical Faculty, University of Ljubljana, Slovenia. The mice were kept in a constant-temperature environment of 21°C with a natural day/night light cycle in a conventional animal colony. Tumors: The fibrosarcoma SA-1 tumor (Jackson Laboratory, Bar Harbor, ME, USA), syngeneic to A/J mice, and the Ehrlich ascites carcinoma (EAT), syngeneic to CBA mice, were used in the study. SA-1 and EAT tumor cells were obtained from the ascitic form of the tumors in mice, serially transplanted every 3 days. Solid subcutaneous tumors, located on the right flank of the mice, were initiated by injection of 5×10⁵ SA-1 cells and 4×10⁶ EAT cells in 0.1 ml of 0.9% NaCl solution. The mice were marked, divided randomly into different experimental groups and subjected to a specific experimental protocol when the tumors reached approximately 40 mm³ in volume (6–8 days). Animal studies were carried out according to the guidelines of the Ministry of Agriculture, Forestry and Food of the Republic of Slovenia (permission #: 323-02-170/2004/2).

C. Laser Doppler flowmetry

Relative blood perfusion was monitored using an OxyFlo 2000 laser Doppler flowmeter and an OxyData data acquisition unit (Oxford Optronix Ltd., Oxford, U.K.). The signals were sampled and stored at a frequency of 20 Hz. Laser Doppler flowmetry (LDF) measures the spread of the wavelengths of photons emitted by a coherent laser source when the photons scatter on moving red blood cells in capillaries. The distribution of photon wavelengths is used to calculate the relative microcirculation in the tissue. Even though LDF can be applied entirely noninvasively, we used thin invasive probes (diameter 200 μm) in order to assess the perfusion inside the tumor [21].
The mice were anesthetized using isoflurane (Flurane-Isoflurane, Abbot Labs, U.K.) delivered to the mice in a mixture of O2 and N2O (flow of each 0.6 l/min) at 3.0% and 1.7% concentrations respectively for induction and maintenance of anesthesia. The mice were kept on an automatically regulated heating pad to prevent hypothermia. Rectal temperature was kept as close as possible to 37°C with the contact surface temperature of the heating pad always below 39°C. Approximately 4 minutes after the induction of anesthesia, the data acquisition was started and a probe was inserted into the tumor through a small superficial incision in
the skin, pushed a few millimeters into the tumor and then slightly withdrawn in order to minimize the pressure of the tip of the probe on the surrounding tissue. After at least 15 min of stable blood flow recording, the treatment was started. Injection of VLB was performed slowly, over a period of one minute, via the i.p. route. Blood flow was monitored for up to two hours after the injection. Special care was taken to minimize all movements of the probes and the mouse in order to keep movement artifacts in the recorded signals at a minimum. After the recording, the inevitable respiratory movement artifact was removed from the raw signals using a special filter and the sampling frequency was reduced to 1 Hz. The baseline blood flow was then estimated for each tumor individually within a data window of length 20 s, which was moved along the signal in 5 min increments. These values were normalized to the pretreatment value for each tumor and the average blood flow was evaluated over all tumors.

D. Treatment evaluation

Tumor growth was followed by measuring three mutually orthogonal tumor diameters with a vernier caliper on each consecutive day, and the volumes were calculated according to the formula for the volume of an ellipsoid. From the tumor volumes, the arithmetic mean (AM) and SEM were calculated for each experimental group. The tumor doubling time (DT) was determined for each individual tumor. Tumor growth delay was calculated for each tumor by subtracting the mean DT of the control group from the DT of that tumor, and then averaged for each experimental group.

III. RESULTS

A. Antitumor effectiveness

The anti-tumor effectiveness of single i.p. treatments was tested on subcutaneous SA-1 and EAT tumors in mice for

Fig. 1 Antitumor effectiveness of VLB in SA-1 tumors (tumor volume in mm³ vs. time in days; groups: control, VLB 25 μg/mouse, VLB 50 μg/mouse)
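The treatment-evaluation calculations of section II.D can be sketched as follows. This is a minimal illustration with hypothetical volumes, not the study data; the log-linear interpolation in `doubling_time` is one reasonable reading of how DT can be obtained from daily measurements.

```python
# Illustrative sketch (hypothetical numbers): tumor volume from three
# orthogonal diameters via the ellipsoid formula, per-tumor doubling time
# by log-linear interpolation, and growth delay relative to the mean
# doubling time of the control group.
import math

def ellipsoid_volume(a: float, b: float, c: float) -> float:
    """V = (pi/6) * a * b * c for orthogonal diameters a, b, c (mm -> mm^3)."""
    return math.pi / 6.0 * a * b * c

def doubling_time(days, volumes):
    """First time at which volume reaches twice the initial volume,
    interpolating on log(volume), which is linear for exponential growth."""
    target = 2.0 * volumes[0]
    for (t0, v0), (t1, v1) in zip(zip(days, volumes),
                                  zip(days[1:], volumes[1:])):
        if v0 < target <= v1:
            f = (math.log(target) - math.log(v0)) / (math.log(v1) - math.log(v0))
            return t0 + f * (t1 - t0)
    return None  # tumor never doubled within the observation period

# Hypothetical daily volumes (mm^3) for two control tumors and one treated
days = [0, 2, 4, 6]
control_dts = [doubling_time(days, [40, 90, 190, 400]),
               doubling_time(days, [40, 75, 160, 350])]
treated_dt = doubling_time(days, [40, 55, 95, 170])

mean_control_dt = sum(control_dts) / len(control_dts)
growth_delay = treated_dt - mean_control_dt  # days
```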
two different VLB doses (Figures 1 and 2). The antitumor effectiveness of both doses in SA-1 tumors was minimal, resulting in a tumor growth delay of 2.0 days. The antitumor effectiveness in EAT tumors was dose dependent: the lower dose induced a 1.1-day tumor growth delay, while the higher dose resulted in a 2.1-day tumor growth delay. None of the VLB doses tested had side effects and all were well tolerated by the animals.

B. Relative tumor blood perfusion measurement

Figure 3 presents the time course of blood flow changes in tumors measured by LDF and evaluated at 5 min intervals. Initially, the i.p. injection induced a transient but significant decrease in blood flow 5 min after the injection in all three experimental groups (p<0.02). After that, blood flow continued to decrease in the VLB 50 μg group and reached the lowest plateau value ~1.5 h after the treatment (~66% of the pretreatment value, p<0.001). Blood flow changes in the control and VLB 25 μg-treated tumors followed the same pattern. In both groups blood flow increased above the pretreatment value, with the maximum values (less than 120% of the pretreatment value; p>0.05) being reached ~30 to 60 min after the injection. Later, the blood flow slowly decreased and reached the pretreatment values by the end of the observation period. The difference between the groups became significant ~25 min after the injection (p<0.001).

Fig. 2 Antitumor effectiveness of VLB in EAT tumors (tumor volume in mm³ vs. time in days; groups: control, 25 μg VLB/mouse, 50 μg VLB/mouse)

Fig. 3 Blood flow changes in SA-1 tumors after treatment measured by means of LDF

IV. DISCUSSION

The results of the present study are consistent with the results of previous studies demonstrating that VLB causes a dramatic and prolonged decrease in blood flow in tumors [16, 17]. In our study, VLB at a dose of 25 μg did not induce perceivable changes in tumor blood flow within 2 hours after the injection. At the higher dose of 50 μg, however, VLB produced a statistically significant and highly reproducible decrease in blood flow. The maximum effect was reached 1.5 hours after the treatment. The data in Figure 3 indicate that this decrease might last considerably longer than two hours. However, based on the very similar pattern of tumor growth after treatment with both doses (Figure 1), this decrease, which was completely absent in tumors treated with the lower dose of VLB, probably has very little to do with the observed tumor growth delay. Vinca alkaloids, in addition to their anti-tumor action by binding to the intracellular protein tubulin, exert their anti-tumor action in part via impairment of blood flow [16]. In that study, however, the doses shown to induce hemorrhagic necrosis of tumors were close to the maximal tolerated doses: doses that are higher than those used clinically and that cause significant toxicity [16]. This study shows that doses lower than the maximal tolerated dose also have an anti-vascular effect. However, it was also shown that a VLB dose of 25 μg/mouse had no effect on tumor blood flow. Based on the data on tumor blood flow, it would be expected that doses lower than 25 μg/mouse would produce no oxygen concentration changes in the tumors, and even less probably in normal tissues. A few studies have demonstrated that conventional chemotherapeutic drugs can also have a vascular component of damage, and have indicated that this is associated with vascular complications in patients. Other chemotherapeutic drugs (cisplatin and bleomycin) whose mechanisms include vascular effects have toxicities similar to some of those seen with VLB.
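The blood flow evaluation underlying Figure 3 (20 s windows every 5 min on the 1 Hz signal, normalised to the pretreatment value) can be sketched as follows. The recording itself is simulated here, and the window and step handling are assumptions about details the paper does not spell out.

```python
# Hypothetical LDF post-processing sketch: estimate blood flow in 20 s
# windows spaced 5 min apart along a 1 Hz signal, then normalise each
# tumor's series to its pre-treatment (first-window) value.
def windowed_flow(signal_1hz, window_s=20, step_s=300):
    """Mean flow in each 20 s window, windows moved in 5 min steps."""
    means = []
    for start in range(0, len(signal_1hz) - window_s + 1, step_s):
        window = signal_1hz[start:start + window_s]
        means.append(sum(window) / len(window))
    return means

def normalise(series):
    """Express each value as a fraction of the pre-treatment value."""
    baseline = series[0]
    return [v / baseline for v in series]

# Simulated 1 Hz recording: 15 min stable pre-treatment flow (~100 units),
# then a slow decrease after injection, as seen in the 50 ug group.
signal = [100.0] * 900 + [100.0 - 0.005 * t for t in range(6300)]
flows = normalise(windowed_flow(signal))
```

Normalising each tumor to its own baseline, as the Methods describe, removes the large between-animal variability of raw LDF units before group averaging.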
Importantly, VLB induces reduced oxygenation not only in tumors but also in normal tissues; therefore, vascular episodes that have been described after or during VLB treatment may be ascribed to reduced heart oxygenation. In a series of patients with locally advanced (T4) thyroid carcinomas treated with infusion of
very low doses of VLB (2 mg over 12 or 24 h), five times lower than the standard bolus dose, transient angina-like chest pain was observed in some patients. No changes in ECG or "cardiac enzymes" could be detected in these patients [16]. Furthermore, these complications are to be expected not only in patients with already known vascular disease, but also in patients without a vascular history. In this connection, it has to be considered that with a decreased drug dosage that still affects tumor blood flow, less toxicity to normal tissue could be expected, since VLB exerts its blood flow-modifying effect selectively on tumors compared with normal tissues. VLB is often used in combined modality protocols. Based on the results of this and other studies, the modulation of tissue oxygenation could be exploited in combined regimes using drugs that are more effective at reduced oxygen tension. Alternatively, it may be used in combination with other therapies that also have an anti-vascular effect, such as TNF-α or other vasoactive drugs [2]. Special attention should be given to combinations in which reduced oxygenation would not provide a good therapeutic effect, especially combination with radiotherapy [1].

V. CONCLUSIONS

Our data demonstrate that VLB, at doses that minimally affect tumor growth, has a significant effect on tumor blood perfusion.
ACKNOWLEDGMENT

The authors acknowledge the financial support from the state budget through the Slovenian Research Agency (projects No. P3-0003, J3-7044 and J3-6260).

REFERENCES

1. Brown JM, Giaccia AJ (1998) The unique physiology of solid tumors: opportunities (and problems) for cancer therapy. Cancer Res 58:1408-1416
2. Chaplin DJ, Hill SA, Bell KM, Tozer GH (1998) Modification of tumor blood flow: current status and future directions. Semin Radiat Oncol 8:151-163
3. Tannock IF, Rotin D (1989) Acid pH in tumors and its potential for therapeutic exploitation. Cancer Res 49:4373-4384
4. Denekamp J, Hill SA, Hobson B (1983) Vascular occlusion and tumour cell death. Eur J Cancer Clin Oncol 19:271-275
5. Boehm-Viswanathan T (2000) Is angiogenesis inhibition the Holy Grail of cancer therapy? Curr Opin Oncol 12:89-94
6. Chaplin DJ, Dougherty GJ (1999) Tumor vasculature as a target for cancer therapy. Br J Cancer 80 Suppl 1:57-64
7. Brown JM (1987) Exploitation of bioreductive agents with vasoactive drugs. In: Proceedings of the Eighth International Congress on Radiation Research, Edinburgh, UK, vol 2. London, UK, 1987, pp 719-724
8. Sersa G, Cemazar M, Parkins CS, Chaplin DJ (1999) Tumour blood flow changes induced by application of electric pulses. Eur J Cancer 35:672-677
9. Chaplin DJ (1991) The effect of therapy on tumor vascular function. Int J Radiat Biol 60:311-325
10. Stratford IJ, Adams GE, Godden J, Nolan J, Howells N, Timpson N (1988) Potentiation of the anti-tumour effect of melphalan by the vasoactive agent, hydralazine. Br J Cancer 58:122-127
11. Haskell CM (1990) Drugs used in cancer chemotherapy. In: Haskell CM (ed) Cancer Treatment. WB Saunders Company, pp 69-70
12. Auersperg M, Soba E, Vraspir-Porenta O (1977) Intravenous chemotherapy with synchronization in advanced cancer of oral cavity and oropharynx. Z Krebsforsch 90:149-159
13. Auersperg M, Us-Krasevec M, Lamovec J, Erjavec M, Benulic T, Vraspir-Porenta O (1989) Chemotherapy - a new approach to the treatment of verrucous carcinoma. Radiol Jugosl 23:387-392
14. Jordan MA, Thrower D, Wilson L (1991) Mechanism of inhibition of cell proliferation by Vinca alkaloids. Cancer Res 51:2212-2222
15. Madoc-Jones H, Mauro F (1972) Interphase action of vinblastine and vincristine: differences in their lethal action through the mitotic cycle of cultured mammalian cells. J Cell Physiol 72:185-196
16. Hill SA, Lonergan SJ, Denekamp J, Chaplin DJ (1993) Vinca alkaloids: anti-vascular effects in a murine tumour. Eur J Cancer 29A:1320-1324
17. Hill SA, Sampson LE, Chaplin DJ (1995) Anti-vascular approaches to solid tumour therapy: evaluation of vinblastine and flavone acetic acid. Int J Cancer 63:119-123
18. Sersa G, Krzic M, Sentjurc M, Ivanusa T, Beravs K, Cemazar M, Auersperg M, Swartz HM (2001) Reduced tumor oxygenation by treatment with vinblastine. Cancer Res 61:4266-4271
19. Sentjurc M, Zorec M, Cemazar M, Auersperg M, Sersa G (1998) Effect of vinblastine on cell membrane fluidity in vinblastine-sensitive and -resistant HeLa cells. Cancer Lett 130:183-190
20. Cemazar M, Auersperg M, Sersa G (2000) Antitumor effectiveness of bleomycin on SA-1 tumor after pretreatment with vinblastine. Radiol Oncol 34:49-57
21. Jarm T, Sersa G, Miklavcic D (2002) Oxygenation and blood flow in tumors treated with hydralazine: evaluation with a novel luminescence-based fiber-optic sensor. Technol Health Care 10:363-380

Author: Maja Cemazar
Institute: Institute of Oncology Ljubljana
Street: Zaloska 2
City: Ljubljana
Country: Slovenia
Email:
[email protected]
Measuring Tumor Oxygenation by Electron Paramagnetic Resonance Oximetry in vivo

Z. Abramovic1, M. Sentjurc1 and J. Kristl2

1 Jozef Stefan Institute, Laboratory of Biophysics, Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Pharmacy, Ljubljana, Slovenia
Abstract— The efficacy of radiation therapy and of combined therapies of tumors has been found to depend considerably on tumor oxygenation. It is therefore of great importance to measure the oxygen content in tumors before and during treatment and to modify the oxygen level in tumors with respect to the therapy used. In this work we applied a new approach for increasing tumor oxygenation in order to increase the efficacy of radiotherapy. For this purpose a vasodilator, benzyl nicotinate (BN), was applied dermally onto the skin over subcutaneously grown radiation-induced fibrosarcoma (RIF-1). The oxygen content in two subcutaneous tumor models during tumor growth, and after application of benzyl nicotinate to the RIF-1 tumor, was measured by electron paramagnetic resonance (EPR) spectroscopy in vivo. It was found that the oxygen content in tumors decreases with the time of tumor growth in all tumor types, irrespective of the initial vascularization of the tumors. BN significantly increases the oxygen content in tumors during the first four days of repeated application, but becomes inefficient after that. The maximal increase was observed between 20 and 30 min after application of the drug. We conclude that dermal application of BN might improve the efficacy of radiotherapy. The optimal time for radiotherapy would be 30 min after application of BN for this type of tumor. Keywords— oximetry, tumors, EPR in vivo, vasoactive compound, skin application
I. INTRODUCTION The oxygen concentration in tumors is considered to be one of the most important factors that can in many cases affect the response of tumors to ionizing radiation therapy [1]. The degree of hypoxia in tumors is also strongly correlated with prognosis [2]. Oxygen deficient or hypoxic cells are resistant to ionizing radiation and therefore the presence of hypoxic cells in malignant tumors could be an obstacle to effective radiotherapy of human malignancies [3]. This suggests that treatment strategies aiming at improving tumor oxygenation could prove highly beneficial in attempts to improve the radiation therapy of carcinomas. There are several approaches by which the oxygen level in tumors can be improved. They are based on systemic application of blood flow modifying agents, increasing the oxygen content in the breathing gas or amount of hemoglobin
available to transport oxygen [2, 4]. However, to our knowledge there have been no attempts to increase the oxygen level in tumors by dermal application of a vasodilating substance. Information about the variation of tumor pO2 (partial pressure of oxygen) during such treatments can be used as a guide to optimize the effectiveness of hypoxia-modifying procedures. The need for such measurements has increased recently because of developments in combined therapies, where hypoxia-modifying procedures are combined with classical cancer treatments [5]. In all of these settings, knowledge of tumor oxygen would be very valuable. Until recently, very limited direct measurement of tumor pO2 has been possible, owing to a lack of appropriate in vivo technology. Some information has been obtained by methods that give limited information on true tumor oxygen (e.g. the oxygen electrode or luminescence-based optical sensors) or even indirect evidence by monitoring oxygen availability in the circulatory system (NIR spectroscopy and NMR methods) [6,7,8,9]. However, the recent development of in vivo EPR (electron paramagnetic resonance) oximetry has the potential to provide non-invasive, accurate, and repetitive direct measurements of tissue oxygen from the same locations in the tissue over long periods (days, weeks and even years) [10]. The goal of this study was to increase tumor oxygenation by an innovative approach, i.e. by topical application of a vasoactive compound onto the skin over a subcutaneously grown tumor, and to demonstrate that in vivo EPR oximetry can provide repetitive data on tumor oxygenation during growth and over the time course of therapy.

II. MATERIALS AND METHODS

A. EPR oximetry

Electron paramagnetic resonance (EPR) is a spectroscopic method that detects and studies paramagnetic centers (atoms, ions or molecules with one or more unpaired electrons) in a sample.
EPR oximetry is one of many applications of EPR spectroscopy and enables the measurement of oxygen concentration in various samples (aqueous solutions, biological samples and even living animals). Oxygen, having two unpaired electrons, has paramagnetic properties and can therefore be detected by EPR. However, the EPR signal of oxygen is very broad and is not appropriate for quantitative measurements under physiological conditions. To determine the concentration of oxygen in a sample, special oxygen-sensitive paramagnetic probes have to be added to it. These are paramagnetic materials whose EPR signal is very sensitive to other unpaired electrons in their immediate surroundings in the sample. The method is based on the fact that the unpaired electrons of oxygen shorten the relaxation time of the paramagnetic probe by Heisenberg spin exchange interaction. This is evidenced by a broadening of the EPR line width, which is proportional to the concentration of oxygen in the sample. The oxygen concentration can be obtained from the line width of the EPR spectrum with the help of a calibration curve (Fig. 1).

The oxygen-sensitive probes used in EPR oximetry are metabolically inert and very sensitive for measuring low levels of oxygen. They are carbon-based materials such as coals and chars of carbohydrates, India ink, lithium phthalocyanine and naphthalocyanine [10].

The major problem with EPR in vivo is non-resonant absorption of the exciting microwaves because of the high dielectric constant of tissues. The solution to this problem is to use lower frequencies, where the non-resonant loss is less pronounced. EPR spectrometers for in vivo measurements usually operate at frequencies of 1.2 GHz or lower. Even at 1.2 GHz, practical limitations on the depth of measurement still exist; for surface coil resonators this limit is 10 mm into the tissue [10].

EPR oximetry in vivo has been used to study the partial pressure of oxygen in a wide range of experimental systems, including muscle, heart, brain, kidney and liver in experimental animals, and is now being developed for clinical applications using India ink as the oxygen probe [11].

Fig. 1 Calibration curve for lithium phthalocyanine (LiPc) in PBS buffer. From the calibration curve the relation between the line width of the EPR spectrum and pO2 can be determined. The insert shows EPR spectra of LiPc in nitrogen and in air, with the line width indicated.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 453–456, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Z. Abramovic, M. Sentjurc and J. Kristl

B. Animal tumor models

RIF-1 (radiation-induced fibrosarcoma) and LPB sarcoma tumors were implanted into the thigh of mice.
C. EPR measurements

The EPR measurements were carried out when the tumors reached a size of 60-80 mm3. Animals were anesthetized prior to the experiment with 1.5 % isoflurane delivered in oxygen through a nose cone. Several crystals (approximately 40 μg) of the oxygen-sensitive paramagnetic material lithium phthalocyanine (LiPc) were injected into the periphery of the tumor (facing the skin) using a 25 G needle. LiPc was synthesized at the EPR Center for the Study of Viable Systems, Dartmouth Medical School. The EPR measurements were started the day after LiPc implantation. In the experiments where pO2 was followed as a function of tumor growth, the mouse was anesthetized and the tumor pO2 was measured for 5 consecutive days. According to the animal protocol, animals had to be sacrificed 6 days after the start of the EPR experiments because the tumor volume reached a critical value.
The EPR measurements were performed on an L-band EPR spectrometer with a microwave bridge operating at 1.2 GHz, with a surface coil resonator (10 mm). The mouse was anesthetized and placed between the poles of the magnet, and the loop of the resonator was placed gently over the tumor where the LiPc was implanted. Changes in pO2 were determined by measuring the peak-to-peak line widths of the EPR spectra (ΔB, Fig. 1). The relation between pO2 and the line width was calculated from the calibration curve:

pO2 (mm Hg) = -4.2 + 1783 × ΔB (mT)
(1)
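Eq. (1) is a simple linear conversion from line width to pO2; a minimal sketch (the function name is ours, the constants are those of Eq. (1)):

```python
def linewidth_to_po2(delta_b_mt):
    """pO2 in mm Hg from the LiPc peak-to-peak line width in mT, per Eq. (1)."""
    return -4.2 + 1783.0 * delta_b_mt
```

For example, a measured line width of 0.1 mT corresponds to about 174 mm Hg.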
Optimal spectrometer settings for LiPc were used: modulation frequency 27 kHz, magnetic field 43 mT, incident microwave power of 1.2-6.4 mW, modulation amplitude not exceeding one third of the peak-to-peak line width (typically 0.002-0.005 mT), and a scan time of 10 s (usually 6 scans were averaged to increase the signal-to-noise ratio). Because a decrease in the body temperature of the mice following anesthesia could significantly influence the measurements, both ambient and body temperature were controlled. The body temperature was maintained at 37 ± 0.5 °C by a thermostatically controlled flow of warm air and a heated pad [12].

D. BN application

Benzyl nicotinate is an ester which acts as a prodrug: it crosses the skin rapidly and, on enzymatic hydrolysis, releases nicotinic acid. This agent provokes increased cutaneous blood flow [13]. Thirty minutes after a stable baseline of tumor pO2 was achieved, hydrogel with 2.5 % BN (0.3 g) was applied onto the skin above the subcutaneous tumors. EPR spectra were recorded continuously for the next 60 minutes, and the line width of the spectra was measured to give the pO2 at the site of the paramagnetic probe. A formulation containing no BN was used as a control.

Measuring Tumor Oxygenation by Electron Paramagnetic Resonance Oximetry in vivo

III. RESULTS

A. Tumor pO2 as a function of tumor growth

The two tumor models were chosen because they differ greatly in vascularization; we suspected that their oxygenation would also differ considerably. From Fig. 2 it can be seen that LPB tumors have more than an order of magnitude higher pO2 than RIF-1 tumors of similar volume. However, in both tumor models a significant decrease in oxygenation with tumor growth was observed.

Fig. 2 Changes in tumor pO2 with tumor growth for A. LPB tumors and B. RIF-1 tumors (mean ± SE; n = 5-16). Significantly different values compared to day 1 are marked with * (p < 0.05).

B. Influence of BN application on tumor pO2

On RIF-1 tumors, 2.5 % BN in hydrogel was applied onto the skin above the tumor to increase tumor pO2. Since BN causes vasodilation in the underlying skin tissue after topical application, we expected that this agent could change the tumor oxygen level. This could be a simple approach to increasing the effectiveness of radiation therapy. The effect of BN on tumor pO2 was evaluated by two parameters: tmax, the time required for the maximal pO2 increase, and ΔpO2max, the maximal relative pO2 increase with respect to the baseline.

Table 1 Influence of BN on tumor pO2 for five consecutive days (mean ± SE; n = 5). Values marked with * are significantly different compared to control tumors where empty hydrogel was applied (p < 0.05).

Time (days)   ΔpO2max after BN (mmHg)   tmax (min)
1             9 ± 4*                    24 ± 6
2             15 ± 7*                   23 ± 6
3             9 ± 5*                    22 ± 7
4             9 ± 4*                    28 ± 4
5             0.7 ± 0.6                 -
Hydrogel without BN did not change tumor pO2 (Fig. 3). The changes in tumor oxygen level after BN application were significantly different from control tumors from day 1 to day 4, while no significant change was noticed thereafter (Table 1). The maximal relative increase in tumor pO2 was achieved on the second day of continuous application (Fig. 3).

IV. DISCUSSION

Measurements of pO2 in the two tumor models show high variation in tumor oxygenation. As the response to radiotherapy depends strongly on the oxygen level in tumors [14], information about tumor pO2 is of prime importance for planning the therapy. With tumor growth, pO2 decreases, as has already been observed in other types of tumors [15]. This proves that the vascularization of
tumors becomes deficient with tumor growth, irrespective of their initial oxygenation. Many vasoactive drugs have been used in animal experiments in order to achieve better oxygenation and radioresponse of tumors [4, 16]. In most of these experiments the drugs were administered systemically. Gallez et al. have shown in one of their studies that intraperitoneal administration of a nicotinate derivative successfully improved tumor pO2 [16]. Our results show that topical application of the vasodilator benzyl nicotinate also causes a statistically significant increase of tumor oxygenation in the first four days of application (from 9 to 15 mm Hg with respect to baseline pO2). The maximal increase in oxygen level was observed between 20 and 30 min after application of BN. This increase in tumor oxygen level could already significantly improve the radiosensitivity of cells [16]. We have shown that by EPR oximetry we can monitor not only the amplitude of pO2 changes but also the dynamics of these changes. Therefore EPR oximetry could be a potentially valuable method to guide the treatment planning of radiation therapy for tumors, in terms of the optimal timing of combined treatments.

Fig. 3 The time course of relative pO2 (with respect to the baseline) in RIF-1 tumors after application of BN in hydrogel (BN) or empty hydrogel (HG) on day 2, when the maximal increase was observed (mean ± SE; n = 5).

V. CONCLUSIONS

To our knowledge, this is the first attempt to increase the oxygen level in a tumor by topical application of a vasodilator. We have shown that pO2 in RIF-1 tumors reaches a level where radiation could be effective. As a next step it will be necessary to verify whether radiation therapy is more effective after dermal application of BN.

REFERENCES

1. Gallez B, Baudelet C, Jordan BF (2004) Assessment of tumor oxygenation by electron paramagnetic resonance: principles and applications. NMR Biomed 17:240-262
2. Feldmann HJ (2001) Oxygenation of human tumors – implications for combined therapy. Lung Cancer 33:S77-S83
3. Hall EJ (1994) Radiobiology for the radiologist. Lippincott, Philadelphia
4. Chaplin DJ, Hill SA, Bell KM et al. (1998) Modification of tumor blood flow: current status and future directions. Semin Radiat Oncol 8:151-163
5. Kaanders JH, Bussink J, van der Kogel AJ (2002) ARCON: a novel biology-based approach in radiotherapy. Lancet Oncol 3:728-737
6. Nozue M, Lee I, Yuan F et al. (1997) Interlaboratory variation in oxygen tension measurement by Eppendorf "Histograph" and comparison with hypoxic marker. J Surg Oncol 66:30-38
7. Young WK, Vojnovic B, Wardman P (1996) Measurement of oxygen tension in tumours by time-resolved fluorescence. Br J Cancer Suppl 27:S256-259
8. Baudelet C, Gallez B (2002) How does blood oxygen level-dependent (BOLD) contrast correlate with oxygen partial pressure (pO2) inside tumors? Magn Reson Med 48:980-986
9. Swartz HM (2002) Measuring real levels of oxygen in vivo: opportunities and challenges. Biochem Soc Trans 30:248-252
10. Swartz HM, Clarkson RB (1998) The measurement of oxygen in vivo using EPR techniques. Phys Med Biol 43:1957-1975
11. Swartz HM, Khan N, Buckey J et al. (2004) Clinical applications of EPR: overview and perspectives. NMR Biomed 17:335-351
12. Sentjurc M, Kristl J, Abramovic Z (2004) Transport of liposome entrapped substances into skin as measured by electron paramagnetic resonance oximetry in vivo. Methods Enzymol 387:267-287
13. Wilkin JK, Fortner G, Reinhardt LA et al. (1985) Prostaglandins and nicotinate-provoked increase in cutaneous blood flow. Clin Pharmacol Ther 38:273-277
14. O’Hara JA, Blumenthal RD, Grinberg OY et al. (2001) Response to radioimmunotherapy correlates with tumor pO2 measured by EPR oximetry in human tumor xenografts. Radiat Res 155:466-473
15. O’Hara JA, Goda F, Liu KJ et al. (1995) The pO2 in a murine tumor after irradiation: An in vivo EPR oximetry study. Radiat Res 144:222-229
16. Gallez B, Jordan BF, Baudelet C et al. (1999) Pharmacological modifications of the partial pressure of oxygen in murine tumors: evaluation using in vivo EPR oximetry. Magn Reson Med 42:627-630

Author: Zrinka Abramović
Institute: Jožef Stefan Institute
Street: Jamova 39
City: Ljubljana
Country: Slovenia
Email: [email protected]
Monitoring of preterm infants during crying episodes
L. Bocchi1, L. Spaccaterra2, F. Favilli1, L. Favilli1, E. Atrei2, C. Manfredi1 and G. P. Donzelli2
1 Dept. Electronics and Telecommunications, Univ. Firenze, Firenze, Italy
2 Department of Pediatrics, AOU A. Meyer - Univ. Firenze, Firenze, Italy
Abstract— Preterm infants often suffer from respiratory problems. Crying is an effort which may have an adverse impact on blood oxygenation. In this work we present a measurement system aimed at evaluating the distress occurring during crying, giving a quantitative measure of the decrease of cerebral oxygenation. The system allows monitoring of central and peripheral oxygenation, together with an audio recording of the vocal emissions of the infant. Preliminary results on a data set of 15 preterm infants indicate that in some cases the effort is associated with a large decrease in the oxygenation level during a cry.
Keywords— NIRS, pulse oximetry, infant monitoring, vocal analysis.
I. INTRODUCTION

Infant monitoring in the Neonatal Critical Care Unit is common clinical practice. The most frequently used monitoring instrument is the pulse oximeter, which provides a non-invasive way of monitoring heart rate and peripheral blood oxygenation. Low birth weight infants, indeed, often present respiratory problems, ranging from insufficient ventilation to apnoea. At the same time, cerebral blood flow in the preterm and newborn infant has been studied extensively [1,2], as newborn infants have an impaired autoregulation of the cerebral blood flow [3,4]. Irregularities in blood flow and pressure may adversely influence the development of the child [5,6,7,8]. Several studies indicate a correlation between respiratory problems and neurological problems during the growth of the patient [9,10]. Some studies have evaluated blood flow and oxygenation in the newborn by Near InfraRed Spectroscopy (NIRS), also combined with other techniques [11].
One of the most common events that may affect the respiratory flow is crying. Crying is a physiological action by which the infant communicates and draws attention; however, this action requires a great effort from a premature infant, which may cause distress. In this work, we evaluate the correlation between cerebral oxygenation and crying, by non-invasive monitoring performed simultaneously with a NIRS spectrometer and a pulse oximeter.
II. MATERIALS AND METHODS

A. Data set

The analysis has been carried out on a group of preterm infants with a gestational age ranging from 28 to 34 weeks. Infants were selected by a physician among patients hospitalized in the Critical Care Unit of the A. Meyer hospital in Florence. The measurement sessions took place when the patients were between 3 and 4 months old. At present, a preliminary data set has been analyzed, in order to assess the information which can be collected and to optimize the experimental setup. This preliminary data set is composed of 15 patients, whose relevant information is reported in Table 1. A control group, composed of 20 patients, has been used as a reference to evaluate the physiological ranges of the parameters.

Table 1 Data set

Minimum gestational age   27 weeks
Maximum gestational age   36 weeks
Minimum birth weight      1.400 kg
Maximum birth weight      2.100 kg
B. Instrument setup

Monitoring has been performed by collecting data from three different sources: heart rate and peripheral blood oxygen saturation have been measured using a pulse oximeter (3900 - Datex-Ohmeda), central blood saturation has been measured with NIRS (ISS Model 96208 - ISS Inc., Champaign, IL, USA), and a microphone has been used to record cry emissions.
An important issue which arose during the experiments was synchronization among the three sources: in order to correctly relate cry and oxygenation patterns, it is essential to obtain strict synchronization among the monitoring instruments. Moreover, the NIRS device we used is not equipped with an output signal which could be used to transfer data in real time to a recording device. The device has 4 analog inputs which are sampled and acquired during the measurement, at a sample rate of 1 sample/second. Those inputs can thus be used to acquire data from the pulse oximeter, which has a similar transfer rate.
Audio recording was performed using a multimedia laptop which acquired a single-channel audio track, sampled with 12 bits of depth at a sampling rate of 44 kHz. Acquisition software has been designed to allow synchronization with the other instruments using a digital output linking the laptop with the input of the NIRS instrument. This acquisition software transforms the single-channel data into a two-channel signal, where one channel (the left signal) corresponds to the acquired waveform, while the other channel contains a binary synchronization pattern, which is at the same time emitted on the digital output. This pattern is recorded together with the NIRS and pulse oximetry signals, allowing the two recordings to be synchronized. The overall diagram of the experimental setup is shown in Fig. 1.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 449–452, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
L. Bocchi, L. Spaccaterra, F. Favilli, L. Favilli, E. Atrei, C. Manfredi and G. P. Donzelli

Fig 1. Acquisition system
Fig 2. Positioning of the sensor on the child head (simulation on a doll)

C. Data acquisition

Data acquisition may be adversely affected by environmental factors and by the patient itself. As concerns the environmental factors, NIRS acquisition may be altered by different levels of light in the acquisition room. In order to ensure good data quality, a careful selection of environmental conditions is required. A room has been selected in the Intensive Care Unit satisfying all the necessary requirements:
• Low background noise, as the room is far from noise sources and can be successfully insulated from the outside.
• Levels of illumination can be selected according to the instrument requirements.
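The two-channel synchronization scheme described in the instrument setup (audio on one channel, a binary marker pattern on the other) can be sketched as follows; the bit length, pattern and function name are our own illustrative choices, not the authors' software:

```python
import numpy as np

def add_sync_channel(mono, pattern_bits, bit_len=4410):
    """Return an (n, 2) stereo array: column 0 = audio, column 1 = sync pattern.

    Each bit of the pattern is held for bit_len samples (0.1 s at 44.1 kHz,
    an assumed value); the same pattern would be emitted on the digital
    output wired to one of the NIRS analog inputs.
    """
    sync = np.zeros(len(mono))
    for k, bit in enumerate(pattern_bits):
        sync[k * bit_len:(k + 1) * bit_len] = float(bit)
    return np.column_stack([mono, sync])
```

Matching the recorded pattern on the NIRS side against the right channel of the audio file then gives a common time origin for the two recordings.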
Moreover, both instruments are sensitive to movement artifacts, as the patient may perform sudden movements. To reduce such disturbances, special care has been taken to ensure good contact between the sensors and the patient's skin.
As concerns the factors related to the patient itself, we considered the possible effects of the physiological situation on the measurements. To improve the reliability of the data, we performed all the measurements at the same time of day, and equally spaced between meals. A second factor is related to the positioning of the NIRS sensor on the head of the infant. The size of the sensor is comparable to the mean size of the infant's head (as shown in Fig. 2, referring to a doll having about the same head size), so careful positioning and securing of the sensor on the subject is required.

D. Data processing

The acquired data exhibit two kinds of noise: high-frequency noise of small amplitude, caused by the intrinsic variability of the data, variations of the blood volume due to the pulsation of the flow, and sampling effects; and spikes of large amplitude, caused by movements of the patient. The first part of the analysis is therefore aimed at reducing the effect of noise.
A first filter has been used to detect and remove spikes from the input signal. The filter has been designed using a heuristic method, based on the study of the variance of each signal over a window of predefined length. A sample is assumed to belong to a spike if its difference from the previous one is larger than twice the standard deviation of the signal. When a sample belonging to a spike is detected, it is removed from the signal by substituting its value with the last sample before the beginning of the spike. This simple solution removes spikes having a very sharp peak, which can be associated with contact problems between the sensors and the skin, while preserving large signal variations due to alterations of the physiological state of the patient.
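A minimal sketch of this spike filter (the window length and function name are our own choices; the paper does not give its exact implementation):

```python
import numpy as np

def remove_spikes(x, window=50):
    """Replace spike samples with the last sample seen before the spike.

    A sample is flagged as part of a spike when its jump from the previous
    sample exceeds twice the standard deviation computed over a trailing
    window, following the heuristic described in the text.
    """
    x = np.asarray(x, dtype=float).copy()
    last_good = x[0]
    for i in range(1, len(x)):
        lo = max(0, i - window)
        sigma = x[lo:i].std() if i - lo > 1 else 0.0
        if sigma > 0 and abs(x[i] - x[i - 1]) > 2 * sigma:
            x[i] = last_good          # hold the pre-spike value
        else:
            last_good = x[i]
    return x
```

Holding the pre-spike value (rather than interpolating) keeps the filter causal, so it could in principle run online during acquisition.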
The second stage of the filter is a low-pass filter, implemented as a symmetric FIR filter. The number N of coefficients of the filter is 19, with an impulse response h(t) equal to:

h(t) = sin t / t    (1)

with −9 < t < 9. A von Hann window, defined as

w(t) = 0.585 − cos(2π t/N)    (2)

has been used to improve the performance of the filter. The effect of this filter is to reduce the high-frequency noise.
To evaluate the long-term behavior of the oxygen saturation, we performed a B-spline approximation of the signal. The approximation was carried out using a third-order spline, with a control point every 30 samples, corresponding to 60 s. Standard least-squares approximation was used to find the optimal coefficients of the spline.

III. RESULTS

The recorded signals have been analyzed in order to evaluate the meaningfulness of the data. In the first step, we evaluated the physiological range of central oxygenation in the control group. We performed eight monitoring sessions for each patient, in order to assess both the inter-subject and the intra-subject variability. Fig. 3 shows the mean and the standard deviation of the central blood oxygenation for each patient in the control group. Notice that the average value is in the range between 70% and 85%; however, the variance of the signal is rather small.
As concerns the heart rate, almost all patients exhibit values around 140 pulses/min, with peaks up to 170 pulses/min. The analysis of the temporal patterns indicates that the heart rate increases after a cry, as could be expected. This is a symptom of the compensation of the effort needed for crying. We also observed a larger variance of the heart rate when the patient is awake with respect to sleep. Moreover, the data indicate that values of heart rate below 100 pulses/min are due to movement artifacts and other sensor defects, so they should be discarded.
A similar result has been obtained from the analysis of the peripheral saturation, confirming that the stress occurring during cry episodes may cause a significant decrease of the blood oxygenation, which the infant is not completely able to compensate for. Normal levels in the infant at rest (awake or sleeping) are above a saturation of 95%.
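The low-pass stage described under Data processing can be sketched as a 19-tap windowed-sinc FIR filter. This is an illustrative reconstruction rather than the authors' code: we use numpy's standard von Hann window and normalize the taps to unit DC gain; the subsequent trend extraction would use a least-squares cubic B-spline (e.g. scipy's `LSQUnivariateSpline`) with a knot every 30 samples.

```python
import numpy as np

def lowpass_fir(x, n_taps=19):
    """Symmetric windowed-sinc low-pass FIR filter, h(t) = sin(t)/t, -9..9."""
    t = np.arange(n_taps) - n_taps // 2          # -9 .. 9
    safe_t = np.where(t == 0, 1, t)              # avoid division by zero
    h = np.where(t == 0, 1.0, np.sin(t) / safe_t)
    h = h * np.hanning(n_taps)                   # von Hann taper
    h = h / h.sum()                              # unit gain at DC
    return np.convolve(x, h, mode="same")
```

A constant signal passes through unchanged (away from the edges), while a component at the Nyquist frequency is strongly attenuated, which matches the stated purpose of reducing high-frequency noise.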
Fig 3. Mean and standard deviation of oxygen saturation in the central nervous system for the control group
The physiological values of central oxygenation in the group of patients we observed are more difficult to assess, as there are large inter-individual differences. We observed, indeed, average values ranging from 60% to 80%. This may be due to the effect of the sensor size (about 3 cm) with respect to the diameter of the patient's head, or to bias caused by the presence of large vessels inside the head. However, the effect of crying is much larger on central blood saturation than on peripheral saturation, as shown in Fig. 4. Moreover, a temporal analysis shows an increase of the saturation after the episode, which means that the nervous system is trying to compensate for the loss of oxygen during the cry. For instance, Fig. 5 reports the temporal values of the central saturation during and following a cry (circled zone). The figure shows a large decrease of the saturation (down to a level of about 50%) during the episode, followed by a rapid increase of the saturation level after the episode.
Fig 4. Comparison of peripheral and central oxygen saturation
However, it is quite difficult to propose continuous monitoring of patients through NIRS oximetry, because of the discomfort and the technical issues related to the measurement process. We are currently planning to analyze the vocal emissions of the infants to investigate the correlation among vocal parameters, the level of stress and the drop in oxygenation levels.
Fig 5. Central saturation during a cry episode

IV. CONCLUSIONS

Crying is one of the most common events in infants, and it requires a great amount of energy from the patient. In preterm subjects, this effort produces an alteration of normal physiological parameters. Qualitative analysis suggests that the physiological compensation systems are not able to maintain the level of blood oxygenation during crying episodes, and peripheral saturation drops by 5-10%. However, the oxygen saturation in the central nervous system appears to have a higher degree of variability between subjects and over time, even in a stable situation. Therefore, it is necessary to process the acquired signal to extract the trend over a larger time span. The proposed B-spline approach rejects the short-term variability and high-frequency noise, allowing the process to be tuned by selecting an adequate number of nodes in the interpolation.
The analysis of the resulting spline indicates that the decrease in oxygenation levels is higher in the central nervous system than in the peripheral vessels, in accordance with the higher consumption of oxygen by the brain. Moreover, the difference between the two measures is highly patient-dependent. In a few cases, the central saturation dropped to values around 50%, yielding an increased risk for the patient, as compared to a peripheral saturation of about 90%, which is almost in the physiological range.

REFERENCES

1. Greisen G. (1986). Cerebral blood flow in the preterm infant during the first week of life. Acta Paediatrica Scandinavica, 75, 43-51.
2. Pryds O. & Edwards A.D. (1996). Cerebral blood flow in the newborn infant. Archives of Disease in Childhood: Fetal and Neonatal Edition, 74(1), 63-69.
3. Lou H.C., Lassen N.A. & Friis-Hansen B. (1979). Impaired autoregulation of cerebral blood flow in the distressed newborn infant. Journal of Pediatrics, 94, 118-121.
4. Van De Bor M. & Walther F.J. (1991). Cerebral blood flow velocity regulation in preterm infant. Biology of the Neonate, 59, 329-335.
5. Lou H.C. (1994). Hypoxic-hemodynamic pathogenesis of brain lesions in the newborn. Brain & Development, 16, 423-431.
6. Miall-Allen V.M., de Vries L.S. & Whitelaw A.G. (1987). Mean arterial blood pressure and neonatal cerebral lesion. Archives of Disease in Childhood, 62, 1068-1069.
7. Perry E.H., Bada H.S., Ray J.D., Korones S.B., Arheart K. & Magill H.L. (1990). Blood pressure increases, birth weight-dependent stability boundary, and intraventricular haemorrhage. Pediatrics, 85, 727-732.
8. Friis-Hansen B. (1985). Perinatal brain injury and cerebral blood flow in newborn infants. Acta Paediatrica Scandinavica, 74, 323-331.
9. Gottlieb D., Chase C., Vezina R.M., Heeren T.C., Corwin M.J., Auerbach S.H., Weese-Mayer D.E. & Lesko S.M. (2004). Sleep-disordered breathing symptoms are associated with poorer cognitive function in 5-year-old children. Journal of Pediatrics, 145, 458-464.
10. Gozal D. (1998). Sleep disordered breathing and school performance in children. Pediatrics, 102, 616-620.
11. Delpy D.T., Cope M.C., Cady E.B., Wyatt J.S., Hamilton P.A., Hope P.L., Wray S. & Reynolds E.O. (1987). Cerebral monitoring in newborn infants by magnetic resonance and near infrared spectroscopy. Scandinavian Journal of Clinical Laboratory Investigation, 188, 9-17.

Author: Leonardo Bocchi
Institute: Dept. of Electronics and Telecommunications
Street: Via S. Marta, 3
City: Firenze
Country: Italy
Email: [email protected]
Radiotracer and Microscopic Assessment of Vascular Function in Cancer Therapy
G.M. Tozer1 and V.J. Cunningham2
1 Cancer Research UK Tumour Microcirculation Group, University of Sheffield, Sheffield, S10 2JF, U.K.
2 GlaxoSmithKline, Clinical Imaging Centre, Imperial College, London, W12 0NN, UK
Abstract— Vascular disrupting cancer therapy is aimed at causing a rapid shut-down of the established tumour blood supply, sufficient to induce secondary tumour cell death. It is conceptually distinct from anti-angiogenic therapy, which aims to prevent neo-vascularization of solid tumours. Several low molecular weight drugs have recently entered clinical trial as vascular disrupting agents or VDAs. The lead compound is the tubulin-binding, microtubule depolymerising agent, disodium combretastatin A4 3-O-phosphate (CA-4-P). Tissue blood flow rate (F) is a critical parameter for assessing the functional efficiency of a blood vessel network following VDA treatment. CA-4-P causes almost complete cessation of blood flow in many tumour models within 1 hour of a moderate single dose. Significant tumour blood flow shut-down has also been observed in clinical trials, without vascular damage in normal tissues. However, the reasons for the tumour selectivity of VDAs such as CA-4-P remain unclear. Here, we describe methods for evaluating vascular effects of VDAs in pre-clinical models of cancer and the current status of VDA treatments.
Keywords— cancer therapy, vascular disruption, blood flow rate, intravital microscopy, combretastatin.
I. INTRODUCTION

Vascular disrupting or anti-vascular cancer therapy is aimed at causing a rapid shut-down of the established tumour blood supply, sufficient to induce secondary tumour cell death. It is conceptually distinct from anti-angiogenic therapy, which aims to prevent neo-vascularization of tumours. Several low molecular weight drugs have recently entered clinical trial as vascular disrupting agents or VDAs. These cause extremely rapid tumour effects, with vascular shut-down initiated within minutes to hours of drug administration. VDAs are typically administered in large intermittent doses to produce their vascular disrupting effects. However, there is some overlap between the action of VDAs and anti-angiogenic agents, which is schedule-dependent. For example, administration of several VDAs in a protracted schedule can reveal anti-angiogenic effects.
The largest group of low molecular weight VDAs is the tubulin-binding combretastatins, which are structurally related to the classical tubulin-binding agent, colchicine. Several other classes of drugs have tumour vascular disrupting properties, including certain flavonoids and inhibitors of
adhesion molecule function. Exploiting the differences in established blood vessel structure and function between tumours and normal tissue for development of novel vascular targets for cancer therapy is an expanding area of research. Determining vascular response and especially blood flow rate in tumour and normal tissues is critical for the assessment of VDAs in pre-clinical research and in clinical trials. Here we describe the use of readily diffusible radiotracers for estimation of blood flow rate and intravital microscopy for vascular function. A brief review of the current status of treatment with VDAs is also given. II. MATERIALS AND METHODS A. Estimation of blood flow rate using radiotracers Small, lipid-soluble, metabolically inert molecules, which rapidly cross the vascular wall and diffuse through the extra-vascular space, are useful as blood flow markers. In this case, the fraction of marker crossing the capillary vascular wall from the blood in a single pass through the tissue (extraction fraction, E) is close to 1.0 and for fully perfused tissue the accessible volume fraction (α) of the tissue is also close to 1.0. A practical approach for determining blood flow rate (F), which has utility for accessing its spatial heterogeneity, is the intravenous administration of a small, lipid soluble, inert molecule dissolved in saline. In this case, net uptake rate into tissue over a short time (seconds) after intra-venous injection is determined primarily by blood flow rate. 125 I or 14C-labelled iodoantipyrine (125I-IAP or 14C-IAP) were used as blood flow tracers in anaesthetized male BDIX rats bearing sub-cutaneous implants of the rat P22 sarcoma [1]. Briefly, 0.2 – 0.3 MBq of the radiotracer was constantly infused over 30 s into a cannulated tail vein. Arterial blood samples were collected at 1 s intervals from a cannulated tail artery during the course of the infusion to obtain the arterial input function. 
At 30 s, animals were sacrificed by an intravenous injection of sodium pentobarbitone and the tumour and a range of normal tissues were rapidly excised. For 125I-IAP, weighed tissue and blood samples were counted in a well-type gamma counter. For 14C-IAP, blood samples were counted on a liquid scintillation counter and tissue
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 457–460, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
G.M. Tozer and V.J. Cunningham
samples were exposed to autoradiographic film against methyl methacrylate standards of known 14C content.
Calculation of F: The analysis is based on a model that assumes a vascular compartment, from which the input function derives, and a single (extra-vascular) well-mixed tissue compartment (Fig. 1). A small, highly soluble, inert tracer such as IAP is assumed to equilibrate rapidly between all blood components and the tissue compartment. In this case the model, based on [2], describes the relationship between the tissue concentration of the tracer at time t, Ctiss(t), and the arterial blood concentration of the tracer at time t, Ca(t), by the equation:

Ctiss(t) = k1 Ca(t) ⊗ exp(-k2 t)     (1)

where k1 is the tissue blood flow rate (F) and k2 = k1/(αλ); α is the accessible volume fraction of the tissue (i.e. the effectively perfused fraction) and λ is the equilibrium partition coefficient of the tracer between tissue and blood; ⊗ denotes the convolution integral; αλ is equivalent to the apparent volume of distribution (VDapp) of the tracer in the tissue [1]. Ctiss(t) and Ca(t) are expressed in radioactivity counts per g tissue and per ml blood respectively, using 1.05 for the density of blood.
Correction was made for delay and dispersion of the radiotracer in the blood as it passed through the plastic cannula. The delay (td) was estimated directly from the known volume of the cannula and the rate of blood flow down it. kd (min-1) is a dispersion constant, which depends on the flow rate, the length and internal diameter of the cannula, and the interaction of blood with its internal surface. kd for a particular cannula and blood flow rate was determined in vitro: blood was pumped through a cannula at a set rate and switched rapidly between labelled and unlabelled blood, and the dispersion effect was measured in the outflow, where Cm(t) is the measured tracer concentration at the cannula outflow. This gives a working form of Equation (1):

Ctiss(t) = (k1/kd) Cm(t+td) + (1 - k2/kd) k1 Cm(t+td) ⊗ exp(-k2 t)     (2)

Fig. 1 Compartmental model used for the quantitative estimation of F. When the extraction fraction, E, of a blood-borne tracer is 1.0, the rate constant k1 represents F; k2 represents the back-flux; Ca and Ctiss represent the arterial blood and tissue concentrations of the tracer, respectively. In this model the tissue is a single well-mixed compartment. (Diagram: blood and tissue compartments linked by k1 and k2.)
In this method, Ctiss(t) is measured at only one time point, i.e. after tissue excision. Hence only one parameter, k1 (F), can be estimated from the data. λ is approximated from literature values or estimated from separate experiments, and α is taken as 1.0. Studies have shown that the method is relatively insensitive to small changes in λ because of the short time-scale of the experiment [3].
This method is also applicable to other, more sophisticated techniques involving non-invasive imaging, such as positron emission tomography (PET). These imaging techniques enable a full time course of tissue activity to be assayed, allowing estimation of αλ (VDapp), for example, as well as F. This is of particular interest for estimation of tumour blood flow, where the perfused fraction (α) is often less than 1.0 because of large intercapillary distances or ischaemic regions [1]. However, the spatial resolution of non-invasive imaging cannot compete with that achievable with the invasive techniques described here.
Solving Equation (2): Data were fitted to Equation (2) using a simple 'Table Look-up Method'. Since the input function is known, the expected tissue activity at the time of excision, Ctiss(T), can be calculated for each of a range of realistic values of F using Equation (2). Direct comparison of the observed Ctiss(T) against the table then gives the required estimate of F (Figure 2). Evaluation of the convolution integral in Equation (2) was carried out in MATLAB (The MathWorks, USA). Note that an alternative to the Tissue Equilibration Method described here for estimating F is the Indicator Fractionation Method [4]; the advantages and disadvantages of the two techniques are discussed by Patlak et al. [3].

B. Intravital microscopy

Intravital microscopy was used for on-line determination of tumour vascular parameters before and after administration of CA-4-P.
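The table look-up step described above can be sketched numerically as follows. This is a minimal illustration, not the original analysis code: the arterial input function, λ and the "observed" activity are made-up values, and the simpler Equation (1) form is used (dispersion and delay correction are omitted for clarity).

```python
import numpy as np

dt = 0.5                                   # sampling interval (s)
t = np.arange(0.0, 30.0 + dt, dt)          # 30 s infusion, as in the protocol
Ca = 100.0 * (1.0 - np.exp(-t / 10.0))     # hypothetical arterial input (counts/ml)
lam, alpha = 0.8, 1.0                      # assumed partition coefficient and perfused fraction

def predicted_ctiss(F):
    """Expected tissue activity at excision (T = 30 s) for a trial flow
    F (ml g-1 min-1), from Eq. (1): Ctiss(T) = k1 * [Ca conv exp(-k2 t)](T)."""
    k1 = F / 60.0                          # convert per-minute rate to per-second
    k2 = k1 / (alpha * lam)
    conv = np.convolve(Ca, np.exp(-k2 * t))[: len(t)] * dt  # discrete convolution integral
    return k1 * conv[-1]                   # value at T = 30 s

# Tabulate Ctiss(T) over a realistic range of F, then look up the observed value
F_grid = np.linspace(0.01, 5.0, 500)
table = np.array([predicted_ctiss(F) for F in F_grid])
observed = predicted_ctiss(0.6)            # pretend this came from the gamma counter
F_est = F_grid[np.argmin(np.abs(table - observed))]
print(round(F_est, 2))                     # recovers F = 0.6 ml g-1 min-1
```

Because the tabulated Ctiss(T) is monotonic in F over this range, the nearest-table-entry look-up is equivalent to a one-parameter fit.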
Fragments of the P22 rat sarcoma were grown in transparent window chambers, surgically implanted into a dorsal skin-flap of anaesthetized male BDIX rats, as described previously [5]. Briefly, two circular areas of skin on opposing sides of a dorsal skin-flap were thinned to the fascia layer. These were sandwiched between two glass cover-slips spaced approximately 250 µm apart and held in an aluminium frame (window chamber). A fragment of P22 tumour was placed on one fascial surface before closure of the chamber. This system allows optical access to the tumour, whilst providing mechanical protection and stability. Smaller window chambers were implanted into severe combined immune-deficient (SCID) mice in a very similar fashion, except that, owing to the relative optical transparency of mouse skin, the epidermal layers and panniculus muscle of one skin layer were left intact and the tumour fragment was placed directly onto the muscle surface. All layers of skin on the opposite surface were surgically removed.
Donor red blood cells from rats or mice, as appropriate, were labelled with the fluorescent dye DiI (Molecular Probes, Cambridge Biosciences, UK) for the measurement of red blood cell velocity in the tumour vasculature [9, 10]. Intravital microscopy was carried out when tumours reached several mm in diameter and were fully vascularized. Animals were placed on a customized microscope stage under transmitted light or under epi-fluorescence illumination for measurement of red blood cell velocity. For the latter, DiI-labelled red blood cells were administered intravenously [6]. Red cell velocity in µm•s-1 was calculated, from recordings made with a ×20 objective, from the number of video frames taken for each red cell to travel between two points of measured distance.

__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________

Radiotracer and Microscopic Assessment of Vascular Function in Cancer Therapy

III. RESULTS

The effects of VDAs on tumours are characterized by a selective shut-down in tumour blood flow (Figure 2). Figure 3 shows the spatial heterogeneity of the response. Figure 4 shows that these effects are initiated within minutes of drug administration and that vascular damage involves extensive tumour haemorrhage. Effective agents cause a prolonged period of vascular shut-down (Figure 3), culminating in tumour cell death, which is characterized by extensive haemorrhagic tumour necrosis [7].

Fig. 2 Blood flow rate (F) in the P22 tumour and a range of normal rat tissues (spleen, kidney, heart, brain, small intestine) estimated using the uptake of 125I-IAP, either with no treatment (0 h) or 6 h following a single intraperitoneal 30 mg•kg-1 dose of CA-4-P. Note that blood flow to the tumour after treatment is almost zero, whereas there is no major effect in the normal tissues apart from the spleen. (Axis: blood flow rate, ml•g-1•min-1.)

Fig. 3 Blood flow rate (F) in ml•g-1•min-1 estimated in the P22 tumour using the uptake of 14C-IAP combined with autoradiography to assess the spatial distribution of Ctiss, either with no treatment (a) or 24 h following a single intraperitoneal 100 mg•kg-1 dose of CA-4-P (b). Note that there is some residual blood flow at the tumour periphery after treatment.

Fig. 4 Intravital microscopy used to monitor the effect of VDAs on tumour vascular function. Panel a) shows a rapid reduction in red cell velocity within blood vessels of the P22 rat sarcoma following a single 30 mg•kg-1 dose of CA-4-P. Panel b) shows the vascular pattern within the HT29 human colorectal carcinoma in SCID mice before treatment. Panel c) is the same tumour as b) several minutes after 100 mg•kg-1 CA-4-P, showing vascular shut-down and haemorrhage of peripheral vessels. (Axes in panel a: red blood cell velocity, µm per second, against time after treatment, minutes.)

IV. DISCUSSION
Drug-induced vascular endothelial cell death is too slow a process to account for the rapid tumour vascular changes shown in Figures 2 to 4. For the tubulin-binding agent CA-4-P, rapid vascular shut-down in vivo is paralleled by very rapid re-modelling of the actin cytoskeleton of endothelial cells in vitro, which is triggered by disruption of interphase microtubules following drug binding [8]. Endothelial cells are particularly sensitive to this compound, and effects include rounding up of cells, assembly of actin stress fibres and actomyosin contractility, formation of focal adhesions, disruption of cell-cell junctions, including those involving VE-cadherin, and an increase in monolayer permeability to macromolecules. In a sub-population of cells,
additional effects involve F-actin accumulation into surface blebs, with cells rounding up and stress fibres mis-assembling into a spherical band surrounding the cytoplasm, accompanied by mal-formed focal adhesions. Preliminary studies suggest that the flavonoid DMXAA (AS1404) also affects the endothelial cytoskeleton, resulting in a partial dissolution of actin filaments. However, unlike CA-4-P, the trigger is not disruption of interphase microtubules, which remain intact.
There are currently several groups of VDAs in clinical trial for cancer therapy (Table 1). The clinical promise of VDAs resides in their potential for complementing conventional therapy. Indeed, CA-4-P has been shown to be additive or supra-additive with both conventional chemotherapy and radiotherapy in pre-clinical studies [9, 10]. Intravital microscopy is a valuable technique for investigating the tumour vascular effects of putative VDAs at high spatial resolution. Radiotracer uptake methods are valuable for quantifying blood flow rate in pre-clinical studies and are also directly applicable to sophisticated non-invasive imaging methods, which are increasingly used in clinical trials of VDAs.

V. CONCLUSIONS

VDAs hold the promise of valuable augmentation of conventional treatments for cancer. Accurate estimation of vascular parameters in animal models is a valuable part of pre-clinical assessment of novel VDAs.

Table 1 VDAs in clinical trials

Drug          Web-site                   Drug type
CA4 Prodrug   www.oxigene.com            CA-4-P, tubulin binding agent
OXI 4503      www.oxigene.com            CA-1-P, tubulin binding agent
ZD6126        www.astrazeneca.com        Colchicine analogue
AVE8062       www.aventisoncology.com    Synthetic combretastatin
ABT751        www.abbott.com             Sulfonamide β-tubulin inhibitor
TZT-1027      www.daiichi.co.uk          Tubulin binding agent
TrisenoxTM    www.trisenox.com           Arsenic trioxide
NPI-2358      www.nereuspharm.com        From marine fungus, tubulin binding
AS1404        www.antisoma.com           DMXAA, flavonoid
ExherinTM     www.adherex.com            Peptide N-cadherin antagonist

VI. ACKNOWLEDGMENTS

We gratefully acknowledge all our past and present colleagues at the Gray Cancer Institute, the University of Sheffield and Mount Vernon Hospital, who contributed to the work described.
REFERENCES

1. Tozer GM, Shaffi KM, Prise VE et al. (1994) Characterisation of tumour blood flow using a "tissue-isolated" preparation. Br J Cancer 70: 1040-1046
2. Kety SS (1960) Theory of blood tissue exchange and its application to measurements of blood flow. Methods Med Res 8: 223-227
3. Patlak CS, Blasberg RG, Fenstermacher JD (1984) An evaluation of errors in the determination of blood flow by the indicator fractionation and tissue equilibration (Kety) methods. J Cereb Blood Flow Metab 4: 47-60
4. Tozer GM, Prise VE, Cunningham VJ. Quantitative estimation of tissue blood flow rate. In: Angiogenesis Protocols II, S.G. Martin and C. Murray, Editors. Humana Press: Totowa, New Jersey, USA, in press
5. Tozer GM, Prise VE, Wilson J et al. (2001) Mechanisms associated with tumor vascular shut-down induced by combretastatin A-4 phosphate: intravital microscopy and measurement of vascular permeability. Cancer Res 61: 6413-6422
6. Kimura K, Braun RD, Ong ET et al. (1996) Fluctuations in red cell flux in tumor microvessels can lead to transient hypoxia and reoxygenation in tumor parenchyma. Cancer Res 56: 5522-5528
7. Dark GD, Hill SA, Prise VE et al. (1997) Combretastatin A-4, an agent that displays potent and selective toxicity toward tumor vasculature. Cancer Res 57: 1829-1834
8. Kanthou C, Tozer GM (2002) The tumor vascular targeting agent combretastatin A-4-phosphate induces reorganization of the actin cytoskeleton and early membrane blebbing in human endothelial cells. Blood 99: 2060-2069
9. Murata R, Siemann DW, Overgaard J et al. (2001) Interaction between combretastatin A-4 disodium phosphate and radiation in murine tumors. Radiother Oncol 60: 155-161
10. Siemann DW, Mercer E, Lepler S et al. (2002) Vascular targeting agents enhance chemotherapeutic agent activities in solid tumor therapy. Int J Cancer 99: 1-6
Author: Professor G M Tozer
Institute: University of Sheffield
Street: K Floor, Royal Hallamshire Hospital
City: Sheffield S10 2JF
Country: UK
Email: [email protected]
The Influence of Endurance Training on Brain and Leg Blood Volumes Translocation During an Orthostatic Test

A. Usaj
Laboratory of Biodynamics, Faculty of Sport, University of Ljubljana, Ljubljana, Slovenia

Abstract— The aim of this study was to ascertain whether the redistribution of blood from the upper towards the lower parts of the body during an orthostatic test can be detected using near-infrared spectroscopy (NIRS). Brain and muscle total haemoglobin concentration (TOTHb) was measured to assess the displacement of blood during an orthostatic test before and after 8 weeks of interval running training. A NIRS oximeter (ISS, Champaign, USA) was used for this purpose. Two groups, experimental (N=6 subjects, 24±5 years, 177±6 cm, 78±8 kg) and control (N=5 subjects, 25±3 years, 180±5 cm, 82±5 kg), initially performed an incremental walking test on a treadmill for assessment of Vo2peak, and an uphill walking competition with an additional weight of 15 kg for assessment of endurance performance. The third test was an orthostatic test with 15 min of rest in the supine position, followed by 5 min upright, and a return to the supine position for another 5 min. Tests were performed before and after the training period. The results showed an increase in endurance and Vo2peak. The brain RTOTHb increased to 1.9±1.3 μmol before training, in contrast to a decrease from resting values to -2.9±4.4 μmol at the end of training (P<0.01). In contrast, the increase in leg RTOTHb was 30.2±12.8 μmol before training but smaller, 25.3±11.4 μmol, after training (P<0.01). The data suggest that the brain was affected by the translocation of blood to the lower parts of the body as a result of endurance training. However, the expected increase in calf blood volume was not confirmed.

Keywords— NIRS, blood volume, brain, leg, orthostatic test
I. INTRODUCTION

Upon moving rapidly from the supine to the upright posture (orthostasis), a significant volume of blood is translocated from the upper body compartments into the veins of the legs [1]. This displacement results in a transient fall in brain perfusion and can compromise heart function. The reduced central blood volume may cause problems with coordination and postural regulation, as well as a fall in ventricular filling pressure, stroke volume and cardiac output. Arterial pressure may also be affected [1]. In an attempt to maintain circulatory homeostasis during orthostasis, parasympathetic activity is reduced and sympathetic activity is enhanced [1]. This results in increases in heart rate and systemic vascular resistance, which depend, however, on arterial baroreflex function [1].
The influence of endurance training on the adaptation of the cardiovascular system is contradictory [2]. If arterial baroreflex function is reduced, an inadequate increase in heart rate and systemic vascular resistance may result in orthostatic hypotension [2]. Regular endurance training may therefore enhance, diminish or leave unchanged the baroreflex regulation [1, 2]. This effect may also be detectable as different blood volume changes in the brain. Until recently, it has not been possible to monitor brain blood volume. Near-infrared spectroscopy (NIRS) of the brain may be a technology that permits such monitoring with relatively simple manoeuvres. Our hypothesis was that the translocation of blood from the brain towards the legs can be determined using NIRS. The aim of the study was to ascertain whether the blood translocation from the brain to the calf muscles during an orthostatic test is changed as an effect of interval endurance training.

II. METHODS

A. Subjects

Two groups of subjects participated in the study: an experimental group (EX) (N=6 subjects, age 24±5 years, body height 177±6 cm, body weight 78±8 kg) and a control group (C) (N=5 subjects, age 25±3 years, body height 180±5 cm, body weight 82±5 kg).

B. Procedures

The incremental testing protocol on the treadmill consisted of continuous walking at a constant velocity of 5 km/h, starting at an inclination of 0° (horizontal walking), with the inclination increased by 3 % every 3rd minute until fatigue. The subjects wore a backpack with an additional 15 kg. The orthostatic test consisted of about 15 min lying supine (Fig. 1) in a silent room. Because a tilt table was not used, the height of the lying surface was chosen so that the subject could stand up simply by moving the legs down, without squatting. The upright position was kept for 5 min. Thereafter, the subject returned to the initial supine position for another 5 min.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 461–464, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The competition was an uphill walking race performed wearing a backpack with an additional 15 kg mass. The altitude difference between start and finish was 170 m, and the estimated average inclination of the route was about 30 %. The EX group performed regular interval running training, repeated 4 times per week, for 8 weeks. The C group did not train regularly, apart from their usual recreational activities 1-2 times per week at low intensity.

C. Instruments

The incremental testing protocol consisted of walking at 5 km/h on a treadmill (Cosmos, Germany), with the inclination increased by 3 % every 3 min until fatigue. Respiratory data were measured breath-by-breath with a Vmax 29c instrument (SensorMedics, USA). A two-channel laser NIRS oximeter (ISS, Champaign, USA) was used for brain and leg oxygenation measurements. After calibration with optical calibration blocks, laser light was emitted into the selected tissue. Brain oxygenation was measured in the frontal region of the head (Fig. 1), and leg oxygenation in the lateral region of the calf muscles (Fig. 2). The emitted light was absorbed or scattered in the tissue; the scattered light passed to a detector fibre-optic bundle and was measured [4]. Measurements were performed at 2 s intervals and stored in computer memory.
Fig. 1. The subject lying in the supine position during NIRS measurements. The positions of the calf and brain (frontal region) sensors are visible.
D. Data analysis

Vo2peak was determined as the highest Vo2 reached during the incremental treadmill test. Walking endurance performance was represented by the finishing time in the uphill walking competition. The total haemoglobin concentration (TOTHb) measured by NIRS was first smoothed using a running average with a 60 s window. After smoothing, the average of the data measured in the last 2 min of the supine position was calculated as the reference value. This reference value was subtracted from all values in the 5 min before standing, during standing and during the final supine position, to obtain relative (RTOTHb) values, which were used in all further calculations. Analysis of variance with the Tukey post-hoc test was used to compare RTOTHb data from the 5 min intervals of the initial (resting) supine, upright and final supine positions. The paired t-test was used to compare Vo2max, treadmill walking endurance performance and uphill competition walking before and after training.
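The smoothing and referencing steps can be sketched as follows. This is a minimal illustration on a synthetic TOTHb trace (the step amplitude and noise are made up); only the 2 s sampling interval, the 60 s running-average window and the 2 min supine reference window come from the methods.

```python
import numpy as np

FS = 0.5                      # NIRS sampling rate: one sample every 2 s
WIN = int(60 * FS)            # 60 s running-average window -> 30 samples
REF = int(120 * FS)           # reference: last 2 min of supine rest -> 60 samples

def rtothb(tothb, supine_end):
    """Relative TOTHb: 60 s running-average smoothing, then subtraction of the
    mean over the last 2 min of supine rest. Samples near the ends of the
    record are distorted by the zero-padding of the convolution."""
    smoothed = np.convolve(tothb, np.ones(WIN) / WIN, mode="same")
    reference = smoothed[supine_end - REF:supine_end].mean()
    return smoothed - reference

# Synthetic demo: 300 s supine at ~50 umol TOTHb, then a 2 umol rise on standing
t = np.arange(0.0, 600.0, 2.0)
tothb = np.where(t < 300.0, 50.0, 52.0) + 0.1 * np.sin(t)
rel = rtothb(tothb, supine_end=int(300 * FS))
```

With this convention, RTOTHb is near zero during supine rest and positive when tissue blood volume rises above the resting reference, matching the sign of the values reported in the Results.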
Fig. 2. Upright position of subject, with calf sensor and NIRS oximeter during orthostatic test.
III. RESULTS

The subjects' endurance performance was estimated by two tests: the incremental walking test and the uphill walking competition. The experimental group (EX) significantly increased its performance after training, according to both the highest inclination reached during the treadmill walking test and the maximal oxygen consumption (Vo2max) reached during the same test (Table 1). They also improved their performance in the specific uphill walking competition (Table 1). In contrast, the control group (C) did not improve any of the results used for endurance performance estimation (Table 1). Their results matched the level of the EX group before the training period and were lower (P<0.05) than those of the EX group after the training period.
Table 1 Characteristics of maximal performance of subjects before and after training.

                          Exp. group (EX)       Contr. group (C)
                          Bef.      Aft.        Bef.      Aft.
Inclin. max (%)           20±2      24±1 *      19±2      20±1
Vo2max (ml/kg*min-1)      48±6      54±6 *      46±4      48±6
Vert. veloc. (m/h)        703±100   823±125 *   690±95    710±110

LEGEND: Values are mean ± SD. * Significantly (P < 0.05) different from before training.
In the EX group, before the endurance training period, the relative brain haemoglobin concentration (RTOTHb) increased during the upright position of the orthostatic test from 0.03±0.5 μmol to 1.9±1.3 μmol (P=0.01) (Fig. 3). After returning to the supine position, RTOTHb decreased towards resting values. Individual differences among subjects increased (Fig. 3) during the upright phase and the final return to the supine position. In contrast, after training RTOTHb significantly decreased (P<0.01) from the supine position (-0.03±0.8 μmol, interval 0 to 300 s) to the upright position (-2.9±4.4 μmol, interval 300 to 600 s). RTOTHb during the upright position was thus lower (P<0.01) after training, with larger differences between subjects (Fig. 3). After returning to the supine position, RTOTHb returned towards the initial level, although the larger differences among subjects persisted. The brain RTOTHb of the C group did not differ between the upright positions before and after training; similarly, no differences existed during the final supine position.
Fig. 3. Brain RTOTHb during the orthostatic test. Results are means ± SD. The interval training reduced brain blood volume during the upright phase of the test and also increased individual differences between subjects.
Fig. 4. Calf muscle RTOTHb during the orthostatic test. Results are means ± SD. The interval training reduced the increase in relative blood volume during the upright phase; individual differences between subjects also increased.

Relative calf muscle haemoglobin concentration (RTOTHb) in the EX group increased during the upright position from 0.0±1.0 μmol to 30.2±12.8 μmol (P<0.01) (Fig. 4) before the training period. After returning to the supine position, RTOTHb returned towards initial values, although with larger individual differences. RTOTHb also increased during the upright position after the training period, to 25.3±11.4 μmol (Fig. 4). The individual differences remained similar to those before training; however, the increase was smaller (P<0.01) than before training (Fig. 4). The calf muscle of the C group showed a similar increase in RTOTHb to the EX group, but without any significant change related to training.

IV. DISCUSSION

Moving from the supine to the upright posture before training did not decrease brain blood volume, as would be expected [1, 2, 3, 5, 6]; on the contrary, brain blood volume actually increased. The increase in calf blood volume, in contrast, was as expected [1, 2, 6, 7]. The orthostatic test after the endurance training period showed a reduced brain blood volume during the upright position and increased individual differences between subjects. This response differed from that before training, and may indicate a reduced tolerance to orthostasis at the level of the brain. The increase in calf muscle relative blood volume occurred as expected [1, 5, 7]; however, it was smaller than before training, despite the expectation that it should be larger as an effect of blood redistribution [5, 6, 7]. In our study, training appears to have produced a certain reduction in the reaction of circulatory homeostasis during the orthostatic test, including a specific reduction of blood volume in the brain.
However, this did not affect the subjects' general tolerance, but rather their specific tolerance to orthostasis at the brain
level. According to the subjects' reports, some transitory dizziness occurred during the upright position; however, no difference between the initial and final testing was confirmed. It cannot be determined whether emotions, which were more intense during the initial testing, played a role in the increased relative blood volume in the brain. Despite the large difference in brain blood volume, this was not reflected in differences in calf blood volume before and after training. Such conflicting results may be explained in two ways. First, only two compartments were observed in this study, so substantial redistribution of blood may also have occurred in the other leg muscles. Second, the housings of the NIRS emitting and sensing optodes were mounted on the calf muscles with elastic straps. These exerted a certain pressure on the calf muscles, in addition to the increased muscle pressure during the sustained contraction of the calf muscles that regulates upright posture. Both may increase pressure in the direction opposite to the blood redistribution, and may therefore mask possible differences in blood volume caused by blood translocation into the calf muscles after endurance training.
In conclusion, our results have shown that interval endurance running training reduced brain blood volume during the orthostatic test. In contrast, the expected larger increase in calf muscle blood volume after endurance training was not confirmed. We may assume that tolerance at the level of the brain was reduced as a training effect, without a matching phenomenon at the level of the calf muscle.
REFERENCES

1. Gabbett TJ, Gass GC, Lukman T, Morris N, Gass EM (2001) Does endurance training affect orthostatic responses in healthy elderly men? Med Sci Sports Exerc 33 (8): 1279-1286
2. Portier H, Louisy F, Laude D, Berthelot M, Guezennec CY (2001) Intense endurance training on heart rate and blood pressure variability in runners. Med Sci Sports Exerc 33 (7): 1120-1125
3. Convertino VA, Sather TM, Goldwater DJ, Alford WR (1986) Aerobic fitness does not contribute to prediction of orthostatic intolerance. Med Sci Sports Exerc 18 (5): 551-556
4. Ferrari M, Mattola L, Quaresima V (2004) Principles, technologies and limitations of near infrared spectroscopy. Can J Appl Physiol 29 (4): 463-487
5. Shvartz E (1996) Endurance fitness and orthostatic tolerance. Aviat Space Environ Med 67 (10): 935-939
6. Louisy F, Jouanin JC, Guezennec CY (1997) Filling and emptying characteristics of lower limb venous network in athletes. Study by postural plethysmography. Int J Sports Med 18 (1): 26-29
7. Nazar K, Gasiorowska A, Mikulski T, Cybulski G, Niewiadomski A, et al. (2006) Effect of 6-week endurance training on hemodynamic and neurohormonal responses to lower body negative pressure (LBNP) in healthy young men. J Physiol Pharmacol 57 (2): 177-188

Author: Anton Usaj
Institute: Laboratory of Biodynamics, Faculty of Sport, University of Ljubljana
Street: Gortanova 22
City: 1000 Ljubljana
Country: Slovenia
Email:
[email protected]
Acetylcholine addition and electrical stimulation of dissociated neurons from an extended subthalamic area – A pilot study in the rat

T. Heida1, K.G. Usunoff2 and E. Marani1,2

1 University of Twente, Faculty of Electrical Engineering, Mathematics and Computer Science, Biomedical Signals & Systems, Enschede, The Netherlands
2 Medical University and Inst. Physiology, Bulgarian Academy of Sciences, Dept. Anatomy and Histology, Sofia, Bulgaria
Abstract— Addition of acetylcholine to cultures of the STN area (5 steps of 10 μM with a step interval of 1000 s) shows a direct and long-lasting reduction of STN activity. Low frequency stimulation (20 Hz, 500 block pulses) increases STN activity, while high frequency stimulation (80 Hz, 2000 block pulses) reduces STN activity, even after the stimulation period of 25 seconds.

Keywords— STN, electrical stimulation, acetylcholine.
I. INTRODUCTION

Parkinson's disease is characterized by the progressive loss of dopamine neurons in the substantia nigra, which results in a reduction of activity in the thalamus, partly due to an increased bursting activity of subthalamic nucleus (STN) cells. This abnormally increased bursting activity of STN cells has been correlated with tremor in Parkinson patients [1]. The STN plays important roles in (voluntary) motor control; for example, pathological changes in the nucleus cause hemiballism. Manipulation of the activity of STN neurons by adding neurotransmitter agonists or antagonists strongly affects spiking behaviour [2], which indicates the importance of knowing how the activity of STN neurons is regulated. It is now firmly established that STN projection neurons are glutamatergic and excitatory [3], and that they heavily innervate the substantia nigra (SN), the internal pallidal segment (GPi), followed by the external pallidal segment (GPe) and the pedunculopontine tegmental nucleus (PPN), by widely branching axons. Some of these connections are reciprocal.
Deep brain stimulation (DBS), which is high frequency stimulation in or near the STN, results in an average reduction of akinesia (42 %), rigidity (49 %), tremor (27 %) and axial symptoms. DBS produces non-selective stimulation of an unknown group of neuronal elements over an unknown volume of tissue; the actions of DBS are therefore difficult to understand. In slice preparations, STN neurons show rhythmic single-spike activity at resting membrane potentials. In response to depolarizing current pulses, STN neurons increase their firing frequencies linearly with the magnitude of injected current. Several studies have reported the generation
of a plateau potential, a long-lasting depolarizing potential [4, 5]. A plateau potential can induce long-lasting high-frequency discharge in the absence of synaptic inputs. STN neurons can generate a plateau potential only when the cells are hyperpolarized in advance. By way of this voltage-dependent generation of a plateau potential, STN neurons can transform short-lasting excitatory synaptic inputs into long-lasting bursts and change their spontaneous activity from a single-spike to a burst firing pattern [6]. In addition, the voltage dependency of the plateau potential may play important roles in the generation of oscillatory bursting activity of STN neurons, characterized by bursts of long duration repeating at low frequency. However, the mechanism of this voltage dependency in the generation of a plateau potential remains unknown. Opening of K+ channels by metabolic pathways is one possibility; high-frequency inhibitory input from, for example, the Globus Pallidus or PPN stimulation of the STN, is another. According to Otsuka [6], L-type Ca2+ channels play an important role in the generation of a plateau potential due to their slow inactivation kinetics. The burst frequency gradually decreases, after which the neuron returns to its normal firing behaviour. It is assumed that dopamine depletion, as occurring in Parkinson’s disease, results in hyperpolarization of STN neurons, so that bursting activity is more likely to be induced than in the normal situation. Is increased bursting activity due to network activity within the STN itself, or due to the influence of the GP or other structures that project to the STN? In order to answer this question, STN cell cultures may provide a useful instrument. Therefore, dissociated STN area cells of the rat are cultured on a micro-electrode array (MEA). The influence of acetylcholine (PPN input) and high frequency stimulation (as in deep brain stimulation) on the activity of dissociated STN neurons is investigated experimentally.
II. METHODS A. Cell culturing STN cells (rats) were dissociated using chemical (trypsine/EDTA) and mechanical dissociation techniques, and
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 521–524, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
T. Heida, K.G. Usunoff and E. Marani
D. Electrical stimulation
Stimulation through the MEA electrodes occurred at 20 Hz and 80 Hz. Stimulation artefacts were removed from the recorded data. Only electrodes with spontaneous activity of at least 1 Hz prior to stimulation were used; this criterion was translated into a minimum total number of spikes within the period prior to stimulation. One electrode out of 60 was chosen for stimulation. Stimulation settings: 20 Hz, 500 block pulses, starting at 300 s (end 325 s); 80 Hz, 2000 block pulses, starting at 300 s (end 325 s).
Fig. 1 STN (3 × 7 × 12 mm ≈ 250 mm³) with presumed somatotopic organization (Source: A. Nambu et al., Neurosci. Res. 43, 2002).
cultured on a micro-electrode array (MEA) consisting of 64 electrodes. The surface of the array was coated with polyethylenimine (PEI, 30 ng/ml) to support attachment and growth of the neurons. During recording periods the electrode array was placed in an incubator while the temperature was kept at 37 °C.

B. Measurement set-up
A MC1060BC pre-amplifier and an FA60s filter amplifier (both MultiChannel Systems) were used to prepare the signals for AD conversion. Amplification is 1000 times in a range from 100 Hz to 6000 Hz. A 6024E data-acquisition card (National Instruments, Austin, TX) was used to record all 60 channels at 16 kHz. Custom-made LabVIEW (National Instruments, Austin, TX) programs are used to control the data acquisition (DAQ). These programs also apply a threshold detection scheme with the objective of data reduction; actual detection of action potentials is performed offline. During the experiments, the temperature was controlled at 36.0 °C using a TC01 (MultiChannel Systems) temperature controller. Recording starts after a minimum of 20 minutes, to prevent any transient effects. Noise levels were typically 3 to 5 μVRMS, depending somewhat on the MEA and electrode. We use commercially available MEAs from MultiChannel Systems with 60 titanium nitride electrodes in a square grid. The inter-electrode distance is 100 μm, and the diameter of the electrodes is 10 μm.

C. Addition of acetylcholine
Acetylcholine was applied in 5 steps of 10 μM with a step interval of 1000 s, using a small pipette positioned through the cover placed over the electrode array for sterility.
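The threshold detection scheme mentioned above (threshold crossings used for data reduction, with offline action-potential detection) can be sketched as follows. The RMS-based threshold factor and the 1 ms dead time are illustrative assumptions, not settings reported in this paper:

```python
import numpy as np

def detect_spikes(x, fs, thresh_factor=5.0, dead_time=0.001):
    """Offline threshold-crossing spike detector (sketch).

    x : 1-D voltage trace; fs : sampling rate in Hz.
    The threshold is a multiple of the signal RMS and a short
    dead time suppresses multiple crossings per spike (both
    values are illustrative assumptions)."""
    thresh = thresh_factor * np.sqrt(np.mean(x ** 2))   # amplitude threshold
    above = np.flatnonzero(np.abs(x) > thresh)          # candidate crossings
    refractory = int(dead_time * fs)
    spikes, last = [], -np.inf
    for i in above:
        if i - last > refractory:                        # enforce dead time
            spikes.append(i)
            last = i
    return np.array(spikes, dtype=int)
```

With a 16 kHz trace, this returns the sample indices of detected action potentials; spike times follow as `spikes / fs`.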
III. RESULTS

A. Addition of acetylcholine
Under normal culturing conditions single-spike activity with an average frequency of 5.5 Hz was recorded. Bursts, i.e. sequences of at least four spikes with an inter-spike interval less than or equal to 20 ms, were also recorded, but no synchrony was found. Acetylcholine was applied in 5 steps of 10 μM with a step interval of 1000 s. After application neuronal activity was significantly decreased for about 100 s, after which spiking activity was restored. The total measurement time was 2.25 hr (including the preceding normal registration). Up to 1000 s after the last acetylcholine application a total reduction of 25% of the spike activity was measured (p = 0.01). The occurrence of bursts did not significantly change during and after the application of acetylcholine. In conclusion, two spike phenomena in STN cultures could be discerned: an acute diminishing effect of acetylcholine and an overall reduction, or late acetylcholine effect.

B. Electrical stimulation
Stimulation of the cultures occurred via one of the 60 electrodes for 25 seconds, was carried out at 20 Hz and 80 Hz, and was repeated in the experiments. Experiments lasted nearly 1.5 hours and the summation bin was 5 seconds. At low frequency stimulation the total normalized firing rate increased during the stimulation period, while at high frequency stimulation the total normalized firing rate decreased during and after the stimulation period.

IV. DISCUSSION

The connection that is mimicked by the addition of acetylcholine is part of the PPN-STN connection. This part of the connection is cholinergic, but other cell groups are also present (glutamatergic, GABAergic and dopaminergic).
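The burst criterion used in the results (at least four spikes with inter-spike intervals of at most 20 ms) amounts to a simple run detector over sorted spike times; a minimal sketch (function name is mine):

```python
import numpy as np

def find_bursts(spike_times, max_isi=0.020, min_spikes=4):
    """Group spikes into bursts: runs of >= min_spikes spikes whose
    consecutive inter-spike intervals are all <= max_isi (20 ms,
    as in the burst definition above). Times are in seconds.
    Returns a list of (burst_start, burst_end) tuples."""
    t = np.sort(np.asarray(spike_times, dtype=float))
    bursts, start = [], 0
    for k in range(1, t.size + 1):
        # close the current run when the ISI grows too large or the train ends
        if k == t.size or t[k] - t[k - 1] > max_isi:
            if k - start >= min_spikes:
                bursts.append((t[start], t[k - 1]))
            start = k
    return bursts
```

Applied per electrode, the burst count per time bin can then be compared before, during, and after acetylcholine application or stimulation.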
Fig. 2 Activity of STN cells recorded from several electrodes (electrodes / number of spikes vs. time (s)). The moments of stimulation are indicated by red lines; spikes are indicated by a single line.
Acetylcholine addition and electrical stimulation of dissociated neurons from an extended subthalamic area – A pilot study in the rat
Fig. 4 Normalized mean firing rate of selected electrodes; electrodes 330 and 410 were deselected on the basis of their activity during the stimulation period. Stimulation frequency: 20 Hz, with a total of 500 pulses (25 seconds).
Fig. 3 Activity of STN cells recorded from several electrodes. The moments of application of ACh are indicated by red lines; after the fifth step (a total of 50 μM) ACh was washed out.
Fig. 5 Normalized mean firing rate of selected electrodes. Stimulation period is indicated by red lines. Electrode 28 is the stimulation site with a stimulation frequency of 80 Hz, and a total of 2000 pulses (25 seconds).
Destruction of the PPN ends up with hyperactivity of the STN [7]. PPN lesioning was shown to induce akinesia in primates [8, 9]. It is now well established that the cholinergic agonists brought into the rat STN contributes to an higher excitation of the STN neurons [3]. However, muscarine agonists in slices diminished the amplitude of both EPSP’s and IPSP’s in the STN [10, 11]. The reduction of IPSP’s is which leads to a final excitation of STN neurons [11, 12]. Adverse results are found in literature as to the effect of acetylcholine on the subthalamic neurons. This could well be due to the still existing connections. Taking away one connection by lesion, adding neurotransmitters or their agonists, therefore, does not show the pure effect of connections, neurotransmitters or receptors.
Too many parameters are involved to understand the effect of these experiments. Culturing subthalamic neurons at least restricts the amount of parameters, but adds others! and it is rather unexpected that addition of acetylcholine to such cultures shows a short term and a long term effect. One should notice that addition of 10μM acetylcholine to rat cortex neurons increases their activity (unpublished results). If hyperactivity of STN is induced by reducing the PPN neurotransmitters, among them acetylcholine, and motohypoactivity is the consequence, than this MEA culturing experiment explains by the long term effect how such an hyperactivity can result from this type of neurotransmitter, neglecting all the other effects of other PPN neurotransmitters. The results show no effect on bursting activity, and
therefore the long-term effect of acetylcholine on cultured subthalamic cells may be related to the synchrony or pacemaker effect, stressing the role of the PPN. DBS is carried out in humans under monopolar cathodic stimulation with 120-180 Hz frequency, 1-5 V amplitude and 60-200 μs pulse duration. Although the stimulation conditions in the reported experiments at high frequencies differ from those of human DBS, some conclusions can still be drawn. At low frequency stimulation an overall increase in the spike activity in culture is noticed during stimulation, after which the activity returns to normal, while at high frequency stimulation (80 Hz in our cultures) a decrease in spike activity appears not only during stimulation but also for a certain period afterwards (25 to 100 seconds).
REFERENCES

1. Levy R, Ashby P, Hutchison WD, et al. (2002) Dependence of subthalamic nucleus oscillations on movement and dopamine in Parkinson’s disease. Brain 125:1196-1209
2. Wilson CL, Puntis M, Lacey MG (2004) Overwhelmingly asynchronous firing of rat subthalamic nucleus neurons in brain slices provides little evidence for intrinsic interconnectivity. Neurosci 123:187-200
3. Feger J, Hassani OK, Mouroux M (1997) The subthalamic nucleus and its connections. New electrophysiological and pharmacological data. Adv. Neurol. 74:31-43
4. Otsuka T, Murakami F, Song W-J (2001) Excitatory postsynaptic potentials trigger a plateau potential in rat subthalamic neurons at hyperpolarized states. J. Neurophysiol. 86:1816-1825
5. Otsuka T, Abe T, Tsukagawa T, et al. (2004) Conductance-based model of the voltage-dependent generation of a plateau potential in subthalamic neurons. J. Neurophysiol. 92:255-264
6. Beurrier C, Congar P, Bioulac B et al. (1999) Subthalamic nucleus neurons switch from single spike activity to burst-firing mode. J. Neurosci. 19:599-609
7. Breit S, Lessmann L, Benazzouz A et al. (2005) Unilateral lesions of the pedunculopontine nucleus induces hyperactivity in the subthalamic nucleus and substantia nigra. Eur J Neurosci 22:2283-2294
8. Matsumura M, Kojima J (2001) The role of the pedunculopontine tegmental nucleus in experimental parkinsonism in primates. Stereotact Funct Neurosurg 77:108-115
9. Matsumura M (2001) Experimental parkinsonism in primates. Stereotact Funct Neurosurg 77:91-97
10. Flores G, Hernandez S, Rosales MG et al. (1996) M3 muscarine receptors mediate cholinergic excitation of the spontaneous activity of the subthalamic neurons in the rat. Neurosci Lett 203:203-206
11. Shen KZ, Johnson SW (2000) Presynaptic dopamine D2 and muscarine M3 receptors inhibit excitatory and inhibitory transmission of rat subthalamic neurons in vitro. J. Physiol 525:331-341
12. Rosales MG, Flores G, Hernandez S et al. (1994) Activation of subthalamic neurons produces NMDA receptor-mediated dendritic dopamine release in substantia nigra pars reticulata: a microdialysis study in the rat. Brain Res 645:335-337

Author: T. Heida
Institute: University of Twente, Biomedical Signals & Systems
Street: Drienerlolaan 5
City: Enschede
Country: The Netherlands
Email: [email protected]
Assessing FSP Index Performance as an Objective MLAEP Detector during Stimulation at Several Sound Pressure Levels

M. Cagy 1, A.F.C. Infantosi 2 and E.J.B. Zaeyen 3

1 Fluminense Federal University / Department of Epidemiology and Biostatistics, Lecturer, Niterói, Brazil
2 Federal University of Rio de Janeiro / Biomedical Engineering Program, Professor, Rio de Janeiro, Brazil
3 Military Police Central Hospital of Rio de Janeiro, Physician, Rio de Janeiro, Brazil
Abstract— The need for a better approach to auditory screening is due to pathologies that can affect higher auditory centers. Therefore, the Middle Latency Auditory Evoked Potential (MLAEP) was investigated by using the FSP statistical index. The EEG of ten adults during click stimulation at different sound pressure levels was collected. With the critical value for the statistical null hypothesis (absence of response), particularly considering the EEG as colored noise and fitting the number of degrees of freedom of the index distribution, objective detection of MLAEP resulted in a better performance than the threshold of 3.1 commonly employed in the literature. This finding suggests the FSP for detecting the MLAEP response as an auxiliary tool for objectively determining the neurophysiologic acoustical threshold level.

Keywords— MLAEP, Objective Response Detection, Psycho-acoustic Threshold.
I. INTRODUCTION

Among the objective techniques employed for auditory testing, the brainstem auditory evoked potential (BAEP) is useful to assess the integrity of the auditory pathway from the inner hair cells (IHC) up to the inferior colliculus in the midbrain (brainstem). An audiometric test based on wave V of the BAEP, named BERA, has a good correlation with tonal audiometry, indicating the lowest stimulus pressure level that is able to produce an auditory response [1]. Otoacoustic Emissions (OAE) [2] have also been employed in order to assess the integrity of the auditory bioamplification system of the outer hair cells (OHC) in the cochlea. Hence, one can regard these techniques as neurophysiologic acoustic threshold measures. However, some pathologies may also affect higher auditory centres, i.e. those above the inferior colliculi, and the dysfunction of these structures cannot be detected through BERA or OAE [4,5]. For instance, the Auditory Neuropathy (AN) diagnosis, also named auditory desynchronisation, specifies in neonates (NN) neither the dysfunctional site of deafness nor the neuropathological aspects [4]. Hence, the Mid-latency Auditory Evoked Potential (MLAEP) could be employed, since it reflects the
activity of structures above the inferior colliculi up to the primary auditory cortex [6]. The detection criterion usually employed in auditory evoked potentials is based on the response morphology (particularly the amplitude and latency of peaks). The FSP index, a time-domain parameter related to the Signal-toNoise Ratio (SNR), has been developed by Elberling and Don [6] as a quality estimator for evoked potentials, originally applied to BAEP. This index has also been used as an objective detection criterion on BAEP in auditory screening programs for neonates – where values of FSP higher than 3.1 indicate presence of auditory response [7] – as well as to detect MLAEP [8]. This work uses the FSP index to objectively detect MLAEP in normal subjects under stimulation at several sound pressure levels, including the psycho-acoustic threshold, aiming at estimating the neurophysiological acoustic threshold. II. THE FSP INDEX
Proposed by Elberling and Don [6] to estimate the BAEP waveform quality, the FSP parameter is directly related to the evoked potential SNR and can be defined as the ratio FSP = var(S)/var(SP), where var(S) is the time variance of the evoked potential and var(SP) is the “estimated variance of the averaged background noise” [6], assumed to be 1/M times the variance across epochs of the acquired signal at a single, arbitrarily fixed time point (SP). Hence, considering a null-mean evoked potential and using the biased estimator for variance, FSP can be rewritten as:

F_{SP} = \frac{\dfrac{1}{N}\sum_{n=n_i}^{n_f}\left(\dfrac{1}{M}\sum_{j=1}^{M} x_j[n]\right)^{2}}{\dfrac{1}{M^{2}}\sum_{j=1}^{M}\left(x_j[n_{SP}]-\bar{x}[n_{SP}]\right)^{2}},   (1)

where \bar{x}[n] is the result of averaging M signal epochs x_j[n] within the interval of interest (n = n_i, n_i+1, ..., n_f), N = n_f − n_i + 1, and n_{SP} is the sample index corresponding to SP. Assuming that the acquired signal has a Gaussian
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 492–496, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
distribution for all epochs and that var(S) and var(SP) are mutually independent, FSP follows an F_{\nu_1,\nu_2} distribution, where ν1 and ν2 are the degrees of freedom of the numerator and denominator, respectively. Considering ν2 equal to the number of epochs (M), and estimating ν1 by curve-fitting the FSP histogram obtained for spontaneous EEG [6], FSP allows one to infer the number of epochs necessary to reach a required waveform quality. Besides, one can establish a response detection criterion based on the null hypothesis of no response. Thus, the critical value for FSP can be obtained by:

F_{SP,crit} = F_{crit}(\nu_1, M, \alpha),   (2)

where F_{crit}(\nu_1, M, \alpha) is the critical value of the F-distribution for a significance level α.
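Both the index of Eq. (1) and the critical value of Eq. (2) are straightforward to compute; a minimal NumPy/SciPy sketch (function names are mine, not from the paper):

```python
import numpy as np
from scipy.stats import f

def fsp(epochs, n_i, n_f, n_sp):
    """FSP quality index per Eq. (1).

    epochs : (M, L) array, one recorded epoch x_j[n] per row.
    n_i, n_f : interval of interest (inclusive sample indices).
    n_sp : the single fixed sample point SP."""
    M = epochs.shape[0]
    avg = epochs.mean(axis=0)                    # averaged response x̄[n]
    var_s = np.mean(avg[n_i:n_f + 1] ** 2)       # time variance (null-mean assumption)
    # var(SP): 1/M times the biased across-epoch variance at sample SP
    var_sp = np.sum((epochs[:, n_sp] - avg[n_sp]) ** 2) / M ** 2
    return var_s / var_sp

def fsp_crit(nu1, M, alpha=0.01):
    """Critical value per Eq. (2): the (1 - alpha) quantile of F(nu1, M)."""
    return f.ppf(1.0 - alpha, nu1, M)
```

For instance, `fsp_crit(5, 250)` reproduces the classical detection threshold of about 3.1 discussed later in the paper, and FSP computed on signal-plus-noise epochs exceeds its value on noise alone.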
III. MATERIAL AND METHODS
A. Casuistry
The EEG signal was collected from 10 normal volunteers aged 21 to 57 years (mean: 35.3 years), in dorsal decubitus, completely relaxed and comfortable in a silent environment. Each acquisition lasted about 60 minutes. All volunteers signed a consent form.

B. Experimental Protocol
Ag/AgCl electrodes were positioned according to the international 10-20 system in order to acquire the derivations [Cz-Mi] and [Cz-Mc] (vertex-ipsilateral and vertex-contralateral mastoid: left and right respectively), grounded at Fpz. Impedance was kept below 2 kΩ during the whole experiment. Stimulation was carried out using the two-channel evoked potential equipment Nihon Kohden MEB 9102 (Japan), by means of 100 µs-wide rarefaction clicks driven at 9 Hz (frequency band around 1-4 kHz [2]), and transduced via an Elega model DR-531B-14 earphone. Sound pressure level was measured in dBNHL (0 dBNHL = 30 dBpeSPL in this equipment). Only the left ear was stimulated, while the right ear received masking white noise at 40 dB below the stimulation level employed. The number of stimuli lay between 600 and 2000, depending on the sound pressure level. A higher number of stimuli was applied for lower level stimulation, aiming at maintaining response detection even for low SNR. Initially, the auditory threshold (L) was determined for each volunteer (it varied from 0 to 11 dBNHL, mean of 7 dBNHL = 37 dBpeSPL) and, then, EEG was collected without
stimulation for circa 90 s. Then, 600 stimuli at 85 dBNHL were applied, followed by two sessions of 1000 stimuli (60 and [L+26] dBNHL, i.e. 26 dB above individual threshold) and a third session of 1200 stimuli at [L+18] dBNHL. Another session of pure EEG was collected for 110 s, followed by two sessions of 1200 stimuli ([L+15] and [L+12] dBNHL). The remaining sessions consisted of 2000 stimuli each (sound pressure levels of [L+10], [L+8], [L+5], [L+2] and [L] dBNHL).

C. Acquisition and Pre-Processing
The EEG derivations were amplified and filtered (20 Hz high-pass at 6 dB/octave, 2000 Hz low-pass at 12 dB/octave, and 60 Hz notch) by means of the MEB 9102. Then, the EEG and the stimulation trigger were digitized at 6 kHz (DAQPad 1200) via acquisition software developed in LabVIEW (National Instruments, Austin, USA). During acquisition, epochs containing samples with amplitude higher than 20 µV were considered artifact-contaminated and hence were automatically rejected, while the averaged waveform was visually monitored on the MEB 9102 screen during the whole experiment.

D. Evoked Potential and Estimating FSP
For obtaining the auditory evoked potential (AEP) shown in Figure 1 (both derivations), different numbers of epochs (M) have been averaged, according to the description of the experimental protocol, i.e., higher M for lower stimulation levels. The resulting AEPs show an evident amplitude reduction for lower sound pressure levels. Figure 1 exhibits with relative clearness the wave V of the BAEP down to [L+10] dBNHL, where M = 2000 was used. Despite a nearly three-fold increase in M compared to that used for 85 dBNHL, the amplitude of wave V does not differ considerably from the AEP fluctuations nor from the averaged spontaneous EEG. The high amplitude wave occurring after wave V, observed with 60 and 85 dBNHL stimulations, is the Post-Auricular Muscle Response (PAMR, usually found with high stimulation levels).
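The artifact rejection rule of the pre-processing step (discard any epoch containing a sample above 20 µV before averaging) can be sketched as follows; applying the threshold to the absolute value is my assumption:

```python
import numpy as np

def average_clean_epochs(epochs, reject_uV=20.0):
    """Average only artifact-free epochs.

    epochs : (M, N) array in µV, one epoch per row.
    Any epoch with a sample whose absolute value exceeds reject_uV
    is discarded (the absolute-value criterion is an assumption).
    Returns (averaged_waveform, number_of_epochs_kept)."""
    keep = np.all(np.abs(epochs) <= reject_uV, axis=1)  # per-epoch artifact test
    clean = epochs[keep]
    return clean.mean(axis=0), clean.shape[0]
```

The surviving epoch count is what enters as M in the FSP computation, so rejection directly affects the critical value as well.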
The waves N0, P0, Pa, Na, Pb, Nb of the MLAEP can also be recognized for low stimulation levels, as well as wave V of the BAEP. For high sound pressure levels, the PAMR makes it difficult to identify the initial waves of the MLAEP. Since the FSP is directly related to the SNR, the EEG was further digitally filtered within the band between 20 and 100 Hz, characteristic of the MLAEP. Besides, a notch filter was applied at 180 Hz. The FSP index was then estimated using (1) with a number M of 110-ms epochs (N = 660) synchronized with the stimulus onset, and defining SP as
Fig. 1 AEP from volunteer #1 (L = 11 dBNHL), as a function of the sound pressure level used in stimulation: a) derivation Cz-Mi; b) Cz-Mc.

Fig. 2 Required number of epochs for detecting auditory response in volunteer #1 (L = 11 dBNHL): a) derivation Cz-Mi; b) Cz-Mc.
35 ms post-stimulus [9] (nSP = 210). Three detection thresholds have been used: the first is the constant 3.1, as pointed out in the literature [7]; the remaining two were based on (2) in order to extract critical values from the parameter statistics for α = 0.01. In a simpler approach, assuming the EEG to be white noise, ν1 was set equal to N, with M varying depending on the stimulation level; this yields FSPcritA. Further, assuming the EEG to be colored noise, the value of ν1 was estimated for each subject by fitting an F cumulative distribution function to the cumulative histogram of FSP estimated in sets of M = 100 epochs; the critical value was then obtained using the individually fitted ν1 (also with varying M), yielding FSPcritB. Hence, one expects a 1% rate of false positives, i.e. erroneous auditory response detections.

IV. RESULTS
Figure 2 exhibits the number of epochs required for detecting an auditory response by applying FSP to the EEG of volunteer #1. In general, the lower the sound pressure level, the higher the number of epochs required for detection. Using the value 3.1 as the threshold, besides the consistent need for more epochs, no detection occurs for stimulation below [L+5] = 16 dBNHL in either derivation. In derivation Cz-Mc and with sound pressure levels of [L+10] and [L+8] dBNHL, the required M is around 600, reaching 1000 for [L+5] dBNHL. For the ipsilateral derivation, values of M are consistently higher than those for the contralateral derivation. Using FSPcritA (α = 0.01), which also depends on M and varied between 1.16 and 1.45, detection no longer occurs for pressure levels below [L+2] = 13 dBNHL. For this
stimulation level, one can detect a response in Cz-Mc with M = 1300. The values of FSPcritB varied between 2.20 and 2.37 (ν1 = 12), and detection also occurred for pressure levels as low as [L+2] = 13 dBNHL; the required M in this case (for derivation Cz-Mc) was 1800. Considering the whole casuistry, the percentage of subjects with detected responses decreases with lower stimulation levels, as summarized in Table 1. As expected, detection performance with the threshold value 3.1 is equal to or worse than that with FSPcritA and FSPcritB, and detection can fail even for high stimulation levels. Using these latter thresholds, the performance is similar between derivations, provided that the sound pressure level is higher than [L+2] dBNHL. Using FSPcritA, 100% of the subjects had responses detected with stimulation as low as [L+12] dBNHL, although false detection (pure EEG) occurs in 50% of the subjects in both derivations. On the other hand, using FSPcritB, full detection occurs only at higher stimulation levels (down to [L+26] dBNHL), but no false detection is observed. For this threshold, the individual values of ν1 varied between 4 and 12 (mean: 9).

V. DISCUSSION
The FSP index was able to detect MLAEP even for stimulation levels close to the individual psycho-acoustic threshold, and generally requires a number of epochs M < 500, considerably lower than that used for visual BAEP identification (M around 2000). This holds especially for FSPcritA and FSPcritB, whereas Sininger [7] uses the constant threshold 3.1 to detect an auditory response (BAEP) based on FSP.
VI. CONCLUSION
Table 1 Rate of detection (%) considering all volunteers.

Level (dBNHL)   FSP > FSPcritA     FSP > FSPcritB     FSP > 3.1
                Cz-Mi   Cz-Mc      Cz-Mi   Cz-Mc      Cz-Mi   Cz-Mc
85              100     100        100     100        86      100
60              100     100        100     100        90      100
[L+26]          100     100        100     90         100     80
[L+18]          100     100        80      90         70      90
[L+15]          100     100        80      90         70      80
[L+12]          100     100        56      89         56      67
[L+10]          90      80         70      80         70      80
[L+8]           80      100        60      70         50      60
[L+5]           70      80         20      60         20      50
[L+2]           38      75         13      25         13      0
[L]             60      50         20      10         10      10
EEG             50      50         0       0          0       0
One should emphasize that this threshold has been suggested by Elberling and Don [6] as a value that produces a 1% rate of false detections, i.e. in the absence of BAEP. Based on estimated values of FSP for blocks of M = 250 epochs (time length of 10 ms each) of spontaneous EEG, these authors adjusted an F-distribution by changing the number of degrees of freedom (ν1) of the numerator of (1), keeping ν2 = 250. From 8 subjects, values of ν1 varied between 8 and 22; thus, assuming a Gaussian distribution [6], they obtained a lower limit of ν1 = 5 (worst case). Therefore, the critical value 3.1 results from an F-distribution F(5, 250) for α = 0.01. On the other hand, the approach used in the present work to adjust the number of degrees of freedom differed from the above by using the individually estimated value of ν1, as well as by employing a varying value for ν2, depending on the M used. Also, the range of values for ν1 in the present casuistry (4 to 12) noticeably differed from that obtained by those authors. Hence, the use of FSPcritB resulted in a better sensitivity to detect MLAEP than the threshold 3.1, and in a better specificity than FSPcritA, since it did not falsely detect an auditory response during spontaneous EEG. On the other hand, using FSPcritB or the threshold 3.1, 100% detection in either derivation occurred only at stimulation levels down to [L+26] dBNHL, whilst using FSPcritA, full detection occurred in both derivations at levels as low as [L+12] dBNHL. Based on expert morphological analysis, Smith et al. [10] reported MLAEP detection with stimulation at 15 dB above the psycho-acoustic threshold ([L+15] dBNHL). Further, by comparing sensitivity in both derivations, one can notice some complementarity, which suggests their concomitant use in analysis.
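The degrees-of-freedom adjustment discussed above (choosing ν1 so that an F(ν1, M) distribution matches FSP values computed on spontaneous EEG) can be sketched as a grid search; this least-squares ECDF fit is my stand-in for the paper's cumulative-histogram curve fit, not its exact procedure:

```python
import numpy as np
from scipy.stats import f

def fit_nu1(fsp_values, M=100, nu1_grid=range(2, 41)):
    """Estimate nu1 by matching the F(nu1, M) CDF to the empirical
    distribution of FSP values from spontaneous EEG blocks.

    fsp_values : FSP estimates, each from a set of M epochs.
    Returns the nu1 in nu1_grid with the smallest squared CDF error."""
    x = np.sort(np.asarray(fsp_values, dtype=float))
    ecdf = (np.arange(1, x.size + 1) - 0.5) / x.size   # empirical CDF
    errs = [np.sum((f.cdf(x, nu1, M) - ecdf) ** 2) for nu1 in nu1_grid]
    return list(nu1_grid)[int(np.argmin(errs))]
```

The fitted ν1 then feeds the critical value of Eq. (2), producing an individualized threshold in the spirit of FSPcritB.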
The FSP index was able to detect an auditory response, particularly the MLAEP, with stimulation at low sound pressure levels, even close to the individual psycho-acoustic threshold. Further, it requires a number of epochs lower than that usually employed when visually identifying the auditory response. By adjusting the number of degrees of freedom of the F-distribution (considering the EEG as colored noise), this index has shown a considerable increase in performance when compared to the critical value for white-noise EEG (specificity) and to the constant value 3.1 (sensitivity). Therefore, this index could be applied to MLAEP as an auxiliary tool in the objective detection of the neurophysiological auditory threshold, considering the thalamo-cortical pathway. Based on these findings, the FSP index could also be investigated in neonates, aiming at its implementation in auditory screening programs. In this case, one could directly use sound pressure levels close to the normal threshold for each age, reducing the time spent on the exam.
ACKNOWLEDGEMENT To CNPq and FAPERJ for the financial support and to the Military Police Central Hospital of Rio de Janeiro for providing infrastructure support.
REFERENCES

[1] Chiappa MD (1997) Brain Stem Auditory Evoked Potentials: Methodology. In: Evoked Potentials in Clinical Medicine, 3rd ed, Lippincott-Raven Publishers, New York, 157-282
[2] Zaeyen EJB, Infantosi AFC, Souza MN (2002) Avaliação da audição em Recém-nascidos: Estado atual e perspectiva. In: Clínica de Perinatologia, 2/3:501-530
[3] Rapin I, Gravel J (2003) Auditory neuropathy: physiologic and pathologic evidence calls for more diagnostic specificity. Int J Ped Otorhinol 67:707-728
[4] Shapiro SM, Nakamura H (2001) Bilirubin and the auditory system. J Perinatol 21-suppl.1:S52-5; discussion: S59-62
[5] Liégois-Chauvel C, Musolino A, Badier JM et al. (1994) Evoked potentials recorded from the auditory cortex in man: evaluation and topography of the middle latency components. Electroenc Clin Neurophysiol 92:204-214
[6] Elberling C, Don M (1984) Quality estimation of averaged auditory brainstem responses. Scand Audiol 13:187-197
[7] Sininger YS (1993) Auditory brain stem response for objective measures of hearing. Ear and Hearing 14(1):23-30
[8] Gould HJ, Crawford MR, Mendel MI, Dosson SL (1992) Quantification technique for the middle latency response. J Am Acad Audiol 3(3):153-8
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
[9] Bell SL, Allen R, Lutman ME (2002) Optimizing the Acquisition Time of the Middle Latency Response Using Maximum Length Sequence and Chirps. J Acoust Soc Am 112(5):2065-2073
[10] Smith DI, Lee FS, Mills JH (1989) Middle latency response: Frequency and intensity effects. Hearing Research 42:293-303
Author: Antonio Fernando Catelli Infantosi
Institute: Biomedical Engineering Program – UFRJ
P.O. Box: 68.510 (Zip Code: 21941-972)
City: Rio de Janeiro
Country: Brazil
Email: [email protected]
Brain on a Chip: Engineering Form and Function in Cultured Neuronal Networks

B.C. Wheeler
University of Illinois, Department of Bioengineering, Urbana IL USA

Abstract— We culture embryonic rat hippocampal neurons to learn how small networks of neurons interact and code information. We design the networks by using microlithography to control surface chemistry, which in turn controls the initial position of the neurons and strongly influences subsequent growth. The lithography also permits us to guide neurons preferentially to electrodes of a microelectrode array, with a resultant increase in recordability and excitability of the cultured neurons. Geometric control also allows us to begin to investigate whether the geometric pattern of a neuronal network influences the patterns of its neuroelectric activity. Various neuronal network behaviors can be demonstrated, including propagation of both action potential and synaptically coupled activity, graded activation of networks, convergence of information flow, and elementary learning phenomena. The immediate aim of the research is the creation of a reliable, repeatable, and robust tool for understanding neuronal information processing. In the long term, the results will assist basic and applied neuroscience, including prosthetics and cell-based biosensors.

Keywords— Neural, culture, electrodes, networks.
ACKNOWLEDGMENT

Collaborator: Dr. G.J. Brewer, SIU School of Medicine, Springfield IL. Support: NIH (1 R01 NS052233-01A1 and subcontract to R01 EB000786 at Georgia Tech); NSF (EIA 0130828).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 477, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
BSI versus the Eye: EEG Monitoring in Carotid Endarterectomy

W.A. Hofstra1 and M.J.A.M. van Putten2
1 Department of Neurology and Clinical Neurophysiology, Medisch Spectrum Twente, Enschede, The Netherlands
2 Institute of Technical Medicine, Faculty of Science and Technology, Biomedical Signals and Systems Group, University of Twente, Enschede, The Netherlands (E-mail: [email protected])
Abstract— Carotid endarterectomy is a common procedure and an important secondary prevention of stroke. For selective shunting, continuous EEG monitoring is a standard technique, with visual assessment to track possible ischemia. Recently, the extended BSI was proposed as a pair of quantitative features to support the visual interpretation. Here, we further evaluate its potential clinical use using a large data set. The extended BSI (consisting of a spatial and a temporal symmetry measure, the sBSI and tBSI, respectively) was calculated retrospectively for a group of 111 patients who underwent a carotid endarterectomy in our hospital. EEG recordings were visually assessed to determine shunt placement and compared to the calculated BSI values. All unilateral changes in the EEG found by visual assessment are reflected by ∆-sBSI ≥ 0.060 and all diffuse changes by ∆-tBSI ≥ 0.065. In EEGs with both unilateral and diffuse changes, ∆-sBSI ≥ 0.060 and ∆-tBSI ≥ 0.065. This study extends and confirms our previous pilot results: the sBSI and tBSI correlate strongly with the visual assessment of the EEG as performed by experienced neurophysiologists. The extended BSI supports visual intraoperative EEG monitoring during carotid endarterectomies and assists in a more reliable decision for selective shunting.

Keywords— EEG, Carotid endarterectomy, Brain symmetry index, BSI
I. INTRODUCTION

Carotid endarterectomy (CEA) is a commonly used procedure in patients suffering from a symptomatic stenosis of the internal carotid artery, serving as an important secondary prevention of stroke [1-3]. Because CEA requires temporary clamping of the internal carotid artery, there is a potential risk of brain ischemia. In case of inadequate blood flow, temporary shunting is necessary. However, as shunting carries a complication risk of 5-7% [4,5], many surgeons advocate selective shunting over routine shunting [6-8]. Large differences exist between European countries in the percentage of patients shunted; in the Netherlands, about 15% of patients are shunted [9]. A test occlusion of the carotid artery is performed to evaluate potential changes in cerebral perfusion that may warrant (temporary) shunting [10,11]. If local anesthesia is applied, brain function and potential ischemia can safely be assessed
by clinical examination [12]. However, as the majority of patients are operated on under general anesthesia, different approaches to monitoring are needed. Continuous EEG monitoring is one of the procedures most widely implemented at present [10,13,14]. During test-clamping the EEG is visually analyzed. If the EEG shows hemispheric asymmetries, diffuse slowing, or both, ischemia is present [11,15-18] and shunt placement is indicated. However, visual EEG interpretation is not always reliable. Visual analysis has limited sensitivity for asymmetries or temporal variations in patients with a low-voltage EEG or with slow EEG changes. Also, it is not uncommon to be in doubt whether or not shunting should be performed in patients showing relatively mild EEG changes. Quantification of the EEG and trend curves can assist in the decision for selective shunting and provide objective criteria [18-21].

To quantify hemispheric changes in spectral symmetry, the spatial brain symmetry index (sBSI) was proposed to assist in the visual EEG interpretation [18]. The BSI is a normalized measure of hemispheric or spatial asymmetry. Its values range from 0 to 1: perfect symmetry is indicated by 0 and maximal asymmetry by 1. The BSI remains below approximately 0.05 under physiological conditions and increases monotonically if (progressive) unilateral ischemia is present. The BSI has been shown to be a very useful parameter in our hospital to quantify hemispheric asymmetry in the EEG, both during CEA [18] and for monitoring stroke patients [22]. Furthermore, the BSI has been shown to be a sensitive feature for detecting focal seizure activity, for example in temporal lobe epilepsy [23]. In addition to the spatial BSI, the temporal BSI (tBSI) was introduced [24]. This measure quantifies diffuse changes in the EEG. The tBSI is primarily sensitive to temporal changes in spectral characteristics that are not caused by changes in spatial symmetry.
This provides us with two different indices, the sBSI for changes in spatial symmetry and the tBSI for changes in temporal symmetry. The recent report about the sBSI and tBSI was a pilot study, performed on a data set of 25 patients. Here, we evaluate these features using a large EEG set from patients who underwent a CEA in our hospital in 2000-2007.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 487–491, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. METHODS

A. Description of the BSI

The sBSI is defined as a normalized measure of interhemispheric spectral symmetry. Although the sBSI is originally defined in the frequency range 1-25 Hz, we will study two additional frequency ranges as well, i.e. 1-10 Hz and 1-15 Hz.
\[ \mathrm{sBSI} = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{M}\sum_{j=1}^{M} \frac{\left| R_{i,j} - L_{i,j} \right|}{R_{i,j} + L_{i,j}} \tag{1} \]
with R_{i,j} (L_{i,j}) the Fourier coefficient belonging to frequency i = 1, …, N of right (left) hemispheric bipolar derivations j = 1, 2, …, M. For a standard 10–20 system, M = 8. Temporal changes in the EEG are quantified by the tBSI′, defined as the normalized difference between the actual spectral characteristics and those of a baseline EEG epoch, e.g. a segment prior to test-clamping.
\[ \mathrm{tBSI}' = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{K}\sum_{j=1}^{K} \frac{\left| S_{i,j} - S_{\mathrm{ref},i,j} \right|}{S_{i,j} + S_{\mathrm{ref},i,j}} \tag{2} \]
with S_{i,j} the Fourier coefficient belonging to frequency i = 1, …, N of bipolar derivations j = 1, 2, …, K. Clearly, K = 2·M, since the total number of bipolar derivations is twice the number of bipolar derivations per hemisphere. The tBSI is now defined as
\[ \mathrm{tBSI} = \frac{2\,\mathrm{tBSI}' - \mathrm{sBSI}}{2} \tag{3} \]
effectively eliminating the contribution of changes in spatial symmetry. The factor of two in the numerator accounts for the fact that the number of channel pairs K = 2·M involved in the calculation of the tBSI′ is twice the number of pairs (M) used in the sBSI calculation. Since K is present in the denominator of Eq. (2), the tBSI′ is half as sensitive to unilateral changes as the sBSI. The tBSI as defined in Eq. (3) thus captures only EEG changes that are not due to changes in symmetry. The division by 2 normalizes the tBSI to the range [0–1], the same range as the sBSI [18]. For more details we refer to [18,24].

B. Patients

Data were analyzed retrospectively from all patients (n = 111) who underwent a CEA in this hospital between
2000 and 2007. Patient data were obtained from our digital EEG database (Neurocenter™, Clinical Science Systems, The Netherlands). The decision to shunt was based on intraoperative EEG monitoring by visual analysis by an experienced electroencephalographer. Typically, shunting was advised if the EEG showed significant changes, either unilateral, diffuse, or both.

C. EEG recording and analysis

EEGs were recorded according to the international 10–20 system using Ag/AgCl electrodes. Electrode impedance was kept below 5 kΩ to reduce polarization effects. Recording was performed using BrainLab (OSG, Belgium). The sampling frequency was set to 250 Hz. Sixteen bipolar derivations were subsequently used for the analysis, i.e. Fp2-F4, F4-C4, C4-P4, P4-O2, Fp1-F3, F3-C3, C3-P3, P3-O1, Fp2-F8, F8-T4, T4-T6, T6-O2, Fp1-F7, F7-T3, T3-T5, and T5-O1.

Analysis of the EEGs was performed using software developed in our own department that allowed analysis of subsequent 10 s epochs of the EEG. Routines were implemented in MATLAB (The MathWorks, Inc.). The power was estimated using Welch's averaged periodogram method. The signal from each bipolar derivation, containing 10 s of data (5000 data points), was divided into overlapping sections of N = 1024 points, each of which was detrended and windowed. The magnitudes of the length-N discrete FFTs of the sections were averaged to form the spectral density. Subsequently, the BSI was calculated. The baseline BSI was calculated from a 60 s epoch preceding the test-clamping procedure, using the mean value in this period; this period will hereafter be called the reference period. The post BSI was defined as the maximum value in the 180 s following clamp-on (the evaluation period). The reference and/or evaluation period were reset manually when artifacts were present in the EEG that could contribute to unreliable BSI values, as identified by visual re-analysis.
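As an illustration of Eqs. (1)-(3), the Python sketch below computes the sBSI and tBSI from Welch spectra of the bipolar derivations. This is not the authors' MATLAB code: the channel ordering (right-hemisphere derivations first), the function names, and the Welch parameters are assumptions made for illustration only.

```python
# Illustrative sketch of Eqs. (1)-(3); not the authors' implementation.
import numpy as np
from scipy.signal import welch

FS = 250            # sampling rate (Hz), as in the paper
BAND = (1.0, 10.0)  # analysis band; 1-10 Hz correlated best with visual reading

def band_spectra(epoch, fs=FS, nperseg=1024):
    """Welch spectra for each channel of a (channels x samples) epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=nperseg, detrend='linear')
    keep = (f >= BAND[0]) & (f <= BAND[1])
    return np.sqrt(pxx[:, keep])  # magnitude-like Fourier coefficients

def sbsi(right, left):
    """Eq. (1): right, left are (M x N) spectra of the M derivations
    of each hemisphere; averages the relative difference over j, then i."""
    r = (np.abs(right - left) / (right + left)).mean(axis=0)
    return r.mean()

def tbsi(spec, ref):
    """Eqs. (2)-(3): spec, ref are (K x N) spectra with K = 2*M channels.
    Assumes the first M rows are the right-hemisphere derivations and that
    the baseline epoch is approximately symmetric."""
    t_prime = (np.abs(spec - ref) / (spec + ref)).mean(axis=0).mean()
    M = spec.shape[0] // 2
    s = sbsi(spec[:M], spec[M:])
    return (2 * t_prime - s) / 2  # remove the asymmetry contribution
```

Note that Eq. (3) removes the asymmetry contribution only under the assumption that the baseline epoch itself is approximately symmetric, consistent with the physiological BSI values below 0.05 mentioned above.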
Based on visual interpretation the EEGs were classified into two groups: 1) an indication to shunt, or 2) no indication to shunt.

III. RESULTS

In the original database of n = 111, four EEGs could not be used because the clamp time was not listed. All EEGs were checked afterwards for disturbing artifacts. Eight EEGs (7%) had to be excluded from the database because they contained too many artifacts to calculate reliable BSI values. For the remaining 99 EEGs the sBSI and tBSI were calculated. In 24 EEGs (22%) the reference and/or evaluation period had to be revised manually after visual re-interpretation,
Table 1  Overview of patient characteristics. Characteristics of patients whose EEGs were used for BSI calculations (n = 99). The patients were divided into two groups, based on visual analysis of the EEGs: shunt indication or no shunt indication.

                    Shunt indication    No shunt indication   Total
Total               11                  88                    99
Male                5                   61                    66
Female              6                   27                    33
Age (years)         69.5 [51–86]        65.2 [42–83]          65.6 [42–86]
Clamp-time (min)    5.2 [0.33–25.33]    32.17 [3–84.33]       29.17 [0.33–84.33]
because of artifacts in the standard reference and/or evaluation period, which could otherwise lead to erroneous calculations of the BSI. The remaining 79 EEGs (71%) could be used without interference.

In total, EEGs from 99 patients were used to calculate the BSI. In nine of these patients there was an indication to shunt, based on intraoperative visual analysis of the EEG. In five of these nine patients shunting was technically not possible and CEA was not performed; the other four operations were finished successfully. During re-analysis of the EEGs, there were two cases where we would have suggested shunt placement, which was, however, not advised by the clinical neurophysiologist observing the EEG during the CEA. In one of these cases there was moderate diffuse slowing after test-clamping; in the other patient moderate asymmetry was observed. Fortunately, no complications occurred during or after the procedure. In 88 patients there was no indication to shunt, as no visual changes were observed.

The calculations were done in three different frequency ranges: 1-10, 1-15, and 1-25 Hz. We found that in the 1-10 Hz range the BSI values correlated most significantly with the visually based shunt indication. In all 88 patients in the non-shunting group, the ΔsBSI and the ΔtBSI did not reach values larger than 0.05 and 0.065, respectively. In all 11 patients whose EEGs showed changes such that shunting was indicated, we found that the ΔsBSI ≥ 0.05 and/or the ΔtBSI ≥ 0.065. This results in a sensitivity of 100% and a specificity of 100%, as shown in Table 2. In all EEGs where unilateral changes were present, the ΔsBSI > 0.05. In all EEGs with diffuse changes, ΔtBSI > 0.065. Also, in EEGs with both unilateral and diffuse changes, the ΔsBSI > 0.05 and ΔtBSI > 0.065.

Table 2  2-way contingency table. The patients were divided into two groups, based on visual analysis of the EEGs: shunt indication or no shunt indication. The results are compared to the ΔsBSI and ΔtBSI values. Positive ΔsBSI values are > 0.06 and positive ΔtBSI values are > 0.065.

                              Shunt indication   No shunt indication   Total
Positive ΔsBSI and/or ΔtBSI   11                 0                     11
Negative ΔsBSI and ΔtBSI      0                  88                    88
Total                         11                 88                    99

IV. DISCUSSION AND CONCLUSIONS

At present, visual assessment of the intraoperative EEG during carotid endarterectomy is the standard procedure to determine whether shunting is needed during test-clamping. Visual assessment of the EEG is, however, prone to doubt and misinterpretation, and very experienced neurophysiologists are needed to perform it. It is also unknown to what extent changes in an EEG can be accepted without consequences for the patient; to be safe, most physicians advise shunting as soon as EEG changes appear. It would therefore be very useful to have an objective quantitative measure to assist in a more reliable decision whether or not to shunt. Furthermore, a continuous quantitative trend is helpful during the operation to capture, more accurately, possible changes in cerebral perfusion.

Here, we show that the ∆-sBSI and ∆-tBSI can assist in the decision whether shunting is needed during CEA, as these measures capture any change in the EEG. These features quantify interhemispheric and diffuse changes in the EEG, respectively, providing an objective measure to help evaluate whether significant EEG changes are present and shunting should be performed. In this study we confirm the previous results with the sBSI [18,24], showing that the ∆-sBSI is a very sensitive measure for detecting hemispheric changes in spectral symmetry, with a very good correlation with the visual interpretation.
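The 100% sensitivity and specificity quoted above follow directly from the 2x2 counts in Table 2. As a minimal Python check (illustrative only, not part of the study's analysis):

```python
# Sensitivity and specificity from the Table 2 contingency counts.
def sens_spec(tp, fn, fp, tn):
    """tp/fn/fp/tn: true/false positives/negatives against the
    visual shunt indication taken as reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Table 2: all 11 shunt indications were BSI-positive,
# all 88 non-indications were BSI-negative.
sensitivity, specificity = sens_spec(tp=11, fn=0, fp=0, tn=88)
print(sensitivity, specificity)  # prints: 1.0 1.0
```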
If shunting was not advised based on visual analysis, ∆-sBSI ≤ 0.050; if ∆-sBSI ≥ 0.060, visual EEG interpretation showed significant asymmetry and shunting should strongly be considered, in agreement with the original paper [18]. The tBSI is sensitive for capturing diffuse changes: if ∆-tBSI ≤ 0.055, no significant diffuse EEG changes were present, whereas where ∆-tBSI ≥ 0.065, diffuse changes were observed. The limits of the ∆-tBSI are slightly different from those presented in the original paper [24]; the present BSI values are, however, evaluated in the frequency range 1-10 Hz, which showed the most significant correlation with the visual interpretation. Here, we show that the ∆-sBSI and ∆-tBSI can be helpful objective
measures in the operating theatre, as they correlate strongly with the visual assessment of the EEG. However, it is important to realize that although these values may be very supportive, they are not (yet) designed to replace human intraoperative monitoring: in 21% of the EEGs the reference and/or evaluation period had to be revised in order to obtain reliable BSI values, given their sensitivity to various artifacts, and another 7% had to be excluded because of too many artifacts. In 71% of the EEGs, BSI values could be determined without interference. Therefore, in general, selective shunting cannot be based on the sBSI and tBSI alone, and human interpretation is needed to disregard artifacts that the computer would otherwise process as genuine EEG changes.

In conclusion, this study extends and confirms our previous pilot results. The sBSI is designed to indicate interhemispheric changes, while the tBSI captures diffuse changes not caused by asymmetry. These measures correlate strongly with the visual assessment of the EEG, as performed by an experienced neurophysiologist. Therefore, these quantitative EEG features can contribute substantially to intraoperative EEG monitoring during carotid endarterectomies and, together with visual interpretation, lead to a more reliable decision whether shunting is necessary or not.
REFERENCES

1. Hankey G (2005) Secondary prevention of recurrent stroke. Stroke 36:218–221
2. Rothwell P, Eliasziw M, Gutnikov S, et al. (2003) Analysis of pooled data from the randomised controlled trials of endarterectomy for symptomatic carotid stenosis. Lancet 361:107–116
3. Rothwell P, Eliasziw M, Gutnikov S, et al. (2004) Endarterectomy for symptomatic carotid stenosis in relation to clinical subgroups and timing of surgery. Lancet 363:915–924
4. North American Symptomatic Carotid Endarterectomy Trial Collaborators (1991) Beneficial effect of carotid endarterectomy in symptomatic patients with high-grade carotid stenosis. N Engl J Med 325:445–453
5. European Carotid Surgery Trialists' Collaborative Group (1998) Randomised trial of endarterectomy for recently symptomatic carotid stenosis: final results of the MRC European Carotid Surgery Trial (ECST). Lancet 351:1379–1387
6. Salvian A, Taylor D, Hsiang Y, et al. (1997) Selective shunt with EEG monitoring is safer than routine shunting for carotid endarterectomy. Cardiovasc Surg 5:481–485
7. Schneider J, Droste J, Schindler N, et al. (2002) Carotid endarterectomy with routine electroencephalography and selective shunting: influence of contralateral internal carotid artery occlusion and utility in prevention of perioperative strokes. J Vasc Surg 35:1114–1122
8. Kalkman C (2004) Con: routine shunting is not the optimal management of the patient undergoing carotid endarterectomy, but neither is neuromonitoring. J Cardiothorac Vasc Anesth 18:381–383
9. Bond R, Warlow C, Naylor A, et al., on behalf of the European Carotid Surgery Trialists' Collaborative Group (2002) Variation in surgical and anaesthetic technique and associations with operative risk in the European Carotid Surgery Trial: implications for trials of ancillary techniques. Eur J Vasc Endovasc Surg 23:117–126
10. Konstadinos A, Loubser P, Mizrahi E, et al. (1997) Continuous EEG monitoring and selective shunting reduces neurologic morbidity rates in carotid endarterectomy. J Vasc Surg 25:620–628
11. Nuwer M, Ahn S, Jordan S, et al. (1991) EEG monitoring in carotid endarterectomy. Arch Surg 126:115
12. Shah D, Darling R, Chang B, et al. (1994) Carotid endarterectomy in awake patients: its safety, acceptability and outcome. J Vasc Surg 19:1015–1020
13. McFarland H, Pinkerton J, Frye D (1988) Continuous electroencephalographic monitoring during carotid endarterectomy. J Cardiovasc Surg 29:12–8
14. Minicucci F, Cursi M, Fornara C, et al. (2000) Computer-assisted EEG monitoring during carotid endarterectomy. J Clin Neurophysiol 17:101–107
15. Visser G, Wieneke G, van Huffelen A (1999) Carotid endarterectomy monitoring: patterns of spectral EEG changes due to carotid artery clamping. Clin Neurophysiol 110:286–294
16. Vriens E, Wieneke G, van Huffelen A, et al. (2000) Increase in alpha rhythm frequency after carotid endarterectomy. Clin Neurophysiol 111:1505–1513
17. Pinkerton J (2002) EEG as a criterion for shunt need in carotid endarterectomy. Ann Vasc Surg 16:756–761
18. Van Putten M, Peters J, Mulder S, et al. (2004) A brain symmetry index (BSI) for online EEG monitoring in carotid endarterectomy. Clin Neurophysiol 115:1189–1194
19. Hanowell L, Soriano S, Bennett H (1992) EEG power changes are more sensitive than spectral edge frequency variation for detection of cerebral ischemia during carotid artery surgery: a prospective assessment of processed EEG monitoring. J Cardiothorac Vasc Anesth 6:292–294
20. Laman D, van der Reijden C, Wieneke G, et al. (2001) EEG evidence for shunt requirement during carotid endarterectomy: optimal EEG derivations with respect to frequency bands and anesthetic regimen. J Clin Neurophysiol 18:353–363
21. Laman D, Wieneke G, van Duijn H, et al. (2005) QEEG changes during carotid clamping in carotid endarterectomy: spectral edge frequency parameters and relative band power parameters. J Clin Neurophysiol 22:244–252
22. Van Putten M, Tavy D (2004) Continuous quantitative EEG monitoring in hemispheric stroke patients using the brain symmetry index. Stroke 35:2489–2492
23. Van Putten M, Kind T, Visser F, et al. (2005) Detecting temporal lobe seizures from scalp EEG recordings: a comparison of various features. Clin Neurophysiol 116:2480–2489
24. Van Putten M (2006) Extended BSI for continuous EEG monitoring in carotid endarterectomy. Clin Neurophysiol 117:2661–2666
Author: W.A. Hofstra
Institute: Department of Neurology and Clinical Neurophysiology, Medisch Spectrum Twente
Street: P.O. Box, 7500 KA
City: Enschede
Country: The Netherlands
Email: [email protected]
Comparison of methods and co-registration maps of EEG and fMRI in Occipital Lobe Epilepsy

M. Forjaz Secca1,2, A. Leal3,4, J. Cabral1 and H. Fernandes1
1 Cefitec, Dep. of Physics, Universidade Nova de Lisboa, Portugal
2 Ressonância Magnética de Caselas, Lisboa, Portugal
3 Department of Neurophysiology, Hospital Júlio de Matos, Lisboa, Portugal
4 Department of Pediatric Neurology, Hospital Dona Estefânia, Lisboa, Portugal

Abstract— Clinically, childhood occipital lobe epilepsy (OLE) manifests itself with distinct syndromes. Traditional EEG recordings have not been able to overcome the difficulty of correlating the ictal clinical symptoms to onset in particular areas of the occipital lobes. To understand these syndromes it is important to map the epileptogenic cortical regions in OLE with more precision. We studied three idiopathic childhood OLE patients with EEG source analysis and with simultaneous acquisition of EEG and fMRI, to map the BOLD effect associated with EEG spikes. The spatial overlap between the EEG and BOLD results was not very good, but the fMRI suggested localizations more consistent with the ictal clinical manifestations of each type of epileptic syndrome. Our first results show that associating the BOLD effect with interictal spikes maps the epileptogenic areas to localizations different from those calculated from EEG sources, and that different EEG/fMRI processing methods yield results that differ to some extent. It is therefore very important to compare the different methods of processing the localization of activation and to develop a good methodology for obtaining co-registration maps of high-resolution EEG with BOLD localizations.

Keywords— fMRI, BOLD, EEG, Epilepsy.
I. INTRODUCTION

EEG is useful in localizing epileptic activity to the occipital lobes in childhood occipital lobe epilepsies (OLEs); however, OLE manifests itself with different syndromes, and EEG localization is not syndrome-specific, with abnormalities very often also involving the parietal and temporal areas [1]. The topography of spikes rarely points to the particular region in the occipital lobes originating the seizures, and the few attempts at source analysis did not improve the generally poor electroclinical correlation [2]. In the case of idiopathic epilepsies, where brain imaging is normal, the particular regions of seizure onset remain unknown, despite the consistent clinical picture of each syndrome. The detection of the BOLD effect associated with the occurrence of interictal spikes in simultaneous EEG/fMRI recordings offers a promising way to detect epileptic neuronal dysfunction with high spatial resolution [3]. Some studies with
this method have been done in symptomatic OLE [3–5], demonstrating significant posterior activation. No work has been done in idiopathic OLE, a group of epilepsies where clues to the particular cortical area of onset are lacking. The goals of this work are to use the BOLD effect associated with the occurrence of interictal spike activity in different types of childhood OLE to improve the electroclinical correlation, and to compare the degree of concordance of this technique with conventional methods of EEG source analysis.

II. MATERIALS AND METHODS

We studied three patients with a diagnosis of idiopathic OLE, submitting them to a 60-min EEG recording outside the scanner, including a sleep period, with a cap of 36 AgCl electrodes (Fp1/2, F3/4, FC3/4, C3/4, CP3/4, P3/4, PO1/2, O1/2, F7/8, FT7/8, T3/4, TP7/8, T5/6, FT9/10, A1/2, Fz, FCz, Cz, CPz, Pz, Oz). We used a sampling rate of 256 Hz and filters of 0.5–70 Hz, and performed intermittent photic stimulation at the end of the recording. Later in the same day, a session of functional MRI was performed while simultaneously recording the EEG (19 electrodes at standard 10–20 positions). Each patient demonstrated a single, topographically stable paroxysm type; this was the main neurophysiological criterion for selection of patients for this study. Informed consent was obtained from the parents of the patients.

Source analysis of the EEG was done on spikes detected visually in the recordings obtained outside the scanner in all patients, and also inside for patient 1. The EEG was high-pass filtered at 3 Hz (zero phase shift filter with 24 dB/oct) and spikes with a good signal-to-noise ratio were aligned by peak amplitude to produce an average spike. For patients 2 and 3, not enough spikes could be recorded inside the scanner to produce stable dipole solutions.
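The peak-alignment averaging step described above can be sketched as follows. This Python fragment is a hypothetical illustration, not the authors' pipeline; the function and parameter names are invented.

```python
# Illustrative sketch: align visually detected spikes on their peak
# sample and average them to produce the average spike used later
# for dipole fitting. Names and window length are assumptions.
import numpy as np

def average_spike(eeg, peak_samples, half_win):
    """eeg: (channels x samples) array; peak_samples: spike peak indices
    on the reference channel; half_win: samples kept on each side."""
    segs = []
    for p in peak_samples:
        # keep only spikes whose window lies fully inside the recording
        if p - half_win >= 0 and p + half_win < eeg.shape[1]:
            segs.append(eeg[:, p - half_win:p + half_win + 1])
    return np.mean(segs, axis=0)  # (channels x 2*half_win+1)
```

Averaging time-locked segments in this way improves the signal-to-noise ratio of the spike roughly with the square root of the number of spikes, which is why stable dipole solutions could not be obtained for patients 2 and 3 from the few spikes recorded inside the scanner.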
The sources were obtained from instantaneous moving dipoles at the peak of the averaged spikes (n = 43, 19, and 21 for patients 1, 2, and 3), with a standard three-layer Boundary Element Model (BEM) volume conductor model (conductances of
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 505–508, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
0.33, 0.0042, and 0.33 S/m for scalp, bone, and brain), provided in the Source2 software package (Neuroscan, El Paso, TX, U.S.A.). Standard electrode positions were also used. The confidence ellipsoids for the dipole positions represent the 1 SD interval and are directly proportional to the noise estimate in the averaged spikes [6], evaluated as 5 μV, 6 μV, and 5 μV for patients 1, 2, and 3.

The EEG/fMRI consisted of the acquisition of blocks of 100 brain volumes, each made of 16 EPI images (in-plane resolution 3.75 mm, slice thickness 7 mm, no spacing; echo time 50 ms; flip angle 90°) obtained with a TR = 3 s, corresponding to periods of 5 min of continuous and simultaneous monitoring. Four to six blocks were obtained per patient, providing 20 min of simultaneous monitoring for patient 1, 30 min for patient 2, and 20 min for patient 3. A T1-weighted anatomic brain sequence (in-plane resolution 0.94 mm, slice thickness 1.3 mm) was obtained in the same session. Images were acquired in a 1.5 T GE CVi/NVi scanner, while the EEG was recorded through a set of AgCl electrodes connected, through carbon fiber wires, to an amplifier located outside the scanner room (MagLink, Neuroscan, El Paso, TX, U.S.A.). The cap did not produce detectable artifacts in the MRI sequences, so these could be processed without any special correction. The EEG was corrected offline for artifacts induced by the magnetic field and the rapidly changing imaging gradients using commercial software (Scan 4.3.2, Neuroscan). The times of occurrence of spikes were determined by visual inspection and used to classify the acquired image volumes, resulting in sequences of events of interest used to build an event-related paradigm. To model the hemodynamic response a standard Gamma function with derivatives was used [7]. Sixty, three, and nine spikes were analyzed, respectively, for patients 1, 2, and 3.
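The event-related modeling step can be sketched as follows. This Python fragment is an assumption for illustration only: the actual analysis used FSL, and the HRF parameters here are generic defaults, not taken from the paper. It turns spike times into a stick function on the TR grid and convolves it with a gamma-shaped HRF to obtain a BOLD regressor.

```python
# Illustrative sketch of building an event-related BOLD regressor from
# spike times; not the FSL implementation used by the authors.
import numpy as np
from scipy.stats import gamma

TR = 3.0      # volume repetition time (s), as in the paper
N_VOLS = 100  # volumes per acquisition block

def gamma_hrf(tr=TR, duration=24.0, shape=6.0, scale=0.9):
    """Gamma-shaped HRF sampled on the TR grid; shape/scale are
    illustrative defaults, normalized to unit area."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, a=shape, scale=scale)
    return h / h.sum()

def spike_regressor(spike_times_s, n_vols=N_VOLS, tr=TR):
    """Stick function at the volumes containing spikes, convolved
    with the HRF and truncated to the block length."""
    sticks = np.zeros(n_vols)
    for t in spike_times_s:
        idx = int(t // tr)
        if 0 <= idx < n_vols:
            sticks[idx] = 1.0
    return np.convolve(sticks, gamma_hrf(), mode='full')[:n_vols]
```

In a GLM analysis such a regressor, together with its temporal derivative, would be fitted to each voxel's time series to produce the z-statistic images described below.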
The EPI sequences were corrected for movement and slice acquisition time and smoothed with a Gaussian kernel of FWHM 8 mm. A local autocorrelation correction was used [8] and z-statistic images were generated. Correction for the multiple comparison problem was done using a cluster threshold of p = 0.05. The preprocessing and paradigm-related analysis of the fMRI was performed using the FSL software package [9]. Representation of dipoles on the individual brain anatomy was performed by adjusting the fiducial points (nasion, preauricular points, inion, and vertex) of the BEM model on the individual 3D T1 MRI. Representation on the inflated cortex was done with the FreeSurfer software package [10].

III. RESULTS

The recognition of interictal spikes was possible with the EEGs obtained inside the scanner (Figs. 1C and 2C). The
number of these spikes was clearly reduced when compared to the recordings outside the scanner. We present here the results of two of our patients as an example.

For patient 1 the EEG background is normal and high-amplitude spikes are present over the right occipital-parietal region (Fig. 1A), most often occurring rhythmically at 1–3 Hz and with a clear reduction while recording with the eyes open. The spike morphology and topography are monotonous and demonstrate a clear bipolar scalp distribution of the electrical potential (Fig. 1B). The source analysis reveals dipoles located over the cortical right parietal-occipital region (Fig. 1B), suggesting a restricted focal epileptogenic area. The source obtained from the EEG inside the scanner shows a similar localization but suffers from a worse signal-to-noise ratio (larger confidence ellipsoid) and deficient spatial sampling over right temporal areas (solution shifted to the left) (Fig. 1C). The BOLD activation associated with the occurrence of spikes involves the more posterior and medial areas of both parietal lobes, with small clusters in the neighborhood of the calcarine fissure (Figs. 1D,E). No significant BOLD activation is apparent in the cortex adjacent to the source analysis dipoles. Deactivation analysis produces several clusters, the most consistent one over the right parietal area, in a localization similar to that of the EEG source analysis.

Patient 3 presents spikes and sharp waves with maximum amplitude over the occipital electrodes (Fig. 2A). The average paroxysm shows a dipolar potential field over the scalp, and the dipole solution locates the generators in the medial occipital lobe (Fig. 2B), but the spatial stability of the solution is low (large confidence ellipsoid) due to the poor signal-to-noise ratio. The BOLD activations are located over the posterior and medial occipital cortex and also over the basal ganglia in the right hemisphere (Figs. 2D,E).
The ones localized in the occipital lobe are more superficial than the dipole solutions and point to different cortical areas as the source of the scalp paroxysms. The BOLD deactivation occurs in the left frontal lobe.

IV. CONCLUSIONS

We observed a clear discrepancy between the localization of epileptogenic cortical areas suggested by the EEG source analysis and the BOLD activations associated with the paroxysms. This occurs despite the fact that the spatial sampling of the EEG over posterior areas was increased [2], resulting in five electrodes over occipital areas (compared with two for the 10–20 system) and an even more significant increase over parietal and temporal areas (Figs. 1B and 2B).
Fig. 1 (A) Sample of selected EEG channels demonstrating the interictal spikes in patient 1 over the posterior right hemisphere (the scale bar represents 1 s horizontally and 200 μV vertically) and the average spike at right (n = 43). Above right, map of the electrical potential at the peak of an averaged spike (electrodes are green, nose upward; negative potentials are blue and positive ones red). (B) Moving dipole solution around the peak of an averaged spike outside (red) and inside (blue) the scanner, with dispersion represented by confidence ellipsoids (1 SD); (C) Raw spikes inside the scanner after artifact correction, with average (n = 15) and potential map on the scalp ( represents bad channels); (D) BOLD regions of significant activation (red) and deactivation (blue) represented over the patient's high-resolution T1 MRI; (E) Activation regions represented on the inflated white/gray matter interface. The calcarine and parieto-occipital sulci are indicated by white and yellow arrows, respectively.

Fig. 2 (A) Sample of EEG with paroxysms over the posterior electrodes. Right, average spike (n = 21); (B) Scalp potential map with bipolar distribution at the peak of an average spike. Below, dipole solution at the spike peak, located near the primary visual areas but with a large error ellipsoid; (C) Raw EEG inside the scanner with posterior spikes; (D) BOLD activation (red) associated with the spikes, in the posterior areas of both hemispheres and in the basal ganglia on the right. Deactivation over the frontal lobe is shown in blue; (E) Activation clusters over the parieto-occipital region. The activation in the lower right frontal lobe results from artifactual projection of the basal ganglia activation onto the nearest surface (scale bars, colors, and arrows as in Fig. 1).

The fact that the EEG used for source analysis was obtained outside the scanner room raises the possibility that this discrepancy might be due to different paroxysm types being analyzed by the two methods. This is unlikely because patients were selected for the study with the requirement that they had topographically stable, single paroxysm types in previous EEGs; the data collection was done on the same day; and the paroxysms in the EEG obtained inside and outside the scanner room showed a similar topography (Figs. 1A–C and 2A–C). Dipole localization was clearly discrepant in relation to the BOLD activations, while there was concordance with the largest cluster of deactivation. The overall concordance of BOLD deactivation with the source analysis is no better than for activation, a point already mentioned in the literature [12]. We obtained BOLD activations in 100% of our patients, which is significantly better than the results of larger series, where values on the order of 40–60% are reported [3,13] for patients with focal epilepsy. A study in a population with generalized idiopathic epilepsy [11] improves this value to 93%, which, together with our results, suggests that the method may be
more efficient in idiopathic epilepsies than in symptomatic ones. Overall, our data show different BOLD activations in the occipital lobes in different syndromes of idiopathic OLE. The epileptogenic regions identified by this method show important discrepancies with those suggested by dipole analysis, in line with data from the literature [12,14], but, as a possible representation of the epileptogenic area, they provide a more satisfactory explanation of the clinical ictal manifestations reported in the literature. Our first clinical results have already been published [15]. Since our experimental analysis also shows that different EEG/fMRI processing methods yield somewhat different results, it is very important to compare the different methods of processing and localizing activation, and to develop a good methodology for obtaining co-registration maps of high-resolution EEG with BOLD localizations; this is what we are working on at present. The EEG/fMRI technique is a powerful method for the study of occipital lobe epilepsies and can provide a way to build more detailed models integrating clinical, structural, electrical, and vascular data.
ACKNOWLEDGMENT

The authors would like to thank Cristina Menezes, Elisa Vilar, Rita Pinto, Elizabete Lage, and Bruno Mourão for their technical support. The work was supported by a grant for Research in Epilepsy from Tecnifar SA and by projects Topo3D (POSI/CPS/39758/2001) and EpilBI (POSC/EEACPS/60977/2004) from FCT.
REFERENCES

1. Taylor I, Scheffer I, Berkovic S. Occipital epilepsies: identification of specific and newly recognized syndromes. Brain 2003;126:753–69.
2. Van der Meij W, Van der Dussen, Huffelen V, et al. Dipole source analysis may differentiate benign focal epilepsy of childhood with occipital paroxysms from symptomatic occipital lobe epilepsy. Brain Topogr 1997;10:115–20.
3. Al-Asmi A, Bénar C, Gross D, et al. fMRI activation in continuous and spike-triggered EEG-fMRI studies of epileptic spikes. Epilepsia 2003;44:1328–39.
4. Bénar C, Gross D, Wang Y, et al. The BOLD response to interictal epileptiform discharges. Neuroimage 2002;17:1182–92.
5. Lazeyras F, Blanke O, Perrig S, et al. EEG-triggered functional MRI in patients with pharmacoresistant epilepsy. J Magn Reson Imaging 2000;12:177–85.
6. Fuchs M, Wagner M, Kastner J. Confidence limits of dipole source reconstruction results. Clin Neurophysiol 2004;115:1442–51.
7. Huettel S, McKeown J, Song A, et al. Linking hemodynamic and electrophysiological measures of brain activity: evidence from functional MRI and intracranial field potentials. Cereb Cortex 2004;4:165–73.
8. Worsley KJ, Evans AC, Marrett S, Neelin P. A three-dimensional statistical analysis for CBF activation studies in human brain. J Cereb Blood Flow Metab 1992;12:900–18.
9. Smith S, Jenkinson M, Woolrich M, et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 2004;23(suppl 1):208–19.
10. Fischl B, Sereno MI, Dale AM. Cortical surface-based analysis II: inflation, flattening, and a surface-based coordinate system. Neuroimage 1999;9:195–207.
11. Aghakhani Y, Bagshaw AP, Bénar CG, et al. fMRI activation during spike and wave discharges in idiopathic epilepsy. Brain 2004;127:1127–44.
12. Bagshaw A, Kobayashi E, Dubeau F, et al. Correspondence between EEG-fMRI and EEG dipole localization of interictal discharges in focal epilepsy. Neuroimage 2006;30:417–25.
13. Bagshaw A, Aghakhani Y, Bénar C, et al. EEG-fMRI of focal epileptic spikes: analysis with multiple haemodynamic functions and comparison with gadolinium-enhanced MR angiograms. Hum Brain Mapp 2004;22:179–92.
14. Lemieux L, Krakow K, Fish DR. Comparison of spike-triggered functional MRI BOLD activation and EEG dipole model localization. Neuroimage 2001;14:1097–104.
15. Leal A, Dias A, Vieira JP, Secca M, Jordao C. The BOLD effect of interictal spike activity in childhood occipital lobe epilepsy. Epilepsia 2006;47(9):1536–42.
Address of the corresponding author:
Author: Mario Forjaz Secca
Institute: Cefitec, Physics Department, Universidade Nova de Lisboa
Street: Quinta da Torre
City: 2829-516 Caparica
Country: Portugal
Email:
[email protected]
Cross-correlation based methods for estimating the functional connectivity in populations of cortical neurons

A.N. Ide1,4, M. Chiappalone1,2, L. Berdondini3, V. Sanguineti4, S. Martinoia1

1 Neuroengineering and Bio-nano Technology group - NBT, DIBE, University of Genova, Genova, Italy
2 Italian Institute of Technology – IIT, Unit of Neuroscience and Brain Technology, Genova, Italy
3 Sensors, Actuators and Microsystems Laboratory, IMT, University of Neuchâtel, Neuchâtel, Switzerland
4 Neurolab, DIST, University of Genova, Genova, Italy

Abstract— In this paper we estimate the functional connectivity of in-vitro cultured cortical neurons plated on high-density passive microelectrode arrays. We compare standard and partial correlation methods in order to find the main pathways between two electrodes. In summary, while standard correlation considers just pairs of electrodes, giving a general overview of the network, partial correlation can give more detail about the connectivity map thanks to the cancellation of indirect connections.

Keywords— cortical network, microelectrode arrays, cross-correlation, functional connectivity.
I. INTRODUCTION

In vitro cultured neuronal networks coupled to microelectrode arrays (MEAs) constitute a valuable experimental model for studying their electrophysiological properties. After a few days in culture, neurons start to connect to each other through functionally active synapses, forming a random network and displaying spontaneous activity. Identification of the causal relationships between pairs of neurons is important in the study of synaptic interactions within the nervous system at the population level. The simplest approach uses the cross-correlation function (and its variants) [1][5][8][9] between pairs of spike trains. However, cross-correlograms cannot tell whether observed peaks or troughs in the correlation function derive from direct or indirect connections, or result from a common input. This limitation was overcome with the notion of partial coherence [3], where, in assessing the dependence between two spike trains, the effects of the activity of all other spike trains (assumed to be additive) are removed [14]. Simulation results showed that for up to 30 neurons, partial coherence works well in distinguishing between direct and indirect connections. Eichler et al. [6] extended the partial coherence model to the time domain. Their method uses a Scaled Partial Covariance Density (SPCD) function, which also provides information on the direction and type of the interaction, and was investigated with both synthetic data and spike trains recorded from the rat spinal cord (10 neurons recorded simultaneously for 100 s). In general, simulation and (less often) experimental results suggest that partial coherence methods work well in distinguishing between direct and indirect connections in small networks of neurons, and no systematic studies have been performed on real data. In this paper we applied standard and partial correlation to spike trains recorded from high-density microelectrode arrays to estimate the functional connectivity of the network.

II. MATERIALS AND METHODS
A. Dissociated cortical networks

Dissociated neuronal cultures were obtained from the cerebral cortices of Sprague-Dawley rats at embryonic day 18 (E18). Cells were then plated on High-Density Passive Micro-Electrode Arrays (HDP-MEAs) fabricated on Pyrex 7740 substrates, with a 30 μm electrode diameter and 10 μm electrode pitch; the chip dimension is 13.5 x 13.5 mm2, adapted from a previously developed technology [2]. This first generation of high-density MEAs is composed of 60 electrodes, divided into 4 clusters of 15 electrodes each (Fig. 1).

Fig.1: HDP-MEA divided in 4 clusters of 15 electrodes each.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 525–528, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

B. Spike Detection

Extracellularly recorded spikes are usually embedded in biological and thermal noise of about 10–15 μV peak-to-peak, and they can be detected using a threshold-based algorithm [4][11]. Briefly, a sliding window, sized to contain at most one single-unit spike (i.e., 2 ms) [16][17], is shifted over the signal; when the difference between the maximum and the minimum within the window exceeds the peak-to-peak threshold, a spike is found and its time-stamp, together with its amplitude, is stored. The threshold, calculated as a multiple of the standard deviation (8*SD) of the biological noise, is defined separately for each recording channel [10].

C. Cross-correlation methods

Let us consider a neuronal population V and two specific neurons a, b ∈ V. One simple indicator of functional connectivity between a and b is the standard correlation R_ab(τ) of their associated spike trains x_a(t), x_b(t). The cross-correlation function measures the frequency at which one cell fires as a function of time relative to the firing of a spike in another cell. Mathematically, the correlation function represents the average value of the product of two random processes which, in our case, are spike trains [12]. The correlation function then reduces to a simple probability R_ab(τ) of observing a spike in one train x_b(t) at time t + τ, given that there is a spike in a second train x_a(t) at time t [13]. Its Fourier transform, the cross-spectral density S_ab(ω), can be used to estimate the spectral coherence |C_ab(ω)|^2, where
C_ab(ω) = S_ab(ω) / √(S_aa(ω)·S_bb(ω))    (1)
S_aa(ω) and S_bb(ω) are the power spectra of x_a(t) and x_b(t). Partialization [3] consists of subtracting from S_ab(ω) the effects of all (possibly multivariate) spike trains x_C(t), where C = V − {a, b}, so that:

S_ab|C(ω) = S_ab(ω) − S_aC(ω) S_CC(ω)^−1 S_Cb(ω)    (2)
Correspondingly, its inverse Fourier transform, R_ab|C(τ), is the partial covariance density, and consequently the partial coherence function |C_ab|C(ω)|^2 can be defined as well. It has been shown [6][7] that the partial spectral coherence can be efficiently computed by inversion of the spectral matrix S(ω) of the whole set of nodes. If G(ω) = S(ω)^−1 is such an inverse, then:

C_ab|C(ω) = − G_ab(ω) / √(G_aa(ω)·G_bb(ω))    (3)

and, correspondingly,

S_ab|C(ω) = − G_ab(ω) / [(1 − |C_ab|C(ω)|^2) · G_aa(ω)·G_bb(ω)]    (4)

To assess functional connectivity we actually take a scaled version of R_ab|C(τ), defined as:

s_ab|C(τ) = R_ab|C(τ) / (r_a·r_b)    (5)

where r_a and r_b are the values of the maximum peaks of the autocorrelation functions, and s_ab|C(τ) is the so-called Scaled Partial Correlation Density (SPCD). Partialization cancels from the standard cross-correlation the effects of indirect connections and of common inputs. It also makes it possible to distinguish the direction and nature of the interaction (excitatory, inhibitory). However, if two connections converge on the same node, after partialization the two input nodes become correlated (the marrying-parents effect) [6].

D. Detectability Index (DI)

A connection between two electrodes is detected based on a well-known measure called the Coincidence Index (CI) [10][15]. Basically, the CI is the ratio of the integral of a (partial) cross-correlation function over a specified area under the highest peak R_ab(τ_max) near zero to the integral of the total area. We used just the area around the peak, without the normalization to the total area (Eq. 6); the direction of the connection is given by the latency of the peak from zero:

DI_a,b = Σ_{τ = τ_max − Δt/2}^{τ_max + Δt/2} R_ab(τ)    (6)

where Δt = 300 μs.

III. RESULTS
We estimated the functional connectivity in populations of neurons recorded from HDP-MEAs by means of cross-correlation methods. Fig. 2 shows spontaneous activity recorded from the HDP-MEA, divided into 4 different clusters of electrodes. Clusters B and D present the highest neural activity, while activity is very low in cluster C.
Fig. 2. Spontaneous activity recorded from cortical networks plated on the 4-cluster high-density device. The time window is 121 s. Each panel includes a raster plot with 15 spike trains.
Fig. 4. Connectivity maps based on the DI, showing the standard (left) and partial (right) correlation for clusters C (bottom) and D (top).
Fig. 3. Comparison between standard (solid) and partial (dashed) correlation between two electrodes.

In order to estimate the functional connectivity of the network, we present a comparison between standard and partial correlation, applied to all 15 electrodes inside each cluster. Cross-correlation based methods measure the direction (peak latency from zero) of a possible connection between a pair of electrodes, while partial correlation not only gives the direction but also eliminates indirect connections and yields the real strength of the coupling between two channels. Fig. 3 (left) shows an example of a direct connection (both correlations have almost the same amplitude), and Fig. 3 (right) shows a case where the strength of the connection comes from indirect contributions (the peak decreases almost to zero). After calculating the correlation between all electrodes inside each cluster, we calculated the DI from either standard or partial correlation. In Fig. 4 we can notice that connectivity is much lower with partial correlation, once indirect connections have been eliminated. The connectivity map is normalized between a minimum of zero and a maximum of 1 (auto-correlation), and the other values correspond to the DI of each pair of electrodes. Cluster D presents a higher number of connections than cluster C, as expected (see the neural activity in the raster plots in Fig. 2). It is also important to notice that we calculated the direction of the flow, which is why the map is not completely symmetric. Peaks found at a location different from zero give the direction, while those at zero can be the effect of a coupling coming from a common input [6].

IV. CONCLUSIONS

Neuronal cultures produce a rich pattern of electrophysiological activity that changes and matures over the life of the network. One of those changes reflects its development and, consequently, the formation of new synaptic connections. With our MEA devices we do not propose to detect the connections of each neuron but, rather, how the neurons coupled to the electrodes are functionally connected. Cross-correlation based methods are useful tools to estimate functional connectivity at the population level and to quantify changes after external stimulation. Standard cross-correlation is applied just between pairs of electrodes and does not consider the entire network; it is the simplest method for inferring functional connectivity and obtaining a general overview of the network map. Partial correlation, on the other hand, gives more detail about the connectivity. Both methods can be used to study the development of the network or changes in network behavior after electrical and/or chemical stimulation. However, partialization presents some limitations as the number of neurons and the connectivity of the network increase: depending on how big the network is, partial correlation can show unreliable results or even break down in the identification of synaptic connections.
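As a concrete companion to the comparison above, the two measures, the standard cross-correlogram and the partial coherence obtained by inverting the spectral matrix (Eq. 3), can be sketched in Python/numpy. This is an illustrative sketch under simplifying assumptions (binned spike trains, circular lags, a crude segment-averaged spectral estimate, and a small diagonal ridge added before inversion), not the authors' implementation:

```python
import numpy as np

def cross_correlogram(xa, xb, max_lag):
    """Standard cross-correlation R_ab(tau) of two binned spike trains:
    for each lag tau, count coincidences of spikes in xb at t + tau with
    spikes in xa at t, normalized by the number of reference spikes
    (circular shifts are used here for simplicity)."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.dot(xa, np.roll(xb, -lag)) for lag in lags])
    return lags, r / max(xa.sum(), 1.0)

def partial_coherence(X, nfft=256, ridge=1e-6):
    """Partial coherence for all channel pairs via inversion of the
    spectral matrix: C_ab|C(w) = -G_ab(w)/sqrt(G_aa(w) G_bb(w)) with
    G(w) = S(w)^-1 (Eq. 3). S(w) is estimated by averaging FFT outer
    products over non-overlapping segments."""
    n_ch, n_bins = X.shape
    n_seg = n_bins // nfft
    S = np.zeros((nfft, n_ch, n_ch), dtype=complex)
    for s in range(n_seg):
        seg = X[:, s * nfft:(s + 1) * nfft]
        F = np.fft.fft(seg - seg.mean(axis=1, keepdims=True), axis=1)
        S += np.einsum('aw,bw->wab', F, F.conj())  # F_a(w) F_b(w)*
    S /= max(n_seg, 1)
    # a small diagonal ridge keeps the per-frequency inversion stable
    G = np.linalg.inv(S + ridge * np.eye(n_ch))
    d = np.sqrt(np.abs(G.diagonal(axis1=1, axis2=2)))
    return -G / (d[:, :, None] * d[:, None, :])
```

A time-domain SPCD and the DI of Eq. (6) would follow by inverse-transforming the partialized spectra and integrating around the main peak; for real recordings, tapered spectral estimators and significance bounds are advisable before reading off connections.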
ACKNOWLEDGMENT

This work is partly supported by the IDEA project (EU NEST Adventure Contract No. 516432). A.N. Ide is supported by the Brazilian Ministry of Education - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
REFERENCES

1. Aertsen AM, Gerstein GL. Evaluation of neuronal connectivity: sensitivity of crosscorrelation. Brain Res 1985;340(2):341–354.
2. Berdondini L, van der Wal PD, Guenat O, de Rooij NF, Koudelka-Hep M, Seitz P, Kaufmann R, Metzler P, Blanc N, Rohr S. High-density electrode array for imaging in vitro electrophysiological activity. Biosens Bioelectron 2005;21:167–74.
3. Brillinger DR, Bryant HL, Segundo JP. Identification of synaptic interactions. Biol Cybern 1976;22(4):213–228.
4. Chiappalone M, Vato A, Tedesco MT, Marcoli M, Davide FA, Martinoia S. Networks of neurons coupled to microelectrode arrays: a neuronal sensory system for pharmacological applications. Biosens Bioelectron 2003;18:627–634.
5. Chiappalone M, Bove M, Vato A, Tedesco M, Martinoia S. Dissociated cortical networks show spontaneously correlated activity patterns during in vitro development. Brain Res 2006;1093(1):41–53.
6. Eichler M, Dahlhaus R, Sandkuhler J. Partial correlation analysis for the identification of synaptic connections. Biol Cybern 2003;89(4):289–302.
7. Dahlhaus R. Graphical interaction models for multivariate time series. Metrika 2000;51(2):157–172.
8. Eytan D, Minerbi A, Ziv N, Marom S. Dopamine-induced dispersion of correlations between action potentials in networks of cortical neurons. J Neurophysiol 2004;92:1817–1824.
9. le Feber J, Rutten WLC, Stegenga J, Wolters PS, Ramakers GJ, van Pelt J. Cultured cortical networks described by conditional firing probabilities. In: 5th International Meeting on Substrate-Integrated Micro Electrode Arrays, 4–7 Jul 2006, Reutlingen, Germany, pp. 67–70.
10. Jimbo Y, Tateno Y, Robinson HPC. Simultaneous induction of pathway-specific potentiation and depression in networks of cortical neurons. Biophys J 1999;76:670–678.
11. Perkel DH, Gerstein GL, Moore GP. Neuronal spike trains and stochastic point processes: I. The single spike train. Biophys J 1967;7:391–418.
12. Knox CK. Detection of neuronal interactions using correlation analysis. Trends Neurosci 1981;4:222–225.
13. Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W. Spikes: Exploring the Neural Code. The MIT Press, Cambridge, MA, 1997.
14. Rosenberg JR, Amjad AM, Breeze P, Brillinger DR, Halliday DM. The Fourier approach to the identification of functional coupling between neuronal spike trains. Prog Biophys Mol Biol 1989;53(1):1–31.
15. Tateno T, Jimbo Y. Activity-dependent enhancement in the reliability of correlated spike timings in cultured cortical neurons. Biol Cybern 1999;80:45–55.
16. Tscherter A, Heuschkel MO, Renaud P, Streit J. Spatiotemporal characterization of rhythmic activity in spinal cord slice cultures. Eur J Neurosci 2001;14:179–190.
17. Van Pelt J, Corner MA, Wolters PS, Rutten WLC, Ramakers GJA. Long-term stability and developmental changes in spontaneous network burst firing patterns in dissociated rat cerebral cortex cell cultures on multi-electrode arrays. Neurosci Lett 2004;361:86–89.

Address of the corresponding author:
Author: Alessandro Noriaki Ide
Institute 1: NBT-group, DIBE, University of Genova
Street 1: Via Opera Pia 11a
Institute 2: Neurolab, DIST, University of Genova
Street 2: Via Opera Pia 13
City: Genova
Country: Italy
Email:
[email protected]
EEG Peak Alpha Frequency as an Indicator for Physical Fatigue

S.C. Ng, P. Raveendran

Department of Biomedical Engineering, University of Malaya, Kuala Lumpur, Malaysia
Abstract— The peak alpha frequency (PAF) has been associated with mental abilities. In this study, we use the EEG to investigate the relationship between PAF and physical fatigue. Eight right-handed male subjects (aged 23 to 29) volunteered for the experiment. They had to perform a hand-grip task for 30 seconds with each hand, 30 times, or until they could not continue anymore. Electrodes were placed at 55 locations over the scalp to record the EEG, and three electrodes were placed around the eye region to record the EOG. The EEG signals of six subjects clearly indicated a reduction in the PAF around the motor cortex region after the physical exertion. Thus, this study shows that a reduction of PAF can be an indicator of physical fatigue.

Keywords— peak alpha frequency, muscle fatigue.
I. INTRODUCTION

Lal and Craig (2001) [1] presented a comprehensive review on fatigue, and specifically on driver fatigue. Their review gave two definitions of fatigue: (a) reduced efficiency and a general unwillingness to work [2], and (b) a disinclination to continue performing the task, involving an impairment of human efficiency when work continues [3]. Murata et al. [4] relate fatigue to a loss of efficiency and a disinclination to effort. Gandevia [5] stated that fatigue can be separated into peripheral and central fatigue: peripheral fatigue refers to the reduction in muscle ability, while central fatigue refers to a failure of the central nervous system to activate the motor neurons adequately. The review by Srinivasan [6] defined peripheral fatigue as "the point at which the muscle is no longer able to sustain the required force or work output level [7]" and central fatigue as "the failure to sustain attentional tasks and physical activity as opposed to external stimulation, which exist in the absence of any clinically detectable motor weakness or dementia [8]". Various studies on fatigue of different muscles have been carried out, most of them quantifying fatigue using the EMG. Fatigue due to different activities such as biking [6,9], soccer [10], computer games [11] and lifting [12] has been studied by recording EMG activity. The root mean square (RMS) of the EMG is commonly used to find the energy content of the EMG activity; a reduction in the RMS would indicate a
reduction in the muscle's ability to carry out the task and thus indicate fatigue. When a muscle is fatigued, lactic acid and carbon dioxide increase and the muscular tissue becomes acidic [2]. In the frequency domain, the peak frequency shifts to a lower value when fatigue sets in, due to the lowering of pH in the muscle [13]. Recent work by Liu et al. [14-17] focused on the changes in the central nervous system due to fatigue. They used fMRI [14] to study a sustained maximal handgrip effort lasting 2 minutes; the fMRI revealed that brain activity increased substantially at first and then decreased due to fatigue. A later study by the same group [15] indicates that intermittent maximal voluntary contraction reduces the power output at the muscle level (as indicated by EMG) but not at the brain level (as indicated by fMRI). This research group later moved from fMRI studies to EEG analysis of muscle fatigue. In 2005, Liu et al. [16] showed that EEG signals in the alpha (8-14 Hz) and beta (14-35 Hz) frequency bands reduce in amplitude during maximal contraction when fatigue sets in, and Liu et al. [17] showed that the activation center for the muscle activity shifted and grew in size due to fatigue. Thus, the brain requires more resources to activate the muscle when it is tired.

II. PEAK ALPHA FREQUENCY

Earlier studies have shown that the peak alpha frequency (PAF) in the EEG increases from infancy to adulthood and then starts declining with age [18]. Marshall et al. [19] found the maximum relative power at the central region during toddlerhood and postulated that it may be indicative of intense development of locomotor ability. Stroganova et al. [20] found the PAF increased from 6.24 ± 0.45 Hz at 8 months to 6.78 ± 0.38 Hz at 11 months. Based on a sample of 550 subjects acquired from a total of 6 laboratories (2 laboratories each from the USA, Europe and Australia), Clark et al.
[21] found that the PAF of adults reduced with age, more prominently in the anterior than in the posterior brain region. From the studies of Kopruner et al. [22], there seems to be a linear relationship between age and the PAF of adult subjects (PAF = 11.95 − 0.053 × age). Various studies have shown PAF to be related to response time [23] and memory performance [21,24,25]. Klimesch et al.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 517–520, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
[23] showed a higher PAF for subjects with faster response times. The PAF of subjects with good memory is about 1 Hz higher than that of similarly aged subjects with poor memory [21,24,25]. Angelakis et al. [26] proposed PAF as an indicator of cognitive preparedness (the capacity of the brain to execute a complex task); they found that subjects with traumatic brain injury had a lower PAF than normal subjects during resting with eyes open after a working memory task. Based on this review, physical fatigue would result in changes at the peripheral as well as the central level. A higher peak alpha frequency is indicative of a higher mental ability (as stated in Klimesch [25] and Angelakis et al. [26]), and studies by Liu et al. [16,17] indicate that muscle fatigue affects the EEG signals. In the present study, it is therefore suggested that the peak alpha frequency (PAF) will decrease when physical fatigue sets in.

III. METHODOLOGY
A. Subjects

Eight healthy right-handed male subjects participated in the study (age: 23 to 30). The experimental protocol is first explained in detail to the subject, who then fills in a consent form before the experiment begins. The subject is seated comfortably on a chair with his arms rested on his thighs. After the placement of electrodes, the subject is required to conduct two sessions of eyes closed for two minutes followed by eyes opened for another two minutes. After that, certain electrooculogram (EOG) artifacts are collected in order to apply the regression method that removes EOG artifacts from the experimental data: the subject is required to roll his eyes clockwise and counter-clockwise; then to look up, left, right and down; and finally to blink his eyes. The actual experiment requires the subject to grip a hand-grip device until the two ends touch, or as far as possible if the subject is not strong enough to make the two ends touch. The subject grips the device with the right hand for 30 seconds, then waits 5 seconds and states the level of effort involved in gripping the device; ten seconds later, he grips the device with the other hand for 30 seconds. This set is repeated 30 times, or until the subject can no longer grip the device; each hand thus grips the device up to 30 times for 30 seconds each. After the experiment, the subject is again required to conduct two sessions of eyes closed for two minutes followed by eyes opened for another two minutes. The eyes-closed and eyes-opened sessions before and after the experiment serve to indicate the changes in brain state after the experiment.

B. Experimental Data Acquisition

EEG data were recorded from the scalp using the 64-channel GTec Electrocap. 55 channels of EEG were recorded according to the International 10-10 system (Fig. 1). The impedance at all EEG electrodes is kept below 10 kΩ. Two electrodes are placed at the earlobes to enable the linked-ear referencing method. In order to remove EOG artifacts with the regression method suggested by Schlögl et al. [27], three electrodes are placed around the eye region (one at the forehead and the other two at the left and right cheekbones). Two pairs of bipolar recordings also capture EMG activity from the muscles of the left and right forearms.
Fig. 1 Location for the placement of electrodes

IV. SIGNAL PROCESSING

A. EOG Artifact Removal

The EOG removal method of Schlögl et al. [27] consists of two steps. The first step finds the regression coefficients for each EEG channel by applying equation (1) to the EOG-artifact-induced signals. Once the regression coefficient is
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
EEG Peak Alpha Frequency as an Indicator for Physical Fatigue
obtained, the EOG-artifact-removed EEG is obtained by applying equation (2) to the raw EEG signals.

b = inv(EOG'*EOG) * (EOG'*EEG)    (1)

where,
b = the regression coefficients for all the EEG channels
EOG = the 3 channels of EOG recordings
EEG = the 55 channels of EEG recordings.

To obtain the artifact-removed EEG data, formula (2) is used:

EEGnew = EEG - EOG*b    (2)

where,
EEGnew = the EOG-artifact-removed EEG

B. Quantifying Peak Alpha Frequency

In his review of alpha and theta oscillations, Klimesch [25] proposed two methods to find the peak alpha frequency. The first method looks for the distinct peak within the alpha frequency range. The second method finds the center of gravity within the alpha frequency range. He suggested using the center of gravity method particularly if there are multiple peaks in the alpha range. In 2005, Neuper et al. [28] tested both methods of finding the peak alpha frequency and found that the center of gravity method gave more stable results. Thus, the center of gravity method of finding the peak alpha frequency is applied here. In the current study, only the eyes-closed data before and after the experiment are processed. First, each of the eyes-closed and eyes-opened data sets that has had the EOG removed is remontaged using the Common Average Reference method. Then, it is segmented into 10-second intervals (to give a frequency resolution of 0.1 Hz) with a one-second step size. Each ten-second segment is windowed using a Gaussian window to reduce spectral leakage. Then the signal is transformed into the frequency domain using the Fourier transform. The peak alpha frequency for that segment is determined using the center of gravity method, assuming the range of 7-14 Hz as the alpha frequency band. The equation to find the peak alpha frequency is similar to the one used by Klimesch [25] and is given in equation (3):

PAF = Σ(a_f × f) / Σ a_f    (3)

where,
a_f = amplitude at frequency f
f = frequencies within the 7 to 14 Hz band

The peak alpha frequency for each location is determined individually. Then, for each segment of eyes-closed data, the mean for each individual location is found.

V. RESULTS AND DISCUSSION
In order to compare the effects of fatigue on the peak alpha frequency (PAF), the PAF of each location before the experiment is subtracted from that of the corresponding location after the experiment. From Fig. 2, it can be seen that the PAF at the motor cortex regions corresponding to the left and right hands is reduced more significantly after the experiment than that of the other regions of the brain. This may be an indication that the mental control over the left and right hands has been reduced due to fatigue. Based on Table 1, six out of the eight subjects show a distinct reduction of PAF around the motor cortex region (similar to the topographical plot of Fig. 2). Two subjects (S4 and S7) show no change in the PAF. It is interesting to note that these two subjects skipped their lunch before the experiment.
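The processing chain of Section IV (regression EOG removal, equations (1)-(2); Common Average Reference remontage; Gaussian-windowed FFT and center-of-gravity PAF, equation (3)) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' MATLAB code; the sampling rate and the window width are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250  # sampling rate in Hz; assumed for illustration (not stated in the paper)

def remove_eog(eeg, eog):
    """Regression EOG removal, eqs. (1)-(2): eeg is (T, 55), eog is (T, 3)."""
    b = np.linalg.inv(eog.T @ eog) @ (eog.T @ eeg)  # eq. (1)
    return eeg - eog @ b                            # eq. (2)

def car(eeg):
    """Common Average Reference remontage across channels."""
    return eeg - eeg.mean(axis=1, keepdims=True)

def paf_cog(x, fs, band=(7.0, 14.0)):
    """Center-of-gravity peak alpha frequency, eq. (3), for one 10 s segment."""
    n = len(x)
    # Gaussian window to reduce spectral leakage (width is an assumed choice)
    w = np.exp(-0.5 * ((np.arange(n) - (n - 1) / 2) / (0.2 * n)) ** 2)
    a = np.abs(np.fft.rfft(x * w))
    f = np.fft.rfftfreq(n, 1 / fs)
    m = (f >= band[0]) & (f <= band[1])
    return np.sum(a[m] * f[m]) / np.sum(a[m])  # PAF = sum(a_f * f) / sum(a_f)

# Synthetic check: a 10 s segment (0.1 Hz resolution) dominated by a 10.2 Hz rhythm
t = np.arange(0, 10, 1 / fs)
seg = np.sin(2 * np.pi * 10.2 * t) + 0.1 * rng.standard_normal(len(t))
paf = paf_cog(seg, fs)  # close to 10.2 Hz
```

In practice the segment would slide in one-second steps over the two-minute eyes-closed recordings, and the per-segment PAF values would be averaged per electrode.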
Fig. 2 Differences of PAF before and after experiment.

Table 1 The peak alpha frequency before and after experiment

              Left Region                       Right Region
       Loc.   Before      After         Loc.   Before      After
S1     FC3    9.3 (0.4)   9.0 (0.3)     FC4    9.4 (0.4)   9.1 (0.2)
S2     CP3    10.0 (0.2)  9.6 (0.3)     CP4    10.1 (0.2)  9.6 (0.2)
S3     C5     10.9 (0.3)  10.4 (0.2)    FC6    11.5 (0.5)  10.3 (0.3)
S4     CP3    10.0 (0.3)  10.0 (0.2)    CP4    10.0 (0.2)  9.9 (0.2)
S5     C3     10.6 (0.4)  10.2 (0.3)    C4     10.6 (0.4)  10.1 (0.2)
S6     C3     10.5 (0.3)  10.1 (0.2)    C4     10.5 (0.3)  10.0 (0.2)
S7     CP3    10.4 (0.2)  10.4 (0.3)    C4     10.0 (0.3)  10.0 (0.4)
S8     FC3    10.8 (0.5)  10.4 (0.2)    FC4    10.7 (0.5)  10.1 (0.3)
Mean          10.3        10.0                 10.4        9.9

The standard deviation is given in brackets.
S.C. Ng, P. Raveendran
It can also be seen that the location most reactive to fatigue is around the motor cortex region. It is usually the C3 (or C4) location or an adjacent location (the motor cortex region for the control of the hand). For the subjects that show a reduction in the PAF, the frequency reduction is around 0.4 Hz in the left brain region and about 0.6 Hz in the right brain region. This may indicate that the same load performed by both hands resulted in a higher fatigue level in the left hand (contralateral to the right region), since all the subjects are right handed.

VI. CONCLUSION

In this experimental study, eight subjects were subjected to a hand-grip task of 30 seconds with each hand, repeated 30 times or until they could not continue anymore. The study found that the peak alpha frequency of six of the subjects was reduced. Two of the subjects skipped their lunch and quit after a few trials, which was not sufficient to indicate any change in PAF. The PAF reduction is most prominent at the motor cortex region associated with hand control.
REFERENCES

1. Lal SKL, Craig A (2001) A critical review of the psychophysiology of driver fatigue. Biol Psychol 55:173–194
2. Grandjean E (1979) Fatigue in industry. Br J Ind Med 36:175–186
3. Brown I (1994) Driver fatigue. Hum Factors 36:298–314
4. Murata A, Uetake A, Takasawa Y (2005) Evaluation of mental fatigue using feature parameter extracted from event-related potential. Int J Ind Ergonomics 35:761–770
5. Gandevia SC (2001) Spinal and supraspinal factors in human muscle fatigue. Physiol Rev 81:1725–1789
6. Srinivasan J, Balasubramanian V (2006) Low back pain and muscle fatigue due to road cycling: an sEMG study. J Bodyw Mov Ther, in press
7. Moritani T, Takaishi T, Matsumoto T (1993) Determination of maximal power output at neuromuscular fatigue threshold. J Appl Physiol 74:1729–1734
8. Chaudhuri A, Behan P (2000) Fatigue and basal ganglia. J Neurol Sci 179:34–42
9. Knaflitz M, Molinari F (2003) Assessment of muscle fatigue during biking. IEEE Trans Neural Syst Rehabil Eng 11:17–23
10. Rahnama N, Lees A, Reilly T (2006) Electromyography of selected lower-limb muscles fatigued by exercise at the intensity of soccer match-play. J Electromyogr Kinesiol 16:257–263
11. Balasubramanian V, Adalarasu K (2007) EMG-based analysis of change in muscle activity during simulated driving. J Bodyw Mov Ther, in press
12. Arjmand N, Shirazi-Adl A (2006) Sensitivity of kinematics-based model predictions to optimization criteria in static lifting tasks. Med Eng Phys 28:504–514
13. Brody L, Pollock M, Roy SH, De Luca CJ, Celli B (1991) pH-induced effects on median frequency and conduction velocity of the myoelectric signal. J Appl Physiol 71:1878–1885
14. Liu JZ, Dai TH, Sahgal V, Brown RW, Yue GH (2002) Nonlinear cortical modulation of muscle fatigue: a functional MRI study. Brain Res 957:320–329
15. Liu JZ, Zhang LD, Yao B, Sahgal V, Yue GH (2005) Fatigue induced by intermittent maximal voluntary contractions is associated with significant losses in muscle output but limited reductions in functional MRI-measured brain activation level. Brain Res 1040:44–54
16. Liu JZ, Yao B, Siemionow V, Sahgal V, Wang XF, Sun JY, Yue GH (2005) Fatigue induces greater brain signal reduction during sustained than preparation phase of maximal voluntary contraction. Brain Res 1057:113–126
17. Liu JZ, Lewandowski B, Karakasis C, Yao B, Siemionow V, Sahgal V, Yue GH (2007) Shifting of activation center in the brain during muscle fatigue: an explanation of minimal central fatigue? NeuroImage 35:299–307
18. Niedermeyer E (1999) The normal EEG of the waking adult. In: Niedermeyer E, Lopes da Silva FH (eds) Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Williams and Wilkins, Baltimore, pp 149–173
19. Marshall PJ, Bar-Haim Y, Fox NA (2002) Development of the EEG from 5 months to 4 years of age. Clin Neurophysiol 113:1199–1208
20. Stroganova TA, Orekhova EV, Posikera IN (1999) EEG alpha rhythm in infants. Clin Neurophysiol 110:997–1012
21. Clark CR, Veltmeyer MD, Hamilton RJ, Simms E, Paul R, Hermens D, Gordon E (2004) Spontaneous alpha peak frequency predicts working memory performance across the age span. Int J Psychophysiol 53:1–9
22. Kopruner V, Pfurtscheller G, Auer LM (1985) Quantitative EEG in normals and in patients with cerebral ischemia. In: Pfurtscheller G, Jonkman EJ, Lopes da Silva FH (eds) Brain Ischemia: Quantitative EEG and Imaging Techniques, Progress in Brain Research
23. Klimesch W, Doppelmayr M, Schimke H, Pachinger T (1996) Alpha frequency, reaction time and the speed of processing information. J Clin Neurophysiol 13:511–518
24. Klimesch W (1997) EEG-alpha rhythms and memory processes. Int J Psychophysiol 26:319–340
25. Klimesch W (1999) EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis. Brain Res Rev 29:169–195
26. Angelakis E, Lubar JF, Stathopoulou S, Kounios J (2004) Peak alpha frequency: an electroencephalographic measure of cognitive preparedness. Clin Neurophysiol 115:887–897
27. Schlögl A, Keinrath C, Zimmermann D, Scherer R, Leeb R, Pfurtscheller G (2007) A fully automated correction method of EOG artifacts in EEG recordings. Clin Neurophysiol 118:98–104
28. Neuper C, Grabner RH, Fink A, Neubauer C (2005) Long-term stability and consistency of EEG event-related desynchronization across different cognitive tasks. Clin Neurophysiol 116:1681–1694

Author: Ng Siew Cheok
Institute: University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Identification of Gripping-Force Control from Electroencephalographic Signals

A. Belic1, B. Koritnik2, V. Logar1, S. Brezan2, V. Rutar2, R. Karba1, G. Kurillo1 and J. Zidar2

1 University of Ljubljana, Faculty of Electrical Engineering, Tržaška 25, SI-1000 Ljubljana, Slovenia
2 University Medical Centre Ljubljana, Division of Neurology, Institute of Clinical Neurophysiology, Zaloška 7, SI-1525 Ljubljana, Slovenia
Abstract— The exact mechanism of information transfer between different brain regions is still not known. The theory of binding tries to explain how different aspects of perception or motor action combine in the brain to form a unitary experience. The theory presumes that there is no specific center in the brain that would gather the information from all the other brain centers, governing senses, motion, etc., and then make the decision about the action. Instead, the centers bind together when necessary, perhaps through electromagnetic (EM) waves of a specific frequency. Therefore, it is reasonable to assume that the information that is transferred between the brain centers is somehow coded in the electroencephalographic (EEG) signals. The aim of this study was to explore whether it is possible to extract the information on brain activity from the EEG signals during a visuomotor tracking task. In order to achieve the goal, an artificial neural network (ANN) was used. The ANN was used to predict the measured gripping-force from the EEG signal measurements and thus to show the correlation between EEG signals and motor activity. The ANN was first trained with the raw EEG signals of all the measured electrodes as inputs and the gripping-force as the output. However, the ANN could not be trained to perform the task successfully. If we presume that brain centers transmit and receive information through EM signals, as suggested by the binding theory, a simplified model of signal transmission in the brain can be proposed. We propose a computational model of the human brain in which the information between centers is transmitted as phase-modulation of a certain carrier frequency. The demodulated signals were then used as the inputs to the ANN and the gripping-force signal was used as the output. It was then possible to train the network to efficiently calculate the gripping-force signal from the phase-demodulated EEG signals.
Keywords— Electroencephalography (EEG), Artificial Neural Networks (ANN), phase demodulation, control
I. INTRODUCTION The exact mechanism of information transfer between different brain regions is still not known. The theory of binding tries to explain how different aspects of perception or motor action combine in the brain to form a unitary experience [1,2,3]. The theory presumes that there is no specific center in the brain that would gather the information from all the other brain centers, governing senses, motion, etc., and then make the decision about the action. Instead,
the centers bind themselves together when necessary, perhaps through electromagnetic waves of a specific frequency. The functional integration or binding of different brain centers, as a possible mechanism for stimulus perception, is perhaps mediated by the synchronizing oscillatory activity of neuronal populations, which can be determined by electroencephalographic (EEG) coherence and power spectra analysis [4]. EEG signals are the result of the superposition of the electromagnetic (EM) activity of neurons during their more or less rhythmic activity. Since there are many active neurons in the brain cortex, their superimposed EM activity can be detected on the scalp as EEG signals. As it seems, neighboring neurons are synchronized through these EM pulses and thus produce the well-known brain rhythms, such as alpha, beta, etc. [5]. Furthermore, as it seems, such groups of neurons can communicate with each other on the basis of brain rhythms, which is the main idea of the theory of binding. Therefore, it is reasonable to assume that the information that is transferred between the brain centers should be somehow coded in the EEG signals. The aim of this study was to explore whether it is possible to extract the information on brain activity from the EEG signals during a visuomotor tracking task. In order to achieve the goal, an artificial neural network (ANN) was used. The ANN was used to predict the measured gripping-force from the EEG signal measurements and thus to show the correlation between EEG signals and motor activity.

II. EXPERIMENTAL

For this study, two types of measurements were performed: EEG signals and the gripping force of the index finger and thumb were measured simultaneously. For EEG signal recording, a Medelec system (Profile Multimedia EEG System, version 2.0, Oxford Instruments Medical Systems Division, Surrey, England) was used with the standard 10-20 electrode system with two rows of additional electrodes, and without the electrodes FP1 and FP2 (Fig. 1).
For gripping-force recording, an analog force sensor was used, connected through a 12-bit PCI-DAS1002 (Measurement Computing Corp., Middleboro, USA) to MATLAB. Both recordings were synchronized through a signal that was sent from
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 478–481, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
the PC and recorded with the EEG recording system. For numeric analysis of the signals, MATLAB with the Neural Network Toolbox was used [6,7]. In the study, data of 5 healthy, right-handed subjects were used. The EEG signals and gripping-force were measured while the subjects performed four different tasks: a visual task, a visuomotor task with the right and the left hand, a motor task, and a visual-and-motor task. The visual task included observation of a sine wave that was projected on the screen in front of the subject. The visuomotor task included observing the sine wave, representing the amplitude of the desired gripping-force, on the screen and following its shape by applying force to the sensor with the index finger and thumb as precisely as possible. The motor task included applying the gripping-force to the sensor in an approximately sine shape of similar amplitude and frequency as in the visuomotor task; however, the subject had no visual information on how precisely he or she was able to achieve the goal, as a blank screen was shown to the subject during the task performance. The visual-and-motor task was similar to the motor task, except that the subjects had to observe a checkerboard instead of a blank screen. Each task was divided into 50 s blocks, of which the first part was active and lasted 25 s, followed by 25 s of pause. Each task consisted of 20 blocks. Signal analysis was performed in MATLAB. EEG signals were analyzed with power spectra and coherence analysis [4]. When filtering of the signals was necessary, Butterworth-type filters were used and the signals were filtered with MATLAB's filtfilt function to preserve the phase characteristics of the signal. A three-layer feed-forward perceptron network with 16 neurons in the first layer, 10 neurons in the second layer, and one neuron in the output layer was used to calculate the gripping-force from EEG signals, with no optimization of the structure. Neurons in the first and second layers had tangent sigmoid activation functions and the output neuron had a linear activation function. The neural network was trained with the scaled conjugate gradient algorithm.

Fig. 1 Standard international 10-20 system of electrode positioning with two rows of additional electrodes

III. POWER SPECTRUM AND COHERENCE ANALYSIS
First, power spectrum and coherence analysis was performed. The obtained results [8] are similar to the findings of [9]. Most important for the aim of the study was an increase of power spectra and coherence in beta rhythms during the visuomotor task. This indicates that the information necessary for gripping-force control might be coded in the beta frequency band, which is also physiologically reasonable.

IV. CORRELATION OF EEG SIGNALS AND GRIPPING-FORCE

In the literature [10,11], some indications can be found that the time shift of specific neuron EM pulses, compared to the rhythmic signal produced by the neighboring group of neurons that are considered to work synchronously, codes the information that has been stored or computed by the specific neuron. Therefore, the phase characteristics of EEG signals could play an important role in information exchange between brain centers during task performance. The calculation of coherence also needs the phase information of the signal to compute the results. If we presume that brain centers transmit and receive information through EM signals, as suggested by the binding theory, a simplified model of signal transmission in the brain can be proposed. We propose a computational model of the human brain in which the information between centers is transmitted as phase-modulation of a certain carrier frequency. The carrier frequency might depend on the type of task that the brain is involved with. Therefore, the EEG signals were phase-demodulated. As mentioned above, during the visuomotor task a significant power increase in the beta frequency band could be detected; therefore, raw EEG signals were filtered by a band-pass filter to extract the beta frequency band, and the same was done for the theta frequency band. Next, all signals were filtered using a high-pass filter to eliminate drift from the signals. The high-pass filtering eliminates the effect of a falsely chosen carrier frequency for phase-demodulation.
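The band-pass filtering and phase-demodulation steps above can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' MATLAB implementation: the FFT band-pass stands in for their Butterworth/filtfilt filtering, the assumed carrier frequency fc is a placeholder, and the linear detrend plays the role of their high-pass step.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Zero-phase band-pass via FFT masking; stands in for the Butterworth +
    filtfilt filtering used in the paper."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

def analytic(x):
    """Analytic signal via the FFT (the usual Hilbert-transform construction)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def phase_demodulate(x, fs, band=(13.0, 30.0), fc=20.0):
    """Phase-demodulate one channel against an assumed carrier fc (Hz) inside
    the beta band. Removing the fitted linear trend plays the role of the
    paper's high-pass step, which absorbs a falsely chosen carrier frequency."""
    t = np.arange(len(x)) / fs
    phi = np.unwrap(np.angle(analytic(bandpass_fft(x, fs, *band)))) - 2 * np.pi * fc * t
    return phi - np.polyval(np.polyfit(t, phi, 1), t)

# Synthetic check: a 20.5 Hz carrier phase-modulated by a slow 0.5 Hz message
fs = 256
t = np.arange(0, 8, 1 / fs)
msg = 0.8 * np.cos(2 * np.pi * 0.5 * t)
demod = phase_demodulate(np.cos(2 * np.pi * 20.5 * t + msg), fs, fc=20.5)
```

On the synthetic signal the demodulated phase recovers the slow message, which is the behavior the proposed model assumes for information coded as phase-modulation of a beta carrier.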
Finally, using principal component analysis, the number of transformed EEG signals was reduced from 27 to 5 for each frequency band, and the components were used as the input to the ANN. The output of the ANN was the predicted gripping-force (Fig. 2).
Fig. 2 Structure of the ANN system (inputs derived from EEG signals; 1st layer; 2nd layer; 1 output neuron: gripping-force)

V. RESULTS

The measurements of two test-persons were included in the study. After several hundred repetitions of the training procedure for the ANN, the following results were obtained. In Fig. 3 the results for training on test-person 1 measurements are presented.

Fig. 3 Training results for test-person 1

In Fig. 4 validation results are shown for the ANN force prediction for test-person 1.

Fig. 4 Validation results for test-person 1

A similar procedure was undertaken for test-person 2. In Figs. 5 and 6 the training and validation results are presented.

Fig. 5 Training results for test-person 2

Fig. 6 Validation results for test-person 2

At the same time, cross-validation was done, where the ANN that was trained on test-person 1 data was used to predict the gripping-force of test-person 2. The results are shown in Fig. 7.

Fig. 7 Cross-validation results

VI. CONCLUSIONS
From Figs. 3 to 7 it can be seen that the ANN can be trained to perform the mapping between transformed EEG signals and gripping-force. However, the mapping seems to change with time, as the prediction error gets worse when the validation data are taken further away from the training data. This is also physiologically acceptable, since the brain is able to adapt to new situations rather quickly; therefore, the nature of the EEG signals should change as well when the same physical action is repeated. For new situations, feedback is considered the predominant mode of action in the brain. As the same situation is repeated, the working memory stores the most significant characteristics of the situation and feed-forward control becomes more and more important. This explains the increasing prediction error of the ANN system, as well as the theta rhythms being necessary inputs to the ANN. First, the ANN was trained only with principal components of phase-demodulated beta rhythms; however, when theta rhythms
were included as well, the prediction of the validation data improved. Physiologically, theta rhythms are also known to be involved in the working memory operations of the brain. Principal components represent a decomposition of the EEG signals into statistically uncorrelated components. The effect is thus similar to decomposing the EEG signals into signals that represent independent sources in the brain. Thus, two objectives are achieved: the number of inputs can be reduced, and the inputs are statistically uncorrelated, both of which speed up the ANN training. Since all the transformations used (band-pass filtering, phase-demodulation, high-pass filtering, PCA, and the ANN mapping) can be performed in real time, the presented system can also be used as a brain-machine interface.
REFERENCES

1. Singer W, Gray CM (1995) Visual feature integration and the temporal correlation hypothesis. Annu Rev Neurosci 18:555–586
2. von der Malsburg C (1985) Nervous structures with dynamical links. Ber Bunsenges Phys Chem 89:703–710
3. von der Malsburg C, Schneider W (1986) A neural cocktail-party processor. Biol Cybern 54:29–40
4. Pfurtscheller G, Andrew C (1999) Event-related changes of band power and coherence: methodology and interpretation. J Clin Neurophysiol 16:512–519
5. da Silva FL (1999) EEG analysis: theory and practice. In: Electroencephalography: Basic Principles, Clinical Applications and Related Fields, pp 1125–1159
6. Mathworks (1998) Using MATLAB version 5. The Mathworks Inc, Natick
7. Demuth H, Beale M (1998) Neural Network Toolbox User's Guide Version 3. The Mathworks Inc, Natick, MA, USA
8. Brežan S, Rutar V, Logar V, Koritnik B, Kurillo G, Belič A, Bajd T, Zidar J (2003) Electroencephalographic coherence. In: Kononenko I, Jerman I (eds) Information Society IS'03: Cognitive Science. Mind-Body Studies. Proceedings C of the 6th International Multi-Conference. Inštitut Jožef Stefan, Ljubljana, p 187
9. Classen J, Gerloff C, Honda M, Hallett M (1998) Integrative visuomotor behaviour is associated with interregionally coherent oscillations in the human brain. J Neurophysiol 79:1567–1573
10. Jensen O (2004) Hippocampal encoding of behavioural sequences requires a multi-item working memory buffer. In: Kržan M (ed) Remember How Your Memory Works?, Symposium on Memory. University of Ljubljana, Faculty of Medicine, Sinapsa, Ljubljana, p 3
11. Jensen O (2001) Information transfer between rhythmically coupled networks: reading the hippocampal phase code. Neural Comput 13:2743–2761

Author: Aleš Belič
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Tržaška 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Movement Related Potentials in Spontaneous and Provoked Thumb Movement

A.B. Sefer1, M. Krbot1, V. Isgum2 and M. Cifrek1

1 Faculty of Electrical Engineering and Computing, Department of Electronic Systems and Information Processing, Zagreb, Croatia
2 University Hospital Rebro, Department of Neurology, Zagreb, Croatia
Abstract– In order to get better insight into the brain electrical activity generated during the preparatory and executive phases of body movement, we recorded the movement related evoked potentials (MREP) in three different experimental conditions: first, voluntary movement; then a condition where the movement was performed as a reaction to an acoustical signal; and finally a condition where two different tones were presented and the movement was made after the target one (choice reaction). In the latter two conditions the reaction time was also measured. The obtained results clearly showed that apparently equal movements were accompanied by different cerebral dynamics. The preparatory phase of the self-paced movement, reflected in the Bereitschaft potential (BP) and the negative slope potential (NS), lasts longer. The activity was located in the temporal brain region. In the executive phase, motor potentials (MP) were generated in the parietal region and also in the temporal region, where the motor cortex areas are situated. In the second experiment the preparatory phase was shorter. The activity was located in the frontal brain region at first, and after that it covered the whole left half of the brain. The executive phase, with its motor potentials (MP), had the bigger influence. The majority of activity was situated in the left temporal brain region, where the three most conspicuous areas of activity were found. Amplitudes in this part were much higher than in the self-paced movement. The results of the third experiment showed that the reaction time was longer. This was visible from the moments of activation of the same brain regions compared with their activations in the second experiment. The longer reaction time can be explained by the influence of the cognitive component that was needed because of the complexity of the task. In the third experiment amplitudes were even higher than in the second experiment.
Keywords– movement related evoked potentials, reaction time, choice reaction time, self paced movement
I. INTRODUCTION

The first works applying evoked potentials in the field of movement analysis date back to the mid-seventies of the 20th century, when Kornhuber and Deecke [1] published work in which they showed that upon voluntary limb movements, approximately one second before the appearance of muscle activity, evoked potentials with a distinctive waveform can be measured on the head surface. Later research has correlated specific phases of the measured evoked potential with specific brain structures' bioelectrical
activity [2]. Numerous research studies have been performed using electrophysiological methods, extra- and intracranial potential measurement, electroencephalography (EEG) and magneto-encephalography (MEG), or using neuroimaging methods such as functional nuclear magnetic resonance imaging (fNMR), positron emission tomography (PET) or single photon emission computed tomography (SPECT). By the use of the mentioned imaging methods, applied during movement execution, observation or imagining, the brain neural structures involved in movement processing were well identified and described. Their functionality, observed through their neural electrical activity, is still under investigation. The proper method for this purpose is the method of movement related evoked potentials (MRP), which enables extraction of the specific electrical activity that is synchronous to some movement event. Usually, that event can be the EMG signal of an activated muscle or even a performed action identified, for example, by the closure of reaction button contacts. The obtained evoked potentials reflect the brain electrical activity before and during the performed movement. They have a characteristic shape in which their specific phases can be identified: the 'Bereitschaft' or 'readiness' potential (BP) that correlates with movement planning, the negative slope (NS) with its preparation, and the motor potential (MP) with its execution. Multi-channel recording of the MRP potentials combined with 2D or 3D brain mapping techniques can reflect the spatiotemporal distribution of brain electrical activity. Application of MRP combined with brain mapping techniques can offer better insight into the brain electrical processes developed during voluntary and provoked movement planning and/or execution. It can equally be applied in the research of normal as well as pathological situations [3,4].

II. MATERIAL AND METHODS

In order to prove the hypothesis that apparently the same movements performed in different cognitive conditions have different cerebral dynamics, three types of experiments were performed. Pressing a button was the selected movement. In the first experiment, self-paced voluntary pressing was investigated. In the second, the same activity was
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 529–532, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
performed as an obligatory reaction to acoustical stimuli, and in the third experiment the same movement was performed on target stimulus appearance, where two acoustical stimuli, target and non-target, were presented. The investigation was approved by the local ethics committee. The subjects that participated in the experiment were 7 healthy, right-handed males. They ranged in age from 21 to 23 years (mean 22 +/- 0.81). The subjects did not suffer from any known neurological or other illness. After the experiment was explained to them in detail, they signed an informed consent form. The experiment was divided into three blocks with short breaks between blocks. During the experiment the subjects were seated in a comfortable chair. They were instructed to relax and to close their eyes. They had to minimize blinking, body and ocular movements as much as possible, and they had to concentrate on the sound stimuli where appropriate. All their responses to the stimuli had to be as fast as possible. They had to push a joystick button held in their right hand, and they had to push the button with their thumb. In different parts of the experiment, the subjects had different tasks related to the button. The subjects wore headphones, which were needed in the second and third parts of the experiment. In the first part of the experiment the subjects were instructed to perform repetitive trials in which they had to push the button every 5-10 s. This interval between two movements was enough to avoid signal interference. There were 50 trials for every experiment. The second part of the experiment included sound stimuli which were reproduced by the headphones. The stimuli were single tone pulses with 1000 Hz frequency, 50 ms duration and 10 ms of rise and fall time. The intensity was set to 60 dB HL. The interstimulus interval varied between 4 and 5 s. When detecting the signal, the subjects were instructed to press the button as fast as possible and not to miss any stimuli. In the third experiment there were two sounds.
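The single tone pulse described above (1000 Hz, 50 ms duration, 10 ms rise and fall) could be synthesized as in the sketch below. The linear ramp shape and the 44.1 kHz audio sampling rate are assumptions; the paper specifies only frequency, duration, rise/fall time and level.

```python
import numpy as np

def tone_pulse(freq_hz, dur_ms=50.0, ramp_ms=10.0, fs=44100):
    """Sine tone of dur_ms total duration with ramp_ms rise and fall.
    Linear ramps and fs are assumptions, not stated in the paper."""
    n = int(fs * dur_ms / 1000)
    r = int(fs * ramp_ms / 1000)
    env = np.ones(n)
    env[:r] = np.linspace(0.0, 1.0, r)      # 10 ms rise
    env[n - r:] = np.linspace(1.0, 0.0, r)  # 10 ms fall
    return env * np.sin(2 * np.pi * freq_hz * np.arange(n) / fs)

stimulus = tone_pulse(1000.0)  # the 1000 Hz pulse of the second experiment
```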
The first one was equal to the previously described, while the second one was the same except that its frequency was changed to 2000 Hz. The first was the non-target (NT) stimulus, while the second one, to which the subject had to respond by again pressing the button with his right-hand thumb, was the target. The non-target stimuli had to be ignored. There were 30 target stimuli and 120 non-target stimuli, randomly distributed. The entire recording procedure for each subject lasted about an hour. Before the experiment, Ag/AgCl disk cup electrodes were placed on the examinee's head according to the International 10/20 system. The electrode impedance was carefully adjusted to be below 5 kOhm. Monopolar recording was performed against the linked mastoids as the reference. The EEG signal was filtered with a band-pass filter with
A.B. Sefer, M. Krbot, V. Isgum and M. Cifrek
the lower cutoff frequency set to 0.01 Hz and the upper frequency limit set to 30 Hz. The EMG was filtered between 5 and 30 Hz. To avoid contamination of the signal with ocular artifacts, vertical (VEOG) and horizontal (HEOG) eye movements were monitored. One additional channel was used for EMG recording. Muscular activity was recorded from the right abductor pollicis brevis (APB). The closure of the pushbutton contact was used as the trigger signal. During the whole experiment, continuous EEG, EMG, VEOG, HEOG and trigger signals were recorded. The recording apparatus was a Mizar Sirius 40 EEG/EP system (EBNeuro S.p.A., Italy). The evoked potential analysis was performed off-line after each experiment. The analyzed time interval extended 2 s before and 2 s after the trigger onset. The MRP baseline was determined as the average of all samples in the first 200 ms. Before each signal averaging, computerized artifact rejection was applied in order to reject trials in which blinks or deviations in eye position occurred. From the individual EPs a grand average was computed. The trials were then filtered by a low-pass filter with a cutoff frequency of 7 Hz in order to eliminate unwanted alpha activity, which appears with surprisingly high amplitudes in the grand average results.

III. RESULTS

The obtained evoked potential signals are displayed in Fig. 1, where three pairs of curves are shown. Each pair shows the
Fig. 1. Averaged EMG (upper trace) and MRP (lower trace) signals in three experiments (see text above)
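The off-line analysis described in the Methods (epoching ±2 s around the trigger, baseline correction over the first 200 ms, artifact rejection based on the EOG channels, averaging, and 7 Hz low-pass filtering) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the sampling rate, array layout and EOG rejection threshold are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256               # sampling rate in Hz (assumed; not stated in the paper)
PRE, POST = 2.0, 2.0   # analysis window: 2 s before and 2 s after the trigger

def epoch_and_average(eeg, veog, triggers, fs=FS, eog_thresh_uv=100.0):
    """Epoch one EEG channel around trigger samples, reject ocular
    artifacts, baseline-correct, average, and low-pass filter at 7 Hz."""
    n_pre, n_post = int(PRE * fs), int(POST * fs)
    n_base = int(0.2 * fs)               # first 200 ms defines the baseline
    epochs = []
    for t in triggers:
        if t - n_pre < 0 or t + n_post > len(eeg):
            continue
        seg = eeg[t - n_pre:t + n_post].copy()
        eog = veog[t - n_pre:t + n_post]
        # computerized artifact rejection: drop trials with blinks or
        # eye-position deviations (simple amplitude criterion here)
        if np.ptp(eog) > eog_thresh_uv:
            continue
        seg -= seg[:n_base].mean()       # baseline = mean of first 200 ms
        epochs.append(seg)
    avg = np.mean(epochs, axis=0)
    # 7 Hz low-pass to suppress residual alpha activity in the average
    b, a = butter(4, 7.0 / (fs / 2.0), btype="low")
    return filtfilt(b, a, avg)
```

A grand average across subjects would then be the mean of the per-subject averages returned by this function.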
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Movement Related Potentials in Spontaneous and Provoked Thumb Movement
averaged EMG signal recorded over the APB muscle, which was active during the observed thumb movement. Below it is the MRP recorded over the left motor area at the C3 electrode. The reference trigger point is located in the middle of the horizontal time axis, so that a time interval of 1000 ms before and 1000 ms after the button press is shown. From this figure it is obvious that muscular activity does not differ very much between the experimental conditions, except that its amplitude is doubled in the conditions where the thumb movement follows an acoustical signal, either in the simple reaction time experiment (experiment 2) or in the choice reaction time experiment (experiment 3). The MRP morphology is also almost the same, except during the voluntary movement, where the activity started earlier and finished later. Its amplitude increases with the complexity of the experiment. In Figs. 2 to 5, series of 3D potential maps are displayed corresponding to the first, second and third experimental conditions. Each series shows the potential distribution at moments when the cortex shows intensive activity of underlying structures, either premotor, supplementary motor, lateral prefrontal or parietal activity, or their combination. Thick vertical lines between individual head maps indicate discontinuities of the horizontal time axis. The presented maps clearly indicate different kinds of brain dynamics, which is not easily visible in the MRP trace display. In the first activation phase for self-paced movement, activity started in the left temporal region of the brain and then extended to the central region, with increased posterior activity before the movement onset. After the movement onset, increased activity in the contralateral motor area (C3 electrode) is visible. This activity spread to the central, frontocentral and lateral frontal cortex. In the second and third experiments, where the
Fig. 2. Spatiotemporal distribution of a) self-paced voluntary movement – upper row, b) movement as a reaction to acoustical stimuli – middle row, c) movement as a reaction to target acoustical stimuli – lower row; from –850 to –500 ms
Fig. 3. Spatiotemporal distribution of a) self-paced voluntary movement – upper row, b) movement as a reaction to acoustical stimuli – middle row, c) movement as a reaction to target acoustical stimuli – lower row; from –400 to 0 ms
Fig. 4. Spatiotemporal distribution of a) self-paced voluntary movement – upper row, b) movement as a reaction to acoustical stimuli – middle row, c) movement as a reaction to target acoustical stimuli – lower row; from 50 to 300 ms
Fig. 5. Spatiotemporal distribution of a) self-paced voluntary movement – upper row, b) movement as a reaction to acoustical stimuli – middle row, c) movement as a reaction to target acoustical stimuli – lower row; from 400 to 800 ms
thumb movement was provoked by acoustic stimuli, early but widely spread activity in the frontal regions was observed, which reached its maximum at the moment of movement onset. This activity was even more pronounced when the subject had to choose between two different stimuli. Fifty milliseconds after the reaction, focal activity appeared in the central, frontocentral and lateral frontal cortex. In the second and third experiments the reaction time was measured; the results are: simple RT = 209.805 ± 34.9489 ms and choice RT = 250.498 ± 52.6468 ms.

IV. DISCUSSION

According to the obtained results, there is clear evidence that apparently equal movements performed under different experimental conditions are accompanied by different cerebral dynamics. During the preparation period of a self-paced voluntary movement, the activity is more related to the activation of specific motor areas such as the primary motor area, premotor area and supplementary motor area. Later on, during the movement execution and after it, the same localized activity is observed. It is very pronounced and also spreads toward the prefrontal lateral area of the contralateral hemisphere. In the reactions to the acoustic stimuli (second and third experiments), the premovement activity is widely spread over the frontal cortex, probably due to the expectation of the stimuli. Its amplitude increases in proportion to the complexity of the situation. After the stimuli there is a very intensive but short-lasting activation of motor structures, which again finishes with spread activity over the frontal and posterior regions of the brain. There is also a difference between simple and choice reaction times. The choice reaction time is prolonged by about 40 ms, probably due to the cognitive activity involved in the decision-making process.
V. CONCLUSION

The results show that the execution of one and the same movement under different conditions can have different consequences on brain activity. For self-paced movement or different types of stimuli, the brain dynamics can vary considerably. Activation of specific regions of the brain depends on whether motor or cognitive components of brain activity are involved in completing the task. The complexity of the task also plays a significant role. Different conditions also require different times for the execution of the movement: in a more complex task the subject needs more time to react, and the activation of brain regions is much stronger. These methods and measurements can be used in future studies to gain more information about brain dynamics under different stimulus conditions.
REFERENCES
1. Kornhuber HH, Deecke L (1965) Changes in the brain potential in voluntary movements and passive movements in man: readiness potential and reafferent potentials. Pflugers Arch Gesamte Physiol Menschen Tiere 284:1-17
2. Georgopoulos AP (1990) Neural coding of the direction of reaching and a comparison with saccadic eye movements. Cold Spring Harb Symp Quant Biol 55:849-859
3. Wiese H, Stude P, Sarge R et al. (2005) Reorganization of motor execution rather than preparation in poststroke hemiparesis. Stroke 36:1474-1479
4. Carbonnell L, Hasbroucq T, Grapperon J, Vidal F (2004) Response selection and motor areas: a behavioural and electrophysiological study. Clin Neurophysiol 115:2164-2174
Author: Velimir Isgum
Institute: University Hospital Rebro, Department of Neurology
Street: Kispaticeva 12
City: Zagreb
Country: Croatia
Email: [email protected]
Multimodal imaging issues for electric brain activity mapping in the presence of brain lesions

F. Vatta1,2, P. Bruno1,2, F. Di Salle3,4, F. Meneghini1, S. Mininel1,2 and P. Inchingolo1,2

1 Bioengineering BRAIN Unit, DEEI, University of Trieste, Trieste, Italy
2 Higher Education in Clinical Engineering, University of Trieste, Trieste, Italy
3 Department of Neurosciences, University of Pisa, Pisa, Italy
4 Maastricht Brain Imaging Center, Faculty of Psychology, University of Maastricht, Maastricht, The Netherlands
Abstract— Reconstruction and visualization of sources of EEG activity within the specific patient's head require the assumption of a precise and realistic volume conductor head model, i.e., a 3-D representation of the head's electrical properties in terms of shape and electrical conductivities. Source reconstruction accuracy is influenced by errors committed in head modeling. Modeling accuracy mainly relies on the correct clinical image-based identification of head structures, characterized by different electrical conductivities, to be included as separate compartments in the model. This paper analyzes the imaging protocols available in clinical practice to define the most suitable procedures for identification of the head structures necessary to build an accurate head model, also in the presence of morphologic brain pathologies. Furthermore, tissue anisotropy is discussed and addressed as well. With this work we have identified a protocol for the acquisition of multimodal patient imaging data for EEG brain activity mapping purposes, capable of accounting for pathological conditions and for head tissue anisotropy.

Keywords— Head model, multimodal imaging, anisotropy, EEG, brain lesion.
I. INTRODUCTION

Mapping of the neural sources of electroencephalographic (EEG) brain activity within the specific patient's head requires the assumption of a precise and realistic volume conductor head model, i.e., a 3-D representation of the electrical properties of the specific head in terms of shape and electrical conductivities [1]. This is of great interest both for basic research and for clinical applications, for which accurate information about neural source localization may help pre-surgical planning for the removal of brain lesions [1]. The accuracy achievable in EEG source reconstruction is influenced by errors committed in head modeling. Studies conducted so far in the literature have addressed the problem of the geometric shape and conductivity of realistic head models under normal conditions [2-3]. Brain lesions present an electrical conductivity different from that of normal brain tissue and should be modeled as separate compartments for accurate EEG source reconstruction [1]. Furthermore, studies reported in the
literature are mainly based on the assumption that head tissues have isotropic electric conductivity. It is known, however, that conductivity anisotropy characterizes head tissues such as brain white matter [4] and the skull [5]. Head modeling should therefore account for all these issues as well. Clinical images, typically MRI and CT, are used for head model building. Head modeling accuracy mainly relies on the correct identification, by image segmentation, of head structures characterized by different electrical conductivities, to be modeled as separate compartments. Brain lesions show large variability and an intrinsic difficulty for segmentation [6]; hence, the acquisition of finely tuned images (e.g., MRI with contrast medium injection) is often required, but this kind of image is not optimal for the identification of standard head structures such as the scalp, skull, etc. The possibility of deriving information about tissue anisotropy from clinical images is also desirable [7]. Notably, MR-based diffusion tensor imaging (DT-MRI) has recently been suggested to map the conductivity tensor of the brain, given the high correlation between the electrical conductivity tensor and the water self-diffusion tensor, with the potential to further refine head modeling by taking the anisotropy of white matter into account. In general, an imaging procedure giving the best results under some conditions, e.g., for identification by its image contrast of a specific head structure, may not be the optimum in other situations. This paper analyzes the clinical imaging protocols available for morphological analysis from a segmentation point of view, to define the procedures most suitable for accurate identification, also in the presence of pathology, of the head structures necessary for head modeling, accounting also for the above described modeling issues.
In this paper a protocol is then identified and proposed for the acquisition of multimodal patient-specific imaging data, to be integrated for head model building for EEG brain activity mapping.

II. METHODS

This study is composed of two parts. The first one deals with the analysis of imaging procedures consolidated in
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 509–512, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
the clinical environment, to identify the most suitable ones for geometric identification of the head structures necessary for model building, also in the presence of morphologic brain pathologies. The second part is dedicated to the recently developed DT-MRI technology for the extraction of tissue conductivity anisotropy information. The study was conducted on the O3-DPACS (Open Three - Data and Picture Archiving and Communication System) [8] systems of Cattinara Hospital in Trieste and of Santa Chiara Hospital in Pisa (Italy). The following sets of clinical images have been analyzed: Proton Density, FLAIR T2, Inversion Recovery, Spin Echo with contrast medium injection, Spin Echo DP/T2, Spin Echo T1, T2 dry, Turbo SE T2 and CT. The following head model compartments have been identified by means of image segmentation: skin, fat tissue, skull, cerebro-spinal fluid (CSF), ventricles, gray matter (GM), white matter (WM), medulla and cerebellum, eyes, muscle, internal air and brain lesions. Given the purposes of this study, a search was conducted for sets of clinical images from patients with expansive brain lesions who had undergone stereotactic neurosurgery. The clinical history of the 29 identified patients was recovered to collect several studies made with MRI and CT modalities. The image sets were segmented with 3D Slicer 2.5 [9]. Wherever possible, just a threshold-based algorithm was used, as the objective was not to obtain the best segmentation but to focus on the identification of optimal imaging acquisition protocols for tissue identification, assuming that if a simple algorithm like thresholding gives a good segmentation, more refined algorithms will provide an even more accurate segmentation on the same image. Semi-automated threshold segmentation was then conducted on the listed image sets, searching for optimal results in order to evaluate the optimal image acquisition protocol for each tissue.
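The threshold-based first pass used above can be sketched with NumPy as follows. This is an illustrative sketch; the intensity bounds are arbitrary placeholders, since the actual thresholds depend on each sequence and scanner, and are not reported in the paper.

```python
import numpy as np

def threshold_segment(volume, lower, upper):
    """Return a binary mask of voxels whose intensity falls inside
    [lower, upper] - the simple criterion used as a first-pass test of
    whether a given acquisition protocol separates a tissue at all."""
    return (volume >= lower) & (volume <= upper)

# Hypothetical usage: label one tissue class in an Inversion Recovery
# volume. Both the synthetic volume and the bounds are placeholders.
ir_volume = np.random.default_rng(0).integers(0, 255, (32, 256, 256))
csf_mask = threshold_segment(ir_volume, 10, 40)
```

In 3D Slicer this corresponds to the semi-automated threshold labeling tool; a more refined algorithm would start from such a mask and regularize it.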
For acquisitions with DT-MRI technology we had to search "ex-novo" for an optimal image acquisition protocol instead of comparing consolidated image protocols; optimal here refers to the capability of extracting the tissue anisotropy information necessary for head model building. Test acquisitions were performed with the widest range of possible spatial and intrinsic parameters, with attention to the number of diffusion weighting directions, which was varied from a minimum of 6 to a maximum of 32. All acquisitions were performed with axial orientation, with slices parallel to the rostro-caudal plane of the corpus callosum. The reference plane for the first slice included the lower part of the frontal lobe and the tentorium membrane. Given the high number of test acquisitions required for this research, 2 subjects were needed to avoid excessive exposure to the electromagnetic field for a single subject. The 2 subjects were selected to be of similar age, height (175 cm) and weight (70 kg), and both healthy following a preliminary interview and counter-check by MRI acquisitions with T1, T2 and Proton Density. Image acquisitions were performed on a 1.5T Philips Gyroscan Intera MR scanner. DT data were transferred to a separate workstation for data analysis and processed with DTI-Studio 2.4. The acquired signal was filtered for suppression of background noise. Finally, the procedure of white matter fiber reconstruction was performed considering values of fractional anisotropy larger than 0.25 with at least 70° for the reconstruction angle, to maximize its efficacy.

III. RESULTS

A. MRI and CT

Results of the segmentation applied to the adopted image sets to identify the above listed head structures for head modeling purposes demonstrate that an appropriate multimodal image set has to be acquired for accurate identification of the model compartments. Table 1 summarizes, for each image set, the tissues identifiable by segmentation, with a qualitative evaluation referenced to an anatomical brain atlas. The performed analysis allowed identification of the imaging sequences best suited for the extraction of the various head compartments. The skull can be clearly identified only from CT (Fig. 1A), in which, however, it is not possible to separate hard bone from the inner spongiosa. The optimal image set for identification of WM, GM, cerebellum and CSF is Inversion Recovery (IR), which allows their identification in few segmentation steps (Fig. 1B). The medulla can be identified from axial or sagittal IR scans extended to the neck. Skin is well identifiable by means of IR and related sequences (e.g., FLAIR T2), after modifying visualization parameters to enhance the tissue profile. Eye tissue is identifiable by T2-weighted MR sequences.

Table 1  Quality of segmentation for head tissues in different image sets

Image set                     Identifiable tissues                   Segmentation quality
Proton Density                GM, WM, ventricles, CSF, eyes          medium
FLAIR T2                      CSF, ventricles, skin                  excellent
Inversion Recovery            GM, WM, ventricles, CSF, skin          excellent
Spin Echo + contrast medium   ventricles, CSF, eyes, skin            depending on acquisition
Spin Echo PD/T2               CSF, ventricles, eyes                  good
Spin Echo T1                  GM, WM, ventricles, CSF, eyes, fat     medium
CT                            skull                                  excellent
T2 Dry                        CSF, eyes, glioblastoma                medium
Turbo Spin Echo T2            CSF, eyes, abscess                     medium

Turbo Spin Echo and
Dry sequences also lead to good results. Identification of the para-nasal sinuses, i.e., the air pockets in the frontal bone, jawbone and sphenoid, is rather straightforward, as they are surrounded by tissue well visible in MRI; Spin Echo T1-weighted MRI proved optimal for their identification. Air pockets surrounded by bone tissue (e.g., the mastoid), on the contrary, cannot be identified, as neither air nor bone contains a quantity of hydrogen atoms sufficient to allow them to be visualized with different MR signal intensities, and both appear as a black zone in which the signal is absent. Fat tissue is characterized by a hyperintense signal in T1-weighted Spin Echo sequences and is identifiable by threshold application (Fig. 1C). Conversely, muscle boundaries tend to appear confused with soft tissues as a consequence of field inhomogeneities and of intermediate layer interpolation. The best differentiation between muscle and background signal was obtained in FLAIR T2 sets. Brain lesions can be identified using appropriate techniques, such as contrast medium injection. Lesions due to cerebral metastasis, gliosis and brain abscess have been analyzed in this work, as the characteristics of most of the analyzed pathologies could be assimilated to the characteristics of these lesions. From a clinical point of view, the cerebral metastasis was analyzed by means of Proton Density weighted images (Fig. 1D), in which the metastasis shows up as an extraneous mass of heterogeneous composition. Gliosis was studied by means of Spin Echo T1-weighted MR sequences and T2 dry (Fig. 1E), which both give good results thanks to the clear differentiation of the glioma from the background. Brain abscess was studied by means of Spin Echo T1-weighted sequences with contrast medium injection (Fig. 1F), where the lesion presents a hyperintense contour with a hypointense inner part; it has been segmented by distinguishing the two areas of different composition.
The performed study allowed the identification of a multimodal image acquisition protocol suitable for building an accurate volume conductor head model. In contrast to imaging protocols for purely diagnostic clinical purposes, image acquisition should be performed with a spatial resolution that is constant, or at least similar, in the 3 scan dimensions, to attenuate the loss of information due to pixel interpolation between adjacent sections during 3-D model building. CT acquisition should be performed as follows: 1) acquisition of contiguous slices of reduced thickness (5 mm; better 3 mm); 2) an acquisition volume, preferably unique and uniform, extending downwards from the head vertex to include at least the skull base (first cervical vertebra desirable); 3) 512×512 matrices. For MRI acquisitions: 1) contiguous slices of reduced thickness (2 mm or less); 2) 512×512 matrices. Larger image matrices and a reduced gap between adjacent slices allow higher spatial resolution in the obtained models.
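The requirement of near-constant spatial resolution across the three scan dimensions amounts to resampling anisotropic voxels toward an isotropic grid before model building. A minimal SciPy sketch follows; the voxel sizes and target resolution are illustrative, not values prescribed by the protocol.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing, target=1.0):
    """Resample a 3-D volume with per-axis voxel spacing (mm) onto an
    (approximately) isotropic grid with `target` mm voxels, attenuating
    the interpolation loss between adjacent sections during 3-D model
    build-up."""
    factors = [s / target for s in spacing]
    return zoom(volume, factors, order=1)   # trilinear interpolation

# e.g., a stack of 5 mm slices with 0.5 mm in-plane pixels resampled
# to roughly 1 mm voxels (toy volume: slices, rows, cols)
vol = np.zeros((20, 64, 64))
iso = resample_isotropic(vol, spacing=(5.0, 0.5, 0.5), target=1.0)
```

For segmentation masks, nearest-neighbour interpolation (order=0) would be used instead, so that label values are not mixed.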
Fig. 1 Examples of identification of various head structures: (A) skull in CT; (B) CSF in Inversion Recovery; (C) fat tissue in T1-weighted Spin Echo; (D) cerebral metastasis in Proton Density image set; (E) gliosis in T2 dry; (F) brain abscess in Spin Echo T1 with contrast medium injection.
For accurate head model geometrical definition, the proposed image acquisition protocol provides for the use of the following multimodal scans:
1. CT for the skull compartment;
2. Inversion Recovery for GM, WM, cerebellum and CSF;
3. FLAIR scans for skin and muscles;
4. Spin Echo T1 for identification of fat tissue, para-nasal sinuses and brain lesions (e.g., brain abscess, gliosis, and cerebral metastasis);
5. Turbo Spin Echo T2 for the eyes;
6. Additional scans that might be necessary to enhance information about specific lesions with characteristics different from the lesion cases analyzed in this work.

These sets of multimodal images must then be integrated for the extraction, from each of them, of the compartments that can best be identified on the basis of those images, to be finally integrated in the 3-D volume conductor head model.

B. DT-MRI

In this work, a DT-MRI acquisition protocol has been determined and finely tuned with the aim of maximizing anatomical information about brain tissue anisotropy for head modeling purposes, with a slice thickness of 2 mm and 32 directions of DT-signal coding. The parameters which proved optimal for DT-MRI acquisitions were the following: MS Spin Echo single shot EPI, with an axial matrix of 256x256; field of view: 210 mm; voxel dimension: 0.82 mm x 0.82 mm x slice thickness, varying from 0.5 mm to 5 mm without spacing between adjacent slices - note that DT-MRI
spatial resolution is different from that of the other cited acquisition sequences, e.g., T2; no homogeneity correction, to avoid influencing the actual signal; no fold-over suppression; number of echoes: 1; flip angle: 90°; b-value: 1000. The field of view and voxel axial resolution were chosen so as to entirely include the skull from the skin layer on the forehead to the nape of the neck, to minimize reflection artifacts and to maximize spatial resolution compatibly with reasonable acquisition times. Three identical acquisitions were performed in the same session to prevent possible movement artifacts or field anisotropy from altering the experiment; in each of them, the number of directions of the DT-signal coding was varied from 6 to 32. Results were evaluated both from graphical rendering and from statistical values of the reconstructed WM fibers, i.e., mean length, number of fibers, number of fibers estimated to pass through the same voxel, and acquisition time. Fig. 2 shows an example of the DT-MRI images obtained with the described acquisition procedure, with an example of the performed processing of WM diffusion tensor information into conductivity tensor information, which has been integrated in the anisotropic modeling of the head volume conductor. Analysis of the data obtained in the post-processing phase evidenced a clear difficulty in identifying the skull's planar anisotropy, due to the presence of adjacent voxels with mutually different preponderant eigenvalues.

Fig. 2 DT-MRI slice acquired with one of the 32 directions of DT-signal coding (B) and the T1 baseline (A); WM conductivity tensor visualization, extracted from raw diffusion tensor data, with ellipsoidal tensor representation (C).
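The per-voxel step from a diffusion tensor to an anisotropy measure and a conductivity tensor can be sketched as below: fractional anisotropy is computed from the tensor eigenvalues (fibers with FA > 0.25 were retained in this work), and a conductivity tensor is derived by assuming shared eigenvectors with the diffusion tensor. The linear eigenvalue scaling used here is one published heuristic, labeled as an assumption, not necessarily the exact mapping used by the authors.

```python
import numpy as np

def fractional_anisotropy(D):
    """FA of a 3x3 symmetric diffusion tensor from its eigenvalues."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()                          # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

def conductivity_tensor(D, k=0.844, sigma_iso=0.0):
    """Map diffusion to conductivity assuming shared eigenvectors and a
    linear eigenvalue relationship sigma_i = k * d_i + sigma_iso. The
    constant k is a literature-derived value assumed here, not a value
    reported in this paper."""
    lam, V = np.linalg.eigh(D)
    sigma = k * lam + sigma_iso
    return V @ np.diag(sigma) @ V.T

# an isotropic tensor has FA = 0; a stick-like tensor approaches FA = 1
```

A tractography step would then keep only voxels with FA above the 0.25 threshold and follow the principal eigenvector from voxel to voxel, subject to the turning-angle constraint.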
Studies are currently being carried out to overcome the above described limits, in cooperation with the Maastricht Brain Imaging Center (M-BIC) of the University of Maastricht (NL), for the identification and testing of innovative acquisition protocols dedicated to this specific aim.

IV. CONCLUSIONS

In this paper a multimodal clinical imaging protocol is identified and proposed for the acquisition of patient data to be integrated for head model building for EEG brain activity mapping. The following multimodal imaging sequences should be acquired: CT for the skull; MR Inversion Recovery for GM, WM, cerebellum and CSF; FLAIR T2 for muscles and skin; Spin Echo T1 for fat tissue, para-nasal sinuses and brain lesions; Turbo Spin Echo T2 for the eyes. Suitable DT-MRI sequences have to be used for information about tissue anisotropy. The distance between adjacent slices should preferably be limited to 2-3 mm, possibly covering a volume extending from the head vertex to the first cervical vertebra. DTI acquisition can be limited to the reduced volume containing the anisotropic tissue under analysis. Resolution requirements are determined by the most demanding modality (DT-MRI), while the field of view (model extension) is determined by model completeness.
ACKNOWLEDGMENT Work supported by MIUR, Italy, National project PRIN 2004 n. 2004090530, by University of Trieste and by the Interuniversity Consortium CINECA, Casalecchio di Reno (BO), Italy.
REFERENCES
1. Vatta F, Bruno P, Inchingolo P (2002) Improving lesion conductivity estimate by means of EEG source localization sensitivity to model parameter. J Clin Neurophysiol 19:1-15
2. Cuffin BN (2001) Effects of modeling errors and EEG measurement montage on source localization accuracy. J Clin Neurophysiol 18:37-44
3. Vatta F, Bruno P, Inchingolo P (2005) Multiregion bicentric-spheres models of the head for the simulation of bioelectric phenomena. IEEE Trans Biomed Eng 52:384-389
4. Nicholson PW (1965) Specific impedance of cerebral white matter. Exp Neurol 13:386-401
5. Marin G et al. (1998) Influence of skull anisotropy for the forward and inverse problem in EEG. Hum Brain Mapp 6:250-269
6. Vatta F, Bruno P, Inchingolo P (2001) Influence of lesion geometry estimate on EEG source reconstruction. IFMBE Proc, vol. 1, Medicon 2001, Pula, Croatia, 2001, pp. 974-977
7. Bruno P, Hyttinen J, Inchingolo P et al. (2006) A FDM anisotropic formulation for EEG simulation. Proc. 28th Ann. Int. Conf. IEEE-EMBS, New York, USA, 2006, pp. 1121-1125
8. Inchingolo P, Beltrame M, Bosazzi P et al. (2006) O3-DPACS Open-Source Image-Data Manager/Archiver and HDW2 Image-Data Display: an IHE-compliant project pushing the e-health integration in the world. Comput Med Imaging Graph 30(6-7):391-406
9. 3D Slicer Users Guide. At http://www.slicer.org

Author: Federica Vatta
Institute: DEEI - University of Trieste
Street: Via Valerio 10
City: Trieste
Country: Italy
Email: [email protected]
Proposal and validation of a framework for High Performance 3D True Electrical Brain Activity Mapping

S. Mininel1,2, P. Bruno1,2, F. Meneghini1, F. Vatta1,2 and P. Inchingolo1,2

1 Bioengineering BRAIN Unit, DEEI, University of Trieste, Trieste, Italy
2 Higher Education in Clinical Engineering, University of Trieste, Trieste, Italy
Abstract—This paper presents an original problem solving framework named TEBAM, specifically conceived and designed to achieve high performance 3D True Electrical Brain Activity Mapping. We describe the integrated framework that has been proposed and developed, specifying TEBAM's design characteristics, implementation and tool interconnections (pipelines). TEBAM relies on patient-specific realistic head modeling for the EEG forward and inverse problem evaluation, and is implemented and optimized with a very flexible approach to solve, in a short time and by means of High Performance Computing resources, the large scale computations needed. Results of 3D True Electrical Brain Activity Mapping can be visualized in the TEBAM framework in different multimodal ways, combining the anatomical information with the computed results to give optimal insight into the computation output, relying also on stereographic visualization.

Keywords— HPC, brain activity mapping, EEG, visualization.
I. INTRODUCTION

Multimodal integration of electroencephalography (EEG) and clinical imaging data is a key point towards True Electrical Brain Activity Mapping, i.e., reconstructing and visualizing the neural sources of electrical brain activity within the specific patient's head with both high spatial and high temporal resolution, as the former allows measurement of brain activity with optimal temporal resolution while the latter is characterized by very high spatial resolution [1]. The EEG inverse problem is the process of estimating the optimum EEG source parameters responsible for a given EEG distribution measured at the scalp electrodes. This can be achieved by means of iterative computational methods, with a large number (several hundreds) of iterative EEG forward evaluations to find the optimal source parameters corresponding to the measured potentials [1]. To accomplish this non-trivial task, a suitable framework should be available. First of all, a precise and realistic representation of the electrical properties of the specific subject's head, in terms of shape and electric conductivities, is necessary to achieve an accurate EEG forward problem solution [1]. Moreover, the adopted head model should also be able to incorporate various sets of tissues with different conductivities [2]. This
is extremely important in clinical applications in which pathological formations such as brain lesions (which are characterized by a large variability in shape and conductivity) must also be included in the head model [3]. Once built, realistic head models require the use of demanding numerical computer methods for the EEG forward problem solution and hence for electrical brain activity mapping [1]. A suitable, flexible and high-performance framework should therefore account for all these constraints. In this paper, an original problem solving framework named TEBAM (True Electrical Brain Activity Mapping) is presented. TEBAM was specifically designed and implemented to account for all the above-mentioned constraints. The following sections present the design specification, structure and implementation of TEBAM, followed by the validation and testing of the framework.
II. TEBAM SCENARIO AND DESIGN
The EEG forward problem, which has to be solved iteratively in the TEBAM framework for electrical brain activity mapping, is governed by Poisson's differential equation [4]

∇ · (σ∇Φ) = ∇ · J^i = ρ    (eq. 1)
where J^i is the applied current density of the neural brain source (A/m²), σ is the tissue electrical conductivity ((Ω·m)⁻¹), and Φ is the electric potential in the problem domain. Realistic head models impose numerical computational methods for the solution of eq. 1, such as the Finite Difference Method (FDM), which has been implemented in the TEBAM framework thanks to its flexibility, which also allows an easy implementation of anisotropic electrically conductive domains. This typically involves the solution of a large and sparse system of linear algebraic equations (Ax=b). Hence, the main characteristics of the bioelectrical computations in the TEBAM framework are: 1) large-scale, i.e., large memory and CPU-time requirements; 2) iterative, as electrical brain activity mapping requires the EEG forward problem solution to be performed iteratively; 3) multistep, as simulations are typically composed of a fairly complex
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 513–516, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
sequence of steps, arranged in a pipeline and classified as modeling, simulation/computing and visualization. The TEBAM pipeline is composed of five steps: 1) construction of a model of the physical problem domain, in terms of shape and physical properties, given by the patient-specific volume of the head [1]; 2) application of boundary and/or initial conditions, as source modeling and specification of initial data for the iterative computations are required; 3) computing, as EEG forward and inverse solutions can be computed by solving a linear system of algebraic equations derived from the numerical solution of eq. 1; 4) validation and testing of the results, as during the development phase the correctness of the results has to be checked on simple physical test domains for which independent solution methods are available; 5) visualization, as simulation results have to be visualized by means of suitable Scientific Visualization tools [5]. TEBAM was designed as an integrated framework in which visualization is linked with computation and geometric design, to interactively explore (steer) a simulation in time and/or space. In summary, the TEBAM problem solving framework has been designed to address the following issues: 1) integration in data collection of multimodal anatomo-functional data; 2) integration in data analysis, as the modeling, simulation and visualization aspects of the problem have to be used in chorus; 3) interactivity, to understand cause-effect relationships; 4) extensibility, so that the result is not a monolithic solution to one problem but can be reused to solve new problems as well; 5) scalability: although a full EEG inverse problem solution in a short time requires the use of High Performance Computing resources, the tools can be run even on high-end PCs.
S. Mininel, P. Bruno, F. Meneghini, F. Vatta and P. Inchingolo

III. STRUCTURE OF TEBAM AND HPC IMPLEMENTATION

TEBAM provides an optimized dataflow programming framework, based on modules which implement components for computational, modeling and visualization tasks, building an interactive framework in which the researcher is free to change various parameters such as the mesh discretization, the iterative solution method, the neural source placement and the visualization tools displayed. The main building blocks of TEBAM are: 1) construction of the patient-specific realistic head model; 2) numerical EEG forward and inverse problem solution, with multiple iterative forward solutions; 3) visualization of the computed results.

As a first step, a 3D voxel matrix is created, modeling the volume conductor of the head of the specific patient under analysis. This is done by segmenting clinical images of the subject's head by means of 3D Slicer [6] and then assigning a scalar or a tensorial conductivity value to each identified voxel, according to the isotropic or anisotropic conductivity of the specific head model compartment [7]. The second step implies the building and solution of the large and sparse system of linear algebraic equations (Ax=b) derived from the numerical FDM discretization of eq. 1. The TEBAM framework has been designed to build and solve the equation system of step 2 efficiently, giving high flexibility in the choice of solution methods and being able to run, with minor modifications, either on a mono-processor PC or, in parallel, on large High Performance Computing (HPC) systems. HPC resources are an adequate instrument for a consistent reduction of the solution time of large-scale problems, as the computational load is subdivided among several CPUs and inter-CPU communication is managed by MPI (Message Passing Interface). The need for code parallelization and for the use of HPC in TEBAM is due to the magnitude of the problems addressed. In fact, a conductive head model derived from segmentation of a series of MRI images with adequate spatial resolution leads to a linear equation system with millions of unknowns for the solution of a single EEG forward problem. As the EEG inverse problem solution requires several iterative EEG forward problem solutions, HPC becomes mandatory to reduce computation times, especially for clinical application purposes. In TEBAM a typical parallelization strategy, known as "divide and conquer", has been adopted: each CPU solves the problem in its own sub-domain and MPI is used to exchange the contour values needed by each CPU. A specifically designed application was written in C++ and compiled in the Visual C 6.0 and gcc 3.0 frameworks to build a multi-platform application capable of running on either Windows or Linux machines.
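As a sketch of how the sparse system of step 2 arises from eq. 1, the snippet below assembles and solves a flux-conservative FDM discretization in a minimal 1D setting (TEBAM works on a 3D voxel grid). The two conductivity values and the point source are illustrative placeholders, not tissue data.

```python
import numpy as np

# Minimal 1D analogue of the flux-conservative FDM discretization of eq. 1,
# d/dx(sigma dPhi/dx) = rho, with Phi = 0 on both boundaries. The 1D case
# shows how the sparse system A x = b arises; a 3D voxel model is analogous.
n = 50                                # number of interior nodes
h = 1.0 / (n + 1)                     # grid spacing on a unit domain
# Two "tissues" with illustrative conductivities on the element faces:
sigma = np.where(np.arange(n + 1) < n // 2, 0.33, 0.014)
rho = np.zeros(n)
rho[n // 2] = 1.0                     # point-like source term

A = np.zeros((n, n))                  # dense here; real models need sparse storage
for i in range(n):
    A[i, i] = -(sigma[i] + sigma[i + 1]) / h**2   # flux balance at node i
    if i > 0:
        A[i, i - 1] = sigma[i] / h**2
    if i < n - 1:
        A[i, i + 1] = sigma[i + 1] / h**2

phi = np.linalg.solve(A, rho)         # potential satisfying the discrete eq. 1
assert np.allclose(A @ phi, rho)      # residual check of the discrete equation
```

Note that the matrix is symmetric because the same face conductivity couples each pair of neighboring nodes, which is what makes methods such as CG applicable.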
The libraries rely upon wxWidgets [8], a freeware and open-source multi-platform library that helps in the creation of graphical user interfaces (GUIs) and in several other tasks; VTK [5] for head model data reading and for the whole interactive 3D visualization pipeline; and PETSc [9] for linear system solution and parallelization issues. The solution application uses the PETSc libraries for two reasons: to create an open-source tool entirely based upon open-source libraries, and because these libraries provide a level of abstraction that leaves the low-level calls and message exchanges between CPUs "transparent", hence allowing focus on optimization and on the search for stable and accurate solution methods. The third step, visualization, is described in Section V.

IV. RESULTS

The TEBAM framework has been validated by means of EEG forward problem solution using a spherical head
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Proposal and validation of a framework for High Performance 3D True Electrical Brain Activity Mapping
model for which analytical solutions were available [2], using the successive over-relaxation (SOR) method. Optimization analysis was performed to improve code performance regarding both the sequential solution and the parallelization procedures. The PETSc libraries provide excellent profiling instruments that allow evaluation of the degree of optimization reached by the use of several CPUs in a parallel framework. Fig. 1 shows an example of optimization results for an EEG forward problem solution with a conductive head model matrix of 64x64x115 elements, computed on the IBM SP5 kindly made available by the Interuniversity Consortium CINECA (Bologna, Italy) to test and validate the applications presented in this paper. The PETSc libraries also give good flexibility and ease in the choice of suitable iterative solution methods and error tolerances. The next optimization step was therefore the search for solution methods and tolerances able to guarantee the best performance without sacrificing accuracy in the EEG forward problem solution and in True Electrical Brain Activity Mapping. Tests were carried out on a conductive head model constructed from segmentation of a set of 115 sagittal 256x256 MRI scans. The 3D conductivity matrix (the head model) obtained was sub-sampled to two volumes of lower resolution to reduce the computational load during tests. The following iterative solution methods were tested and analyzed: Successive Over-Relaxation (SOR); Symmetric SOR (SSOR); Conjugate Gradients (CG); Bi-Conjugate Gradients (BiCG); Squared Bi-Conjugate Gradients (BCGS). Different tolerance criteria were examined as the stopping parameter for the iterative solution, with tolerance values for the relative error norm ranging from 10^-6 to 10^-12. Comparisons between three iterative methods are shown in Fig. 2, for an EEG forward problem solution on a 64x64x115 head model on a mono-processor system (AMD Athlon XP, 2.2 GHz). The tables show the performance of the three methods in terms of the number of iterations needed to reach the required tolerance, solution time and memory needed. In this problem the CG method converges in a larger number of iterations, but with smaller memory needs and in a shorter time than BiCG. BCGS shows the best performance in solution time, but with larger memory requirements. The optimization and parallelization procedures lead to a large improvement in performance, shortening the computational time from 45 minutes to less than 1 minute (forward problem solution on a 128x128x115 model).

Fig. 2 Performance comparison with different iterative solution methods (cg = conjugate gradients; bicg = bi-conjugate gradients; bcgs = squared bi-conjugate gradients). Top: results for reaching a tolerance of 10^-7; bottom: tolerance of 10^-6.
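To make this kind of method comparison concrete, here is a hedged stand-in: minimal pure-NumPy SOR and CG solvers run on a small symmetric positive-definite 1D Laplacian (a toy substitute for the FDM matrix), counting iterations to a relative-residual tolerance. The counts illustrate the trade-off in Fig. 2 and are not the paper's measurements.

```python
import numpy as np

# Toy SPD system standing in for the FDM matrix (1D Laplacian, n = 100).
def make_system(n=100):
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A, np.ones(n)

def sor(A, b, omega=1.9, tol=1e-6, max_it=20000):
    """Successive over-relaxation; returns (solution, iterations used)."""
    x = np.zeros_like(b)
    n = b.size
    for it in range(1, max_it + 1):
        for i in range(n):  # sweep uses already-updated entries (Gauss-Seidel style)
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, it
    return x, max_it

def cg(A, b, tol=1e-6, max_it=20000):
    """Conjugate gradients; returns (solution, iterations used)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for it in range(1, max_it + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            return x, it
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_it

A, b = make_system()
x_sor, it_sor = sor(A, b)
x_cg, it_cg = cg(A, b)
print(it_sor, it_cg)   # iteration counts differ by method and tolerance
```

On a real 3D head-model matrix the relative ranking of methods can change, which is exactly why the paper benchmarks them per problem and per tolerance.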
Fig. 1 Compared performances with 1, 2, 4 or 8 CPUs on CINECA IBM SP5. Top: solution times (in seconds) and memory used (in MB). Bottom: number of floating point operations (in Mflops) by each CPU and for whole problem solution.
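The "divide and conquer" decomposition behind these multi-CPU measurements can be mimicked in plain Python, with no MPI needed for the sketch: a 1D Laplace problem is partitioned among hypothetical workers that each update only their own slice, and the shared array stands in for the MPI ghost-cell exchange of contour values between CPUs.

```python
import numpy as np

# Conceptual stand-in for the MPI "divide and conquer" strategy: each "CPU"
# owns a slice of a 1D Laplace problem and updates only its own nodes; the
# shared array plays the role of the ghost-cell (contour value) exchange.
n, n_workers, sweeps = 32, 4, 3000
x = np.zeros(n)
x[0], x[-1] = 1.0, 0.0                        # fixed boundary potentials
chunks = np.array_split(np.arange(n), n_workers)  # index ranges per worker

for _ in range(sweeps):                        # Jacobi sweeps
    new = x.copy()
    for owned in chunks:                       # each worker updates its slice only
        for i in owned:
            if 0 < i < n - 1:                  # stencil needs neighbor (ghost) values
                new[i] = 0.5 * (x[i - 1] + x[i + 1])
    x = new                                    # "exchange": all ghosts refreshed

# The Laplace solution between fixed ends is linear
assert np.allclose(x, np.linspace(1.0, 0.0, n), atol=1e-3)
```

In the real MPI version each worker holds only its slice plus one ghost layer per side, so the per-CPU memory and work shrink as CPUs are added, which is what Fig. 1 measures.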
V. VISUALIZATION

The visualization pipelines developed for TEBAM make full use of several data-fusion techniques and of 3D stereographic rendering, and have been developed using the VTK libraries [5]. The hardware stereo support used for testing is an auto-stereo display DTI 2015XLS Virtual Window (Dimension Technologies Inc.) based on Parallax Illumination technology. The visualization module of TEBAM focused on visualization techniques useful to help data analysis in the context of anatomo-functional integration. The objective in developing these visualization instruments was to provide a tool for a better "intuitive" understanding of the True Electrical Brain Activity Mapping procedures, both for research purposes and for future users or developers of TEBAM tools. The visualization output is divided into four panels (see Fig. 3), each with a different rendering showing different features. This multimodal data presentation helps in understanding the link between functional and anatomical data. In all four graphic visualizations, the user can freely "navigate" the model using the mouse to rotate, zoom and pan. The main rendering panel may be switched to stereo 3D mode to improve comprehension of complex configurations by adding depth cues. Most visualization parameters may be changed at will by the user, to allow a deep and meaningful "neuronavigation".

Fig. 3 Scalp and cortex surface with electric potential color-map (top); tissue cut plane with potential iso-lines (bottom).

VI. CONCLUSIONS

The TEBAM original problem solving framework presented in this paper is a powerful tool to analyze brain activity with high spatio-temporal resolution and accuracy. TEBAM's features overcome many important limits of several scientific and commercial software packages. Its qualifying features are: flexibility in the computational methods, flexibility in modeling to conform accurately to the specific patient's head, scalability from PC to HPC, and multimodal stereo visualization.

ACKNOWLEDGMENT

Work supported by MIUR, Italy, National project PRIN 2004 n. 2004090530, by the University of Trieste and by the Interuniversity Consortium CINECA, Casalecchio di Reno (BO), Italy.

REFERENCES
1. Baillet S, Mosher JC, Leahy RM (2001) Electromagnetic brain mapping. IEEE Signal Processing Magazine 18(6):14-30
2. Vatta F, Bruno P, Inchingolo P (2005) Multiregion bicentric-spheres models of the head for the simulation of bioelectric phenomena. IEEE Trans Biomed Eng 52:384-389
3. Vatta F, Bruno P, Inchingolo P (2002) Improving lesion conductivity estimate by means of EEG source localization sensitivity to model parameter. J Clin Neurophysiol 19:1-15
4. Bronzino JD, Ed. (1985) Numerical methods for bioelectric field problems. Biomedical Engineering Handbook, CRC, Boca Raton, FL, pp 161-188
5. Schroeder W, Martin K, Lorensen B (1996) The Visualization Toolkit: An Object-oriented Approach to 3D Graphics. Prentice-Hall, NJ
6. 3D Slicer Users Guide, at http://www.slicer.org
7. Bruno P, Hyttinen J, Inchingolo P et al. (2006) A FDM anisotropic formulation for EEG simulation. Proc. 28th Ann. Int. Conf. IEEE-EMBS, New York, USA, 2006, pp 1121-1125
8. Smart J, Hock K, Csomor S (2005) Cross-Platform GUI Programming with wxWidgets. Prentice Hall, NJ
9. Balay S et al. (2002) PETSc users manual. Technical Report ANL-95/11 Revision 2.1.5, Argonne National Laboratory

Author: Federica Vatta
Institute: DEEI - University of Trieste
Street: Via Valerio 10
City: Trieste
Country: Italy
Email: [email protected]
Quantitative EEG as a Diagnostic Tool in Patients with Head Injury and Posttraumatic Epilepsy

T. Bojic¹, B. Ljesevic², A. Dragin², S. Jovic², L. Schwirtlich², A. Stefanovic²

¹ Medical School of Belgrade, University of Belgrade, Belgrade, Serbia
² Institute for rehabilitation "Miroslav Zotovic", Belgrade, Serbia
Abstract— Using quantitative EEG (qEEG) analysis, we investigated the EEG traces of patients with brain trauma (with and without posttraumatic epilepsy) with respect to a control group. The aim of our work was to determine whether there are qEEG parameters sensitive to traumatic and epileptic changes of brain tissue, and how these parameters change after hyperventilation (HV), one of the routine methods for cerebral activation. On 69 patients (36 patients with posttraumatic epilepsy (PTE) and 33 without PTE) and a control group of 34 healthy subjects, EEG registration and analysis was performed before and after HV: Fast Fourier analysis was performed on 16-s segments and the amplitude mean value was calculated in ten frequency ranges in four EEG projections. HV induced significant differences in the F7-C3 projection in all three groups, so we analysed the differences for each frequency range in the F7-C3 projection for all three groups. Significant differences between the group of healthy subjects and the patients with PTE are registered in the intervals of low frequencies (0-3 Hz) and high frequencies (6-11 Hz). After HV, a statistically significant difference is observed in all frequency ranges. The factor "epilepsy" (patients with trauma without PTE vs. patients with PTE) marks significant differences in the high frequency ranges (6-11 Hz). For this factor, HV expands the group of significantly different frequency ranges towards the low frequency ranges. Consequently, the most sensitive frequency ranges for the factor "epilepsy" are in the high frequency range. There are no significant differences in any frequency range for the factor "trauma" (control group vs. patients with brain trauma without PTE) before HV. After HV, significant differences appear in the range of low (0-2 Hz) and high frequencies (7-11 Hz). qEEG analysis is a diagnostic tool of potentially high selectivity in the differential diagnosis of patients with brain trauma with or without PTE.
Keywords— qEEG, epilepsy, brain trauma.
I. INTRODUCTION

Traumatic brain injury is a nondegenerative, noncongenital injury of the brain tissue caused by mechanical force, which produces permanent or temporary dysfunction of cognitive, physical and psychosocial behavior with an accompanying decrease or change of the level of consciousness. Brain injury affects the tissue both through primary injury of the brain tissue and through the release of potentially toxic neurochemical substances, such as excitatory amino acids and their analogs, catecholamines, prostaglandins, activation of the pontine muscarinic cholinergic system, free radicals, calcium ions, arachidonic acid metabolites and immunologic factors. One of the chronic consequences of brain trauma is posttraumatic epilepsy (PTE). PTE is a pathological disturbance characterized by recurrent epileptic attacks, considered to be a consequence of brain injury. The global incidence of PTE is 5% of patients with closed head injury and 50% of patients with complicated cranial fracture and brain injury. The risk of PTE development is proportional to the severity of the head injury; intracerebral hemorrhage is significantly correlated with PTE; the probability of PTE is higher in patients with parietal and posterior frontal lesions, but it can evolve from brain injuries of all cortical areas. Epilepsy is more frequent in patients with larger lesions, especially those involving the left-side hippocampus [1]. The mechanism of posttraumatic epileptogenesis is still insufficiently revealed. There are indications of selective hippocampal cell death after percussion head injury in rodents, which is an analogue of the bilateral volume reduction of the hippocampus after brain trauma in humans. Its consequence is hippocampal sclerosis, which is also a characteristic of temporal lobe epilepsy [2]. Detection and monitoring of brain injury is an important area of research [3]. Nevertheless, there are currently no approved real-time approaches for detecting and monitoring such injury. Quantitative EEG (qEEG) analysis may provide a direct and non-invasive approach; EEG signals in the event of stroke, coma and trauma have been studied. The aim of our study was to find, by means of qEEG analysis, parameters correlated with the clinical status of patients with brain trauma, i.e. patients with posttraumatic epilepsy and patients without posttraumatic epilepsy, and further, to investigate how these parameters are influenced by hyperventilation (HV), one of the methods of cerebral activation.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 482–486, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. MATERIALS AND METHODS

The study was performed on two experimental groups: patients with mechanical brain injury (69, of which 36 were patients with PTE and 33 without PTE) and a control group of 34 healthy subjects. On the basis of the Head Injury Interdisciplinary Special Interest Group of the American Congress of Rehabilitation Medicine criteria, we categorized the mechanical brain injury in the selected groups as »severe degree injury« and »medium degree injury«. On the basis of neurological status, Computed Tomography and Magnetic Resonance Imaging, we categorized the lesions as »left side«, »right side« and »bilateral«.

A. Experimental protocol

EEG registration was performed in two 1-min epochs: before hyperventilation (HV), where patients were registered relaxed and with eyes closed, and after HV, a 3-min procedure of deep regular breathing. EEG was registered on a digital 32-channel EEG apparatus XLTEK in a dimly lit room (t = 19-21 °C). Electrodes were positioned following the 10-20 system.

B. Data analysis

On 16-s artefact-free segments from the epochs before and after HV, the Fast Fourier Transformation was performed and mean amplitudes were calculated in the frequency ranges 0-1 Hz, 1-2 Hz, 2-3 Hz, 3-4 Hz, 4-5 Hz, 5-6 Hz, 6-7 Hz, 7-8 Hz, 8-11 Hz and 11-30 Hz, in the projections F7-C3, T5-O1, F8-C4 and T6-O2. Data elaboration was performed using the program package PERSYST Insight II (Persyst Development Corporation, 1060 Sandreto Drive, Suite E-2, Prescott, AZ 86305). We calculated the difference between the states before and after HV in the statistical package GRAPHPAD by an ANOVA test and Bonferroni post-test.
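The band-amplitude step described above can be sketched as follows; the synthetic signal and the 256 Hz sampling rate are illustrative stand-ins for a real bipolar derivation such as F7-C3 (the paper does not state its sampling rate).

```python
import numpy as np

# A 16-s artefact-free segment is Fourier-transformed and the mean spectral
# amplitude is computed in the ten frequency ranges used in Tables 1-3.
fs, dur = 256, 16                                   # Hz, seconds (assumed values)
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(0)
# Synthetic "EEG": a dominant 2.5 Hz component plus noise
x = 20 * np.sin(2 * np.pi * 2.5 * t) + rng.normal(0, 1, t.size)

freqs = np.fft.rfftfreq(x.size, d=1 / fs)
amp = np.abs(np.fft.rfft(x)) / x.size               # amplitude spectrum

bands = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5),
         (5, 6), (6, 7), (7, 8), (8, 11), (11, 30)]
band_mean = {f"{lo}-{hi} Hz": amp[(freqs >= lo) & (freqs < hi)].mean()
             for lo, hi in bands}

# The 2.5 Hz component should dominate the 2-3 Hz range
assert max(band_mean, key=band_mean.get) == "2-3 Hz"
```

Each entry of `band_mean` corresponds to one cell of a table row; repeating this per projection and per epoch (before/after HV) reproduces the tables' layout.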
III. RESULTS

A. Quantitative EEG analysis in EEG projections F7-C3, T5-O1, F8-C4 and T6-O2 in three groups of experimental subjects (control group, group of patients without PTE and group of patients with PTE)

Table 1. Mean values of amplitudes (µV²) obtained by Fast Fourier transformation in ten frequency ranges for the control group of subjects in projections F7-C3, T5-O1, F8-C4 and T6-O2, in registration epochs before and after hyperventilation (HV = hyperventilation).

epoch      projection  0-1 Hz  1-2 Hz  2-3 Hz  3-4 Hz  4-5 Hz  5-6 Hz  6-7 Hz  7-8 Hz  8-11 Hz  11-30 Hz
before HV  F7-C3       18.3    21.5    24.7    21.9    19.2    14.9    10.5    9.0     6.8      3.0
after HV   F7-C3       18.6    21.9    25.1    22.3    19.5    15.1    10.7    9.2     6.9      3.1
Bonferroni Multiple Comparisons Test: t = 10.602, ***, p<0.001
before HV  T5-O1       4.8     5.7     6.5     5.8     5.1     4.0     2.9     2.7     2.5      1.1
after HV   T5-O1       4.8     5.6     6.5     5.8     5.1     4.0     2.9     2.7     2.6      1.2
Bonferroni Multiple Comparisons Test: t = 1.053, NS, p>0.05
before HV  F8-C4       7.2     8.6     10.0    8.9     7.8     6.0     4.3     3.7     2.9      1.4
after HV   F8-C4       7.1     8.5     9.8     8.8     7.7     6.0     4.3     3.8     3.0      1.4
Bonferroni Multiple Comparisons Test: t = 0.9452, NS, p>0.05
before HV  T6-O2       3.5     4.2     4.9     4.4     3.9     3.2     2.6     2.5     2.3      0.9
after HV   T6-O2       3.7     4.4     5.1     4.6     4.1     3.4     2.8     2.6     2.4      1.0
Bonferroni Multiple Comparisons Test: t = 3.244, *, p<0.05
Table 2. Mean values of amplitudes (µV²) obtained by Fast Fourier transformation in ten frequency ranges for the group of patients without posttraumatic epilepsy in projections F7-C3, T5-O1, F8-C4 and T6-O2, in registration epochs before and after hyperventilation (HV = hyperventilation).

epoch      projection  0-1 Hz  1-2 Hz  2-3 Hz  3-4 Hz  4-5 Hz  5-6 Hz  6-7 Hz  7-8 Hz  8-11 Hz  11-30 Hz
before HV  F7-C3       18.2    21.4    24.6    21.8    19.1    14.8    10.5    9.0     6.8      3.0
after HV   F7-C3       18.3    21.6    24.8    22.0    19.3    14.9    10.6    9.1     6.8      3.0
Bonferroni Multiple Comparisons Test: t = 6.160, ***, p<0.001
before HV  T5-O1       4.9     5.8     6.7     5.9     5.2     4.1     3.1     2.8     2.4      1.1
after HV   T5-O1       5.0     5.9     6.8     6.1     5.4     4.3     3.2     2.9     2.5      1.2
Bonferroni Multiple Comparisons Test: t = 14.751, ***, p<0.001
before HV  F8-C4       7.2     8.5     9.9     8.8     7.8     6.1     4.5     3.9     3.1      1.5
after HV   F8-C4       7.3     8.7     10.1    9.0     7.9     6.2     4.5     3.9     3.1      1.5
Bonferroni Multiple Comparisons Test: t = 2.521, NS, p>0.05
before HV  T6-O2       3.6     4.2     4.9     4.4     3.9     3.6     2.4     2.3     2.1      0.9
after HV   T6-O2       3.6     4.3     4.9     4.4     3.9     3.2     2.4     2.3     2.1      0.9
Bonferroni Multiple Comparisons Test: t = 1.394, NS, p>0.05
Table 3. Mean values of amplitudes (µV²) obtained by Fast Fourier transformation in ten frequency ranges for the group of patients with posttraumatic epilepsy in projections F7-C3, T5-O1, F8-C4 and T6-O2, in registration epochs before and after hyperventilation (HV = hyperventilation).

epoch      projection  0-1 Hz  1-2 Hz  2-3 Hz  3-4 Hz  4-5 Hz  5-6 Hz  6-7 Hz  7-8 Hz  8-11 Hz  11-30 Hz
before HV  F7-C3       18.2    21.4    24.6    21.8    19.1    14.9    10.6    9.1     6.9      3.0
after HV   F7-C3       18.5    21.7    24.9    22.2    19.4    15.1    10.8    9.2     7.0      3.1
Bonferroni Multiple Comparisons Test: t = 7.556, ***, p<0.001
before HV  T5-O1       5.4     6.4     7.4     6.6     5.9     4.7     3.6     3.3     2.8      1.1
after HV   T5-O1       5.3     6.3     7.3     6.6     5.8     4.7     3.5     3.2     2.7      1.1
Bonferroni Multiple Comparisons Test: t = 7.250, ***, p<0.001
before HV  F8-C4       7.3     8.6     10.0    8.9     7.9     6.2     4.6     4.0     3.2      1.4
after HV   F8-C4       7.8     9.3     10.8    9.7     8.6     6.8     4.9     4.3     3.3      1.5
Bonferroni Multiple Comparisons Test: t = 7.603, ***, p<0.001
before HV  T6-O2       3.4     4.2     4.9     4.4     3.9     3.2     2.6     2.5     2.3      0.9
after HV   T6-O2       3.7     4.4     5.1     4.6     4.1     3.4     2.8     2.6     2.4      1.0
Bonferroni Multiple Comparisons Test: t = 9.968, ***, p<0.001
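As an illustration of the kind of before/after comparison reported in the tables, the sketch below computes a simple paired t statistic over the ten band means of Table 3's F7-C3 rows. This is a stand-in, not the paper's procedure: the published t values come from ANOVA with a Bonferroni post-test on the underlying per-subject data, so a t computed from the rounded table means need not match them.

```python
import numpy as np

# Band means (µV^2) from the F7-C3 rows of Table 3 (patients with PTE).
before = np.array([18.2, 21.4, 24.6, 21.8, 19.1, 14.9, 10.6, 9.1, 6.9, 3.0])
after = np.array([18.5, 21.7, 24.9, 22.2, 19.4, 15.1, 10.8, 9.2, 7.0, 3.1])

d = after - before                                  # per-band change due to HV
t = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))    # paired t statistic, df = 9
print(round(t, 2))
```

A consistently positive difference across all ten bands is what drives the large t here, mirroring the table's finding that HV significantly changes the F7-C3 amplitudes in the PTE group.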
B. Effects of hyperventilation on the amplitude mean value obtained by Fast Fourier Transformation in ten frequency ranges in the F7-C3 projection in all three groups of subjects (control group, patients with posttraumatic epilepsy and patients without posttraumatic epilepsy)

Figure 1. Mean value of amplitudes (µV²) in the F7-C3 projection in ten frequency ranges before hyperventilation (panel A) and after hyperventilation (panel B), for the control group and the groups of patients with and without PTE. (*) Significant difference (p<0.05) between the control group of healthy subjects and the group of patients with posttraumatic epilepsy. (&) Significant difference between the group of patients without posttraumatic epilepsy and the group of patients with posttraumatic epilepsy. (#) Significant difference between the control group and the group of patients without posttraumatic epilepsy.
IV. DISCUSSION

We found that: 1) in the control group, HV changes the mean value of the amplitude in projections F7-C3 and T6-O2; 2) in the group of patients with brain trauma without PTE, HV changes the mean value of the amplitude in projections F7-C3 and T5-O1; 3) in the group of patients with brain trauma and PTE, HV changes the mean value of the amplitude in all projections.

These results indicate that the EEG projection common to all three groups and sensitive to HV is F7-C3, so we selected that projection for further qEEG analysis. Before HV, significant differences in the F7-C3 projection between the group of healthy subjects and the patients with PTE are registered in the intervals of low frequencies (0-3 Hz) and high frequencies (6-11 Hz). After HV, a statistically significant difference is observed in all frequency ranges. This finding confirms that HV increases the difference between the electrical activity of the epileptic brain and that of the healthy brain, and that the most sensitive frequency ranges for the factors "trauma and epilepsy" are the low and high frequency ranges. In the same projection, the most sensitive frequency ranges for the factor "epilepsy" are the high frequency ranges (6-11 Hz). For this factor, HV expands the group of significantly different frequency ranges towards the low frequency ranges. Consequently, the most sensitive frequency regions for the factor "epilepsy" are in the high frequency range. For the factor "trauma" there are no statistically significant differences in any frequency region before HV. After HV, significant differences for this factor appear in the range of low (0-2 Hz) and high frequencies (7-11 Hz).

V. CONCLUSIONS

On the basis of our results, qEEG analysis is a diagnostic tool of potentially high selectivity in the differential diagnosis of patients with brain trauma with and without PTE. With new advances, qEEG might play an important role in basic research and in clinical studies of brain injury and PTE.
REFERENCES
1. Ljesevic B et al. (2006) EEG characteristics of patients with traumatic brain injury: comparative study of patients with posttraumatic epilepsy and subjects without posttraumatic epilepsy. VI Congress of Physical Medicine of Serbia and Montenegro, Vrnjacka Banja, Serbia and Montenegro, 2006, pp 60-61
2. Kharatishvili I, Nissinen JP, McIntosh TK et al. (2006) A model of posttraumatic epilepsy induced by lateral fluid-percussion brain injury in rats. Neurosci 140:685-697
3. Wallace BE et al. (2001) A history and review of quantitative EEG in traumatic brain injury. J Head Trauma Rehabil 16:165-190

Author: T. Bojic
Institute: Center for Multidisciplinary Studies, Belgrade University
Street: Dr Subotica 5
City: Belgrade
Country: Serbia
Email: [email protected]
The Colorful Brain: Compact Visualisation of Routine EEG Recordings

Michel J.A.M. van Putten

Department of Clinical Neurophysiology, Medisch Spectrum Twente, and Institute of Technical Medicine, University of Twente, Enschede, Netherlands. E-mail: [email protected]

Abstract— Clinical EEG recordings are typically evaluated by visual analysis of the various waveforms. Besides the long learning curve, this is rather subjective and prone to human error. To assist in the visual interpretation, various quantitative techniques have been proposed. Here, we describe a triplet of features that quantify the spatial distribution of the various EEG waveforms and their coherence, represented as three time-frequency plots. The technique allows compression of 20 minutes of EEG recording into a single picture that captures various essential elements, including anterior-posterior differentiation, reactivity to eyes opening and closing, and photic driving. In addition, it may detect disorders, including various manifestations of epileptiform discharges.

Keywords— quantitative EEG, time-frequency plots, transformation, neurology.
I. INTRODUCTION Historically, clinical EEG interpretation is strongly based on visual analysis, with a rather heuristic description of the various ’grapho’-elements. This includes the evaluation of the spatial distribution of various frequencies and the reaction to a variety of stimuli, including eyes opening and closing, hyperventilation and photic stimulation. These aspects are important features of the ’background pattern’, since they contribute to the mean statistical characteristics of the EEG signal. In addition, the occurrence of various transients is noted, such as spikes, triphasic waves, or polymorphic delta activity [1]. Besides the rather long learning curve associated with adequate visual EEG interpretation in a clinical environment, classical visual interpretation may suffer from various inter- and intra-observer inconsistencies [2,3]. In addition, it is rather subjective and cannot be used for continuous EEG monitoring, for instance in ICUs [5]. These considerations motivated the development of an alternative visual representation of the EEG, especially ’designed’ for applicability in a routine clinical environment. The proposed presentation of the EEG time series also allows the extraction of various quantitative features, relevant for (computer-assisted) differential diagnosis and follow up. Here, we will focus on the transformation of essential features from the background pattern of the EEG.
The features proposed include time-frequency representations of two novel symmetry measures and a synchronization measure. This triplet captures three highly relevant aspects of the dynamics of the EEG background pattern, which correlate strongly with various neurological conditions. The first two features are related to the time-frequency distribution of spectral power across the scalp. The third feature relates to the synchronization between various recording positions.

II. METHODS

2.1 Clinical Material

Data were obtained from our digital EEG database, containing routine clinical EEG recordings and recordings from healthy volunteers. Recordings were approximately 20 minutes in duration and include the standard reactivity testing procedures (eyes closed, eyes open, hyperventilation and photic stimulation). EEGs were recorded according to the international 10-20 system with Ag/AgCl electrodes, using a common average reference. This allows the reconstruction of various references afterwards, including a small Laplacian. Electrode impedance was kept below 5 kOhm to reduce polarization effects. Recording was performed using BrainLab (OSG bvba, Belgium) EEG instrumentation. The sampling frequency, fs, was either 250 or 512 Hz.

2.2 Time-frequency representation of the spatial distribution of the spectral power

Although the brain is functionally asymmetric, with lateralized functions such as language, memory and spatial processing, the mean spectral power of the normal EEG is (nearly) left-right symmetric. This left-right symmetry is contrasted with a physiological asymmetry in the frontal-to-posterior direction. This frontal-to-posterior gradient is a function of frequency and of the 'state' of the subject. For instance, in the eyes-closed condition, the power of the alpha frequencies is typically largest over the posterior areas; in the eyes-open condition, the event-related desynchronization reduces the power over the posterior areas.
This two-dimensional spatial distribution, as a function of frequency and time (event), can now be quantified as follows.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 497–500, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Michel J.A.M. van Putten
Consider a standard EEG recorded according to the international 10-20 system, and re-referenced to a (small) Laplacian. At each electrode position, j = 1, 2, ..., M, with M = 19 the number of electrodes, we record a time series, Vj(t). By estimating the spectral power of each time series from each recording position, j, we obtain a number of Fourier coefficients, A_ij, with i = 1, 2, ..., N, with N the number of coefficients. Each Fourier coefficient, A_ij, is subsequently weighted with its Euclidean distance from Cz (0,0) in either the x-direction (dx_j) or the y-direction (dy_j), according to

    C_lr(i) = ( Σ_{j=1..M} A_ij · dx_j ) / ( Σ_{j=1..M} A_ij )      (1)

    C_ap(i) = ( Σ_{j=1..M} A_ij · dy_j ) / ( Σ_{j=1..M} A_ij )      (2)

for the x- and y-direction, respectively. The coefficients C(i) now reflect the center of gravity of the spectral power in the x- and y-direction, as a function of frequency (index i). This is illustrated in Figure 1. Spectral power was estimated using Welch's averaged periodogram method. The signal recorded from each electrode, containing 10 s of data (fs · 10 data points), was divided into half-overlapping sections with window length (NFFT) of 512 points (approximately 2 s), each of which was detrended and windowed with a Hanning window. The magnitudes of the NFFT-point discrete FFTs of the sections were averaged to form the spectral density, with spectral resolution fs/NFFT Hz. Therefore, two time- and frequency-dependent functions are obtained, normalized in the range -1 to 1, that reflect the center of gravity in the left-right (x-cog) and anterior-posterior (y-cog) brain direction.

2.3 Synchrony

Here, we quantify synchrony by estimating the mean coherence between all 19 electrode positions and their nearest neighbors (NN). For example, the mean NN-coherence at Cz is given by the mean value of the four coherence (coh) values coh(Cz,Fz), coh(Cz,C4), coh(Cz,C3) and coh(Cz,Pz). Values are calculated after re-referencing the EEG to a small Laplacian. From these 19 mean NN-coherence values, the maximum value is used (mNNC). Coherence was calculated for half-overlapping windows with length (NFFT) of 512 points (approximately 2 s), each of which was detrended and windowed with a Hanning window. This third feature, therefore, is another time- and frequency-dependent function, in the range [0, 1].

III. RESULTS
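As a numerical illustration of the center-of-gravity equations and the nearest-neighbor coherence of Sections 2.2-2.3, here is a minimal Python sketch. The electrode offsets and the toy signals are illustrative inventions, not the paper's data; only the Welch settings (Hann window, NFFT = 512, half-overlapping sections) follow the text.

```python
import numpy as np
from scipy.signal import coherence

def spectral_center_of_gravity(A, dx, dy):
    """Eqs. (1)-(2): power-weighted mean electrode offset per frequency bin.

    A  : (N, M) spectral power, N frequency bins x M electrodes
    dx : (M,) left-right electrode offsets from Cz, normalized to [-1, 1]
    dy : (M,) anterior-posterior offsets from Cz, normalized to [-1, 1]
    """
    total = A.sum(axis=1)                 # denominator: power summed over electrodes
    c_lr = (A * dx).sum(axis=1) / total   # Eq. (1), left-right cog
    c_ap = (A * dy).sum(axis=1) / total   # Eq. (2), anterior-posterior cog
    return c_lr, c_ap

# Toy case: one frequency bin, two posterior electrodes, power concentrated
# on the left one (offsets as in Fig. 1, e.g. O1 at x = -0.5, y = -1)
A = np.array([[3.0, 1.0]])
dx = np.array([-0.5, 0.5])
dy = np.array([-1.0, -1.0])
c_lr, c_ap = spectral_center_of_gravity(A, dx, dy)   # c_lr[0] = -0.25, c_ap[0] = -1.0

# Mean nearest-neighbor coherence at one electrode, using the paper's
# Welch settings; the four neighbor signals here are synthetic stand-ins
fs = 250.0
rng = np.random.default_rng(1)
cz = rng.standard_normal(5120)
neighbors = [cz + rng.standard_normal(5120) for _ in range(4)]  # Fz, C3, C4, Pz
cohs = [coherence(cz, nb, fs=fs, window="hann", nperseg=512, noverlap=256)[1]
        for nb in neighbors]
mean_nn_coh = np.mean(cohs, axis=0)   # one value per frequency bin, in [0, 1]
```

Taking the maximum of the 19 per-electrode curves would then give the mNNC feature described in Section 2.3.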
Fig. 1 Illustration of the calculation of the center of gravity. The Fourier coefficient A_ij of O1 (indicated by the red dot) is weighted with -0.5 in the x-direction and -1 in the y-direction.

Fig. 2 Normal EEG. At the top (row A), four topoplots show the spatial distribution of the time-averaged nearest neighbor coherence in the four frequency ranges. The TF-plot of the coherence (B) shows a high value in the alpha range; discontinuities indicate reactivity to eye opening. At the same time, the time-frequency y-cog plot (C) indicates that the center of gravity of the alpha rhythm shifts from highly posterior (blue) to more central regions (green). The two lines of dark blue dots (arrows) in the y-cog TF-curve reflect photic stimulation, where the center of gravity in the y-direction is a function of the stimulation frequency.

Application of the technique to an EEG recording from a healthy volunteer is presented in Figure 2. At the top (row A), four topoplots are shown, with the mean nearest neighbor coherence in the four frequency bands. The first
The Colorful Brain: Compact Visualisation of Routine EEG Recordings
time-frequency plot, directly below (row B), presents the maximum nearest neighbor coherence (mNNC), as discussed in the methods section. The coherence value is color coded in the TF-graph, and shows a clear maximum around 10 Hz, well modulated by eye opening (EO). The two TF-plots labeled (C) and (D) show the center of gravity for each frequency and each epoch. Dark colors (blue) indicate negative values towards occipital (middle curve, y-cog) or the left side (lower curve, x-cog). Similarly, red colors indicate a center of gravity towards the frontal area (middle curve) or the right side (lower curve). Note the significant negative values in the y-cog curve in the alpha-frequency range (blue, occipital), well modulated by eye opening. Photic driving induces negative values at the frequency of stimulation as well, indicating that at these driving frequencies the maximum power is again located over the posterior area (arrows). The lower curve (row D) shows the TF-plot of the hemispheric asymmetry. Here, values are near zero, in a physiological range. Figure 3 shows the results in a patient suffering from a focal status epilepticus.

IV. CONCLUSIONS

This study shows that transformation of various EEG features into another visual domain, using three time-frequency representations of essential features, assists in visualizing various elements of the EEG dynamics in physiological and pathological situations. It provides a compact representation of the background pattern, which is, besides transients, an essential element in the clinical evaluation of EEG recordings. Several relevant features for the interpretation of the EEG background pattern are captured by the method, and a strong relationship with the visual interpretation is maintained. The proposed transformation of clinical EEG recordings to an alternative visual domain is strongly physiologically motivated, and has a close relationship to the classical interpretation. It visualizes various highly relevant background features, including increased or decreased synchronization and the spatio-temporal dynamics of the power distribution. We believe that these are important elements for clinical acceptance. Similar arguments were recently put forward by Piotr Durka, who stated that although a variety of new methods is proposed each year for the analysis of the EEG, "very few have any direct relationship to the traditional visual analysis. That means that their results cannot be directly related to the most valuable knowledge base of 70 years of experience, collected by means of the visual analysis of EEG" [4]. Due to space limitations, we have presented only two cases of the proposed visualization of EEG dynamics. These examples may serve to illustrate the potential clinical utility for the analysis of the EEG background pattern. Furthermore, this transformation may contribute to a more objective interpretation, given the quantification of various EEG features. Finally, the method is not intended to replace classical visual EEG analysis; thus far, the latter remains essential for the final interpretation. However, in our view, quantitative EEG techniques, including transformation of the signal to an alternative visual domain, will in the near future more strongly support this process. It is not even unlikely that, ultimately, quantitative analysis will replace some elements that are now considered the sole domain of experienced electroencephalographers.
ACKNOWLEDGMENT G. Drost is acknowledged for the fruitful discussions during the preparation of this paper.
Fig. 3 EEG from a patient suffering from a non-convulsive status epilepticus, with recurrent bursts of epileptiform discharges. Note the intermittent high coherence values, extending from approximately 2-10 Hz; approximately 10 episodes are present (row B).

REFERENCES

1. Lopes da Silva F, Niedermeyer E. Electroencephalography: basic principles, clinical applications and related fields, 1999.
2. Nuwer MR. Assessing digital and quantitative EEG in clinical settings. J Clin Neurophysiol 1998; 15(6):458-463.
3. van Putten MJAM. Extended BSI for continuous EEG monitoring in carotid endarterectomy. Clin Neurophysiol 2006; 117:2661-2666.
4. Durka PJ, Blinowska KJ. A unified time-frequency parametrization of EEGs. IEEE Eng Med Biol Mag 2001; 20(5):47-53.
5. van Putten MJAM. Nearest neighbor phase synchronization as a measure to detect seizure activity from scalp EEG recordings. J Clin Neurophysiol 2003; 20(5):320-325.
Author: M.J.A.M. van Putten MD PhD
Institute: Medisch Spectrum Twente
Street: P.O. Box 50000
City: 7500 KA Enschede
Country: Netherlands
Email: [email protected]
Using ANN on EEG signals to predict working memory task response

V. Logar1, A. Belic1, B. Koritnik2, S. Brezan2, V. Rutar2, J. Zidar2, R. Karba1 and D. Matko1

1 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
2 University Medical Centre Ljubljana, Institute of Clinical Neurophysiology, Ljubljana, Slovenia
Abstract— Many authors have shown that performing working-memory tasks causes elevated neuronal activity in several areas of the human brain, which suggests information exchange between them. Since the information exchanged, encoded in brain waves, is measurable by electroencephalography (EEG), it is reasonable to assume that it can be extracted with an appropriate method. In this paper we present a method for extracting the information using an artificial neural network (ANN), which we consider as a stimulus-response model. The EEG was recorded from three subjects while they performed a modified Sternberg task that required them to respond to each trial with the answer "true" or "false". The study revealed that a stimulus-response model can successfully be identified by observing phase-demodulated theta-band EEG signals 1 second prior to a subject's answer. The results showed that the model was able to predict the answers from the EEG signals with an average reliability of 75% for all three subjects. From this we concluded that the stimulus-response model successfully observes the system states and consequently predicts the correct answer using the EEG signals as inputs.

Keywords— EEG, artificial neural networks, working memory, response prediction, Sternberg task
I. INTRODUCTION

Applying a simplistic approach, in this paper we consider the brain as a non-linear dynamical system that can respond to multiple external stimuli with multiple responses in parallel. We also view it as a causal and deterministic system, since in the experiment reported here responses occur after the presentation of stimuli, and similar stimuli elicit similar responses. Hence, it is possible to use identification methods for dynamical systems to obtain simplified mathematical models of the brain that describe the brain's responses to simple external stimuli. These mathematical models represent an input-output mapping of the brain. According to systems theory, the output of a system is a function of the system's states. Therefore, by observing the states, the output of the system can be calculated. Similarly, by measuring the states of the brain, its responses can be predicted. However, as the brain solves problems in parallel, not all measurable states are related to all responses. Therefore, only relevant states need to be extracted from the measurements to predict the response.
In our study we investigated the informational integration in working memory. Working memory is a process by which the brain sustains the activity of cells whose firing represents information derived either from a brief sensory input or a readout from long-term memory [8]. It is the brain's ability to transiently hold and manipulate goal-related information, which is reflected in an elevated, persistent activity of the prefrontal cortex neurons, to guide forthcoming actions ([1], [3]). According to Fuster, the prefrontal cortex plays an important role in behavioural organization [2]. Many authors ([5], [6], [8]) have described the increased rhythmic coupling of different areas of the brain during working-memory tasks, and it has been proposed that this rhythmic coupling relates to information exchange [6] or informational integration. Numerous reports also suggest that brain activity in the theta frequency band is heavily involved in the active maintenance and recall of working-memory representations ([4], [8], [11], [12], [13]). Kahana [10] suggests that an important role in this process is carried out by the phase characteristics of the theta rhythm. Therefore, it is reasonable to assume that the brain states are coded in electromagnetic activity and are thus measurable using electroencephalography (EEG). The relation between the stimulus and the response can thus be described as a stimulus-response model, where the EEG signals can be considered as observations of the brain's states. The aim of this study was to investigate whether it is possible to identify a mathematical model that would link the EEG signals with the brain responses during working-memory tasks. An ANN was used to predict the measured responses of the subject from the EEG signal. Successful training of the ANN would support the assumption that the working-memory content encoded in the EEG signals can successfully be extracted using an ANN.

II. MATERIALS AND METHODS

A. Subjects and EEG data

In this study we used the data from three healthy, right-handed, male subjects (informed consent), aged 23, 24 and 27 years. The EEG signal was measured while the subjects
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 501–504, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
performed working-memory tasks, which were modified versions of a Sternberg paradigm [14]. Simultaneously with the EEG signal, a log file with task details, subject responses and timestamps was recorded. B. Sternberg task The main reason for choosing a Sternberg task over other mental tasks is that the periods of encoding, retention and recognition are all separated in time, which allows us to study activity development during the different stages of short-term memory processing [8]. The modified Sternberg paradigm consisted of four tasks and involved a presentation of verbal-visual and goal stimuli to the subject before and after a short retention period, respectively. The activity tasks performed were as follows: memorize-reorder (M-R), reorder (R), memorize (M) and wait (W). All four tasks required an observation of different character sets, their manipulation and response according to the task’s instruction. Randomly, after every few activity tasks, the subjects were allowed 10 seconds of relaxation. The general structure of all the tasks was the same and is presented in Fig. 1. As shown in Fig. 1 each task started with a task instruction that told the subject which type of information processing needed to be performed (memorize-reorder, reorder, memorize or wait). After the task instructions, four alphabetic characters were presented to the subject on the screen for half a second. Then, as shown in Table 1, during some tasks the characters were removed, while during the other tasks they remained until the end of the task. After that the start signal appeared which indicated the beginning of the retention period. During the 4-second retention period the subject had to mentally perform the information processing required by the task, described in Table 1. Then the probe question was presented and the subject was given 1 second for a brief thought. 
The probe question was of the nX form, where X was any character of the presented character set and n was the position of the character in the processed set. Afterwards, the subject had to indicate whether the answer to the probe question was true or false by pressing the left or right mouse button with his right hand. At the end of every task the subject was allowed to rest for approximately 3 seconds before a new task started.
[Fig. 1 near here: trial timeline — task instruction (0-1.5 s), set presentation (1.5-2.0 s), retention period (2.0-6.0 s), probe and reflection time (6.0-7.0 s), response (true/false) and rest (7.0-10.0 s).]
Table 1 Differences between the tasks regarding the information processing

Task   Characters removed?   Information processing
M-R    YES                   remember the presented characters and reorder them alphabetically
R      NO                    reorder the presented characters alphabetically
M      YES                   remember the presented characters as they appeared
W      NO                    observe the presented characters
C. Signal processing and ANN

To obtain the best possible prediction with the ANN, the signal processing followed the studies and suggestions made so far in the field of working-memory EEG analysis. First, 1-second intervals prior to the subject's response from all 4 tasks were selected to form an input/output data set. The reason for using the data of all 4 tasks is that the cortex activity prior to the answer was the activity of the working memory that was designated to answer the question, regardless of the task. As some authors suggest that information related to the working memory might be coded in the theta frequency band ([7], [10]), the EEG signals were band-pass filtered to obtain theta rhythms. Then, since the phase characteristics of the EEG signals could play an important role in information exchange ([9], [10]), the theta rhythms were phase demodulated. Finally, after the phase demodulation we applied a principal component analysis (PCA) and used the 15 most significant components of the EEG signal for further analysis. The purpose of using PCA was to reduce the dimensionality of the input data and to reduce the linear dependency of the input signals, which leads to more efficient training of the ANN. For this study a three-layer feed-forward ANN with 10 neurons in the first layer, 2 neurons in the second layer, and one neuron in the output layer was used. The neurons in the first and second layers had a tangent sigmoid (tansig) activation function and the output neuron had a linear activation function. The neural network was trained using the scaled conjugate gradient backpropagation (trainscg) algorithm. The overall model, including signal processing and ANN, that predicts the brain's response from the EEG signals can be represented schematically, as shown in Fig. 2.
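The preprocessing chain (theta band-pass filtering, phase demodulation, PCA) can be sketched as below. This is a sketch under assumptions, not the authors' code: the 4-8 Hz band edges, the 4th-order Butterworth filter, and the removal of a mid-band linear phase ramp as the demodulation step are all our choices, since the paper does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def preprocess_epoch(eeg, fs, band=(4.0, 8.0), n_components=15):
    """Theta band-pass, phase demodulation via the analytic signal, then PCA.

    eeg: (channels, samples) array for one epoch.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    theta = filtfilt(b, a, eeg, axis=1)          # zero-phase theta rhythms
    # instantaneous phase of the theta rhythm
    phase = np.unwrap(np.angle(hilbert(theta, axis=1)), axis=1)
    # demodulate: subtract the phase ramp of an assumed mid-band carrier
    t = np.arange(eeg.shape[1]) / fs
    demod = phase - 2.0 * np.pi * np.mean(band) * t
    # PCA via SVD, keeping the most significant components
    centered = demod - demod.mean(axis=1, keepdims=True)
    u, _, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_components].T @ centered      # (n_components, samples)

# e.g. one 1 s epoch of 19-channel EEG (synthetic noise here, fs assumed 250 Hz)
rng = np.random.default_rng(0)
pcs = preprocess_epoch(rng.standard_normal((19, 250)), fs=250.0)
```

The 15 principal-component time series returned here correspond to the ANN inputs described in the text.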
Fig. 1 Schematic representation of the modified Sternberg task
Fig. 2 Schematic representation of the stimulus-response model: EEG signals → band-pass filter → phase demodulation → PCA → ANN → predicted response (compared with the measured response)

III. RESULTS

Figures 3 to 5 show the model's response when using the validation period of the EEG signal as its input. For the validation set we chose the interval of the EEG signal following the verification interval, which was not part of the input/output training data set. The thick line represents the measured subject's responses, while the thin line represents the model's response prediction. The measured and predicted responses between vertical dotted lines represent a separate trial of 1 second in length. If the mean value of the model's output for a trial was lower than 0.5, the response was assumed to be false; if the mean value was higher than 0.5, the response was assumed to be true. Figures 3 to 5 show that it was possible to train the stimulus-response model to predict the answers of the subjects from phase-demodulated EEG signals with approximately 72% reliability for the first subject (18 correctly predicted answers from a total of 25), 75% reliability for the second subject (15 correctly predicted answers from a total of 20) and 80% reliability for the third subject (12 correctly predicted answers from a total of 15). Several trials were made to train the ANN with different training sets for subject 1, to see if the reliability of the response prediction changes when the input/output data
changes. The percentages of the reliabilities are very similar to the one shown in Fig. 3, and they are collected in Table 2. From this we can conclude that different training sets carry approximately the same information relevant for the prediction and do not affect the ANN's response reliability. This also eliminates the possibility that the ANN's response prediction is the result of a random event when using only one data set.

Fig. 4 Comparison between measured and predicted answers for subject 2

Table 2 Reliability of the model's response prediction when using different training and validation sets from subject 1

Training set   2     3     4     5     6     mean
Reliability    72%   72%   76%   76%   72%   73.6%
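The network and decision rule described above can be sketched in plain NumPy. The weights below are random placeholders (the trained values are not published), and the 200-sample trial length is an assumption; only the 15-10-2-1 layer sizes, the tansig/linear activations, and the 0.5 threshold come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_predict(x, weights):
    """Forward pass of the 15-10-2-1 feed-forward net: two tanh ('tansig')
    hidden layers and one linear output neuron."""
    (w1, b1), (w2, b2), (w3, b3) = weights
    h1 = np.tanh(w1 @ x + b1)
    h2 = np.tanh(w2 @ h1 + b2)
    return (w3 @ h2 + b3)[0]

# random placeholder weights, layer sizes as in the paper
weights = [(rng.standard_normal((10, 15)), np.zeros(10)),
           (rng.standard_normal((2, 10)), np.zeros(2)),
           (rng.standard_normal((1, 2)), np.zeros(1))]

# one 1 s trial: one 15-component input vector per time step
trial = rng.standard_normal((200, 15))
outputs = np.array([ann_predict(x, weights) for x in trial])
predicted_true = outputs.mean() > 0.5    # the paper's 0.5 decision threshold
```

Averaging the network output over the trial before thresholding, as done here, mirrors the trial-level decision rule stated in the results section.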
IV. DISCUSSION

In this study we examined an ANN-based stimulus-response model to find the best possible response prediction when using the EEG signals of three subjects performing a modified Sternberg task.

Fig. 3 Comparison between measured and predicted answers for subject 1

Fig. 5 Comparison between measured and predicted answers for subject 3
The ANN structure was obtained experimentally by comparing the responses of various types of network design, numbers of neurons and activation functions with the responses recorded during the EEG sessions. The three-layer feed-forward network proved to be the most appropriate choice for this study. The signal processing used in this study was chosen according to the studies and suggestions in the field of working-memory EEG analysis made so far. The phase-demodulated signals proved to be the most suitable input selection, as the results obtained using these signals showed that it is possible to predict the answers from the EEG signals with an average reliability of 75% for all three subjects. Since the results are comparable for all three test subjects and the response prediction is much higher than a random generation of answers, the model's output cannot be considered the result of a random event. Considering all the simplifications that were made in the proposed model, taking into account that the brain is a permanently adaptive system, and that some correct answers might have been guessed, the prediction success of the ANN is very high. As indicated by the ANN, all the combinations of 15 principal components that can be observed at least one second prior to the answer are typical for the corresponding answer. The fact that a model can be identified to describe the relation between the EEG signals and the brain responses shows that the EEG signals are indicative of the brain state estimates that are relevant to the stimulus response. Since the brain is a very complex system, it is very difficult to say whether the trained ANN represents a model of working memory for the logical or the physical answer. The Sternberg task elicits the preparation of motor activity with delayed execution, so the relation to the motor activity of the hand is obvious.
However, if we consider the fact that the relation exists for the whole second before the answer, this suggests that the working memory is very likely involved in the process.
V. CONCLUSIONS

Considering the complexity of data like the EEG and of a task like the Sternberg task, it is remarkable that a simple ANN, simplistic theta-band filtering and phase demodulation can give a reasonably good prediction of the subject's performance. Phase demodulation may thus be a useful approach for analysing the EEG related to working-memory tasks. It is possible, however, that phase demodulation also describes some aspects of working-memory activity in the brain itself.

REFERENCES

1. Durstewitz D, Seamans JK, Sejnowski TJ (2000) Neurocomputational models of working memory. Nature Neuroscience 3:1184-1191
2. Fuster J (1984) Behavioral electrophysiology of the prefrontal cortex. Trends in Neurosciences 7:408-414
3. Fuster J (2000) Cortical dynamics of memory. International Journal of Psychophysiology 35:155-164
4. Gevins A, Smith ME, McEvoy L, Yu D (1997) High resolution EEG mapping of cortical activation related to working memory: effects of task difficulty, type of processing and practice. Cerebral Cortex 7(4):374-385
5. Howard MW, Rizzuto DS, Caplan JB, Madsen JR, Lisman J, Aschenbrenner-Scheibe R, Schulze-Bonhage A, Kahana MJ (2003) Gamma oscillations correlate with working memory load in humans. Cerebral Cortex 13(12):1369-1374
6. Jensen O (2001) Information transfer between rhythmically coupled networks: reading the hippocampal phase code. Neural Computation 13(12):2743-2761
7. Jensen O, Tesche CD (2002) Frontal theta activity in humans increases with memory load in a working memory task. European Journal of Neuroscience 15(8):1395-1399
8. Jensen O, Gelfand J, Kounios J, Lisman JE (2002) Oscillations in the alpha band (9-12 Hz) increase with memory load during retention in a short-term memory task. Cerebral Cortex 12(8):877-882
9. Jensen O, Lisman JE (2005) Hippocampal sequence-encoding driven by a cortical multi-item working memory buffer. Trends in Neurosciences 28(2):67-72
10. Kahana MJ, Seelig D, Madsen JR (2001) Theta returns. Current Opinion in Neurobiology 11:739-744
11. Klimesch W, Doppelmayr M, Schimke H, Ripper B (1997) Theta synchronization and alpha desynchronization in a memory task. International Journal of Psychophysiology 34(2):169-176
12. Kopp F, Schroger E, Lipka S (2004) Neural networks engaged in short-term memory rehearsal are disrupted by irrelevant speech in human subjects. Neuroscience Letters 354(1):42-45
13. Sarnthein J, Petsche H, Rappelsberger P, Shaw GL, von Stein A (1998) Synchronization between prefrontal and posterior association cortex during human working memory. Proceedings of the National Academy of Sciences of the USA 95(12):7092-7096
14. Sternberg S (1966) High-speed scanning in human memory. Science 153:652-654

Author: Vito Logar
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska 25
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
Comparison of Four Calculation Techniques for Estimation of Local Arterial Compliance

R. Raamat, J. Talts and K. Jagomägi

Department of Physiology, University of Tartu, Tartu, Estonia

Abstract— Local arterial compliance of the finger arteries, C, is derived from pressure and volume waveforms recorded during cuff inflation, applying four different estimation techniques. While the first algorithm, Camplit = ΔV/ΔP (where ΔP is the finger arterial pulse pressure and ΔV is the corresponding pulsatile volumetric change), takes into account the whole amplitude of the pulses, the other three estimates use synchronous fragments of the pressure and volume waveforms in their upstroke, downstroke and top area, respectively (slope-based estimation). Finger pressure and volume waveforms are recorded by the Finapres monitor (Ohmeda, USA) and the UT9201 physiograph (University of Tartu, Estonia), respectively. Results in 13 volunteers demonstrate that the slope-based estimation gives higher compliance values and steeper peaks of the compliance vs. transmural pressure relationship compared to the amplitude-based estimation. The best results regarding the noise component were obtained by the amplitude-based estimation. Among the slope-based techniques, the top-fragment-based estimation introduced less fluctuation than the upstroke- or downstroke-fragment-based estimation.

Keywords— Local arterial compliance, vascular tone, calculation algorithms, finger pulse pressure, Finapres.
I. INTRODUCTION

The most comprehensive information on arterial compliance is obtained by applying continuous (beat-to-beat) measurement [1-3]. In the beat-to-beat implementation, the compliance is very often measured by simultaneously observing the amplitude of the pulsatile blood volume and pressure changes. In this case, the amplitude-based formula Camplit = ΔV/ΔP is applied, where ΔP is the arterial pulse pressure and ΔV the corresponding change in arterial volume [1]. Since the amplitude-based estimation uses the heart-generated pulses as test signals, this technique does not need any external pressure or volume oscillator to be connected. A drawback of this method is that, due to the large amplitude (40-50 mmHg or more) of the arterial pulse pressure compared to the S-shaped non-linearity of the P-V relationship, the amplitude-based compliance curves are less sharp than those obtained by using a small test signal amplitude [4].

To calculate the slope-based compliance Cslope = dV/dP, external pressure or volume oscillations are usually applied to the cuff. In this case the chosen amplitude of the vibration does not exceed 20 mmHg, and the frequency is set between 20-50 Hz so that it can easily be separated from the heart pulses by linear filtering [5-6]. In the present study an alternative to the external modulation is applied: the heart-generated pressure and volume waveforms are divided into parts of a smaller amplitude by means of temporal fragmentation, and the current value of compliance (e.g. slope-based compliance) is calculated as a ratio of amplitudes of these smaller fragments of the volume and pressure waveforms. It is well known that, as a rule, the volume pulse is delayed in relation to the pressure pulse. Therefore, before estimating the slope-based compliance, the time shift between the pulses should be eliminated. We apply this fragmentation technique to the pressure and volume waveforms recorded non-invasively in the finger arteries of subjects during cuff inflation at rest. The derived compliance vs. transmural pressure relationships allow a comparison of four different estimation techniques.
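Eliminating the volume-pulse delay mentioned in the introduction can be sketched with a cross-correlation search. Cross-correlation is an assumed choice here; the paper only states that the time shift must be eliminated, not how.

```python
import numpy as np

def align_volume_to_pressure(p, v, max_lag):
    """Shift v backwards by the integer lag (in samples) that maximizes its
    cross-correlation with p; returns the aligned signal and the lag."""
    p0 = p - p.mean()
    v0 = v - v.mean()
    # correlation of p with v delayed by k samples, k = 0 .. max_lag
    corr = [np.dot(p0[: len(p0) - k], v0[k:]) for k in range(max_lag + 1)]
    best = int(np.argmax(corr))
    return np.roll(v, -best), best

# demo: a synthetic volume pulse delayed by 5 samples relative to pressure
n = np.arange(400)
p = np.sin(2 * np.pi * n / 100.0)
v = np.roll(p, 5)
v_aligned, lag = align_volume_to_pressure(p, v, max_lag=20)   # lag == 5
```

After this alignment, the fragment-wise ΔV/ΔP ratios described in the methods can be taken over truly synchronous samples.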
II. METHODS Subjects: A group of 13 volunteers, 8 females and 5 males, aged from 19 to 28, were studied. They had no history of vascular disease and gave their informed consent to participate in the study. The study was approved by the Ethics Committee of the University of Tartu. Experimental design: The subject rested in a supine position on a couch. The fingers of the right hand were used for recording pressure and volume waveforms. After an initial equilibrium period (10 minutes) a pressure scan at a rate of 5 mmHg/s was performed. The pressure in the volume-measuring cuff was raised from 20 mmHg to suprasystolic (140–160 mmHg). The experiments were carried out at room temperature 22-25 °C.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 562–565, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Beat-to-beat finger arterial pulse pressure: Beat-to-beat finger arterial blood pressure was measured by the Finapres 2300 BP monitor (Ohmeda, USA). The Finapres follows the idea of the volume clamp method (dynamic unloaded arterial wall principle) introduced by Peñáz [7], and is able to record the arterial pressure waveform P(t).

Beat-to-beat finger volumetric pulses: The beat-to-beat finger volumetric waveform V(t) was measured by the UT9201 physiograph (University of Tartu, Estonia).

Data processing: The analog signals from the Finapres and UT9201 instruments were digitized by an ADC (16-bit accuracy, sampling rate 200 Hz) and transferred to the computer. Cardio-synchronized mean arterial pressure (MAP), systolic (Psyst) and diastolic (Pdiast) pressures as well as heart rate (HR) were also estimated from the full arterial pressure wave. The beat-to-beat finger arterial pulse pressure was calculated as the difference ΔP = Psyst - Pdiast. We applied four types of compliance estimates:

• Camplit = ΔV/ΔP, where ΔP is the finger arterial pulse pressure and ΔV is the corresponding pulsatile volumetric change.
• Cupstroke = ΔVup/ΔPup, where ΔVup and ΔPup are synchronous fragments of the volume and pressure waveforms taken 40 ms before the peak values of the pulses (duration 20 ms).
• Cdownstroke = ΔVdown/ΔPdown, where ΔVdown and ΔPdown are synchronous fragments of the volume and pressure waveforms taken 90 ms after the peak values of the pulses (duration 60 ms).
• Ctop = ΔVtop/ΔPtop, where ΔVtop and ΔPtop are synchronous fragments of the volume and pressure waveforms with a duration of 90 ms taken near the peak values of the pulses.

The duration of the fragments in the last three estimates was chosen in order to obtain a small pressure change of about 10-20 mmHg. In this case the obtained values of compliance can be considered slope-based ones.
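The four estimates above can be sketched for a single beat as follows. The fragment timings (-40 ms/20 ms, +90 ms/60 ms, 90 ms near the peak) follow the text, but the exact placement of the 'top' fragment, here centered on the peak, is our assumption, and the pulse below is synthetic.

```python
import numpy as np

FS = 200  # Hz, the sampling rate stated in the paper

def fragment_delta(x, peak, start_ms, dur_ms):
    """Change in x over a fragment positioned relative to the pulse peak."""
    i0 = peak + int(start_ms * FS / 1000)
    i1 = i0 + int(dur_ms * FS / 1000)
    return x[i1] - x[i0]

def compliance_estimates(p, v):
    """The four estimates for one beat. p: pressure (mmHg), v: volume (a.u.)."""
    peak = int(np.argmax(p))
    c_amplit = (v.max() - v.min()) / (p.max() - p.min())           # Camplit
    c_up   = fragment_delta(v, peak, -40, 20) / fragment_delta(p, peak, -40, 20)
    c_down = fragment_delta(v, peak,  90, 60) / fragment_delta(p, peak,  90, 60)
    c_top  = fragment_delta(v, peak, -45, 90) / fragment_delta(p, peak, -45, 90)
    return c_amplit, c_up, c_down, c_top

# Synthetic, asymmetric test pulse with a strictly linear P-V relation,
# for which all four estimates must coincide (C = 2 a.u./mmHg)
t = np.arange(FS) / FS                           # one 1 s beat
p = 80.0 + 40.0 * t * np.exp(1.0 - 3.0 * t)      # pressure, mmHg
v = 2.0 * p                                      # linear vessel
```

On a real, S-shaped P-V relationship the four values diverge, which is exactly the effect the paper compares.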
It should be pointed out that a specific feature of photoplethysmographic registration is that the measured volumetric pulses are in arbitrary units, and their scaling factors can be considered constant only during a single experimental session in one person. To overcome this limitation, a normalization of the compliance signal was introduced: it was sufficient to measure the volumetric signal in arbitrary units (a.u.) and to express the relative beat-to-beat compliance for every individual during cuff inflation as the ratio Cnorm = C/C50, where C is the current compliance value and C50 is the corresponding value at Ptransm = 50 mmHg. In this way the compliance was treated further as a relative variable. The mean transmural pressure Ptransm was calculated as the difference between the mean arterial pressure and the cuff pressure: Ptransm = MAP – Pcuff.

From the recorded data, the compliance vs. mean transmural pressure relationships were derived. The noise in the compliance readings introduced by the different estimation techniques was assessed by non-linear regression: the calculated points of the compliance-pressure relationship were approximated by a 6th-order polynomial, and R2 was used to characterize the fluctuation in the results obtained by the different estimation techniques. To test for significant differences, the Wilcoxon matched-pairs signed-rank test was used with a significance level of 0.05. The group-averaged values are expressed as a median with a 95% confidence interval for the median.

III. RESULTS

Fig. 1 illustrates the different compliance estimates in Subject I. Fig. 2 demonstrates the polynomial fitting procedure for Subject G. Table 1 summarizes the individual and group-averaged results for the 4 estimation methods in 13 subjects. Cmax/C50 is the ratio of the maximum compliance during the scan to the value at Ptransm = 50 mmHg. The parameter R2 characterizes the noise in every estimate.
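The normalization Cnorm = C/C50 and the polynomial noise assessment can be sketched as follows; this is a minimal illustration, and the linear interpolation used to pick C50 at Ptransm = 50 mmHg is an assumption of the sketch, not the authors' documented procedure.

```python
import numpy as np

def normalized_compliance_curve(c, map_mmhg, p_cuff, deg=6):
    """Sketch of the normalization and noise assessment described above.

    c:        beat-to-beat compliance values (a.u.) during a cuff scan
    map_mmhg: beat-to-beat mean arterial pressure (mmHg)
    p_cuff:   cuff pressure at each beat (mmHg)
    Returns transmural pressure, normalized compliance C/C50 and the
    R^2 of a 6th-order polynomial fit (used here as a noise index).
    """
    c = np.asarray(c, float)
    p_transm = np.asarray(map_mmhg, float) - np.asarray(p_cuff, float)

    # C50: compliance at Ptransm = 50 mmHg (linear interpolation assumed)
    order = np.argsort(p_transm)
    c50 = np.interp(50.0, p_transm[order], c[order])
    c_norm = c / c50                           # Cnorm = C / C50

    # 6th-order polynomial fit; R^2 characterizes the fluctuation
    coef = np.polyfit(p_transm, c_norm, deg)
    fit = np.polyval(coef, p_transm)
    ss_res = np.sum((c_norm - fit) ** 2)
    ss_tot = np.sum((c_norm - c_norm.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return p_transm, c_norm, r2
```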
Fig. 1. Compliance vs. transmural pressure relationships in Subject I obtained by applying various calculation techniques. The plot shows C (a.u.) against Ptransm (mmHg) for Camplit, Cupstroke, Cdownstroke and Ctop.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
R. Raamat, J. Talts and K. Jagomägi

Fig. 2. Approximation of the readings of the normalized compliance by the 6th-order polynomial in Subject G. Amplitude-based estimation (a, R2 = 0.995); slope-based estimation in the upstroke area (b, R2 = 0.958), downstroke area (c, R2 = 0.934) and top area (d, R2 = 0.967). Panels plot C/C50 (a.u.) against Ptransm (mmHg).

IV. DISCUSSION
The results show that there are differences between the outcomes of the four estimation techniques. Fig. 1 demonstrates that the slope-based modifications give higher compliance values and steeper peaks than the amplitude-based estimation. This finding is consistent with earlier reports [8, 9]. A more detailed analysis reveals that the median of the normalized compliance Cmax/C50 for the amplitude-based calculation technique was 6.6 (Table 1), while the slope-based estimates for the pulse upstroke, downstroke and top fragments equalled 10.8, 10.6 and 11.4, respectively. The difference between the amplitude-based and slope-based estimation in the top and upstroke areas was statistically significant (p = 0.009 and p = 0.02, respectively). For the downstroke area this difference was borderline (p = 0.07), but may become significant with a larger number of subjects.

We assessed the noise component in the compliance readings by non-linear regression. Every plot of the normalized compliance vs. transmural pressure relationship was approximated by a 6th-order polynomial (Fig. 2), and R2 was used to characterize the fluctuation in the plot. The best polynomial fit (the least noise) was found for the amplitude-based compliance estimate (Table 1), with a median goodness of fit R2 of 0.99. For the slope-based estimation, the best result was obtained near the top of the pulses (R2 = 0.97); the group-averaged R2 values for the pulse upstroke and downstroke fragments equalled 0.95 and 0.93, respectively.

V. CONCLUSIONS

The slope-based estimation gives higher compliance values and steeper peaks of the compliance vs. transmural pressure relationship than the amplitude-based estimation. Regarding the noise component, the best results were obtained by the amplitude-based estimation. Within the slope-based techniques, the top-fragment-based estimation appeared more effective than the upstroke- or downstroke-fragment-based estimation.
ACKNOWLEDGMENT

This work was supported by Grant 6487 from the Estonian Science Foundation.
Table 1. Individual and group-averaged data obtained by applying four different compliance estimates in 13 subjects.

| Subject | HR, 1/min | MAP, mmHg | Amplitude Cmax/C50 | Amplitude R2 | Upstroke Cmax/C50 | Upstroke R2 | Downstroke Cmax/C50 | Downstroke R2 | Top Cmax/C50 | Top R2 |
|---|---|---|---|---|---|---|---|---|---|---|
| A | 68 | 72 | 3.0 | 0.990 | 4.6 | 0.956 | 3.0 | 0.982 | 5.1 | 0.963 |
| B | 64 | 79 | 3.4 | 0.980 | 16.9 | 0.853 | 14.3 | 0.740 | 18.5 | 0.969 |
| C | 65 | 79 | 7.3 | 0.988 | 8.4 | 0.636 | 10.6 | 0.767 | 9.0 | 0.804 |
| D | 55 | 78 | 5.2 | 0.992 | 38.6 | 0.932 | 12.4 | 0.846 | 10.0 | 0.930 |
| E | 61 | 71 | 6.6 | 0.981 | 14.0 | 0.831 | 24.8 | 0.951 | 18.9 | 0.974 |
| F | 82 | 88 | 6.7 | 0.994 | 4.4 | 0.876 | 6.0 | 0.978 | 5.4 | 0.991 |
| G | 68 | 72 | 11.0 | 0.995 | 11.9 | 0.958 | 18.6 | 0.934 | 12.7 | 0.967 |
| H | 63 | 73 | 3.5 | 0.994 | 10.8 | 0.946 | 17.2 | 0.864 | 11.4 | 0.956 |
| I | 66 | 93 | 14.1 | 0.988 | 14.7 | 0.976 | 10.4 | 0.913 | 17.3 | 0.974 |
| J | 83 | 94 | 7.9 | 0.970 | 9.0 | 0.967 | 5.0 | 0.948 | 11.4 | 0.980 |
| K | 63 | 71 | 5.2 | 0.991 | 11.6 | 0.962 | 36.7 | 0.962 | 27.3 | 0.966 |
| L | 66 | 69 | 2.1 | 0.993 | 2.6 | 0.962 | 2.7 | 0.973 | 2.9 | 0.958 |
| M | 89 | 79 | 7.3 | 0.980 | 6.3 | 0.928 | 3.9 | 0.823 | 4.4 | 0.879 |
| Median | 66 | 78 | 6.6 | 0.99 | 10.8 | 0.95 | 10.6 | 0.93 | 11.4 | 0.97 |
| 95% CI | 63–82 | 72–88 | 3.4–7.9 | 0.980–0.994 | 4.6–14.0 | 0.853–0.962 | 3.9–18.6 | 0.823–0.973 | 5.1–18.5 | 0.930–0.974 |
| p vs. amplitude |  |  |  |  | p=0.02 | p<0.01 | p=0.07 | p<0.01 | p=0.009 | p<0.01 |

Significance of differences is tested by the Wilcoxon matched-pairs test; each slope-based column is compared with the corresponding amplitude-based column.
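The Wilcoxon matched-pairs comparison reported in the table can be reproduced from the tabulated Cmax/C50 columns. A sketch using SciPy follows (the table reports p = 0.009 for the top vs. amplitude comparison, column 10 vs. 4; with default settings SciPy falls back to a normal approximation here because of tied ranks):

```python
from scipy.stats import wilcoxon

# Cmax/C50 values for the 13 subjects: amplitude-based (column 4)
# and top-fragment slope-based (column 10) estimates from Table 1
amplitude = [3.0, 3.4, 7.3, 5.2, 6.6, 6.7, 11.0, 3.5, 14.1, 7.9, 5.2, 2.1, 7.3]
top = [5.1, 18.5, 9.0, 10.0, 18.9, 5.4, 12.7, 11.4, 17.3, 11.4, 27.3, 2.9, 4.4]

# two-sided Wilcoxon matched-pairs signed-rank test;
# the table reports p = 0.009 for this comparison
stat, p = wilcoxon(amplitude, top)
```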
REFERENCES

1. Shimazu H, Fukuoka M, Ito H et al. (1985) Non-invasive measurement of beat-to-beat vascular viscoelastic properties in human fingers and forearms. Med Biol Eng Comput 23:43–47
2. Yamakoshi K (1995) Volume-compensation method for non-invasive measurement of instantaneous arterial blood pressure - principle, methodology, and some applications. Homeostasis 36:90–119
3. Peñáz J, Honzikova N, Jurak P (1997) Vibration plethysmography: a method for studying the viscoelastic properties of finger arteries. Med Biol Eng Comput 35:633–637
4. Raamat R, Talts J, Jagomägi K et al. (1999) Mathematical modelling of non-invasive oscillometric finger mean blood pressure measurement by maximum oscillation criterion. Med Biol Eng Comput 37:784–788
5. Shimazu H, Ito H, Kawarada A et al. (1989) Vibration technique for indirect measurement of diastolic arterial pressure in human fingers. Med Biol Eng Comput 27:130–136
6. Drzewiecki G, Pilla JJ (1998) Non-invasive measurement of the human brachial artery pressure-area relation in collapse and hypertension. Ann Biomed Eng 26:965–974
7. Peñáz J (1973) Photoelectric measurement of blood pressure, volume and flow in the finger. Digest of the 10th International Conference on Medical and Biological Engineering, Dresden, p 104
8. Jagomägi K, Raamat R, Talts J et al. (2005) Recording of dynamic arterial compliance changes during hand elevation. Clin Physiol Funct Imaging 25:350–356
9. Raamat R, Talts J, Jagomägi K (2007) Application of amplitude-based and slope-based algorithms to determine beat-to-beat finger arterial compliance during handgrip exercise. Med Eng Phys (in press)

Address of the corresponding author:
Author: Raamat R
Institute: University of Tartu
Street: 18 Ulikooli str
City: Tartu
Country: Estonia
Email: [email protected]
Computer Assisted Optimization of Biventricular Pacing Assuming Ventricular Heterogeneity

R. Miri1, M. Reumann1, D. Farina1, B. Osswald2, O. Dössel1
1 Institute of Biomedical Engineering, Universität Karlsruhe (TH), Karlsruhe, Germany
2 Department of Cardiac Surgery, University of Heidelberg, Heidelberg, Germany
Abstract— Reduced cardiac output, dysfunction of the conduction system, atrio-ventricular block, bundle branch blocks and remodeling of the chambers are results of congestive heart failure (CHF). Biventricular pacing as cardiac resynchronization therapy (CRT) is a recognized therapy for the treatment of heart failure. The present paper investigates an automated non-invasive strategy to optimize CRT with respect to electrode positioning and timing delays, based on a complex three-dimensional computer model of the human heart. The anatomical models chosen for this study were the segmented data set of the Visible Man and a patient data set with dilated ventricles and left bundle branch block. The excitation propagation and intra-ventricular conduction were simulated with the Ten Tusscher electrophysiological cell model and an adaptive cellular automaton. The pathologies simulated were a total atrio-ventricular (AV) block and a left bundle branch block (LBBB) in conjunction with reduced intra-ventricular conduction velocities. The simulated activation times of the different myocytes in the healthy and diseased heart models are compared in terms of the root mean square error. The outcomes of the investigation show that the positioning of the electrodes, together with a proper timing delay, influences the efficiency of the resynchronization therapy. The proposed method may assist the surgeon in therapy planning.

Keywords— Congestive heart failure, biventricular pacing, atrio-ventricular block, left bundle branch block.
I. INTRODUCTION

Congestive heart failure (CHF) affects more than 15 million people in the western population, and this number is expected to increase. Cardiac resynchronization therapy (CRT) has been shown to improve haemodynamics and clinical symptoms of congestive heart failure as well as survival [1]. Results from mechanistic studies, observational evaluations and randomized controlled trials have consistently demonstrated significant improvement in quality of life, functional status and exercise capacity in patients with New York Heart Association (NYHA) functional class III and IV heart failure who are assigned to active resynchronization therapy [2-5]. Although CRT has advanced to an established therapy for CHF patients today, a non-invasive strategy for optimal treatment with biventricular pacing devices has not yet been established. Optimization of pacing parameters such as the atrio-ventricular (A-V) delay, the inter-ventricular (V-V) delay and the pacing sites is important [6, 7]. However, a non-invasive, automatically computed optimization strategy has not yet been proposed. Mathematical computer heart models that consider electrophysiological properties can yield this information, which otherwise cannot be obtained in a non-invasive way. This article suggests an automatic, non-invasive procedure based on a computer model of the heart that can be used to evaluate different electrode positions and timing delays pre-operatively. The model can also be used post-operatively to determine the optimal A-V and V-V timing with respect to the final lead positions.

II. METHODS

A. Computer model

The first anatomical model was derived from the Visible Man (VM) data set (National Library of Medicine, Bethesda, Maryland, USA) [8]. Its 1 mm resolution yields 145×126×197 cubic voxels with isotropic side length; the digital images of the cryosections are spaced 1 mm apart. The second anatomical data set used in this research is from a patient with dilated ventricles and an underlying left bundle branch block. The ventricular model was generated from three-dimensional MRI data and has a resolution of 192×163×180 cubic voxels with 1 mm side length. Additionally, digital image processing techniques including filtering, segmentation and classification were used to derive the anatomical models presented in this work [9]. The fiber orientation of the ventricular muscles was included in the model in order to represent the excitation propagation more precisely. The angle of the fibers with respect to the circumferential vector of the spherical coordinate system is considered in the anatomical model [10]. An adaptive cellular automaton in combination with action potentials precalculated by the Ten Tusscher model for ventricular myocytes [11] is used in conjunction with a heterogeneous ventricular wall model. Action potential
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 541–544, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
curves have been calculated for cells embedded into myocardial tissue using a bidomain model. The excitation propagates in a model of the bundle branches in which the branches are not in electrical contact with the endocardium. The data set includes a conduction tree with the atrio-ventricular (AV) node, the bundle of His, the left and right Tawara branches, the anterior and posterior left fascicles and the Purkinje fibers in both ventricles. When the excitation from the Purkinje fibers reaches the endocardium, the voxel representing myocardial tissue is activated and the excitation starts propagating through the ventricular wall. The pathologies simulated were a total atrio-ventricular block (AVB) and a left bundle branch block (LBBB) in conjunction with reduced intra-ventricular conduction velocities. The model of AVB used in this work sets the conduction properties of the AV node in such a way that the bundle of His is not excited. Similarly, if the left Tawara branch is disabled for conduction, a left bundle branch block is created. The simulations were carried out with a physiological intra-ventricular conduction delay. Thus, the optimization method was applied to six different pathological set-ups in two anatomical data sets.

B. Optimization and electrode pacing

A BVP device is composed of three electrodes: a sensing electrode in the right atrium and two pacing leads, one each in the left and right ventricle. After an excitation is detected in the right atrium through the sensing electrode, either the left or the right ventricle is stimulated after a programmed delay, called the A-V delay. Since the left ventricle is commonly stimulated before the right ventricle, a positive V-V delay refers in this work to left-before-right ventricular stimulation, and a negative delay to right-before-left stimulation. The optimization strategy used is based on the assumption that the optimal cardiac output is achieved by sinus rhythm and normal electrophysiological parameters. Thus, the aim of BVP is to obtain a cardiac activation as close to the sinus rhythm as possible. The isochrones, i.e. the activation times of each myocyte, are computed for one cardiac cycle. The isochrones for the physiological excitation during sinus rhythm are used as a reference and are compared with the computed isochrones for the pathological or paced case in terms of the root mean square error (ERMS).
$E_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - e_i\right)^2}$   (1)
Thus, the ERMS delivers the deviation of the activation time of the pathology from sinus rhythm over the whole number of voxel elements N, with xi the activation time of voxel i for sinus rhythm and ei the activation time of voxel i for the pathology. By placing stimuli at the ventricles, the activation of the myocardium is forced closer to the physiological activation time. An optimization can therefore be achieved by minimizing the ERMS through adjustment of the A-V and V-V delay. The algorithm can be described as follows:
1. Compute the isochrones for the physiological case.
2. Compute the isochrones for the pathological case.
3. Compute the isochrones for each electrode pair and each A-V and V-V delay.
4. Calculate the ERMS for the pathology and all pacing set-ups, taking the isochrones for the physiological case as reference.
For each electrode pair, a matrix of ERMS values is obtained as a function of the A-V and V-V delay. The smallest error yields the pacemaker timing that achieves the highest resemblance to sinus rhythm with respect to cardiac activation. The right ventricular electrodes are placed in the right ventricular apex and in the middle and top of the septum. The left ventricular electrodes are placed in the anterior and posterior branches of the coronary sinus as well as on the left ventricular free wall (Figure 1). Lead positioning was performed interactively. Overall, 36 electrode set-ups were investigated for each pathology. An A-V delay of 60–220 ms (VM) and 60–260 ms (patient) with 20 ms increments was used for the evaluation of the optimal timing delays. The V-V delay was adjusted for each A-V delay from -30 ms to 50 ms (VM) and -30 ms to 70 ms (patient) with 10 ms increments. The optimization is carried out automatically given the electrode positions, the pathological set-up and the various A-V and V-V delays. Thus, 3240 simulations for the VM and 4356 simulations for the patient were executed for each pathological set-up.

C. Results

A variety of conduction abnormalities corresponding to the different pathologies were simulated to yield reference values for the optimization of the A-V and V-V delay. The conduction velocities were set to 0%, 20% and 40% reduction of the ventricular conduction velocity. The calculated ERMS for the without-pacing cases and the best-achieved results containing the minimal ERMS are listed in Table 1 as functions of the conduction velocity reduction.
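Eq. (1) and the delay grid search described above can be sketched as follows; `simulate` is a hypothetical stand-in for the cellular-automaton run, not part of the authors' software.

```python
import numpy as np
from itertools import product

def e_rms(reference, paced):
    """Eq. (1): RMS deviation of activation times from sinus rhythm."""
    x = np.asarray(reference, float)
    e = np.asarray(paced, float)
    return np.sqrt(np.mean((x - e) ** 2))

def optimize_timing(reference, simulate, av_delays, vv_delays):
    """Grid search over A-V and V-V delays for one electrode pair.

    `simulate(av, vv)` must return the activation time of every voxel
    for the paced case.  Returns the delay pair with the smallest ERMS
    and the full error matrix over the delay grid.
    """
    errors = np.empty((len(av_delays), len(vv_delays)))
    for (i, av), (j, vv) in product(enumerate(av_delays), enumerate(vv_delays)):
        errors[i, j] = e_rms(reference, simulate(av, vv))
    i, j = np.unravel_index(np.argmin(errors), errors.shape)
    return av_delays[i], vv_delays[j], errors
```

With the delay grids quoted in the text (9 A-V times 9 V-V values for the VM model), one such error matrix is computed per electrode pair, and the minimum of all matrices picks the optimal set-up.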
Fig. 1 The electrode positions, from left to right: right electrodes, left electrodes in the anterior view and left electrodes in the posterior view. (a) Visible Man; (b) Patient.
The results of pacing show an improvement of 85–97% for AVB and 31–70% for LBBB, depending on the conduction velocity. The ERMS does not change for large A-V delays, while it descends to a minimum value for small to medium A-V delays. An optimal A-V and V-V delay was found for each electrode pair. Figure 2 demonstrates the optimal ERMS for the different pathologies. For the VM data set, the optimization method mostly found the optimal electrode position in the anterior branches of the coronary sinus close to the base (electrode pairs RA and RB). Only the AVB simulations with 20% reduction in conduction velocity and the LBBB simulations with 40% reduction had an optimal ERMS with a left ventricular electrode in the posterior left ventricular wall. For all simulations of AVB in the patient data set, the optimal left ventricular electrode was situated at position L, which is at the posterior ventricular wall closest to the apex. In the LBBB cases, the optimal electrode pair was RK, where the left ventricular electrode is placed in a posterior-lateral branch of the coronary sinus between the base and apex.
III. CONCLUSIONS

The presented computer model and optimization strategy compare well with the results of clinical studies. Different pacing sites and stimuli are applied to the simulated model with respect to the pathology and the patient model to compute the activation time as a comparison factor. The procedure is carried out automatically in a non-invasive way and could be used pre- and post-operatively. A clinical evaluation of the presented optimization strategy is due to start. Future work will include a 17-subdivision model based on the American Heart Association (AHA) suggestion [12].
ACKNOWLEDGMENT

The first author is funded by an LGFG (Landesgraduiertenförderung) grant for her PhD work. The authors would like to acknowledge the acquisition of the patient data set by the University of Würzburg, which was carried out in research funded by the German Research Foundation (DFG).
Table 1 Optimal results for the simulations with the VM and patient models for the different pathologies (delays in ms)

| Pathology | VM lead | VM A-V delay | VM V-V delay | VM ERMS | Patient lead | Patient A-V delay | Patient V-V delay | Patient ERMS |
|---|---|---|---|---|---|---|---|---|
| AVB 0 % | RA | 100 | 20 | 3.99 | RJ | 260 | 0 | 3.21 |
| AVB 20 % | RL | 100 | 10 | 4.59 | RL | 260 | -20 | 3.27 |
| AVB 40 % | RA | 60 | 40 | 5.91 | RJ | 220 | 0 | 4.44 |
| LBBB 0 % | RB | 120 | -10 | 5.28 | S2K | 220 | 20 | 9.90 |
| LBBB 20 % | RB | 100 | 0 | 6.84 | S2K | 200 | 20 | 12.62 |
| LBBB 40 % | RK | 60 | 30 | 9.50 | S2K | 120 | 60 | 18.43 |

Fig. 2 The ERMS with no pacing and the minimal ERMS for different pacing of the right ventricle according to the different pathologies

REFERENCES

1. Miske G, Acevedo C, Goodlive TW et al. (2005) Cardiac resynchronization therapy and tools to identify responders. Congestive Heart Failure 11(4):199–206
2. McAlister FA, Ezekowitz JA, Wiebe N et al. (2004) Systematic review: cardiac resynchronization in patients with symptomatic heart failure. Ann Intern Med 141(5):381–390
3. Jessup M, Bronzen S (2003) Heart failure. N Engl J Med 348:2007–2018
4. Philippon F (2004) Cardiac resynchronization therapy: device-based medicine for heart failure. J Card Surg 19:270–274
5. Cleland JG, Daubert JC, Erdmann et al. (2005) The effect of cardiac resynchronization on morbidity and mortality in heart failure. N Engl J Med 352:1539–1549
6. Van Campen LCM, Visser FC, de Cock CC et al. (2006) Comparison of the haemodynamics of different pacing sites in patients undergoing resynchronization therapy: need for individualization of optimal lead localization. Heart 92(12):1795–1800
7. Whinnett ZI, Davies ER, Willson K et al. (2006) Haemodynamic effects of changes in A-V and V-V delay in cardiac resynchronization therapy show a consistent pattern: analysis of shape, magnitude and relative importance of A-V and V-V delay. Heart 92(11):1628–1634
8. Ackerman MJ (1991) Viewpoints: the Visible Human Project. J Biocommunication 18(2)
9. Seemann G (2005) Modeling of electrophysiology and tension development in the human heart. Universitätsverlag Karlsruhe
10. Streeter DD (1979) Gross morphology and fiber geometry of the heart. Handbook of Physiology: The Cardiovascular System 1:61–112
11. Ten Tusscher KHWJ, Noble D, Noble PJ, Panfilov AV (2004) A model for human ventricular tissue. Am J Physiol Heart Circ Physiol 286(4):H1573–H1589
12. Cerqueira M, Weissman N, Dilsizian V et al. (2001) Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart: a statement for healthcare professionals from the Cardiac Imaging Committee of the Council on Clinical Cardiology of the American Heart Association. Circulation 105:539–542

Address of the corresponding author:
Author: Raz Miri
Institute: Biomedical Engineering
Street: Kaiser str. 12
City: 76131 Karlsruhe
Country: Germany
Email: [email protected]
Extracellular ATP-Purinoceptor Signaling for the Intercellular Synchronization of Intracellular Ca2+ Oscillation in Cultured Cardiac Myocytes

K. Kawahara and Y. Nakayama
Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan

Abstract— Isolated and cultured neonatal cardiac myocytes contract spontaneously and cyclically. The contraction rhythms of two isolated cardiac myocytes, each of which at first beats at a different frequency, become synchronized after the establishment of mutual contacts, suggesting that mutual entrainment occurs due to electrical and/or mechanical interactions between the two myocytes. The intracellular concentration of free Ca2+ also changes rhythmically in association with the rhythmic contraction of the myocytes (Ca2+ oscillation), and such Ca2+ oscillation is also synchronized among cultured cardiac myocytes. In this study, we investigated whether intercellular communication other than via gap junctions was involved in the intercellular synchronization of intracellular Ca2+ oscillation in spontaneously beating cultured cardiac myocytes. Treatment with either blockers of gap junction channels or an un-coupler of E-C coupling did not affect the intercellular synchronization of Ca2+ oscillation. In contrast, treatment with a blocker of P2 purinoceptors resulted in the asynchronization of Ca2+ oscillatory rhythms among cardiac myocytes. The present study suggests that the extracellular ATP-purinoceptor system is responsible for the intercellular synchronization of Ca2+ oscillation among cardiac myocytes.

Keywords— cultured cardiac myocytes, calcium oscillation, intercellular synchronization, gap junctions, purinoceptors
I. INTRODUCTION

Isolated and cultured neonatal cardiac myocytes contract spontaneously and cyclically [1]. From the mathematical point of view, the contraction rhythm of the myocytes has the properties of a non-linear oscillation, because the rhythm is entrained by externally applied rhythmic electrical stimulation [2]. In addition, the contraction rhythms of two isolated cardiac myocytes, each of which at first beats at a different frequency, become synchronized after the establishment of mutual contacts [3], suggesting that mutual entrainment occurs due to electrical and/or mechanical interactions between the two myocytes. The intracellular concentration of free Ca2+ also changes rhythmically in association with the rhythmic contraction of the myocytes (Ca2+ oscillation). Such Ca2+ oscillation is also synchronized among cultured cardiac myocytes [4]. It is generally believed that gap junctional intercellular communication plays an important role in the intercellular synchronization of intracellular Ca2+ oscillation [5]. However, our recent preliminary study revealed that the Ca2+ oscillation was synchronized not only among myocytes in an aggregate, but also among cells without apparent physical contact with each other, suggesting that intercellular communication other than via gap junctions was involved in the synchronization.

Extracellular ATP acts as a potent agonist on a variety of different cell types, including cardiomyocytes [5], inducing a broad range of physiological responses. The cellular effects mediated by ATP are determined by the subtypes of P2 purinergic receptors expressed in the particular cell type. In cardiomyocytes, the expression of ionotropic P2X1–P2X7 receptors and metabotropic P2Y1, P2Y2, P2Y4, P2Y6, and P2Y11 receptors has been described [6]. In single cardiomyocytes, extracellular ATP increases the plasma membrane permeability for cations [7], intracellular calcium transients [6, 8], and the contraction amplitude [9, 10]. In addition, ATP can stimulate phospholipase C (PLC) [10]. At the organ level, ATP acts as a positive inotropic agent [9] and can induce various forms of arrhythmia [6].

In this study, we investigated whether intercellular communication other than via gap junctions was involved in the intercellular synchronization of intracellular Ca2+ oscillation in spontaneously beating cultured cardiac myocytes. We suggest that the extracellular ATP-purinoceptor system is responsible for the intercellular synchronization of Ca2+ oscillation among cultured cardiac myocytes.

II. MATERIALS AND METHODS

A. Cell culture

The method of culture has been described elsewhere in detail [11-14]. In short, cardiac myocytes were prepared from 1-3 day old neonatal Wistar rat ventricles removed after decapitation. The ventricles were digested with 0.1% collagenase in a 25 mM HEPES-buffered minimum salt solution (MSS) at 37 °C for 10 min.
The cell components were suspended in MCDB 107 containing 5% FCS, transferrin (10 μg/mL), and insulin (10 μg/mL). The isolated myocytes were cultured at a density of about 4.5×105 cells/ml in a petri dish (Φ35 mm, Falcon) in an incubator (37 °C, 5% CO2, 95% air). Cardiac myocytes cultured for 4 and 7 days were used in this study.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 537–540, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
B. Image analysis

The spontaneous contraction rhythm of the cultured myocytes was evaluated by a video image recording method. Images of beating myocytes were recorded with a CCD camera through a phase-contrast microscope. A small area (a square of about 20 pixels) of the myocyte, in which the brightness changed considerably with contraction, was arbitrarily selected from the video image, and the video signals were digitized. A reference frame was arbitrarily selected, and cross-correlograms were calculated between the pixels of the reference frame and those of the other frames, to represent the temporal variation of brightness in the selected area corresponding to the contraction rhythm of the cardiac myocyte.

C. Cellular Ca2+ measurements

Changes in cytosolic free Ca2+ were measured using fluo 4/AM (5 μM). Cardiac myocytes in culture were loaded with the fluo 4 during a 30 min incubation in MCDB medium. Fluo 4 was excited at 490 nm, and the emission intensity was measured at 510 nm. Fluorescent images were acquired at about 200 ms intervals with a cooled CCD camera. The fluorescent intensity (F) was normalized by the initial value (F0), and the changes in the relative fluorescent intensity (F/F0 - 1) were used to assess those in cellular free Ca2+.

D. Anti-Cx43 immunocytochemistry

The cardiac myocytes were fixed with 4% paraformaldehyde at room temperature. The cells were then incubated with a primary anti-Cx 43 antibody overnight at 4 °C. After being washed with phosphate-buffered saline (PBS), the cells were incubated with a secondary anti-rabbit IgG antibody. Bound antibodies were detected by the avidin-biotin-peroxidase complex (ABC) method. Peroxidase activity was visualized by incubation with 0.1% DAB.

E. Dye transfer analysis

Intracellular staining was performed using microelectrodes filled with 3% Lucifer yellow (LY) CH dissolved in 0.1 M lithium chloride. LY was injected iontophoretically into a cell of the 4- and 7-day cultures. Stained cells were then fixed in 4% paraformaldehyde at room temperature. After fixation, they were incubated with an anti-LY antibody at 4 °C overnight. After washing in PBST, they were incubated in a Cy3-conjugated secondary anti-rabbit IgG antibody at 4 °C overnight.

III. RESULTS

Isolated and cultured neonatal cardiac myocytes started to contract spontaneously and cyclically usually after 2 to 4 days in vitro (2-4 DIV). The intracellular concentration of free Ca2+ also changed cyclically (Ca2+ oscillation) in association with the spontaneous rhythmic contraction of the cardiac myocytes, and the oscillation is synchronized among cells [4]. Gap junctional intercellular communication has been considered to play a critical role in such intercellular synchronization of intracellular Ca2+ oscillation [5]. The intercellular communication via gap junctions among cardiac myocytes is schematically illustrated in Fig. 1.

Fig. 1 Schematic illustration of the intercellular communication among cardiac myocytes via gap junctions (exchange of Ca2+, IP3 and cAMP through gap junction channels)
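The two readouts described in the Methods (Sections B and C) can be sketched as follows; the zero-lag Pearson correlation measure and the baseline window used for F0 are assumptions of this sketch, not the authors' exact implementation.

```python
import numpy as np

def contraction_signal(frames, ref_index=0):
    """Zero-lag correlation of each frame with a reference frame.

    frames: array (n_frames, h, w) of grey levels for the small
    selected area of the beating myocyte (Section B).  The returned
    series tracks the brightness variation, i.e. the contraction rhythm.
    """
    f = np.asarray(frames, float)
    ref = f[ref_index]
    ref_z = (ref - ref.mean()) / ref.std()
    # Pearson correlation in [-1, 1] for every frame
    return np.array([np.mean(ref_z * (fr - fr.mean()) / fr.std()) for fr in f])

def relative_fluorescence(f, baseline_frames=5):
    """Relative fluo-4 intensity F/F0 - 1 (Section C).

    F0 is taken as the mean of the first few frames; the text only
    says the initial value is used, so this window is an assumption.
    """
    f = np.asarray(f, float)
    return f / f[:baseline_frames].mean() - 1.0
```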
In this study, we first investigated whether the cyclic changes in the concentration of intracellular free Ca2+ were synchronized among cultured cardiac myocytes at 4 DIV. The oscillation was also synchronized between the myocytes in an aggregate and a remote myocyte without apparent physical contact, suggesting the additional participation of an extracellular pathway in the intercellular synchronization of intracellular Ca2+ oscillation. Therefore, we next investigated the extent to which the cells were directly coupled to each other. By performing an immunocytochemical analysis on 4 and 7 DIV cultures using an anti-Cx 43 antibody to identify the expression and distribution of gap junction proteins in
____________________________________ IFMBE Proceedings Vol. 16 _____________________________________
Extracellular ATP-Purinoceptor Signaling for the Intercellular Synchronization of Intracellular Ca2+ oscillation
cultured cardiac myocytes, it was revealed that Cx43 proteins were distributed in clusters in cardiac myocytes at 4 DIV, whereas myocytes at 7 DIV had a dot-like Cx43 distribution linking adjacent cells. To further confirm this, gap junctional intercellular coupling was measured by the microinjection dye transfer method using Lucifer Yellow (LY). The analysis demonstrated that the injected dye was markedly transferred to many myocytes at 7 DIV, but to only a few cells at 4 DIV (Fig. 2).
Fig. 2 Dye transfer analysis of the coupling among cardiac myocytes by microinjecting LY dye into a single cell. Panels A1 and A2 show the fluorescent LY images in cardiac myocytes at 4 and 7 DIV, respectively. Panels B1 and B2 show the fluorescent images stained with an anti-LY antibody in cardiac myocytes at 4 and 7 DIV, respectively. Scale bars indicate 100 μm.

We then investigated whether extracellular signaling of the ATP-purinoceptor system was responsible for the intercellular synchronization of intracellular Ca2+ oscillation in cultured cardiac myocytes. Treatment of cultures with suramin, an antagonist of P2 purinoceptors, resulted in the desynchronization of intracellular Ca2+ oscillation between a cardiac myocyte in an aggregate and a remote myocyte without apparent physical contact, suggesting that extracellular ATP-purinoceptor signaling was involved in the intercellular synchronization.

IV. CONCLUSION
In this study, we investigated whether intercellular communication other than via gap junctions was involved in the intercellular synchronization of intracellular Ca2+ oscillation in spontaneously beating cultured cardiac myocytes. Treatment with either blockers of gap junction channels or an uncoupler of E-C coupling did not affect the intercellular synchronization of Ca2+ oscillation. In contrast, treatment with a blocker of P2 purinoceptors resulted in the desynchronization of Ca2+ oscillatory rhythms among cardiac myocytes. The present study suggests that the extracellular ATP-purinoceptor system is responsible for the intercellular synchronization of Ca2+ oscillation among cardiac myocytes. The possible mechanisms for the intercellular synchronization of intracellular Ca2+ oscillation among cardiac myocytes are schematically illustrated in Fig. 3.

Fig. 3 Possible intracellular and intercellular signaling pathways responsible for the intercellular synchronization of intracellular Ca2+ oscillation in cultured cardiac myocytes. Abbreviations: CICR, calcium-induced calcium release; IICR, IP3-induced calcium release.

ACKNOWLEDGMENT
This work was supported by a grant-in-aid for scientific research from the Ministry of Education, Science, and Culture of Japan (16300145) to KK.

REFERENCES
1. Harary I, Farley B (1963) In vitro studies on single beating rat heart cells; I. Growth and organization. Exp Cell Res 29:451-465
2. Glass L, Guevara MR, Shrier A, Perez R (1983) Bifurcation and chaos in a periodically stimulated cardiac oscillator. Physica D 7:89-101
3. Jongsma HJ, Masson-Pevet M, Tsjernina L (1987) The development of beat-rate synchronization of rat myocyte pairs in cell culture. Basic Res Cardiol 82:454-464
4. Nakayama Y, Kawahara K, Yoneyama M, Hachiro T (2005) Rhythmic contraction and intracellular Ca2+ oscillatory rhythm in spontaneously beating cultured cardiac myocytes. Biol Rhythm Res 36:317-326
5. Kunapuli SP, Daniel JL (1998) P2 receptor subtypes in the cardiovascular system. Biochem J 336:513-523
6. Vassort G (2001) Adenosine 5'-triphosphate: a P2-purinergic agonist in the myocardium. Physiol Rev 81:767-806
7. Puceat M, Clement O, Scamps F, Vassort G (1991) Extracellular ATP-induced acidification leads to cytosolic calcium transient rise in single rat cardiac myocyte. Biochem J 274:55-62
8. Zhang BX, Ma X, McConnell BK et al. (1996) Activation of purinergic receptors triggers oscillatory contractions in adult rat ventricular myocytes. Circ Res 79:94-102
9. Mei Q, Liang BT (2001) P2 purinergic receptor activation enhances cardiac contractility in isolated rat and mouse hearts. Am J Physiol 281:H334-H341
10. Podrasky E, Xu D, Liang BT (1997) A novel phospholipase C- and cAMP-independent positive inotropic mechanism via a P2 purinoceptor. Am J Physiol 273:H2380-H2387
11. Kawahara K, Hachiro T, Yokokawa T et al. (2006) Ischemia/reperfusion-induced death of cardiac myocytes: possible involvement of nitric oxide in the coordination of ATP supply and demand during ischemia. J Mol Cell Cardiol 40:35-46
12. Kawahara K, Abe R, Yamauchi Y, Kohashi M (2002) Fluctuations in contraction rhythm during simulated ischemia/reperfusion in cultured cardiac myocytes from neonatal rats. Biol Rhythm Res 33:339-350
13. Yamauchi Y, Harada A, Kawahara K (2002) Changes in the fluctuation of interbeat intervals in spontaneously beating cultured cardiac myocytes: experimental and simulation studies. Biol Cybern 86:147-154
14. Yoneyama M, Kawahara K (2004) Fluctuation dynamics in coupled oscillator systems of spontaneously beating cultured cardiac myocytes. Phys Rev E 70:021904
Address of the corresponding author:
Author: Koichi Kawahara, Prof.
Institute: Graduate School of Information Science and Technology, Hokkaido University
Street: Kita 14, Nishi 9
City: Sapporo
Country: Japan
Email:
[email protected]
Medical Plans as a Middle Step in Building Heart Failure Expert System
Alan Jovic1, Marin Prcela1 and Goran Krstacic2
1 Rudjer Boskovic Institute, Department of Electronics, Laboratory for Information Systems, Bijenicka 54, 10000 Zagreb, Croatia
2 Institute for Cardiovascular Diseases and Rehabilitation, Draskoviceva 13, 10000 Zagreb, Croatia
Abstract— Knowledge acquisition and presentation are important issues when constructing medical knowledge-based systems. In this work, medical plans are introduced as a solution for the presentation of operational medical knowledge. Plans are not meant to be used directly by the actual decision support systems; rather, they are created as a middle step between medical doctors' domain knowledge and the computer-system representation of that knowledge. They are graphical representations of procedures in a specific medical domain, typically describing the diagnosis and treatment of a certain disorder. They can be created and edited by medical doctors, allowing them to express their vast, but often dispersed, knowledge in a systematic way. The goal is to enable a technically sound transformation of medical knowledge into the form of rules or into a guideline modeling tool. The problem of treating heart failure is used in this work to illustrate the concept of medical plans and their application to a difficult real-world problem.
Keywords— medical plans, expert system, knowledge acquisition
I. INTRODUCTION
Expert systems in medicine have a long history. They are most often used in hospitals to help medical doctors determine a patient's diagnosis. An expert system takes a patient's medical details and symptoms and provides a probable diagnosis and corresponding treatment based on underlying knowledge and logic. In order to build an expert system, one must first formalize extensive domain knowledge. An expert is a person who is characterized by superior performance within a specific domain of activity [1]. An expert's knowledge consists of a cognitive element (the individual's viewpoints and beliefs) and a technical element (specific skills and abilities). Although medical documents, books and guidelines are exhaustive, most of the knowledge is in the heads of medical experts. A large part of their knowledge is tacit; they do not know all they know and all they use, which makes that knowledge hard or impossible to describe [2]. Not every expert has complete knowledge about a certain domain, and knowledge may vary from expert to expert. Also, knowledge has a "shelf life"; it is continuously
evolving: while new facts are constantly coming to light, others become obsolete. Still, in their everyday practice experts manage to successfully treat a large number of patients. Medical knowledge is characterized by time, space, and knowledge complexity. Time complexity denotes that data are collected over days and years, while a response is expected within seconds. Space complexity refers to the fact that the data may be distributed across different parts of the health care system and in various forms. Knowledge complexity stands for the abundance of expert knowledge in every medical sub-specialization. Even though a number of guideline modeling tools provide means to address the described complexity issues, practice indicates that medical experts have major difficulties expressing knowledge in a strict form. In this work, a concept based on medical plan design and implementation is explored. This method is currently being used in formalizing medical knowledge for the heart failure domain. The research presented in this work is stimulated by a project aimed at the realization of a knowledge-based platform called HEARTFAID, which should assist in the management of heart failure patients. This research is still a work in progress. The platform will have to intelligently assist in various tasks, ranging from patients' home monitoring to decision support in specialized hospitals. Medical plans devised for this kind of project have to be detailed and applicable in many possible situations, and they are thus very challenging. In the next section, medical knowledge acquisition and representation is explained. Section III gives several examples of medical plans and discusses their application. Section IV discusses available tools for plan implementation.
II. MEDICAL KNOWLEDGE ACQUISITION AND REPRESENTATION
Every guideline for a disorder is only a well-formed recommendation and usually does not contain detailed procedures on medical treatment. More specifically, it does not address what exactly has to be done in each case, but rather gives general instructions for the disorder based on
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 549–553, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
the Evidence Based Medicine (EBM) approach. The significance and meaning of every symptom is usually not discussed, and not all of the contraindications are presented. Therefore, a combination of medical knowledge from guidelines, medical doctors' expertise, as well as from medical articles and other medical resources, is required in order to properly describe the disorder and construct a reliable system. This combination of various data regarding the medical problem has to be presented in a form that is convenient for computer interpretation, as well as for use by medical practitioners and, sometimes, even patients and their families. There are several possible ways to acquire the medical knowledge needed for an expert system. First, a number of experts in the field are consulted. Knowledge acquisition can range from questions prepared by the engineer for the expert regarding a medical issue [3, 4] to a computer program built specifically for the purpose of gathering knowledge [5]. Guideline modeling systems (Arden Syntax, Asbru, GLIF, Proforma) [6] should provide syntax for the deployment of knowledge in both machine-readable and human-readable form. However, guideline modeling systems have in general failed to perform in practice (except Arden Syntax, which is used in hospitals). The reasons for this can be summarized as: (1) difficulties in acquisition and verification, (2) difficulties in integration in medical institutions, and (3) usage denial by the clinicians [1]. The underlying reason for the failure of guideline modeling systems is the complexity of the knowledge extraction process. In addition to the omnipresent problem of the extraction of tacit knowledge, the syntax of guideline modeling systems is often too complex for medical experts to present their knowledge in (unless they are computer experts at the same time). A "classic" expert system functions on a set of facts and rules.
Facts are gathered static knowledge such as the names and properties of medications, diagnoses, tests, etc. Rules are used in order to gain new knowledge based on the known facts or to draw conclusions. The presentation of facts is usually done using ontology languages and tools. Although ontology modeling tools are very efficient for knowledge integration, they are still not the preferred way in which knowledge-systems engineers and medical doctors communicate. Since an ontology structure can become rather complex, the medical knowledge that it comprises is not always well understood. It has thus become evident that another approach to knowledge systematization and presentation is required. This type of visualization middle step between the guidelines and other static knowledge that has been acquired on one side, and a working expert system on the other, can be achieved by using so-called "medical plans".
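The fact/rule machinery just described can be sketched as a tiny forward-chaining loop. All fact and rule names below are invented placeholders for illustration; they are not taken from any actual HEARTFAID knowledge base.

```python
# Minimal forward-chaining sketch of a "classic" expert system:
# facts are static statements; rules derive new facts from known ones.

facts = {"heart_failure", "high_temperature", "muscle_ache"}

# Each rule is (premises, conclusion): it fires when all premises hold.
rules = [
    ({"high_temperature", "muscle_ache"}, "probable_infection"),
    ({"heart_failure", "probable_infection"}, "treat_infection"),
]

def forward_chain(facts, rules):
    """Fire applicable rules repeatedly until no new fact can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

derived = forward_chain(facts, rules)
print("treat_infection" in derived)  # → True
```

Real rule bases add conflict resolution and explanation facilities on top of this loop, but the fixpoint iteration is the core of rule-based inference.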
We have observed that medical doctors find this type of knowledge presentation intuitive and understandable. They are even willing to create their own plans for a specific aspect of the disorder, because the learning process for drawing these plans is fast. They find themselves more capable of expressing their knowledge through the use of the medical plans syntax.
III. MEDICAL PLANS
Medical plans are textual and visual presentations of the events that can occur while treating a patient with a specific disorder. Unlike the guideline modeling tools, medical plans are not machine-readable. They are the middle step between the experts and the guideline modeling tools: they persuade the experts to clearly state the procedure they would normally perform when facing a specific problem, and at the same time they enable the technicians to understand it and encode it in a machine-readable form. The syntax of the medical plans closely resembles traditional workflow management. The difference is that the medical plans will not be executed by machines; they are written in an almost-free graph/text form with the main purpose of being fully understandable by humans while still clearly stating the details of the knowledge. The main advantage of the plans is that they allow for a better systematization of the medical procedures and their interconnections than the guidelines do. The second advantage is that their visual presentation facilitates the medical staff's interventions in the medical know-how part of the system. The plans can be quickly corrected and maintained. The third advantage is the facilitation of the implementation process. Fourth, they need not be created all at once, but rather can be designed and inspected one or more at a time. For the heart failure platform, medical plans describe the symptoms that can occur as events to a patient who is treated by the platform.
Each symptom has an assigned urgency level which corresponds to the type of response from the medical team. For example, pulmonary edema has the highest urgency level, requiring immediate admission to the hospital. An example of a symptom with a low urgency level is cough. Cough does not require the patient to report to the hospital; rather, if it is persistent, he should contact his general practitioner. An example of a medical plan is given in Fig. 1. This is a relatively simple plan for the assessment of high temperature in a patient who already has congestive heart failure. First, the patient's symptoms are inspected. If the patient has
Fig. 1. High temperature medical plan
muscle ache, headache or feels dizzy, it is probable that he has an infection and should be treated according to the type of infection. The medical doctor has to assert that there are no contraindications for the treatment; this is implicitly assumed. If the patient does not show these symptoms, three causes of high temperature in a heart failure patient have to be considered: myocardial infarction, cerebral hemorrhage and hyperthyroidism. These disorders can be diagnosed after the following tests are performed: ECG, echocardiogram, cranial CT and thyroid function test. If a diagnosis is confirmed, it is treated. If not, the platform specifies that the cause is unknown and thus out of its scope. The types of nodes (and their functions) in a plan flowchart are defined by their shape and color. For example, a diamond shape always represents plans, either the beginning (green) or the end (bright yellow) of the plan (note that colors are not used in this article). It is possible to jump from one plan to another by specifying the other plan as a node in the present plan. At the moment, the heart failure system has more than 30 interconnected plans for symptom treatment and 10 plans for medication dosage. Some of the more serious symptoms and diagnoses presented by plans include heart attack, cerebral stroke, atrial fibrillation and pulmonary edema.
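As a rough illustration, the branching of the high-temperature plan can be written down as a plain decision function. The symptom names, diagnosis labels and returned action strings below are all assumptions made for this sketch, not identifiers from the actual platform.

```python
# Sketch of the high-temperature plan (Fig. 1) as a decision function.
# Symptom/diagnosis names and action strings are illustrative only.

INFECTION_SIGNS = {"muscle_ache", "headache", "dizziness"}
CONSIDERED_CAUSES = {"myocardial_infarction", "cerebral_hemorrhage",
                     "hyperthyroidism"}

def assess_high_temperature(symptoms, confirmed_diagnosis=None):
    # Infection signs -> treat the infection (absence of
    # contraindications is implicitly assumed, as in the plan).
    if INFECTION_SIGNS & set(symptoms):
        return "treat infection"
    # Otherwise the tests (ECG, echocardiogram, cranial CT, thyroid
    # function test) may confirm one of the three considered causes.
    if confirmed_diagnosis in CONSIDERED_CAUSES:
        return "treat " + confirmed_diagnosis
    return "cause unknown: out of platform scope"

print(assess_high_temperature({"headache"}))  # → treat infection
```

Encoding a plan this directly is essentially what the later implementation step does when translating a plan into rules or a workflow.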
IV. MEDICAL PLANS IMPLEMENTATION
It is never easy to choose which knowledge representation technique (or combination) best fits the needs of a given medical application. To make that decision, it is very important to define the problems that the system will be used to solve, and it is very useful to be aware of the possibilities and drawbacks of each knowledge representation technique. One choice of representation may simplify the solution, while an inappropriate choice makes the solution much more difficult to accomplish. The acquired medical plans should indicate which knowledge representation techniques can be used for knowledge formalization. Within the HEARTFAID project, we are using ontologies, workflows and rules to represent heart failure disorder plans. When a knowledge base is being built, it is useful to have some kind of domain description which contains the descriptions of entities, concepts and terms that are in any way related to the domain. A good way to build a domain description is by means of an ontology. An ontology can describe a set of concepts and the relations between them. The main purpose of an ontology is to share a common understanding of the structure of information between people and/or software agents, to
reuse the domain knowledge and to make domain assumptions explicit [7]. A structured domain description will also provide a bridge between the "low level" parts of the application (electronic patient records, databases, instruments) and the "high level" domain description (medical concepts, medical terms, medical actions, ...). The existence of a well-defined ontology somewhat eases the segregation and integration of the technical tasks and the medical tasks. Several guideline modeling tools, such as GLIF and Asbru, are suitable for representing and formalizing the acquired medical plans. The GLIF structure closely resembles the given medical plans structure. At the GLIF conceptual level, guidelines are represented as flowcharts that are easily understood or edited by humans, but are not interpretable by machines, since the underlying platform implementation details are not formalized. However, the syntax of GLIF is more restrictive than the medical plans syntax [6]. Asbru is a guideline modeling tool that focuses on representing medical plans with high awareness of the time dimension in medical procedures and actions. A plan in Asbru is a set of actions that is performed when defined preconditions hold. Each plan may have defined plan intentions - the high-level goals of the plan. When intentions are defined, the clinicians may for some reason disregard the plan suggestion as long as the defined intentions of the plan are accomplished [6]. The most intuitive and by far the most exploited technique for presenting procedural knowledge is "rules". A rule is a statement that defines which actions should be taken when certain conditions arise. The form and syntax of rules are quite simple, but when the number of rules grows, the complete picture of the knowledge in the knowledge base becomes unclear.
This problem is partially handled within the Arden Syntax rule-based system by attaching human-readable information to the machine-readable rules, but it remains inherent in the maintenance of the rule base. Subsequent modifications in a medical plan might cause an uncontrolled propagation of the change through the knowledge base. Most likely, the processes of validation, verification and testing should be repeated on every medical plan change in order to ensure the consistency of the knowledge base.
V. DISCUSSION AND CONCLUSION
The process of constructing an efficient, reliable and complex medical system is demanding. We have presented a paradigm called medical plans that is able to help the processes of knowledge acquisition and knowledge presentation. It has been used in the construction of a heart
failure disorder medical platform. Its main advantages are: efficient systematization of medical procedures, facilitation of the medical staff's understanding of the system, and facilitation of the implementation process by using ontologies, rules and/or workflows. Experts are willing to participate in designing plans because of the simple graphical solution. Medical plans have proven to be an interesting solution for presenting medical domain knowledge. It remains unclear which is the best method for their implementation. This problem can be solved by manually constructing rules, ontologies or workflows, which is the current paradigm. The authors have found that the introduced medical plans significantly eased the development of a heart failure knowledge base.
ACKNOWLEDGEMENTS This research work is supported by the European Community, under the Sixth Framework Programme, Information Society Technology – ICT for Health, within the STREP project “HEARTFAID: a Knowledge based Platform of Services for supporting Medical-Clinical Management of the Heart Failure within the Elderly Population”, 2006–2009. The results presented are also supported by the Croatian Ministry of Science, Education and Sport project “Machine Learning Algorithms and their Application”.
REFERENCES
1. Bradley, J.H., Paul, R., Seeman, E., “Analyzing the structure of expert knowledge”, Information and Management (2006), 43:77-91
2. Knowledge acquisition, http://www.epistemics.co.uk/Notes/63-0-0.htm
3. Kawaguchi, A., Motoda, H., Mizoguchi, R., “Interview-Based Knowledge Acquisition Using Dynamic Analysis”, IEEE Expert: Intelligent Systems and Their Applications (1991), Vol. 6, 5:47-61
4. Mizoguchi, R., Matsuda, K., Nomura, Y., “ISAK: Interview system for acquiring design knowledge - A new architecture of interview systems using examples”, Proceedings of the First Japanese Knowledge Acquisition for Knowledge-Based Systems Workshop (JKAW) (1990), Ohmsha Ltd., 277-286
5. Achour, S.L., Dojat, M., Rieux, C., Bierling, P., Lepage, E., “A UMLS-based Knowledge Acquisition Tool for Rule-based Clinical Decision Support System Development”, J Am Med Inform Assoc (2001), 8:351-360
6. De Clercq, P.A., Blom, J.A., Korsten, H.H.M., Hasman, A., “Approaches for creating computer-interpretable guidelines that facilitate decision support”, Artificial Intelligence in Medicine (2004), 31:1-27
7. Gruber, T.R., “A Translation Approach to Portable Ontology Specifications”, Knowledge Acquisition (1993), 5(2):199-220
Author: Alan Jovic
Institute: Rudjer Boskovic Institute, Department of Electronics, Laboratory for Information Systems
Street: Bijenicka 54
City: 10000 Zagreb
Country: Croatia
Email:
[email protected]
Method for Reducing Pacing Current Threshold at Transesophageal Stimulation
A. Anier1, J. Kaik2 and K. Meigas1
1 Biomedical Engineering Centre, Tallinn University of Technology, Tallinn, Estonia
2 Estonian Institute of Cardiology, Tallinn, Estonia
Abstract — In order to reduce the pacing current threshold at transesophageal stimulation, the use of an additional chest electrode was studied. The study was performed in 34 patients aged 19 to 66 years using standard transesophageal pacing equipment. Use of the chest electrode lowered the pacing current compared with the standard bipolar transesophageal method.
Keywords — transesophageal pacing, threshold current, chest electrode
I. INTRODUCTION
During the last three decades, following the publication of the first reports [1,2] concerning the effectiveness, safety, and cost-effectiveness of transesophageal (TOE) atrial stimulation in the evaluation and treatment of supraventricular (SV) arrhythmias, this method has become widely used in cardiology. TOE pacing can yield important information in many situations where invasive atrial stimulation is usually done, but is safe, rapid, inexpensive and can often be performed in an outpatient setting. TOE pacing can initiate and terminate SV tachycardias [3,4] and atrial flutter [5], and can predict the risk of potentially lethal SV arrhythmias in asymptomatic WPW syndrome patients [6]. TOE pacing has recently been introduced in surgery, as SV arrhythmias represent an intraoperative risk factor during general anesthesia, and the use of antiarrhythmic drugs is accompanied by intrinsic hazards, such as pro-arrhythmic and toxic effects as well as severe bradycardia immediately after the induction of anesthesia [7]. As the inducibility and causative mechanisms of arrhythmias induced by TOE atrial pacing have shown an excellent correlation with the findings at a subsequent invasive electrophysiologic study, it has become a simple, predictive, and inexpensive method of testing the antiarrhythmic/arrhythmogenic properties of drugs utilized for sudden cardiac death risk stratification and for the prediction of the benefits/drawbacks of antiarrhythmic drug treatment in various heart diseases [8,9]. Numerous reports have confirmed that TOE pacing is a rapid, safe and effective means of evaluating and terminating SV arrhythmias in the pediatric population, including infants [10,11]. The method is frequently used as a diagnostic option in coronary artery disease patients, as a modification of the stress test or in TOE atrial pacing stress echocardiography, which correlates well with myocardial perfusion stress scintigraphy and coronary angiography [12,13]. TOE atrial pacing is feasible because of the proximity between the esophagus and the posterior wall of the left atrium. It can be obtained in more than 95% of patients. Pacing usually produces certain thorax discomfort, mainly a burning chest sensation, which most patients tolerate; nevertheless, minimizing the pacing threshold is highly desirable, and corresponding studies have been performed from the very early years of this method [14]. The lowest threshold currents can be reached at pulse widths between 10 and 20 ms and range from 5 to 15 mA, depending on the patient. Widely studied methods of reducing the stimulus current are finding the optimal position of the pacing electrode and the geometrical modification of the electrode. We propose the use of multiple electrodes as a possibility to control the current distribution in the thorax in order to reduce the pacing current and patient discomfort.

II. MATERIALS AND METHODS
The optimal position of the pacing esophageal electrode has been the subject of permanent discussion from the very first days of this method. To evaluate the dependence of the threshold on electrode position, a special study was performed in 18 patients. We elaborated a multichannel esophageal stimulation catheter with 9 electrodes at 1 cm spacing. A special computer-controlled switch was used to change the pacing site (Figure 1), so there was no need to relocate the catheter itself during the studies. The intermediate electrode was positioned at the localization of the maximal unipolar atrial electrogram. By gradually increasing the pacing current until stable capture of atrial pacing was confirmed on the surface ECG, the pacing thresholds of all 9 electrode positions were found.

Fig. 1 Experimental arrangement

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 554–557, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

We evaluated the use of the chest electrode in 34 patients aged 19 to 66 years (22 with sick sinus syndrome, 6 with paroxysmal supraventricular tachycardia, 5 with Wolff-Parkinson-White syndrome, one with unexplained syncope) who underwent a routine TOE electrophysiological study in the Department of Cardiac Arrhythmias of the Estonian Institute of Cardiology. A standard bipolar transesophageal catheter was inserted through the mouth. Pacing was performed at an interelectrode spacing of 40 mm and a pulse duration of 10 ms. Different interelectrode spacings and pulse durations were not studied, as they have no considerable effect on the pacing threshold [14]. The current output was gradually increased until stable capture of atrial pacing was accomplished, which was confirmed on the surface ECG. The second part of the study included the use of an external chest electrode (a defibrillator electrode 100 mm in diameter) placed beneath the back between the scapulae. It was electrically connected to the distal electrode of the esophageal catheter, and the pacing threshold was determined again. In all tests, the overall pulse voltage as well as the current distribution between the chest electrode and the distal esophageal electrode was measured using a multichannel oscilloscope.

III. RESULTS
Our results show that the optimal position of the esophageal lead anode is 1 cm distal to the maximal unipolar atrial electrogram registration point (Figure 2). Anode position 0 denotes the intermediate electrode positioned at the localization of the maximal unipolar atrial electrogram. Positive numbers represent anode positions distal to position 0 in centimeters.
Negative numbers represent anode positions proximal to position 0 in centimeters. Stimulation points 1 cm distal or proximal to the optimal position also ensure acceptable pacing thresholds. Moving the anode in the proximal direction is accompanied by a significant elevation of the pacing threshold; the same is observed if the anode is placed more than 2 cm in the distal direction from the maximal atrial electrogram location. Stimulation position 0 was used in further studies. The main results are presented in Table 1. Patient count denotes the number of cases where a given current threshold was sufficient for successful atrial capture. Success rate denotes the cumulative percentage of cases
Fig. 2 Pacing threshold dependence on anode position where given current threshold would be sufficient for successful atrial capture. Both figures are given for both reference study i.e. using transesophageal lead only, and for simultaneous use of transesophageal lead and chest electrode. Our results show successful atrial capture at pulse current 16 mA or less was feasible in more than 80% of patients, and the current of 19 mA would have been sufficient for capture in 100% patients (Figure 3a). Using external chest electrode connected in parallel with distal esophageal electrode allowed effective atrial capture in more than 80% of patients at pacing pulse current 13mA or less, while 19 mA was required for 100% capture (Figure 3b). Table 1 Threshold
Table 1 Pacing thresholds

                      reference                    with chest electrode
Current (mA)    patient count   success rate   patient count   success rate
 7                   1               3%             0               0%
 8                   0               3%             4              12%
 9                   1               6%             5              26%
10                   1               9%             5              41%
11                   5              24%             5              56%
12                   6              41%             4              68%
13                   5              56%             5              82%
14                   4              68%             2              88%
15                   2              74%             2              94%
16                   7              94%             0              94%
17                   1              97%             0              94%
18                   0              97%             0              94%
19                   1             100%             2             100%
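The cumulative success rates in Table 1 follow directly from the per-threshold patient counts; a minimal sketch of that computation, using the reference-study counts from the table (n = 34 patients), is:

```python
# Cumulative capture success rate vs. pacing current, as in Table 1.
# Counts are the reference-study values (transesophageal lead only).
counts = {7: 1, 8: 0, 9: 1, 10: 1, 11: 5, 12: 6, 13: 5,
          14: 4, 15: 2, 16: 7, 17: 1, 18: 0, 19: 1}

def cumulative_success(counts, n_patients=34):
    """Return {mA: cumulative % of patients captured at <= that current}."""
    total, out = 0, {}
    for ma in sorted(counts):
        total += counts[ma]
        out[ma] = round(100 * total / n_patients)
    return out

rates = cumulative_success(counts)
# rates[13] -> 56, rates[19] -> 100, matching the table
```

The rounding reproduces the published percentages exactly, which is a useful consistency check on the reconstructed table.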
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A. Anier, J. Kaik and K. Meigas
IV. DISCUSSION

There are two main sources of discomfort during a TOE electrophysiological study: the initial introduction of the pacing catheter into the esophagus, and the burning sensation caused by the pacing current. The pacing threshold (and current strength) can be reduced by modifying certain parameters: the position of the pacing electrodes, the construction of the electrodes, the distance between electrodes, and the duration and shape of the pulse. We introduced the simultaneous use of multiple esophageal and chest electrodes as a way to reduce the pacing threshold. Utilization of an additional chest electrode connected electrically in parallel with the distal esophageal electrode resulted in a pacing threshold reduction of 2 to 7 mA in 50% of cases (Figure 4), and more than 80% of patients reached a pacing threshold of 13 mA or less (Figure 3b). It is worth mentioning that during atrial stimulation the current through the chest electrode formed 82 ± 13% (M±SD) of the total pacing current. The voltage required dropped from 19.8 ± 5.8 V in the reference study to 14.2 ± 2.6 V (M±SD) using the chest electrode.

Fig. 3 Pacing threshold (minimum pacing current required to perform TOE) with a) transesophageal lead only, b) transesophageal lead with chest electrode

Fig. 4 Pacing threshold reduction using the esophageal lead with chest electrode compared to using the esophageal lead only. Negative numbers denote a reduction in mA (the desired effect); positive numbers denote an increase of the pacing threshold

V. CONCLUSIONS

Application of additional (external) electrodes during TOE atrial stimulation can be used to reduce the pacing current threshold. The optimal number and location(s) of additional electrodes and the current distribution between them are subjects for further study.

ACKNOWLEDGMENT

This study was supported by the Estonian Competence Centre Programme and the Estonian Science Foundation grants G6842 and G5888.
REFERENCES

1. Gallagher JJ, Smith WM, Kerr CR, et al. (1982) Esophageal pacing: a diagnostic and therapeutic tool. Circulation 65: 336-341
2. Pistolese M, Richichi G, Catalano V, Boccadamo R (1975) Transitory transesophageal atrial electric stimulation. Preliminary report on 19 cases and considerations on the method, indications and results. G Ital Cardiol 5: 65-72
3. Brockmeier K, Ulmer HE, Hessling G (2002) Termination of atrial reentrant tachycardias by using transesophageal atrial pacing. J Electrocardiol 35 Suppl: 159-163
4. Kesek M, Sheikh H, Bastani H, et al. (2000) The sensitivity of transesophageal pacing for screening in atrial tachycardias. Int J Cardiol 72: 239-242
5. Rostas L, Antal K, Putorek Z (1999) Transesophageal pacemaker therapy in atrial flutter after procainamide pretreatment. Am J Ther 6: 237-240
Method for Reducing Pacing Current Threshold at Transesophageal Stimulation

6. Fenici R, Brisinda D, Nenonen J, Fenici P (2003) Noninvasive study of ventricular preexcitation using multichannel magnetocardiography. Pacing Clin Electrophysiol 26: 431-435
7. Romano R, Fattorini F, Ciccaglioni A, et al. (2002) Transesophageal atrial pacing in the management of re-entry supraventricular tachyarrhythmias occurring during general anesthesia. Minerva Anestesiol 68: 825-832
8. Brembilla-Perrot B, Beurrier D, Houriez P, et al. (2004) Transesophageal atrial pacing in the diagnostic evaluation of patients with unexplained syncope associated or not with palpitations. Int J Cardiol 96: 347-353
9. Kaik J, Vainu M, Mahhotina V (1990) Serial transesophageal electrophysiologic studies in drug therapy efficacy evaluation in outpatients with reentrant supraventricular tachycardias. J Intern Med 228, Suppl: 733-734
10. Ko JK, Ryu SJ, Ban JE, et al. (2004) Use of transesophageal atrial pacing for documentation of arrhythmias suspected in infants and children. Jpn Heart J 45: 63-72
11. Gimovsky ML, Nazir M, Hashemi E, Polcaro J (2004) Fetal/neonatal supraventricular tachycardia. J Perinatol 24: 191-193
12. Plonska E, Kasprzak JD, Kornacewicz-Jach Z (2005) Long-term prognostic value of transesophageal atrial pacing stress echocardiography. J Am Soc Echocardiogr 18: 749-756
13. Kobal SL, Pollick C, Atar S, et al. (2006) Stress echocardiography in octogenarians: transesophageal atrial pacing is accurate, safe, and well tolerated. J Am Soc Echocardiogr 19: 1012-1016
14. Benson DW Jr, Stanford M, Dunnigan A, Benditt DG (1984) Transesophageal atrial pacing threshold: role of interelectrode spacing, pulse width and catheter insertion depth. Am J Cardiol 53: 63-67
Author: Andres Anier
Institute: Tallinn University of Technology
Street: Ehitajate tee 5
City: 19086 Tallinn
Country: Estonia
Email: [email protected]
Power density spectra of the velocity waveforms in Artificial heart valves A. A. Sakhaeimanesh Biomedical Engineering Group, Faculty of Engineering, University of Isfahan, Isfahan, Iran
Abstract— To find the possible frequencies induced by the vibration of the flexible membrane of the Jellyfish valve, power density spectra of the valvular velocity waveforms were computed. Most of the spectral energy was contained in frequencies lower than 11 Hz, and all spectra exhibited pronounced peaks, which implied wave motions in a preferred frequency range. Two distinct peak frequencies, 1.2 and 2.4 Hz, were observed downstream of the Jellyfish valve, which qualify as the fundamental harmonic of the velocity waveform and one of its subharmonics. The effect of oscillation on elevating turbulent shear stresses through the Jellyfish and St. Vincent valves has also been investigated. Laser Doppler Anemometry (LDA) was employed to determine the velocity and shear stress distributions at various locations downstream of the valves. Comparison between the two valves revealed that at 0.5D downstream of the valves the magnitudes of shear stresses in the Jellyfish valve were much higher than those of the St. Vincent valve at cardiac outputs of 4, 5.5 and 6.5 l/min. The high shear stresses in close proximity to the Jellyfish valve could be attributed to the oscillation of the membrane, which generated a wake downstream of the valve (in the core of the valve chamber) and produced a wide region of disturbance further downstream. This resulted in additional pressure drag and, consequently, higher pressure drops across the valve and higher shear stresses downstream of the valve.
Keywords— Power density spectra, heart valves, shear stresses, oscillation, LDA technique

I. INTRODUCTION

Prosthetic heart valves are commonly used for replacement of natural valves, in ventricular assist devices (VADs) and in total artificial hearts (TAHs). In artificial heart valves, the problems of haemolysis, platelet destruction, thrombus formation, perivalvular leakage, tissue overgrowth and endothelial damage are directly related to the fluid dynamic characteristics of flow past the valve ([1], [2], [3]). The presence of the prosthetic valve, acting as a stenosis, disturbs the blood flow and produces regions of high turbulent shear stresses, jetting and flow stagnation which, in turn, cause pathological problems such as haemolysis and thrombosis. Blood cells in the regions of high shear stresses are exposed to a distribution of shear stress over their entire membrane, which stretches the blood-cell membrane, causes harmful changes to its essential function and eventually ruptures the cells. Therefore, haematologically, it is highly desirable that a valve design should not produce excessive turbulence, which may cause haemolysis ([2], [3], [4], [5]). In this study, power density spectra of the valvular velocity waveforms were computed, and the effect of oscillation on elevating turbulent shear stresses and pressure drops through the Jellyfish and St. Vincent valves was investigated. Laser Doppler Anemometry (LDA) was employed to determine the velocity and shear stress distributions at various locations downstream of the valves.

II. METHOD

Power density spectra of the valvular velocity waveforms were computed. The fast Fourier transform (FFT) was implemented in FLOware to calculate the spectral estimate of the valvular velocity waveforms over the entire cycle and produce the power spectral density. The spectral estimate and the mean (power) spectral density can be estimated as:

S_T(f) = (T / N²) · | Σ_{i=1}^{N} u_i e^{j2πf·t_i} |²

S̄_T(f) = (1/M) · Σ_{m=1}^{M} S_{T,m}(f)

where S_T(f) is the spectral estimate, S̄_T(f) is the mean spectral estimate or power spectral density, S_{T,m} is the spectral estimate S_T calculated from the m-th block of data, M is the total number of blocks, T is the duration of a block during which the N samples occur, and u_i is the axial velocity component of the i-th particle. Two valves, namely the Jellyfish and St. Vincent valves, were selected. The Jellyfish valve consists of a thin flexible membranous occluder made of polyurethane and attached centrally to a rigid frame, which has several spokes to protect against prolapse of the membrane.
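As an illustration of the two estimators above (this is a sketch, not the FLOware implementation), the block periodogram can be evaluated by a direct discrete Fourier sum over the irregular LDA arrival times t_i and then averaged over blocks:

```python
import math

def block_spectrum(times, values, freqs, T):
    """S_T(f) = (T / N^2) * |sum_i u_i * exp(j 2 pi f t_i)|^2 for one block."""
    N = len(values)
    out = []
    for f in freqs:
        re = sum(u * math.cos(2 * math.pi * f * t) for u, t in zip(values, times))
        im = sum(u * math.sin(2 * math.pi * f * t) for u, t in zip(values, times))
        out.append(T / N**2 * (re * re + im * im))
    return out

def mean_spectrum(blocks, freqs, T):
    """Average S_T over the M blocks: the power spectral density estimate."""
    spectra = [block_spectrum(t, u, freqs, T) for t, u in blocks]
    M = len(spectra)
    return [sum(s[k] for s in spectra) / M for k in range(len(freqs))]
```

Because the sum is evaluated directly at the sample times, no resampling onto a regular grid is needed; a pure tone at, say, 1.2 Hz produces a clear peak at that frequency relative to neighbouring test frequencies.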
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 545–548, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
A. A. Sakhaeimanesh
A Dantec (Skovlunde, Denmark) two-component LDA system was used to determine the flow field at various locations downstream of the valve. Data were collected in continuous mode over 100 to 200 cycles, depending on the collected data rate, to ensure that at least 1000 samples were collected during every 5 ms of the forward flow phase. After collecting data over complete cycles, the data from each cycle were divided into 168 sample windows, each of 5 ms duration. Data belonging to the nth sample window of each cycle were then compiled into the nth bin and averaged to yield fluctuating and mean components. The mean components over 100 to 200 cycles (depending on the data collection rate) can be combined into one representative cycle as follows:

S̄_n = (1 / nm) · Σ_{i=1}^{n} Σ_{j=1}^{m} S_ij

where S̄_n is the mean component of the nth sample window, S_ij are the instantaneous components contained within a sample window, n is the number of data points in a sample window and m is the number of cycles measured. All possible sources of error were carefully examined; having ensured that all recommendations concerning optical component alignment, seeding, filtering, signal processing and calibration were followed, the estimated measurement error of the mean velocity is ∼3% and that of the rms is ∼7%. A diagram of the mock circulatory system is shown in Figure 1; more details of it and of the LDA technique are given elsewhere ([6] and [7]).

Fig. 1: Diagram of the pulse duplicator used in this study (not to scale): (1) ventricular, aortic and compliance pressure taps; (2) aortic valves; (3) mitral chamber; (4) pump piston; (5) adjustable resistance; (6) mitral valve; (7) electromagnetic flow meter probe; (8) air releaser and air pump; (9) index matching box.

A blood analogue fluid of water-saline solution was contained inside the ventricle chamber and was separated from the piston pump by the polymeric flexible ventricle. The blood analogue fluid provided a transparent and easy-to-handle medium for velocity measurements with Laser Doppler Anemometry. At the inlet of the flexible ventricle chamber (mitral position) a Björk-Shiley tilting disc valve was installed. An electromagnetic square-wave flowmeter, calibrated before the measurements, was installed 8D downstream of the valve so that the instantaneous flow rates could be determined. The pressure pulses were measured by three disposable physiological blood pressure transducers in the left ventricle, downstream of the aortic valve and in the compliance chamber. Flow measurements were done at cardiac outputs of 4, 5.5 and 6.5 l/min.
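The windowing and ensemble averaging described above can be sketched as follows (illustrative only; the window count and duration match the text, 168 windows of 5 ms):

```python
def ensemble_average(cycles, n_windows=168, window_ms=5.0):
    """Average samples falling in the same phase window across all cycles.

    `cycles` is a list of cycles; each cycle is a list of (t_ms, u) samples
    with t_ms measured from the start of that cycle.  Returns the mean
    velocity per window (None where no samples landed)."""
    sums = [0.0] * n_windows
    counts = [0] * n_windows
    for cycle in cycles:
        for t_ms, u in cycle:
            k = int(t_ms // window_ms)   # which 5 ms window this sample hits
            if 0 <= k < n_windows:
                sums[k] += u
                counts[k] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]
```

Fluctuating components then follow by subtracting the window mean from each sample in that window, which is the basis of the turbulence statistics reported below.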
III. RESULTS AND DISCUSSION
A. Spectrum analysis

To find the possible frequencies induced by the vibrations of the flexible membrane of the Jellyfish valve, power density spectra of the valvular velocity waveforms were computed. The random nature of LDA sampling prohibits sampling at regular, equispaced intervals, which introduces additional variability into the spectral estimator. To reduce this variability, spectral analysis was performed by direct Fourier transform of short blocks of data in FLOware, with re-sampling of the signals for data that were not collected in dead-time mode. Important and useful information about dominant frequency peaks and the preferred modes existing in the flow can be derived from the spectral information. Figures 2 and 3
Fig. 2: Power density spectra of the velocity waveform estimated at 0.5D downstream of the Jellyfish valve in the region of jetting at a cardiac output of 6.5 l/min under pulsatile flow conditions.
show typical energy spectra measured at downstream locations of the Jellyfish and St. Vincent valves in the regions of stagnation and jetting at a cardiac output of 6.5 l/min. Most of the spectral energy was contained in frequencies lower than 11 Hz, and all spectra exhibited pronounced peaks, which implied wave motions in a preferred frequency range. Two distinct peak frequencies, 1.2 and 2.4 Hz, were observed downstream of the Jellyfish valve, which qualify as the fundamental harmonic of the velocity waveform and one of its subharmonics. Another distinct peak, between 3 and 4 Hz, with a power of 0.2, was observed at 0.5D of the Jellyfish valve (Figure 2). This frequency peak may qualify as the second subharmonic of the velocity waveform or, together with other higher frequency peaks, as a product of the membrane-induced vibration. The same results were found downstream of the Jellyfish valve at the other cardiac outputs; they are not presented here due to their similarity.
Behind the occluder and in the jetting region of the St. Vincent valve, two distinct peak frequencies, the fundamental of the velocity waveform and its subharmonic, were observed (Figure 3). A third distinct peak frequency, between 3 and 4 Hz, which was observed downstream of the Jellyfish valve, did not exist downstream of the St. Vincent valve. This can be explained by the solidity of the occluder of the St. Vincent valve, which does not induce vibration in the flow. Comparison between the two valves revealed that at 0.5D downstream of the valves the magnitudes of shear stresses in the Jellyfish valve were much higher than those of the St. Vincent valve. Furthermore, at 3D and 5D downstream of the Jellyfish valve the magnitudes of shear stresses reduced dramatically, to 4 and 1 N/m² respectively, at a cardiac output of 6.5 l/min. The St. Vincent valve, on the other hand, showed maximum shear stresses of 49 and 16 N/m² at 3D and 5D downstream respectively. It is hypothesized that the high shear stresses in close proximity to the Jellyfish valve were due to the oscillation of the membrane, which generated a wake downstream of the valve (in the core of the valve chamber) and produced a wide region of disturbance further downstream. This resulted in additional pressure drag and, consequently, higher pressure drops across the valve and higher shear stresses downstream of the valve. This idea was supported by the results of the shear stress (Table 1) and pressure drop measurements (Figure 4). Maximum and mean shear stresses at 0.5D downstream of the Jellyfish valve were about twice those of the St. Vincent valve, and pressure drops across the Jellyfish valve were up to 93% higher than those of the St. Vincent valve under steady flow rates between 10 and 26 l/min. The effect of oscillation can also be seen in the turbulence intensity results.
Maximum and mean turbulent intensities 0.5D downstream of the Jellyfish valve were as high as 723% and 255% respectively at a cardiac output of 6.5 l/min (605% and 212% at 5.5 l/min, and 440% and 145% at 4 l/min). At 3D downstream of the valve, maximum and mean turbulent intensities reduced dramatically, to 17% and 8% respectively, at a cardiac output of 6.5 l/min. The reduction in turbulence, and consequently in the shear stress estimates in the 3D and 5D measuring planes of the Jellyfish valve, indicated that the effect of the membrane oscillation decayed beyond 1D at all cardiac outputs. The consequence of such a reduction in turbulence was low shear stress estimates in these measuring planes (less than 4 N/m²) at a cardiac output of 6.5 l/min.
Fig. 3: Power density spectra of the velocity waveform estimated at 0.5D downstream of the St. Vincent valve in the region of jetting at a cardiac output of 6.5 l/min under pulsatile flow conditions.
Table 1: Mean and maximum values of shear stresses, turbulent intensities and r.m.s. of axial velocities of the Jellyfish and St. Vincent valves at a cardiac output of 6.5 l/min at different downstream locations.

                                  Jellyfish valve          St. Vincent valve
Distance (in diameters)          0.5D    3D     5D        0.5D    3D     5D
Max shear stress (N/m²)          143     4      1         82      46     16
Mean shear stress (N/m²)         57      0.68   0.15      17.94   17.1   5.2
Max turbulent intensity, U (%)   723     17.7   16.6      100.2   35     30.5
Despite the eccentricity of the flow downstream of the St. Vincent valve, turbulent intensities in the close vicinity of the St. Vincent valve (0.5D) were much lower than those of the Jellyfish valve (Table 1). This can be attributed to the fact that the solid occluder of the St. Vincent valve did not produce vibration in the downstream flow field, which consequently resulted in low shear stress estimates. At 3D and 5D downstream of the St. Vincent valve, eccentricity of the flow still existed, meaning that the flow becomes fully developed only beyond 5D. Maximum and mean turbulent intensities of 33% and 15.4% respectively were found 5D downstream of the St. Vincent valve. These were twice the values found at 5D downstream of the Jellyfish valve (16% and 5.4%) at a cardiac output of 6.5 l/min. Such disturbed flow at 3D and 5D downstream of the St. Vincent valve produced maximum shear stresses in the range of 21-46 N/m² at 3D (compared to 2-4 N/m² for the Jellyfish valve) and in the range of 6-16 N/m² at the 5D measuring plane at the different cardiac outputs.
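The turbulence quantities compared above follow from the binned LDA samples in the usual way; a sketch is given below. The reference velocity u_ref used to normalize the intensity and the fluid density are placeholder assumptions, since the paper does not state which values it uses:

```python
def turbulence_stats(u, v, rho=1050.0, u_ref=1.0):
    """Fluctuating statistics from paired axial/radial velocity samples.

    Returns (turbulence intensity in % of u_ref, Reynolds shear stress
    -rho*<u'v'> in N/m^2).  rho ~ 1050 kg/m^3 is a typical blood-analogue
    density (assumed, not from the paper)."""
    n = len(u)
    u_mean = sum(u) / n
    v_mean = sum(v) / n
    u_fluct = [x - u_mean for x in u]
    v_fluct = [x - v_mean for x in v]
    u_rms = (sum(x * x for x in u_fluct) / n) ** 0.5
    uv = sum(a * b for a, b in zip(u_fluct, v_fluct)) / n
    return 100.0 * u_rms / u_ref, -rho * uv
```

Intensities well above 100%, as reported at 0.5D of the Jellyfish valve, simply mean the rms fluctuation exceeds the chosen reference velocity.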
A summary of the data is presented in Table 1 for a cardiac output of 6.5 l/min. Similar trends and results were found at mid-acceleration and deceleration but are not presented here due to their similarity with the peak-systole results.
Fig. 4: Pressure drops across the Jellyfish and St. Vincent valves under flow rates of 10 to 26 l/min.

REFERENCES

1. Hanle D.D., Harrison E.C., Yoganathan A.P., et al. (1989) In vitro flow dynamics of four prosthetic valves: a comparative analysis. J. Biomechanics, 22: 597-607
2. Reul H., Van Son J.A.M., et al. (1993) In vitro comparison of bileaflet aortic heart valve prostheses: St. Jude Medical, CarboMedics, modified Edwards-Duromedics and Sorin-Bicarbon valves. J Thoracic and Cardiovascular Surgery, 106: 412-420
3. Yoganathan A.P. (1995) Cardiac valve prostheses. In: Bronzino J.D. (ed), The Biomedical Engineering Handbook (CRC and IEEE Presses), 1847-1870
4. Reul H. (1983) In vitro evaluation of artificial heart valves. In: Ghista D.N., Hamilton O. (eds), Advances in Cardiovascular Physics (Basel, New York: Karger, 5)
5. Shim H.S., Lenker J.A. (1988) Heart valve prostheses. In: Webster J.G. (ed), Encyclopedia of Medical Devices and Instrumentation (John Wiley and Sons, Inc., 3: 1457-1473)
6. Sakhaeimanesh A.A., Morsi Y.S. (1999) Analysis of regurgitation, mean systolic pressure drops and energy losses for two artificial aortic valves. Journal of Medical Engineering and Technology, 23(2): 63-68
7. Morsi Y.S., Sakhaeimanesh A.A. (2000) Flow characteristics past Jellyfish and St. Vincent valves in the aortic position under physiological pulsatile flow conditions. Artificial Organs, 24(7): 564-574
8. Bluestein D., Einav S. (1994) A modified stability diagram of pulsatile flow through heart valves based on improved spectral estimations of LDA data. American Society of Mechanical Engineers: Laser Anemometry, Advances and Applications, 191: 125-133

Author: A. A. Sakhaeimanesh
Institute: Biomedical Engineering Group, Faculty of Engineering, University of Isfahan
Street: Hezar Jarib Street
City: Isfahan
Country: Iran
Email: [email protected]
Simulation of Renal Artery Stenosis Using Cardiovascular Electronic System
K. Hassani¹, M. Navidbakhsh² and M. Rostami¹
¹ Amirkabir University, Biomedical Engineering Faculty, Tehran, Iran
² Iran University of Science and Technology, Mechanical Engineering, Tehran, Iran
Abstract— This paper describes the simulation of stenosis in the renal artery. Using commercial software, it is possible to study and model the cardiovascular system and its pathologies. Geometrical data of the renal artery have been extracted from physiological texts. A hypothetical stenosis is placed in the middle of the artery. The pressure drops caused by the stenosis are calculated using the CFD method. The blood flow is assumed to be laminar, viscous and incompressible. The applied inlet velocity profile is pulsatile according to the Womersley equations. Furthermore, the compliance variations caused by the stenosis are determined using approved formulas. The obtained pressure drops and calculated compliances are studied using an equivalent electronic circuit representing the cardiovascular system. The simulation results, including pressure graphs, exhibit the effects of renal stenosis on the cardiovascular system; they are compared with relevant experimental observations and are in good agreement with them.

Keywords— Stenosis, Renal, Simulation, CFD, Cardiovascular.
I. INTRODUCTION

Cardiovascular disease is very common in patients with end-stage renal stenosis. Accelerated arterial stiffening and a high prevalence of atherosclerotic lesions contribute to high cardiovascular mortality rates worldwide. Hypertension is generally considered to play an important role in patients with renal stenosis. The disease has been recognized in the last decade as a distinct cause of renal insufficiency; patients with end-stage renal failure, which requires entering a dialysis program, are primarily affected by stenosis. Renal stenosis tends to involve the origin of the artery (proximal 2 cm) and the major bifurcations of the artery. Stenosis of the renal artery tends to be progressive and will eventually lead to occlusion of the vessel. In stenosis, the transcapillary pressures that drive glomerular filtration are maintained by a preferential increase in the efferent arteriolar resistance behind the glomerulus. The aim of this study is to determine the effects of stenosis on the renal artery. In this regard, the pressure drops in the renal artery are determined by the CFD method, and the compliance variations due to different sizes of stenosis are determined using the related formula. The
changes observed in the resistance and compliance of the artery due to different sizes of stenosis are studied in the equivalent electronic circuit of the cardiovascular system. We have tried to observe the effects of the disease on the normal operation of the cardiovascular system and to analyze the pathology as well.

II. MATERIAL AND METHODS

A. Model Simulation Principles

We have used two separate methods of simulation. First, using the CFD method, the pressure drops of the artery with respect to different stenosis sizes are obtained, and the compliances are calculated as well. The diameter and length of the artery have been obtained from medical texts [1] and [2]. Stenoses of different diameters have been located in the middle of the artery, because surgical observations confirm that most stenoses occur in the area near the middle of the artery [3]. The blood flow has been assumed to be laminar, Newtonian, incompressible and pulsatile, with constant density [3]. The applied pulsatile flow, which has been calculated using experimental data [5], is according to (1):
Q = 0.941750 · sin(6.283·t + 0.552)    (1)
The unit of flow (Q) is ml/s; the blood flow across the renal artery has a peak of 20 ml/s [5]. The phase is in radians. Using this equation, a C program file was written for the inlet section to enable the software to apply the appropriate unsteady velocity profile. By dividing the flow by the artery cross-sectional area, the inlet pulsatile velocity profile can be calculated. Next, the compliance of the renal artery in both healthy and abnormal conditions is determined. The compliance of an elastic vessel of a certain diameter and length is calculated [3] according to (2):

C = 3πr³Z / (2Eh)    (2)
In this equation, r is the radius of the vessel, Z its length, E the elastic modulus and h the thickness of the vessel wall. The compliance of the aorta in healthy condition can be calculated
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 533–536, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
K.Hassani, M.Navidbakhsh and M.Rostami
by (2). In order to determine the compliance of the renal artery with stenosis, the relation between r and the axial coordinate x must first be determined by (3):

r(x) = ax² + bx + c    (3)
The a, b and c parameters can be determined for each stenosis size in the renal artery. The compliance of the abnormal renal artery is calculated by integrating (2) with respect to (3). The limits of the integral are the points at which the stenosis starts and ends on the artery.

B. Model Simulation Principles

Using the CFD method, the pressure drops for the artery have been calculated. The geometrical model of the artery section (with the stenosis located in the middle) was built. The stenosis diameter reduction was increased from 20% to 90% (20, 50, 70 and 90 percent) and the pressure drop of the section was determined. The outlet pressure of each model is set to zero and the inlet pressure is calculated. The walls of the vessel are non-moving and rigid; this allows measuring pressure drops that are due only to blood resistance and not to compliance variations. The solution method is 2D, unsteady, 1st-order implicit, segregated and axisymmetric. Using the CFD method, the pressure graphs and the contours of velocity and pressure were obtained for each section. Next, the compliances in healthy and abnormal conditions were calculated for the different stenosis sizes. The obtained pressure drops and compliances of the artery with stenosis are shown in Table 1. The normal compliance of the renal artery is 0.0125 ml/kPa. The blood flow is considered 83.33 ml/s [4]. In our previous study [5], the equivalent electronic system of the cardiovascular system was presented, showing the operation of the system in normal condition. Figure 1 shows the electronic circuit of the cardiovascular system. The circuit consists of three voltage sources which produce the required current. There are six resistors, inductors and capacitors which describe the whole aorta, including the ascending, thoracic and abdominal sections. Other parts of the system can be seen in the circuit.
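The integration of (2) along the parabolic radius profile (3) can be sketched numerically. The elastic modulus, wall thickness and integration bounds below are placeholder values for illustration only, not the paper's data:

```python
import math

def compliance_per_length(r, E, h):
    """dC/dx = 3*pi*r^3 / (2*E*h), i.e. Eq. (2) with Z replaced by dx."""
    return 3 * math.pi * r**3 / (2 * E * h)

def stenosed_compliance(a, b, c, x0, x1, E, h, n=1000):
    """Integrate Eq. (2) over the stenosis, with r(x) = a*x^2 + b*x + c
    (Eq. 3), between the points x0 and x1 where the stenosis starts and
    ends on the artery (midpoint rule)."""
    dx = (x1 - x0) / n
    total = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * dx
        r = a * x**2 + b * x + c
        total += compliance_per_length(r, E, h) * dx
    return total
```

As a sanity check, with a = b = 0 (uniform radius r = c) the integral reduces exactly to Eq. (2) with Z = x1 - x0.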
Voltage, current, charge, resistance and capacitance in the electronic circuit are equivalent to blood pressure, flow, volume, resistance and compliance in the cardiovascular system. Ground potential is equivalent to zero pressure as a reference for measurements. The frequency is 1 Hz, and the correspondences between the electrical characteristics of the system and their mechanical counterparts are as follows:

1 mmHg = 1 V (pressure ~ voltage)
1 ml/Pa = 1000 µF (compliance ~ capacitance)
1 Pa·s/ml = 1 kΩ (resistance)
1 Pa·s²/ml = 1 µH (inertia ~ inductance)

Essentially, the energy of systolic contraction is modeled by the superposition of three voltage sources and diodes. These sources are amplifiers whose inputs are connected to the capacitors and resistors simulating the aorta and other arteries. Thus, during systole, the voltage from the adjusting capacitor (C) is amplified and applied to the other capacitors of the circuit. The pressure drop and the compliance of each studied section are converted to their electrical counterparts, resistance and capacitance. These values were applied to the renal part of the electronic circuit to study the effects. The pressure graphs of each studied section are then obtained by running the circuit.

Table 1 Values of pressure drops, compliances and diastolic/systolic pressures of the renal artery affected by stenosis (Dia = 2.6 mm, L = 32 mm)

Stenosis        ΔP (Pa)    C (ml/kPa)   Diastolic/Systolic pressure (mmHg)
20% stenosis    7.46       0.01512      82-121
50% stenosis    17.1       0.0236       85-127
70% stenosis    75         0.116        97-148
90% stenosis    3790       0.136        145-210
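With the correspondences listed above, each segment's hydraulic values map directly onto circuit element values; a sketch follows. Taking the resistance as ΔP divided by the stated flow of 83.33 ml/s is an assumption about how the paper derives R from the CFD pressure drop:

```python
def to_circuit(delta_p_pa, compliance_ml_per_kpa, flow_ml_per_s=83.33):
    """Convert a segment's CFD pressure drop and compliance to circuit values.

    Analogies from the text: 1 Pa*s/ml = 1 kOhm and 1 ml/Pa = 1000 uF,
    so a value in Pa*s/ml is numerically kOhm and one in ml/kPa is
    numerically uF."""
    r_kohm = delta_p_pa / flow_ml_per_s   # hydraulic resistance, Pa*s/ml
    c_uf = compliance_ml_per_kpa          # ml/kPa -> uF (factors cancel)
    return r_kohm, c_uf

# 90% stenosis row of Table 1:
r_kohm, c_uf = to_circuit(3790, 0.136)   # ~45.5 kOhm, 0.136 uF
```

Note that the normal renal compliance of 0.0125 ml/kPa maps to 0.0125 µF under this scaling, matching the renal capacitor in the circuit of Figure 1.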
III. RESULTS AND CONCLUSIONS

The pressure drops and compliances of the stenoses have been determined and are shown in Table 1. As can be seen, the pressure drops calculated for the different stenoses show significant differences, especially in the 90 percent case. The pressure-time graph for the 50% stenosis as well as its pressure contour are shown in Figure 2 and Figure 3. Figure 4 shows the contour of the 90% stenosis. For the other sections, the same pressure graphs and contours are obtained. There is no direct experimental observation regarding the local pressure drops of renal stenosis. The experimental observations have found a positive correlation with systolic pressure and total pulse pressure [4]. The clinical features include a wide arterial pulse pressure, which reflects loss of arterial compliance, and bruits over the renal and other major arteries. The data indicate that increasing pulse pressure (systolic minus diastolic) is associated with an increased stenosis expansion rate, which leads to hypertension [6]. As the stenosis grows, the systolic and diastolic pressures rise, and our results show this clearly. The experimental data [6]
Fig. 1 Electronic circuit of the cardiovascular system
K. Hassani, M. Navidbakhsh and M. Rostami
Fig. 4 Pressure contour in 90% stenosis of the renal artery
Fig. 2 Pressure contour in 50% stenosis of the renal artery

show the increase of systolic pressure in renal stenosis to between 124 and 166 mmHg and the increase of diastolic pressure to between 84 and 108 mmHg. Our results are in good agreement with these data and show the risk of stenosis when it nearly blocks the artery (90%). Our aim was to study renal stenosis using an electronic circuit which represents
the whole cardiovascular system and can show abnormalities by varying its parameters. The obtained results show the increase of blood pressure due to stenosis, and they are in accordance with experimental data showing severe hypertension in renal stenosis cases.
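In the electrical analogy underlying the circuit of Fig. 1, pressure maps to voltage, flow to current, resistance to viscous loss, inductance to blood inertia and capacitance to vessel compliance, so the pressure drop across a stenosed segment behaves like the voltage across an increased series resistance. The sketch below is our own illustration of that idea, not the authors' PSpice model; the flow and resistance values are invented for demonstration.

```python
# Our illustrative sketch, not the authors' PSpice circuit: in the
# electrical analogy, pressure <-> voltage and flow <-> current, so the
# pressure drop across a stenosed segment is dP = Q * R_sten.  A
# Poiseuille-type resistance scales inversely with the square of the
# remaining lumen area.  All numeric values below are invented.

def stenosis_resistance(r_healthy, severity):
    """severity = fraction of lumen area lost (0.5 -> 50% stenosis)."""
    remaining_area = 1.0 - severity
    return r_healthy / remaining_area ** 2      # R ~ 1/A^2 at fixed length

def pressure_drop(flow_ml_s, r_healthy, severity):
    """Pressure drop (mmHg) across the stenosed segment: dP = Q * R."""
    return flow_ml_s * stenosis_resistance(r_healthy, severity)

Q = 10.0    # renal flow, mL/s (illustrative)
R0 = 0.08   # healthy-segment resistance, mmHg*s/mL (illustrative)
for s in (0.5, 0.7, 0.9):
    print(f"{s:.0%} stenosis: dP = {pressure_drop(Q, R0, s):.1f} mmHg")
# -> 3.2, 8.9 and 80.0 mmHg: the drop grows slowly up to moderate
#    stenosis and then sharply, mirroring the large difference
#    reported here for the 90% case.
```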
REFERENCES

1. K. Wilson, J. Windholt, and P. Hoskins, "The relationship between abdominal aortic aneurysm distensibility and serum markers of elastin and collagen metabolism," European Journal of Vascular and Endovascular Surgery, vol. 21, Aug. 2001, pp. 175–178.
2. A. C. Guyton, Textbook of Physiology. Philadelphia: W. B. Saunders, 1996, ch. 4.
3. V. C. Rideout, Mathematical and Computer Modeling of Physiological Systems. New York: Prentice Hall, 1991, ch. 5.
4. W. Bos, "Renal vascular changes in renal disease independent of hypertension," Nephrol Dial Transplant, vol. 16, 2001, pp. 537–541.
5. K. Hassani, M. Navidbakhsh and M. Rostami, "Simulation of cardiovascular system using equivalent electronic circuit," Biomedical Papers, vol. 150(1), 2006.
6. R. G. Woolfson, "Renal artery stenosis: diagnosis and management," Indian Heart Journal, vol. 54, 2002, pp. 261–265.
Author: K. Hassani
Institute: Amirkabir University
Street: Hafez Street
City: Tehran
Country: Iran
Email: [email protected]
Fig. 3 Pressure graph in 50% stenosis of the renal artery
The Effect of in vitro Anticoagulant Disodium Citrate on Beta-2-glycoprotein I Induced Coalescence of Giant Phospholipid Vesicles

M. Frank1, M. Lokar2, J. Urbanija3, M. Krzan4, V. Kralj-Iglic3, B. Rozman1

1 Department of Rheumatology, University Medical Centre, Ljubljana, Slovenia
2 Laboratory of Physics, Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
3 Institute of Clinical Biophysics, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
4 Institute of Pharmacology and Experimental Toxicology, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
Abstract— In order to elucidate the mechanisms of blood coagulation, the complex interactions between phospholipid membranes, the serum protein beta-2-glycoprotein I (β2GPI), antiphospholipid antibodies (aPL) and disodium citrate were studied by observing collective interactions between giant phospholipid vesicles (GPVs) in sugar solution. GPVs composed of palmitoyl-oleoyl-sn-glycero-3-phosphocholine (POPC), tetraoleoyl cardiolipin and cholesterol were obtained by the electroformation method and observed under a phase contrast microscope. β2GPI or aPL acted as mediators inducing the coalescence of the vesicles. The strength of the adhesion between the coalesced vesicles depended on the content of cardiolipin and the species of the mediator. The addition of disodium citrate to the solution of coalesced GPVs caused disintegration of the complexes of coalesced vesicles. The extent of the disintegration of the coalesced vesicles was interpreted as connected to the strength of the adhesion between GPVs. It was found that the disintegration of the GPV complexes was more pronounced in the system where the vesicles coalesced due to the presence of antiphospholipid antibodies than in the system where the vesicles coalesced due to the presence of β2GPI. The effect of the disintegration of the coalesced GPVs was more pronounced for smaller vesicles which originated in the budding of the membrane of larger GPVs.

Keywords— Beta-2-glycoprotein I, Antiphospholipid antibodies, Giant phospholipid vesicles, Thrombosis.
I. INTRODUCTION

Antiphospholipid syndrome (APS) is a complex clinical syndrome characterized by recurrent vascular thrombosis, pregnancy morbidity and thrombocytopenia, which occur in the presence of antiphospholipid antibodies (aPL) (1). aPL are a heterogeneous group of antibodies. Only aPL directed against phospholipid-binding proteins have been associated with clinical manifestations of APS (2). Beta-2-glycoprotein I (β2GPI), a phospholipid-binding plasma protein, is the major antigen for aPL (3). The role of β2GPI and aPL in blood coagulation is not well understood; however, β2GPI was found to be anticoagulant in vivo while aPL were found to be procoagulant in vivo. Sodium citrate is used to prevent
the coalescence of blood cells in vitro. In order to elucidate the mechanisms of blood coagulation, it is of interest to better understand the complex interactions between phospholipid membranes, procoagulants and anticoagulants in vivo, as well as anticoagulants in vitro. It was previously shown that β2GPI, monoclonal antibodies against β2GPI (maβ2GPIs) and their combination bind to phospholipid surfaces and may induce the coalescence of giant phospholipid vesicles (GPVs) (4, 5). In the present work we wanted to estimate the strength of the adhesive interactions between GPVs mediated by β2GPI or maβ2GPIs by adding disodium citrate to the solution with coalesced GPVs. Disodium citrate causes the disintegration of the complexes formed by the vesicles.

II. MATERIALS AND METHODS

A. β2GPI and Monoclonal Anti-β2GPI Antibodies

β2GPI and maβ2GPI antibodies (Hyphen BioMed, France) were aliquoted and stored at -70°C. In all experiments, the final concentration of β2GPI in phosphate buffered saline (PBS) was 200 mg/L, which corresponds to the physiological concentration of β2GPI in normal human plasma (about 200 mg/L) (6, 7). The final concentration of maβ2GPI dissolved in PBS used in the experiments was 200 mg/ml.

B. Disodium citrate solution

Disodium citrate was dissolved in water at a concentration of 200 mmol/L and further diluted for the experiments. In all experiments, the final concentration of disodium citrate was 0.046 mol/L.

C. Giant phospholipid vesicles

GPVs were prepared at room temperature (23°C) by the modified electroformation method (8). The synthetic lipids cardiolipin (1,1',2,2'-tetraoleoyl cardiolipin), POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) and cholesterol were purchased from Avanti Polar Lipids, Inc.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 566–569, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Appropriate volumes of POPC, cardiolipin and cholesterol, all dissolved in a 2:1 chloroform/methanol mixture, were combined in a glass jar and thoroughly mixed. For vesicles containing a 20% or 40% weight ratio of negatively charged cardiolipin, POPC, cholesterol and cardiolipin were mixed in the proportions 3:1:1 and 2:1:2, respectively. 20 µL of the lipid mixture was applied to the platinum electrodes. The solvent was allowed to evaporate in a low vacuum for 2 hours. The coated electrodes were placed in the electroformation chamber, which was then filled with 3 mL of 0.2 M sucrose solution. An AC voltage with an amplitude of 5 V and a frequency of 10 Hz was applied to the electrodes for 2 hours, followed by 2.5 V and 5 Hz for 15 minutes, 2.5 V and 2.5 Hz for 15 minutes and finally 1 V and 1 Hz for 15 minutes. The content was rinsed out of the electroformation chamber with 5 mL of 0.2 M glucose and stored in a plastic test tube at 4°C. The vesicles were left to sediment under gravity for one day and were then used for a series of experiments.
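The lipid proportions and the stepped AC field schedule above can be captured as plain data; a quick check (ours, with the numbers taken from the text, and variable names of our own choosing) confirms that the 3:1:1 and 2:1:2 proportions indeed give 20% and 40% cardiolipin by weight.

```python
# The lipid proportions and the electroformation schedule, written out as
# plain data (values from the protocol above; the variable names are ours).
from fractions import Fraction

# POPC : cholesterol : cardiolipin weight proportions
mixtures = {"20% cardiolipin": (3, 1, 1), "40% cardiolipin": (2, 1, 2)}
for label, (popc, chol, cl) in mixtures.items():
    frac = Fraction(cl, popc + chol + cl)       # cardiolipin weight fraction
    print(label, "->", f"{float(frac):.0%}")    # 20% and 40%, as stated

# Electroformation steps: (amplitude in V, frequency in Hz, duration in min)
schedule = [(5.0, 10.0, 120), (2.5, 5.0, 15), (2.5, 2.5, 15), (1.0, 1.0, 15)]
print("total field time:", sum(step[-1] for step in schedule), "min")  # 165 min
```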
Fig. 1: The solution of giant phospholipid vesicles containing 40% weight ratio of cardiolipin.
D. Observation

The vesicles were observed with an inverted microscope (Zeiss Axiovert 200) with phase contrast optics and recorded with a Sony XC-77CE video camera. The solution containing vesicles was placed into an observation chamber made from cover glasses sealed with grease. The larger (bottom) cover glass was covered by two smaller cover glasses, each having a small semicircular part removed at one side. Covering the bottom glass with two opposing cover glasses formed a circular hole in the middle of the observation chamber. In all experiments the solution of vesicles (45 µl) was placed in the observation chamber. The solution containing the substance under investigation (β2GPI or maβ2GPI, followed by disodium citrate) was added into the circular opening in the middle of the observation chamber. 5 µl of disodium citrate solution was added to the solution containing coalesced GPVs 30, 45 and 60 minutes after the addition of either β2GPI or maβ2GPI. In total, 15 µl of disodium citrate solution was added to the solution of coalesced GPVs.

III. RESULTS

A. Addition of either β2GPI or maβ2GPI to the giant phospholipid vesicles

The solution of GPVs contained a heterogeneous population of vesicle shapes, with many vesicles exhibiting tubular protrusions (Fig. 1). Most vesicles were flaccid and were fluctuating thermally.

Fig. 2: The solution of giant phospholipid vesicles containing 40% weight ratio of cardiolipin a few minutes after the addition of monoclonal antibodies against β2GPI dissolved in PBS.

A few minutes after the addition of either β2GPI or maβ2GPI to the GPV solution, the thermal fluctuations of the vesicles diminished, the tubular protrusions disintegrated into smaller vesicles and the GPVs became nearly spherical (Fig. 2). The observed shape transformation was attributed to the effect of the PBS, since the same effect was also observed after the addition of PBS alone. With PBS alone the sample ultimately contained nearly spherical fluctuating vesicles.

B. Addition of disodium citrate solution to the solution with coalesced GPVs

β2GPI as well as maβ2GPI caused the vesicles to coalesce into two- or multicompartment complexes (Figs. 3 and 4) which adhered to the bottom of the glass slide and ceased to fluctuate. The adhesion of the vesicles was stronger in the GPV-β2GPI solution than in the GPV-maβ2GPI solution, as the areas of contact between adhered
____________________________________ IFMBE Proceedings Vol. 16 _____________________________________
568
M. Frank, M. Lokar, J. Urbanija, M. Krzan, V. Kralj-Iglic, B. Rozman
Fig. 3: The adhesion of giant phospholipid vesicles containing 40% weight ratio of cardiolipin due to the addition of β2GPI (upper). The addition of 15 µl disodium citrate caused only slight disintegration of the complexes (lower).
Fig. 4: The adhesion of giant phospholipid vesicles containing 40% weight ratio of cardiolipin due to the addition of monoclonal antibodies against β2GPI (upper). Disodium citrate caused substantial disintegration of the complexes (lower).
vesicles seemed larger in the GPV-β2GPI solution than in the GPV-maβ2GPI solution. Slight disintegration of the coalesced GPVs was observed in the GPV-β2GPI solution about 5 minutes after the addition of disodium citrate, while the effect was pronounced in the GPV-maβ2GPI solution (Fig. 4). The disintegration was most prominent for smaller vesicles, which originated in the budding from the membranes of large GPVs (Fig. 4), while in the GPV-β2GPI solution the GPV complexes were preserved even after the third addition of 5 µl of disodium citrate (Fig. 3). It was also found that the adhesive interactions between GPVs are stronger for higher cardiolipin content (not shown).
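As a consistency check (our own arithmetic, not stated in the paper), the final citrate concentration of 0.046 mol/L reported in Methods is what the three 5-µl additions of the 0.2 mol/L stock produce in the observation chamber, assuming roughly 5 µl of mediator solution on top of the 45 µl of vesicle suspension:

```python
# Our back-of-the-envelope check: three 5-uL additions of the 0.2 mol/L
# disodium citrate stock into the observation chamber.  The chamber holds
# 45 uL of vesicle suspension; the mediator volume is not given in the
# text, so ~5 uL is an assumption made here for illustration.
stock_mol_per_l = 0.2
citrate_ul = 3 * 5.0                  # three additions of 5 uL
chamber_ul = 45.0 + 5.0               # vesicles + assumed mediator volume
final = stock_mol_per_l * citrate_ul / (chamber_ul + citrate_ul)
print(f"final citrate concentration ~ {final:.3f} mol/L")  # ~0.046, as in Methods
```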
IV. DISCUSSION

GPVs are a convenient in vitro system for studying the interactions between phospholipid membranes and proteins. The observation of membrane adhesion between GPVs could contribute to a better understanding of the role of plasma proteins and anticoagulant drugs in thrombosis and haemostasis.

The mechanisms causing the adhesion between phospholipid membranes may be of different origins. In β2GPI, a domain carrying a net positive charge and a domain that can, by virtue of hydrophobic interactions, insert into the phospholipid layer have been identified. Such a configuration of the β2GPI molecule can be involved in a bridging mechanism, thereby establishing an attractive interaction between like-charged membranes in close contact. In the case of antibodies, their dimeric structure can give rise to a spatially distributed charge that can be described as a quadrupole. Quadrupolar ordering of the antibodies in the gradient of the electric field of the membrane may lower the free energy of the system. Further, this particular configuration of charge can be effectively described as a direct interaction between charges in the electric double layer model, which yields an attractive interaction between the like-charged membranes. We have shown that there is a difference in the disodium citrate effects in solutions of vesicles coalesced by different interaction mediators. However, the mechanism underlying the effect of disodium citrate on the coalesced complexes remains obscure.
V. CONCLUSIONS

Disodium citrate caused disintegration of coalesced GPV complexes. The effect was stronger where the contact between the vesicles, and between the vesicles and the ground support, was weaker.
REFERENCES

1. Wilson WA, Gharavi AE, Koike T, Lockshin MD, Branch DW, Piette JC, Brey R, Derksen R, Harris EN, Hughes GR, Triplett DA, Khamashta MA (1999) International consensus statement on preliminary classification criteria for definite antiphospholipid syndrome: report of an international workshop. Arthritis Rheum 42(7):1309–11
2. Matsuura E, Igarashi Y, Fujimoto M, Ichikawa K, Suzuki T, Sumida T, Yasuda T, Koike T (1992) Heterogeneity of anticardiolipin antibodies defined by the anticardiolipin cofactor. J Immunol 148(12):3885–91
3. Roubey RA (2000) Antiphospholipid syndrome: antibodies and antigens. Curr Opin Hematol 7(5):316–20
4. Tang D, Borchman D, Harris N, Pierangeli S (1998) Lipid interactions with human antiphospholipid antibody, beta 2-glycoprotein 1, and normal human IgG using the fluorescent probes NBD-PE and DPH. Biochim Biophys Acta 1372(1):45–54
5. Ambrozic A, Cucnik S, Tomsic N, Urbanija J, Lokar M, Babnik B, Rozman B, Iglic A, Kralj-Iglic V (2006) Interaction of giant phospholipid vesicles containing cardiolipin and cholesterol with beta2-glycoprotein-I and anti-beta2-glycoprotein-I antibodies. Autoimmun Rev 6(1):10–5
6. McNally T, Mackie IJ, Isenberg DA, Machin SJ (1993) Immunoelectrophoresis and ELISA techniques for assay of plasma beta 2 glycoprotein-1 and the influence of plasma lipids. Thromb Res 72(4):275–86
7. Polz E, Kostner GM (1979) The binding of beta 2-glycoprotein-I to human serum lipoproteins: distribution among density fractions. FEBS Lett 102(1):183–6
8. Angelova MI, Soleau S, Meleard P, Faucon JF, Bothorel P (1992) Preparation of giant vesicles by external AC electric field. Kinetics and applications. Prog Colloid Polym Sci 89:127–31

Author: Mojca Frank
Institute: Department of Rheumatology, University Medical Centre
Street: Vodnikova 62
City: Ljubljana
Country: Slovenia
Email: [email protected]
User–centered system to manage Heart Failure in a mobile environment

E. Villalba1, D. Salvi1, M. Ottaviano1, I. Peinado1, M. T. Arredondo1, M. Docampo2

1 Life Supporting Technologies, Technical University of Madrid, Spain
2 Healthcare and Wellbeing, Philips Design, Eindhoven, The Netherlands
Abstract— In the Western world, the prevalence of chronic diseases is increasing sharply due to the rise in life expectancy. Cardiovascular diseases (CVD) are the leading cause of death, accounting for 45% of all deaths, and Heart Failure (HF), the paradigm of CVD, mainly affects people older than 65. This paper focuses on the latest advances in the design and development of the user interaction system for managing Heart Failure in a mobile environment, based on daily monitoring of vital body signals with wearable and information technologies, for the continuous assessment of this chronic disease.

Keywords— user interaction, wearable systems, health monitoring, usability test, personalized applications.
I. INTRODUCTION

CVD cause 45% of all deaths in the Western world. Heart Failure, which is considered the paradigm of cardiac chronic diseases, mainly affects people older than 65 [1]. The increase in life expectancy in the developed countries has resulted in an increase in the number of hospitalizations due to chronic diseases, as well as in a potential decrease in the quality of life of the aging population. Within this context the MyHeart project was funded, with the mission of empowering citizens to fight CVD by means of a preventive lifestyle and early diagnosis. One of the applications developed in MyHeart is the Heart Failure Management System (HFMS). HFMS makes use of the latest technologies to monitor heart condition, both with wearable garments (to measure ECG and respiration) and portable devices (such as a weight scale and a blood pressure cuff) with Bluetooth capabilities [2]. HFMS aims to decrease the mortality and morbidity of the HF population. The system also focuses on improving the efficiency of healthcare resources, maximizing the cost-benefit ratio of heart failure management.

The system consists of three main elements: the Front-end, the User Interaction System and the Back-end. The Front-end is composed of the different garments, textile sensors and electronics for the recording of the vital signals that are required by the application (ECG, respiration and activity). The User Interaction System (UIS) is based on a personal digital assistant device that receives data from the monitoring devices, processes them and encourages the patients in the daily care of their heart. Besides, it enables the communication and synchronization with the Back-end. This last element includes the processing server and the databases to manage all patients' data. Professionals can visualize and manage all data through a web access provided by a portal. The following figure sketches the system:

Fig. 1 HFMS overview [2]

All the daily routine data are processed and used in the detection of functional capacity, heart failure worsening and other complications. The timing tendency of the data is automatically assessed in order to enable an early detection of: a) possible clinical decompensations (clinical destabilization warning signs), b) continuous "out of hospital" arrhythmia risk stratification and c) evaluation of the HF progression. On the other hand, motivation strategies must be taken into account in order to provide patients with pertinent and relevant information according to their physical and psychological status. The next section presents the methods used for the design and development of the UI System.

II. METHODS

The methodology applied was adapted from Goal-Directed Design [3], adding new phases for the system lifecycle. The whole process is divided into seven iterative phases: Research, Modeling, Requirements, Framework, Design, Development and Validation.

A. Research phase

In order to determine the right direction to design and develop the user interaction for the HFMS, a preliminary
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 558–561, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
mock-up system was implemented and tested in a set of interviews. Within these interviews, a preliminary mock-up system (implemented specifically for the aim of this research) was thoroughly validated over three months. The interviewees were HF patients as users, cardiologists and business managers related to the chronic disease area of several hospitals. The validation was based on open- and close-ended questions followed by a system demonstration. The demonstration consisted of using the mock-up to allow the users to assess the usability and comfort of the system. In total 26 people were interviewed: 10 end users (9 men and 1 woman, 80% of them above 60 years), 6 business managers and 10 cardiologists [4]. These interviews were designed by Philips Design Eindhoven in the scope of the MyHeart project. The overall attitude towards the system was very positive. Most interviewees found it useful and a good solution for the long-term treatment of chronic disease. They addressed the need for mobility, so the system has to be implemented on a PDA that is both mobile and can be used in an intuitive way through a touch screen. The system provides a sense of security and confidence to people with HF. Nevertheless, some issues were identified as weaknesses. For instance, the system requires the user to interact with a technical device, which reduces the number of people that could be incorporated into the follow-up program. Besides, it forces the users to follow a fixed routine, which may be perceived as a burden on their lifestyles. Hence, the system should be designed with higher modularity, being able to offer different solutions to a diverse range of users according to their needs and capabilities.

B. Modeling phase

Once the Research phase was completed, the Modeling phase generated both domain and user models, taking as input the results from the previous phase. Domain models included workflow diagrams.
User models, or personas [3], are user archetypes that represent behavior patterns, goals and motivations. The persona of HFMS is Carlos Gómez, 72 years old and retired. He is aware of his heart condition and is proactive in taking better care of himself. Furthermore, he is able to handle an electronic device when it follows a very intuitive design. He does not have any special accessibility needs (e.g. blindness). He used to smoke more than 20 cigarettes per day. Two years ago he suffered an ischemic episode, which obliged him to retire. His father died of a stroke when he was 19, so he is really worried. Visiting his cardiologist every three months is not enough for him. Thus, he decided to join a
special program that could help him manage his health status and communicate with his doctor and other health professionals. Among his main goals, the most important one is reassurance and confidence when performing his daily routine. He really needs to feel in control and to lose his fear of sudden death. He also aims to be in control of his own health evolution, self-managing his health care independently. But he does not want to be reminded every day that he is a chronic patient, so it is of vital importance to give him a system that is not intrusive, so that no one notices that he is under treatment.

C. Requirements phase

The Requirements phase employed scenario-based design methods. End users were prompted to follow a daily routine divided into morning, exercise and before-sleeping contexts. A context is a set of tasks (also named activities) to be performed together by the user in the same time slot. For instance, a task or activity is the measurement of blood pressure, and it can be done together with the weight measurement and the morning questionnaire; together they form the morning context. Two scenarios were detected within this system. The first consists of a set of measurements, making use of the wearable garments and portable devices at home; besides, the user answers two questionnaires defined by the medical team. These two contexts form the indoor scenario. The exercise context proposes a short walk to promote a healthy lifestyle and to improve cardiovascular capacity, and composes the outdoor scenario. The professional will check the status of all patients through the portal. For the aim of this user design, only the scenarios for the final user are developed. Once the scenarios were addressed, the main user requirements were detected. The interaction comprises:
• Very simple and intuitive start when needed
• Ease of usage
• Very fast performance of daily activities, always ready when needed
• Easy-to-follow guidance to perform activities and help functionalities
• Adaptability to personal routines
• Error prevention and recovery
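The context/task grouping described above can be sketched as a small data model (a hypothetical illustration; the class and field names are ours, not taken from the HFMS implementation):

```python
# A hypothetical sketch of the context/task grouping described above;
# the class and field names are ours, not taken from the HFMS code.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str                       # one activity, e.g. a measurement

@dataclass
class Context:
    name: str                       # time-of-day grouping of tasks
    tasks: list = field(default_factory=list)

morning = Context("morning", [Task("morning questionnaire"),
                              Task("weight"), Task("blood pressure")])
exercise = Context("exercise", [Task("short walk")])          # outdoor scenario
evening = Context("before sleeping", [Task("wellbeing questionnaire")])
routine = [morning, exercise, evening]    # a patient-specific daily routine
print([c.name for c in routine])
```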
D. Framework phase

In this phase, the analysis of the different scenarios is carried out through an iteratively refined context scenario derived from a "day in the life" study of the persona. The daily routine is flexible and configurable for each particular patient; that is to say, the professional together with his patient can create a particular scenario or routine depending on their needs and preferences. To test the system, a default routine for the user was fixed in order to illustrate the functionality of the system.

Carlos wakes up at 8:00 a.m. every morning. He decided together with his cardiologist to perform all daily activities during the morning context. Thus he answers a short questionnaire, and then he measures his weight and blood pressure. Afterwards, he puts on his garment to measure his ECG and respiration. He then has to wait 2 hours before exercising, so he performs other personal tasks until the PDA informs him that it is time to do the exercise. Then he prepares and goes for a short walk under control. The rest of the day he does not need to use the system, so he feels free yet under control. Before going to sleep he answers a short questionnaire about his general wellbeing during the day.

All gathered data, processed raw signals and notifications are sent to the Back-end for further processing and management. The professional accesses all data through the portal. The first information that can be seen is an outline of every user emphasizing the most important events. The professionals can also consult and edit the information related to a particular user and compare the current tendencies with those of previous weeks and months.
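The timing of this default routine, in particular the 2-hour gap before exercise, can be sketched as a simple reminder schedule. This is our illustrative reconstruction, not HFMS code, and the 22:00 evening slot is an assumption (the text only says "before going to sleep"):

```python
# Our illustrative reconstruction of the default routine's timing (not
# HFMS code): morning activities at 08:00, an enforced 2-hour gap before
# the exercise reminder, and an evening questionnaire.  The 22:00 evening
# slot is an assumption made here for illustration.
from datetime import datetime, timedelta

def build_reminders(wake=datetime(2007, 1, 1, 8, 0)):
    return [
        (wake, "questionnaire, weight, blood pressure, put on garment"),
        (wake + timedelta(hours=2), "exercise: short walk"),          # 2 h wait
        (wake.replace(hour=22), "evening wellbeing questionnaire"),   # assumed
    ]

for when, what in build_reminders():
    print(when.strftime("%H:%M"), what)
```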
E. Design phase

After the four previous phases, the Design phase concludes with detailed documentation of all the requirements and specifications for the whole system, including the behavior of the interaction. The next list summarizes the main specifications [5]:

• Monitoring sensors and devices: the vital signals needed are electrocardiogram (ECG), respiration, activity, weight and blood pressure. ECG and respiration are measured with a wearable sensor; for weight and blood pressure, commercial devices with communication modules are used.
• All sensors and devices for measurements need communication capabilities through a Bluetooth link with a serial port profile. The communication with the Professional Platform and Back-end implements a secure GPRS or UMTS connection.
• The user interaction device needs a mobile connection and a touch screen to allow a natural interaction. The selected device is a QTek S-200 series.
• The user application needs to be highly user-friendly, intuitive and easy to use. Besides, it needs to implement different use-case workflows at runtime, such as daily vital signal monitoring and feedback, questionnaire filling, reminders, etc. Moreover, the system should support multilingualism.
• The user interface should guide the user during the whole routine, assisting him when errors occur and motivating him to comply with the treatment.

III. RESULTS

A. Resulting system: User Interaction System

The resulting User Interaction System addresses all the requirements for the user interaction, the communication needs for the sensor readings and the sending of all data to the Back-end. Besides, this system implements the daily routine to be performed by users in order to adhere to their heart failure care protocol. The user interaction system is divided into five main areas, described next:

• The communication module implements the protocol stack for the communication with the Front-end devices. Besides, it enables the communication towards the Back-end system.
• The middleware toolset provides means to work in a comfortable and reliable programming framework.
• The data management module provides the application database and several tools for its processing and storage.
• Finally, the user interaction module includes all the forms that implement the user interface and the application controllers for user events and interactions.

Fig. 2 UIS modules overview

The application has been developed under the Microsoft .NET framework using C-Sharp, which is widely compatible with many Pocket-PC compliant devices [6].

B. Preliminary user test results

So far, a heuristic evaluation [7] has been performed, detecting a lack of control during the whole routine. To solve this issue, the skip functionality was implemented, as well as the help functionality, giving the users clear information. During November 2006, 5 users were interviewed in Eindhoven, The Netherlands. This validation was performed in 2 rounds with 3 users, following a protocol similar to the one explained in the Research phase. All users were over 65 and had some chronic illness related to cardiovascular or rheumatic pathologies. A woman was interviewed in both rounds, showing a great evolution in the understanding of the interaction. Although all users were able to handle the device, they had some problems understanding the instructions, especially when there was a metaphor or symbol. Thus, there are no metaphors or icons in the interface, and all instructions are given in text in a very explicit way. A next step comprises a global validation in Madrid, Spain and Basel, Switzerland, with 20 patients using the system for 1 or 2 weeks. The design of this validation is currently ongoing.
IV. CONCLUSIONS

To the extent that the system has been validated, the positive results encourage us to continue with our work and development. Current work focuses on improving usability and minimizing interaction requirements by giving the system more and more contextual awareness. The preliminary results are promising in terms of the interaction modality implemented. However, a detailed analysis aimed at enhancing the individual experience and incorporating this system into users' routines is still lacking. This entails an in-depth study of diverse behavioral components towards e-health in order to create a tailored communication framework which boosts the motivation of patients to use such systems and truly incorporate them into their daily activities [8].

ACKNOWLEDGMENT

This work has succeeded thanks to the close collaboration with Hospital San Carlos of Madrid, Spain; ITACA, Valencia, Spain; PRL, Aachen, Germany; and PD, Eindhoven, The Netherlands. The HFMS is an integrated product of the MyHeart project, "Fighting Cardiovascular Diseases by prevention and early diagnosis" (IST-2002-507816), partly funded by the European Commission.

REFERENCES

1. World Health Organization (2004) The Atlas of Heart Disease and Stroke. Edited by J. Mackay and G. Mensah
2. Villalba E, Arredondo MT, Moreno A, Salvi D, Guillen S (2006) User interaction design and development of a heart failure management system based on wearable and information technologies. Proc 28th IEEE EMBS Annual International Conference, New York City, USA, Aug 30–Sep 3, 2006, pp 400-404. ISBN 1-4244-0033-3
3. Cooper A (2003) About Face 2.0: The Essentials of Interaction Design. Wiley Publishing, Inc. ISBN 0-7645-2641-3
4. Villalba E, Arredondo MT, Martinez A, Guillen S, Bover R, Martin M (2006) Preliminary results for acceptability evaluation of a heart failure monitoring system based on wearable and information technologies. Presented at ICADI 2006, January 2006, p 301. ISBN 0-9754783-0-4
5. Deliverable 19: Technical Progress. MyHeart IST-2002-507816, July 2006
6. Rubin E, Yates R (2003) Microsoft .NET Compact Framework. Sams Publishing. ISBN 0-672-32570-5
7. Dumas JS, Redish JC (1999) A Practical Guide to Usability Testing. Intellect Books. ISBN 1-84150-020-8
8. del Hoyo-Barbolla E, Arredondo MT, Ortega M, Fernández N, Villalba-Mora E (2006) A new approach to model the adoption of e-health. Proc 13th IEEE Mediterranean Electrotechnical Conference, Benalmádena, Spain, May 2006. ISBN 1-4244-0088-0

Author: Elena Villalba Mora
Institute: Technical University of Madrid
Street: Ciudad Universitaria s/n
City: Madrid
Country: Spain
Email: [email protected]
A critical step in gene electrotransfer: the injection of the DNA

F.M. André1,2 and L.M. Mir1,2

1 CNRS UMR 8121, Institut Gustave-Roussy, Villejuif, France
2 Univ Paris-Sud, UMR 8121
Abstract— In gene electrotransfer, the DNA must be injected before the delivery of the electric pulses. This is a critical step in the success of the gene transfer. Because the plasmid molecule has a very high molecular weight, even large amounts of plasmid (in mg) correspond to a small number of molecules, which are highly diluted if the injection is made intravenously. Therefore, injections are often made in the target tissue. We show that the way in which this intramuscular, intratumoral or intrahepatic injection is performed can largely impact the result of gene transfer, even (and mainly) in the absence of electric pulses (1). Indeed, the simple injection of DNA into muscles is known to result in the expression of the injected genes, although at low and variable levels. We report that this variability in DNA expression is partly dependent on the injection speed. Increasing the injection speed from values around 2 µl/s up to values around 25 µl/s (depending on the tissue) results in a significant increase in gene expression in skeletal muscle (280-fold on average) and in liver (50-fold), and a non-significant sevenfold increase in tumors. Heparin, which inhibits the spontaneous uptake of the injected DNA, also inhibits the increases related to the injection speed. However, at the highest injection speed, this inhibition is not total, because very fast injections provoke a direct permeabilization of the cells. This "hydroporation" could be similar to the permeabilization found in the hydrodynamic method based on the fast intravascular injection of a huge volume of DNA. Neither the "hydroporation" nor the heparin-inhibitable uptake mechanism induces histologically detectable lesions. There is a limited muscle cell stress independent of the injection speed. Heterogeneity in the injection speed might thus be an explanation for the variability in DNA expression after simple injection. Our data stress the importance of the DNA injection step in electrotransfer procedures.
This conclusion is important because gene electrotransfer is gaining momentum at both the preclinical and clinical stages (2) as a safe and efficient non-viral method for gene therapy.

Keywords— Non-Viral Gene Therapy, DNA injections, gene electrotransfer, endocytosis, hydroporation.
ACKNOWLEDGMENTS

The authors thank the EU Commission for funding the project Cliniporator (QLK3-1999-00484) in the frame of the 5th FP. This work was also supported by grants from the CNRS, the IGR and the AFM (Association Française contre les Myopathies).
REFERENCES

1. André FM, Cournil-Henrionnet C, Vernerey D, et al. (2006) Variability of naked DNA expression after direct local injection: the influence of the injection speed. Gene Ther 13:1619-1627
2. André F, Mir LM (2004) DNA electrotransfer: its principles and an updated review of its therapeutic applications. Gene Ther 11:S33-S42

Author: Lluis M. Mir
Institute: UMR 8121 CNRS, Institut Gustave-Roussy, Univ Paris-Sud
Street: 39 rue C. Desmoulins
City: Villejuif – F-94805
Country: France
Email: [email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 623, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
A numerical model of skin electroporation as a method to enhance gene transfection in skin

N. Pavselj1, V. Preat2 and D. Miklavcic1

1 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
2 Universite Catholique de Louvain, Department of Pharmaceutical Technology, Brussels, Belgium
Abstract— Electroporation is an effective alternative to viral methods to significantly improve DNA transfection after intradermal and topical delivery. We performed a series of in vivo experiments on rat skin using external plate electrodes. The experiments showed that skin layers below the stratum corneum can be permeabilized in this way. In order to study the course of skin tissue permeabilization by means of electric pulses, a numerical model was built with COMSOL Multiphysics, using the finite element method. The model is based on the tissue-electrode geometry and electric pulses from our in vivo experiments. We took into account the layered structure of skin and the changes of its bulk electric properties during electroporation, as observed in the in vivo experiments. We used tissue conductivity values found in the literature and experimentally determined electric field threshold values needed for tissue permeabilization. The results obtained with the model were then compared to the in vivo results of gene transfection in rat skin, and a good agreement was obtained. With the model presented, we used the available data to try to explain the mechanism of tissue electropermeabilization propagation beyond the initial conditions dictated by the tissues' initial specific conductivities.

Keywords— electroporation, electropermeabilization, gene transfer, skin, numerical modeling, finite elements
I. INTRODUCTION

The cell membrane is, in general, impermeable to larger molecules. The application of electric pulses to cells or tissue causes the electroporation of the cell membrane, increasing its permeability and making it possible for larger molecules, such as drug molecules or DNA, to enter the cell [1]. When pulsing ceases, the cell membrane reseals, provided the applied voltage was high enough to cause cell membrane permeabilization but still low enough not to cause permanent damage. Electroporation can be used in applications such as gene transfection [2], electrochemotherapy [3] or transdermal drug delivery [4]. The electric field distribution in a tissue, and consequently the induced cell transmembrane potential, depend on cell and tissue parameters (tissue conductivity; cell size, shape and distribution) [5,6] and on pulse parameters (duration, amplitude and number of pulses) [7].
Electroporation is currently one of the most efficient and simple non-viral methods of gene transfer in vivo. Skin is an attractive target tissue for gene therapy for a variety of reasons. Its accessibility facilitates in vivo gene delivery. Skin is also an excellent target organ for DNA vaccination because of the large number of potent antigen-presenting cells, critical for an effective immune response. Skin consists of different layers (roughly epidermis, dermis and underlying fat and connective tissue). Its outermost layer, the stratum corneum, although very thin (from 10 µm up to 1 mm), presents a formidable barrier to the entry of ions and molecules into the body. Its conductivity is very low, three to four orders of magnitude lower than the conductivities of the deeper skin layers and tissues [8-10]. We made a numerical model of skin during electroporation to try to describe the process, taking into account the layered structure of skin and the changes of the tissues' bulk electric properties during electroporation, as observed in the in vivo experiments [11]. Using voltage and current measurements recorded during pulse delivery, we aimed at making the response of the model as close as possible to the real system. From the electrical standpoint, applying electric pulses to skin means that almost the entire voltage drop rests across the stratum corneum, causing a very high electric field in that layer, while the electric field in the deeper layers of the skin stays too low for successful electropermeabilization. However, experiments show that a successful DNA electrotransfer into the skin can be achieved [12], probably due to the rise in the conductivity of the stratum corneum (and other skin layers) during electroporation. As a result, the electric field "penetrates" deeper into the skin and permeabilizes the target cells.

II. MATERIALS AND METHODS

A. In vivo experiments

To prove that skin layers below the stratum corneum can be permeabilized using external plate electrodes, we performed a series of in vivo experiments. The reporter gene used in the study was pCMVGFP; it was injected intradermally into male Wistar rat skin before the application of the electric pulses (animals were anaesthetized). For pulse delivery,
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 597–601, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
we used the square-wave generator Cliniporator (IGEA, Carpi, Italy) and two parallel stainless-steel plate electrodes (4 mm distance). During the electric pulse, the actual current delivered and the applied voltage were acquired by the Cliniporator and stored on the computer. Two days after the electroporation, the rats were sacrificed and the electroporated areas of the skin were excised. To assess the pCMVGFP expression, the epidermal and dermal sides of the skin were observed with a confocal microscope.

B. Numerical calculations

The numerical model was made with the commercially available computer program COMSOL Multiphysics 3.3 (COMSOL, Los Angeles, CA, USA), based on the finite element method. This method solves partial differential equations by dividing the model into smaller elements, where the quantity to be determined is approximated with a function or is assumed to be constant throughout the element. Finite elements can be of different shapes and sizes, which allows modeling of intricate geometries. Nonhomogeneities and anisotropies can also be modeled, and different excitations and boundary conditions can be applied easily.

III. RESULTS
A. In vivo experiments

To localize the expression of the gene in the skin after intradermal injection of the plasmid, a plasmid coding for GFP was used as a reporter gene. To assess the expression of the gene in the skin, we used a confocal microscope. The results in Figure 1 show some autofluorescence of the hair follicles in the control groups where no electroporation was used (a). However, the expression of GFP was enhanced by electroporation (b).

Fig. 1 Expression of GFP in the skin after ID injection of 50 µg of a plasmid coding for GFP; (a) no electroporation; (b) 400 V 100 µs plus 56 V 400 ms (high voltage + low voltage pulse)

The location of GFP expression shows that we successfully permeabilized deeper layers of skin (dermis and epidermis).

B. Geometry of the numerical model

We made a numerical model of a skin fold with geometry as close to the in vivo experimental tissue-electrode set-up as possible. Four layers of skin were modeled: stratum corneum, epidermis, dermis and the subcutaneous layer of fat and connective tissue. The electric pulses were applied on the skin with plate electrodes pressed against the skin and were modeled as a boundary condition. The distance between the electrodes was 4 mm and the area in contact with the skin was 1 cm2. However, to account for the presence of the conductive gel used in the experiments to assure good contact between the skin and the electrodes, the voltage boundary condition was set somewhat beyond the size of the electrodes. The geometry of our numerical model is shown in Figure 2. By using symmetry, only one fourth of the geometry needs to be modeled, which avoids numerical problems due to the complexity of the model and computer memory limitations (Figure 2a). Therefore, the boundary conditions on the section planes cut through the middle of the geometry had to be set as shown in Figure 2b.

Fig. 2 a) Geometry of the skinfold finite element model made in COMSOL. Only one fourth of the skinfold was modeled to avoid numerical problems and save computer time. The boundary conditions were therefore set as shown. b) A drawing of the whole skinfold, showing the directions in which the model was cut in order to keep only one fourth of the geometry.

The thickness of the stratum corneum in the model is set larger than in real skin (around 6 times). Namely, due to the large differences in layer thicknesses, numerical problems can occur and can make the calculation impossible. To make up for the stratum corneum being modeled thicker than it is, the specific conductivity of this very resistive layer was also set higher (6 times). As the experiments showed, the epidermis and the dermis were transfected, which means the electric field strength was high enough in those layers, due to the rise in electric conductivity of the stratum corneum. Therefore, we made a nonlinear mathematical model where the electric field distribution depends on the changes in the electrical conductivity of the tissues involved.

C. Parameters of the numerical model

During the finite element analysis, tissue specific conductivities were changed in certain areas according to the electric field distribution throughout the model. The process of tissue electropermeabilization was thus modeled in discrete time steps. In each step, the current solution was used to look for the areas where the electric field was above the predefined threshold. The conductivity of those areas was changed (increased) and the next step of the modeled electropermeabilization process was calculated. This process was repeated until the electric field distribution reached its steady state, that is, when there were no more areas where the electric field was above the predefined threshold. Also, once the conductivity was increased in a certain area, it could not be changed back to its lower value in the following steps, even if the electric field strength dropped below the threshold due to the changed conductivities. The resealing process during and after the pulse was not modeled. We took the electric conductivity values and their changes during electroporation from the literature and experiments, as well as the electric field values above which tissues are permeabilized [8-11,13,14]. It is difficult to get the exact values of the electric field thresholds and tissue conductivities, due to the lack of measurements in this field. However, we used the data found in the literature, as well as our own experiments, to set those parameters. Exactly how tissue conductivities (σ) change with the electric field (E) is another unknown of tissue electropermeabilization. The simplest of the dependences is the step function, where, once an electric field threshold is reached, the conductivity changes from its low to its high value. However, a kind of gradual increase of the conductivities seems more logical. Namely, due to the non-uniformity of cell size and shape in the tissue, not all the cells are permeabilized at the same time once the threshold electric field is reached. In our model, the conductivities were increased from their low to their high values in four steps. The conductivity steps for all the skin layers followed an exponential dependence between 60000 and 140000 V/m of the electric field strength. Figure 3 shows the σ(E) dependence used to model the electropermeabilization of the dermis and the epidermis (excluding its outermost layer, the stratum corneum). The dependences used for the stratum corneum and the subcutaneous layer are not shown. However, the initial and final conductivities of the skin layers modeled are summarized in Table 1.

Fig. 3 Four steps of the dermis and the epidermis (stratum corneum excluded) conductivity increase, plotted as conductivity (S/m) against electric field (V/m). The dotted line shows the exponential function used as a basis for the conductivity increase

Table 1 Parameters used in the discrete model of the electropermeabilization process in skin

Tissue               σ0 (S/m)   σ1 (S/m)
Subcutaneous layer   0.05       0.2
Dermis, epidermis    0.2        0.8
Stratum corneum      0.0005     0.5

D. The model – in vivo experiments comparison

In our in vivo experiments we tried different pulse amplitudes. Five different voltages were used to permeabilize the skin: 160, 280, 400, 520 and 700 V. During the pulse, the voltage between the electrodes and the current through the skinfold were measured. The finite element model of skin electropermeabilization was run at all five voltages, and the reaction currents obtained from the model were compared with our experimental data. Figure 4 shows the reaction currents of the model compared to the currents measured in vivo during the pulse. As we can see, the current-voltage dependence of the model agrees well with the current-voltage dependence obtained from the experiments.
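The four-step exponential σ(E) dependence described in section C can be sketched numerically. The following Python snippet is an illustrative reconstruction (the exact step placement of the paper's Fig. 3 may differ); it interpolates exponentially between σ0 and σ1 over the 60000–140000 V/m window in four discrete steps:

```python
def sigma_of_E(E, sigma0, sigma1, E_low=60e3, E_high=140e3, n_steps=4):
    """Discretized exponential conductivity increase sigma(E).

    Below E_low the tissue keeps its non-permeabilized conductivity
    sigma0; at or above E_high it has its fully permeabilized value
    sigma1; in between, sigma rises in n_steps discrete steps that
    follow an exponential curve between sigma0 and sigma1 (cf. Fig. 3).
    """
    if E < E_low:
        return sigma0
    if E >= E_high:
        return sigma1
    # which of the n_steps discrete steps this field strength falls into
    step = int(n_steps * (E - E_low) / (E_high - E_low)) + 1
    # exponential interpolation between sigma0 and sigma1
    return sigma0 * (sigma1 / sigma0) ** (step / n_steps)

# dermis/epidermis values from Table 1: sigma rises from 0.2 to 0.8 S/m
sigma_dermis = sigma_of_E(100e3, 0.2, 0.8)
```

A step function would be the special case n_steps = 1; the paper argues a gradual increase is more realistic because cells of different sizes and shapes permeabilize at different field strengths.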
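The step-by-step permeabilization algorithm of section C can be illustrated on a much simpler one-dimensional stand-in for the finite element model: three skin layers in series, with the conductivities of Table 1 and a single field threshold. The layer thicknesses and the threshold value below are illustrative assumptions, not the paper's parameters; the sketch only reproduces the qualitative mechanism, namely that the deeper layers reach the permeabilizing field only after the stratum corneum conductivity has risen:

```python
# Illustrative 1-D series model of the iterative permeabilization
# algorithm. Conductivities s0/s1 follow Table 1; layer thicknesses
# and the single field threshold are assumed values.
layers = {
    "stratum_corneum":  {"d": 20e-6,  "s0": 0.0005, "s1": 0.5},
    "dermis_epidermis": {"d": 1.5e-3, "s0": 0.2,    "s1": 0.8},
    "subcutaneous":     {"d": 2.5e-3, "s0": 0.05,   "s1": 0.2},
}
E_THRESHOLD = 60e3   # V/m, permeabilization threshold (assumed)
U = 400.0            # applied voltage (V), one of the experimental values

def solve_fields(sigma):
    """Electric field in each layer of a 1-D series stack: the current
    density J = U / sum(d_i / sigma_i) is common to all layers, and the
    field in layer i is J / sigma_i."""
    resistance = sum(layers[n]["d"] / sigma[n] for n in layers)
    J = U / resistance
    return {n: J / sigma[n] for n in layers}

# Discrete time steps: raise the conductivity wherever E exceeds the
# threshold (irreversibly), re-solve, and stop at steady state.
sigma = {n: p["s0"] for n, p in layers.items()}
while True:
    E = solve_fields(sigma)
    changed = False
    for name, p in layers.items():
        if E[name] > E_THRESHOLD and sigma[name] < p["s1"]:
            sigma[name] = p["s1"]
            changed = True
    if not changed:
        break
```

With these numbers the dermis/epidermis layer starts below threshold (shielded by the resistive stratum corneum) and is only permeabilized in a later iteration, after the stratum corneum conductivity jump redistributes the field. Run at the five experimental voltages (160–700 V), the same loop yields a model current-voltage curve of the kind compared with measurements in Fig. 4.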
Fig. 4 Currents measured during the pulse (in vivo experiments), compared to the currents given by the model, with respect to the applied voltages; current I (A) is plotted against voltage U (V).

IV. CONCLUSIONS

The location of GFP expression shows that we successfully permeabilized deeper layers of skin (dermis and epidermis), even though the ratios of the conductivities of the skin layers suggest that the highest voltage drop rests across the highly resistive stratum corneum. We constructed a numerical model describing the nonlinear process of tissue conductivity changes during electroporation due to tissue permeabilization, using the finite element method. The output of the model was compared with the current and the voltage measured during the in vivo experiments, and a good agreement was obtained. Also, when looking at the electric field distributions given by the model (data not shown), and thus comparing the voltages needed for a successful electropermeabilization as suggested by the model with the voltages achieving good in vivo gene transfection, good agreement can be observed. Further, a comparison of our results with already published findings on skin electropermeabilization showed that the voltage amplitudes suggested by the model are well within the range of the voltage amplitudes reported by other authors to cause skin permeabilization. Parameters such as the specific conductivities of the tissues before and after electropermeabilization and the electric field thresholds were taken from the literature and experiments. However, data on everything except the specific conductivities before electropermeabilization are very scarce, sometimes nonexistent. Namely, the subject of tissue conductivity changes due to electroporation is still a rather unexplored area. Also, due to the different measuring circumstances, measuring techniques and species used by different researchers, large discrepancies can be found in the reported data on tissue conductivities. However, with the model presented we used the available data to try to explain the mechanism of tissue electropermeabilization propagation beyond the initial conditions dictated by the tissues' initial specific conductivities.

ACKNOWLEDGMENT

This research was supported by the European Commission under the 5th framework under the grant Cliniporator QLK-1999-00484 and by the Slovenian Research Agency.

REFERENCES

1. Mir LM (2000) Therapeutic perspectives of in vivo cell electropermeabilization. Review article. Bioelectrochemistry 53:1-10
2. Cemazar M, Golzio M, Sersa G et al. (2006) Electrically-assisted nucleic acids delivery to tissues in vivo: Where do we stand? Curr Pharm Design 12:3817-25
3. Sersa G (2006) The state-of-the-art of electrochemotherapy before the ESOPE study; advantages and clinical uses. EJC Suppl 4:52-9
4. Denet A-R, Vanbever R, Preat V (2004) Skin electroporation for transdermal and topical delivery. Adv Drug Deliv Rev 56:659-674
5. Pavlin M, Pavselj N, Miklavcic D (2002) Dependence of induced transmembrane potential on cell density, arrangement and cell position inside a cell system. IEEE Trans Biomed Eng 49(6):605-612
6. Valic B, Pavlin M, Miklavcic D (2004) The effect of resting transmembrane voltage on cell electropermeabilization: a numerical analysis. Bioelectrochemistry 63:311-315
7. Wolf H, Rols MP, Boldt E et al. (1994) Control by pulse parameters of electric field-mediated gene transfer in mammalian cells. Biophysical Journal 66:524-531
8. Pliquett U, Langer R, Weaver JC (1995) Changes in the passive electrical properties of human stratum corneum due to electroporation. BBA 1239:111-121
9. Yamamoto T, Yamamoto Y (1976) Electrical properties of the epidermal stratum corneum. Med Biol Eng 14(2):151-158
10. Yamamoto T, Yamamoto Y (1976) Dielectric constant and resistivity of epidermal stratum corneum. Med Biol Eng 14(5):494-500
11. Pavselj N, Bregar Z, Cukjati D et al. (2005) The course of tissue permeabilization studied on a mathematical model of a subcutaneous tumor in small animals. IEEE Trans Biomed Eng 52(8):1373-1381
12. Pavselj N, Preat V (2005) DNA electrotransfer into the skin using a combination of one high- and one low-voltage pulse. J Cont Rel 106:407-415
13. Gabriel C, Gabriel S, Corthout E (1996) The dielectric properties of biological tissues: I. Literature survey. Phys Med Biol 41:2231-2249
14. Gabriel S, Lau RW, Gabriel C (1996) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys Med Biol 41:2251-2269

Author: Natasa Pavselj
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
An endoscopic system for gene & drug delivery directly to intraluminal tissue

D.M. Soden1, M. Sadadcharam1, J. Piggott1, A. Morrissey2, C.G. Collins1 and G.C. O'Sullivan1

1 Cork Cancer Research Centre, Mercy University Hospital, Grenville Place, Cork, Ireland
2 Tyndall National Institute, University College Cork, Prospect Row, Cork, Ireland
Abstract— Electrochemotherapy has been established in preclinical and clinical studies as an effective therapy; however, the currently available technology for delivery of this treatment is limited to surface tumours and is reliant on macroelectrodes such as callipers and needles. Internal cancers are not currently amenable to electrochemotherapy. If it were possible to deliver permeabilising electric pulses to intraluminal gastrointestinal or urinary tract tumours endoscopically, or to intra-abdominal tumours via the laparoscopic approach, many cancers which are now deemed inoperable or which are unresponsive to conventional therapies would become accessible to electrochemotherapy. Tumour reduction or regression would be a feasible aim, facilitating palliation of symptoms, improved quality of life, prolonged survival and ultimately cure.

Keywords— Cancer, Electroporation, Electrochemotherapy, Gene Therapy, Endoscope.
I. INTRODUCTION

Electrochemotherapy is the local application of pulses of electric current to tumour tissue that renders the cell membranes permeable to otherwise impermeant or poorly permeant anticancer drugs (such as bleomycin), thereby facilitating a potent localized cytotoxic effect [1,2]. The modality has been demonstrated to be extremely effective in local tumour ablation [3]; however, the currently available technology for delivery of this treatment is limited to application to surface tumours and is reliant on macroelectrodes such as callipers and needles, so internal cancers are not currently amenable to electrochemotherapy. Our aim was to develop an endoscopic system capable of delivering electric pulses safely to intraluminal tissue. We have developed a delivery head, 'Endovac', suitable for attachment to an endoscope. It has been designed to enable tissue to be drawn into the chamber head under vacuum pressure, where the two electrodes are enclosed. Plasmid DNA or chemotherapeutic solutions can be injected via a needle catheter directly into the tissue. The system has been used successfully in large animal (pig) studies, with successful intraluminal gene and drug delivery demonstrated. Successful deployment of the Endovac greatly expands the therapeutic potential of electroporation.
ACKNOWLEDGMENT This research was funded in part by Science Foundation Ireland, The Health Research Board, The Mercy University Hospital and The Leslie Quick family.
REFERENCES

1. Mir LM, Belehradek M, Domenge C, et al. (1991) [Electrochemotherapy, a new antitumor treatment: first clinical trial]. C R Acad Sci III 313:613-618
2. Heller R, Jaroszeski MJ, et al. (1996) Phase I/II trial for the treatment of cutaneous and subcutaneous tumors using electrochemotherapy. Cancer 77:964-971
3. Heller R, Jaroszeski MJ, Reintgen DS, et al. (1998) Treatment of cutaneous and subcutaneous tumors with electrochemotherapy using intralesional bleomycin. Cancer 83:148-157
Author: Declan Soden
Institute: Cork Cancer Research Centre
Street: Mercy University Hospital, Grenville Street
City: Cork
Country: Ireland
Email: [email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 628, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
An experimental and numerical study of the induced transmembrane voltage and electroporation on clusters of irregularly shaped cells

G. Pucihar, T. Kotnik, and D. Miklavcic

University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana, Slovenia

Abstract— Despite the increasing use of electroporation, its mechanisms are still not completely understood. This is especially the case in tissues, due to their complicated structure. To study the interaction of the electric field with tissues at the single-cell level, we performed our study on cell clusters. We calculated the induced transmembrane voltage on numerical models of cell clusters, compared the calculations with measurements of the induced voltage, and monitored the course of electroporation. Our results show that cells in clusters can behave differently when exposed to an electric field, depending on the parameters of the field. During the measurements of the voltage (long, low-voltage pulses), cells in clusters behaved as one giant electrically connected cell. In contrast, during electroporation (short, high-voltage pulses), cells behaved as electrically insulated and were electroporated individually. The different responses of cells in clusters to the electric field exposure could be attributed to changes in the properties of gap junctions.

Keywords— numerical model, cell clusters, electroporation
I. INTRODUCTION

When a cell is exposed to an external electric field, an induced transmembrane voltage (ITV) forms on its membrane. In the regions of the membrane where the ITV exceeds several hundred mV, the permeability of the membrane transiently increases, a phenomenon termed electroporation [1, 2]. Despite the increasing use of electroporation in different areas of biology and medicine, the events on the molecular scale that result in the increase in membrane permeability are still not completely understood. The mechanisms of electroporation are better understood in isolated cells or diluted cell suspensions than in tissues, where the irregular shapes of cells, their mutual shielding, and perhaps connections between them (e.g. gap junctions) could play a role. The complexity of tissue structure is the main reason why researchers often use simplified models of tissues in their studies, such as cell pellets, multicellular spheroids, or dense cell suspensions [3-5]. For the same reason, the numerical models of tissues are often macroscopic, where average or bulk electric properties (e.g. bulk conductivity and/or bulk permittivity) are assigned to the different types of tissues in the model, while a detailed cell structure is not considered [6, 7]; or, in the case of microscopic models, the models are constructed using simple geometrical shapes (hemispheres, cubes). Probably the only numerical study where a detailed cell structure was considered was performed by Gowrishankar and Weaver [8]. To better understand how the electric field interacts with tissues at the single-cell level, which in turn determines the macroscopic behavior of tissue, we decided to perform our study on monolayer clusters of irregularly shaped cells. Regarding the shape, density, and connections between cells, such cell clusters are in their complexity close to tissues and could provide new insights into tissue electroporation. Besides, they could provide findings otherwise not accessible to macroscopic models. We began our study by constructing a numerical model of a cell cluster. The ITV was then calculated on this model and compared with measurements of the ITV on the same cells from which the model was constructed. Finally, the course of electroporation was monitored and the results were compared with the measurements and calculations of the ITV.

II. MATERIALS AND METHODS

A detailed description of the model construction, the measurements of the induced transmembrane voltage and the monitoring of electroporation can be found in [9]. A short summary of the methods is given below.

A. Construction of the model

Three-dimensional models of cell clusters were constructed from a sequence of microscopic fluorescence images representing cross-sections of a CHO cell cluster attached to the cover glass. Fluorescence images were obtained by staining the cells with the fluorescent dye di-8-ANEPPS (Sigma, Saint Louis, USA). The images, acquired with a CCD camera (VisiCam 1280, Visitron, Germany) mounted on a fluorescence microscope (AxioVert 200, Zeiss, Germany), were processed to obtain contours of the cell edges.
The contours of individual cells were transformed to solid planes, combined into 3D objects, imported to the FEMLAB workspace (COMSOL Inc., Burlington, MA, USA), and merged together to form a model of a cluster (Fig. 1).
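The contour-extraction step itself is not detailed here. As an illustration only, a minimal sketch of finding cell-edge pixels in a thresholded fluorescence cross-section is given below; the function name and threshold value are hypothetical, and plain NumPy is used in place of whatever image-processing toolchain was actually employed:

```python
import numpy as np

def cell_edge_mask(image, threshold):
    """Return a boolean mask of edge pixels: pixels at or above the
    threshold that have at least one 4-neighbour below it."""
    fg = image >= threshold
    pad = np.pad(fg, 1, constant_values=False)
    # A pixel is interior if all four of its neighbours are foreground.
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return fg & ~interior

# Toy 5x5 "cross-section" with a 3x3 bright cell region.
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0
edges = cell_edge_mask(img, 0.5)
print(int(edges.sum()))  # 8 edge pixels surround the single interior pixel
```

In practice the resulting edge pixels would still need to be ordered into closed contours before being converted to solid planes, a step omitted from this sketch.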
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 639–642, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
640
G. Pucihar, T. Kotnik and D. Miklavcic
Fig. 1 Numerical model of a cluster of two CHO cells shown in Fig. 2. (A) The three-dimensional geometry of the model of a cluster constructed from three parallel cross-sections. The dimensions of the box were 84×84×27 µm. The grey-shaded faces are the electrodes, one set to 0.84 V and the other to the ground (electric field 100 V/cm). (B) Different side views of the model.

B. Modeling the cell membrane

Direct incorporation of a realistic cell membrane into the model is problematic. If the events inside the membrane are not of interest, the membrane can be replaced by a surface to which a boundary condition is assigned [9]: J = σm(Vo − Vi)/d. Here, J is the current density, σm is the specific membrane conductivity, d is the membrane thickness, and Vo, Vi are the electric potentials at the outer and inner surface of the membrane, respectively. In a model constructed in this way, the mesh of finite elements is generated without difficulty, as very small elements corresponding to the membrane are avoided [9].

C. Settings of the model and calculations of the induced transmembrane voltage

The calculations were performed in FEMLAB using the static current density application mode. The specific conductivity of the cell interior was set to 0.3 S/m and that of the cell exterior to 0.14 S/m [9]. The opposite vertical faces of the block were modeled as electrodes, which was done by assigning fixed electric potentials to both electrodes to obtain an electric field of 100 V/cm. The remaining faces of the block were modeled as insulating. The mesh was generated and the electric potential was computed using the finite element method. The ITV on a cluster was then calculated as the difference between the electric potentials inside and outside the outermost membranes of the cluster and plotted as a function of the relative arc length. Cells in the cluster were modeled either as electrically connected or electrically insulated. This was done by assigning a conductivity to the contact surface between cells, which was set in the former case to 1000 times the membrane conductivity and in the latter case to one half of the membrane conductivity (to account for its double thickness).

D. Measurements of induced transmembrane voltage
Induced transmembrane voltage was measured using the potentiometric fluorescent dye di-8-ANEPPS [10]. CHO cells were grown on cover glasses in culture medium. When clusters formed, the culture medium was replaced by SMEM medium containing 30 μM of the dye. After staining for 12 min at 4°C, the SMEM was replaced by an isoosmotic pulsing buffer [9]. Cells were then exposed to a 40 V, 100 ms electric pulse delivered via two parallel wire electrodes with a 4 mm distance between them. Before and during the pulse, a fluorescence image of the lowermost level of the cell cluster was acquired, with excitation at 490 nm and emission detected at 605 nm. The control image was then subtracted from the pulse image. Using a calibration curve (6%/100 mV, [9]), fluorescence changes were transformed to values of ITV, which were plotted as a function of the normalized arc length. The images were acquired using the same imaging system as described in Section A.

E. Monitoring the course of electroporation

Cells were grown and prepared as described in Section D. Prior to experiments, the culture medium was replaced by pulsing buffer containing 100 μM of the membrane-impermeant fluorescent dye Propidium Iodide (PI, Sigma, Saint Louis, USA). The fluorescence of this dye increases considerably upon entering an electroporated cell. A 400 V, 200 μs rectangular pulse was delivered to the electrodes as described in Section D, and fluorescence from the cells was monitored in 100 ms time steps.

III. RESULTS AND DISCUSSION

The fluorescence images of a cluster of two CHO cells, acquired during the exposure to a 100 V/cm electric field, are shown in Fig. 2A. After subtraction of the control image from these images, the changes in fluorescence due to the electric field exposure became noticeable (Fig. 2B). As the figure shows, the fluorescence of the left cell in the cluster decreased (dark - hyperpolarization), while the fluorescence of the right cell increased (bright - depolarization).
The fluorescence changes, measured along the outermost membranes of the cluster, were transformed to values of ITV (solid curves in Fig. 2C). As the results show, the ITV on the cell cluster varies between approximately 200 mV and -200 mV.
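The irregular cluster itself requires the numerical model, but the order of magnitude of this range can be cross-checked analytically with the steady-state Schwan equation for an isolated spherical cell. The sketch below assumes an illustrative equivalent radius of 13 µm, which is not a value taken from the paper:

```python
import math

def schwan_itv(E, r, theta):
    """Steady-state induced transmembrane voltage (V) on a spherical cell
    with a non-conductive membrane in a uniform field E (V/m), at polar
    angle theta (rad) from the field direction: 1.5 * E * r * cos(theta)."""
    return 1.5 * E * r * math.cos(theta)

E = 100e2   # 100 V/cm expressed in V/m
r = 13e-6   # assumed equivalent radius (illustrative only)
print(f"{schwan_itv(E, r, 0.0) * 1e3:.0f} mV")  # 195 mV, close to the ~200 mV measured
```

The agreement is only a sanity check: on irregular, connected cells the actual ITV profile deviates strongly from the cosine shape, which is exactly why the finite-element model is needed.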
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
An experimental and numerical study of the induced transmembrane voltage and electroporation on clusters of irregularly shaped cells
Fig. 2 Measurements of induced transmembrane voltage (ITV) on a cluster of two CHO cells. (A) The 8-bit fluorescence images of a cluster stained with di-8-ANEPPS and acquired during the exposure to a 100 V/cm, 100 ms rectangular pulse. Bar represents 10 µm. (B) Changes in fluorescence of a cluster obtained by subtracting the control image (not shown) from the image with pulse and shifting the greyscale range by 50%. The brightness of the image was automatically enhanced. White arrow shows the path along which the ITV was measured. (C) ITV as a function of normalized arc length measured along the outermost membranes of a cluster. Solid curve – measured values, black dashed curve – numerically calculated values for the case of electrically connected cells, red dashed curve – numerically calculated values for the case of electrically insulated cells (see below). The changes in fluorescence were transformed to ITV using a calibration curve (6%/100 mV).
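The processing chain in the caption (control-image subtraction, a 50% greyscale shift, and the 6%/100 mV calibration) can be sketched as follows. The mid-grey offset of 128 for 8-bit images and the function names are assumptions about the implementation, not taken from the paper:

```python
import numpy as np

def fluorescence_change(pulse_img, control_img):
    """Subtract the control image from the pulse image and shift the
    8-bit greyscale range by 50%, so that 128 = no change, brighter =
    depolarization, darker = hyperpolarization."""
    diff = pulse_img.astype(np.int16) - control_img.astype(np.int16) + 128
    return np.clip(diff, 0, 255).astype(np.uint8)

def delta_f_to_itv_mv(delta_f_percent, percent_per_100mv=6.0):
    """Linear calibration: 6 % fluorescence change per 100 mV of ITV."""
    return delta_f_percent / percent_per_100mv * 100.0

control = np.full((4, 4), 100, dtype=np.uint8)
pulse = control.copy()
pulse[0, 0] = 130   # brighter pixel: depolarized membrane region
out = fluorescence_change(pulse, control)
print(out[0, 0], out[1, 1])     # 158 128
print(delta_f_to_itv_mv(12.0))  # 200.0 (mV)
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit arithmetic would produce in hyperpolarized (darker) regions.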
Fluorescence images of the cluster, acquired from three parallel imaging planes, were used to construct a 3D model. Cells in the cluster were modeled either as electrically connected or electrically insulated. The calculated distribution of the electric potential for the cluster of electrically connected cells is shown for the lowermost layer in Fig. 3A1. While the potential outside the cluster varies from 0.84 V to 0 V, the potential inside remains at a constant value of 0.436 V, reflecting the electrical connection of both cells. For comparison, when cells in the cluster are modeled as electrically insulated, the potential outside the cluster varies similarly as in the case of connected cells, while the potential inside the cluster is now different for each cell: 0.495 V and 0.375 V for the left and the right cell, respectively (Fig. 3B1). The ITV calculated for the case of electrically connected cells varies between approximately 195 mV and -190 mV and is in good agreement with the measured ITV (Fig. 2C – black dashed curves). Red dashed curves in the same figure show the ITV calculated for the cluster of electrically insulated cells. Because in this case each cell in the cluster has a different intracellular electric potential, the course of the ITV along the outermost membranes is discontinuous and differs considerably from the measured ITV (Fig. 2C). In the presence of PI, the same cells were then electroporated with a single 400 V, 200 µs electric pulse. An increase in fluorescence was observed on both sides of each cell in the cluster, denoting the regions where electroporation occurred (Fig. 4). These regions are in agreement with the calculations of ITV for electrically insulated cells (Fig. 3B2) rather than for electrically connected cells in a cluster (Fig. 3A2). It appears that cells in clusters become electroporated individually, which differs from what could be expected from the ITV measurements, where the two cells in the cluster behaved as one giant cell (cf. Figs. 2B and 4A).
Fig. 3 The calculations of induced transmembrane voltage for electrically connected (A) and electrically insulated cells (B). (A1, B1) Distribution of the electric potential in the x-y plane for the lowermost cross-section of the cluster. Black curves represent the equipotentials and the arrow marks the path along which the ITV was measured (presented in Fig. 2C – black dashed curve). The scale is in volts. (A2, B2) Regions of the membrane where the absolute value of ITV is the highest.
Three minutes after pulse delivery, the cluster was completely fluorescent, with two bright regions indicating the cell nuclei (Fig. 4C). Similar experiments were performed on five additional clusters of increasing complexity in shape and increasing number of cells per cluster. The results were qualitatively the same as presented here: during the measurements of ITV (long, low voltage pulses) the cells in clusters behaved as one giant cell, while electroporation (short, high voltage pulses) occurred on each cell in a cluster individually (results not shown).
Fig. 4 Monitoring the electroporation of the cell cluster shown in Fig. 2: (A) 100 ms, (B) 1500 ms, and (C) 3 min after pulse delivery. The cluster was exposed to a single 400 V (1000 V/cm) rectangular unipolar pulse (200 µs). Propidium Iodide was added to the suspension before the pulse to visualize the electroporated regions. Bar represents 10 µm.
The different behavior of cells in clusters under electric field exposure was attributed to changes in the properties of the connecting pathways between cells in a cluster (i.e. gap junctions). The presence of gap junctions in the cells used in our study was confirmed with the scrape-loading test [11]. Opening and closing of gap junctions is a stochastic process, occurring on a time scale of hundreds of milliseconds to seconds [12]. It is therefore likely that the average conductivity of these channels on a shorter time interval (such as during electroporation) differs from the average conductivity on a longer time interval (such as during measurements of ITV). In addition, the higher pulse amplitude in electroporation experiments could alter the structure of gap junctions, rendering them less conductive. Increased conductivity of gap junctions (open channels) would therefore electrically connect the individual cells in a cluster, while decreased conductivity (closed channels) would electrically insulate them.

IV. CONCLUSIONS

In the present study, we demonstrated a different response of cells in clusters to electric field exposure depending on pulse duration. When exposed to long pulses of low amplitude, cells in clusters behaved as one giant cell. Specifically, cells facing the cathode were completely depolarized, while cells facing the anode were completely hyperpolarized. In contrast, when cells in clusters were exposed to short pulses of high amplitude, e.g. during electroporation, cells behaved as electrically insulated and were electroporated individually. The different responses of cells in clusters to electric field exposure could perhaps be attributed to changes in the properties (e.g. average conductivity) of gap junctions.
ACKNOWLEDGEMENTS This work was supported by the Slovenian Research Agency.
REFERENCES
1. Tsong T Y (1991) Electroporation of cell membranes. Biophys J 60:297-306
2. Teissie J, Eynard N, Gabriel B et al. (1999) Electropermeabilization of cell membranes. Adv Drug Deliver Rev 35:3-19
3. Abidor I G, Li L H, Hui S W (1994) Studies of cell pellets: II. Osmotic properties, electroporation, and related phenomena: membrane interactions. Biophys J 67:427-435
4. Pavlin M, Pavselj N, Miklavcic D (2002) Dependence of induced transmembrane potential on cell density, arrangement, and cell position inside a cell system. IEEE T Bio-Med Eng 49:605-612
5. Canatella P J, Black M M, Bonnichsen D M et al. (2004) Tissue electroporation: quantification and analysis of heterogeneous transport in multicellular environments. Biophys J 86:3260-3268
6. Pavselj N, Bregar Z, Cukjati D et al. (2005) The course of tissue permeabilization studied on a mathematical model of a subcutaneous tumor in small animals. IEEE T Bio-Med Eng 52:1373-1381
7. Sel D, Cukjati D, Batiuskaite D et al. (2005) Sequential finite element model of tissue electropermeabilization. IEEE T Bio-Med Eng 52:816-827
8. Gowrishankar T R, Weaver J C (2003) An approach to electrical modeling of single and multiple cells. Proc Natl Acad Sci USA 100:3203-3208
9. Pucihar G, Kotnik T, Valic B et al. (2006) Numerical determination of transmembrane voltage induced on irregularly shaped cells. Ann Biomed Eng 34:642-652
10. Gross D, Loew L M, Webb W (1986) Optical imaging of cell membrane potential changes induced by applied electric fields. Biophys J 50:339-348
11. El-Fouly M, Trosko J E, Chang C C (1987) Scrape-loading and dye transfer. A rapid and simple technique to study gap junctional intercellular communication. Exp Cell Res 168:422-430
12. Brink P R, Cronin K, Ramanan S V (1996) Gap junctions in excitable cells. J Bioenerg Biomembr 28:351-358

Author: Gorazd Pucihar
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Analysis of Tissue Heating During Electroporation Based Therapy: A 3D FEM Model for a Pair of Needle Electrodes

I. Lackovic1, R. Magjarevic1 and D. Miklavcic2

1 University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
2 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— Cancer electrochemotherapy and electro gene therapy are emerging methods in molecular medicine. Both are based on electroporation mediated introduction of foreign molecules (drugs, DNA) into target cells. Depending on the application (electrochemotherapy or DNA electrotransfer), pulse delivery protocols and electrode geometry are chosen to achieve an electric field above the permeabilization threshold in the target tissue volume. In this study we present an analysis of tissue heating as a potential side effect of strong electric pulses. The analysis is based on a 3D finite-element model of tissue in which a pair of needle electrodes was inserted. By setting the appropriate boundary conditions, we simulated driving of the electrodes with short, high voltage, electropermeabilizing pulses and with longer, lower voltage, electrophoretic pulses. Time dependent solutions for the electric field and temperature distribution were obtained with a FEM solver. The results show localized tissue heating near the electrodes, predominantly due to the sharp radial decrease of the electric field around the needles. For a given electrode geometry, the extent of thermal effects strongly depends on tissue electrical conductivity and on the parameters of the electric pulses (number of pulses, pulse amplitude and duration). When comparing a pair of needles to parallel plate electrodes, the results show that with needle electrodes it is harder to avoid tissue heating and to achieve, in a larger tissue volume, a local electric field between the reversible and irreversible permeabilization thresholds. The results of this numerical study are in agreement with previous in vivo experiments.

Keywords— bioelectric phenomena, electroporation, electrochemotherapy, modeling, finite-element method
I. INTRODUCTION

When cells are exposed to an external electric field, charging of cell membranes occurs and an induced component of transmembrane voltage develops. If the external electric field is high enough, structural changes in the cell membrane occur and an increase in membrane transport is observed. The phenomenon is known as membrane electroporation [1]. Since membrane permeability to ions and molecules increases (electropermeabilization), external electric fields of electroporative field strength can be applied for the introduction of foreign molecules into the cytosol. This is especially interesting for enhancing the delivery of chemotherapeutic drugs like bleomycin or cis-platinum to cancer cells (electrochemotherapy) or for the delivery of foreign genes (electro gene transfer) [2]. In cancer electrochemotherapy a train of short high voltage pulses is most often used (i.e. 8 square-wave pulses of 100 μs delivered at a repetition frequency of 1 Hz, with a voltage-to-distance ratio of up to 1500 V/cm). For the transfer of DNA a train of long low voltage pulses is much more effective (i.e. 8 rectangular pulses of 50 ms delivered at a repetition rate of 1 Hz, with a voltage-to-distance ratio up to 250 V/cm). The reason for using longer pulses is to enhance electrophoretic transport, which is essential for the transfer of charged macromolecules such as DNA. Electroporation, as an intrinsically non-thermal phenomenon, is reversible up to a certain level of electric field. At higher levels, electroporation becomes irreversible. Aside from electroporation, during the delivery of strong electric pulses a high current density develops in the tissue, causing Joule heating. Similarly to irreversible electroporation, excessive Joule heating can also lead to cell death. Thermal damage of tissue can occur at field intensities considerably lower than those required to induce electroporation, especially if longer pulses are used. For the overall success of electroporation based therapy, it is important to reversibly permeabilize the tissue without significant damage either due to irreversible electroporation or to excessive heating. Several authors have already investigated tissue heating during electroporation, both for needle and plate electrodes [3, 4]. However, those studies were based on two-dimensional geometrical models with additional simplifications regarding the modeling of pulse trains. In our previous publication we presented a three-dimensional finite-element analysis of tissue heating for parallel plate electrodes [5].
In this contribution we extend our previous work to the case when electric pulses are applied to tissue through a pair of needle electrodes.

II. METHODS

The mathematical model of tissue heating during electroporation is based on two partial differential equations: one describing DC current conduction and the other describing
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 631–634, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
heat transfer. For the chosen geometry of tissue and electrodes, we developed a 3D finite element model. We numerically solved the model for electrodes driven by different pulse trains. We used the commercial finite element modeling environment FEMLAB 2.3 (Comsol AB, Sweden), with additional functions that we implemented in MATLAB 6.5 (The MathWorks, Inc., USA). All calculations were performed on a PC with a Pentium 4 processor at 2.4 GHz and 1 GB of RAM, running Windows 2000 Professional Edition.

A. PDE formulation of the problem

Assuming that the electric current density in tissue is divergence-free, the electric potential ϕ satisfies:

∇·(σ∇ϕ) = 0    (1)

where σ is the electric conductivity. Including the Joule heating term J·E in the bioheat equation we obtain [5]:

ρc ∂T/∂t = ∇·(k∇T) − ρb ωb cb (T − Tb) + Qm + J·E    (2)

Here T is the temperature, t is the time, ρ, c and k are the density, the heat capacity and the thermal conductivity of tissue, ωb is the blood perfusion, ρb and cb are the density and the heat capacity of blood, Tb is the temperature of the arterial blood, Qm is the metabolic heat, E = −∇ϕ is the electric field and J is the current density. These two PDEs mathematically model the electric and thermal behavior of tissue exposed to electric pulses. The equations are coupled and have to be solved simultaneously. Namely, the Joule heating term:

J·E = σE²    (3)

which can be obtained from the solution of equation (1), acts as a distributed heat source in the heat equation (2). Additional coupling between equations (1) and (2) exists due to the temperature dependence of tissue electrical conductivity:

σ = σ0[1 + α(T − T0)]    (4)
where α is the temperature coefficient.

B. Geometry modeling and meshing

The geometrical model of tissue with a pair of needle electrodes is shown in Fig. 1. The size of the tissue block, the interelectrode distance, the diameter of the needles and the insertion depth are the same as in the previous work [6]. The differences are the shape of the needle tip, which is conical in the present model, and the meshing of just one fourth of the entire geometry, owing to the two symmetry planes. Due care was taken to obtain a high quality mesh near the needles, where a steep change of electric potential is expected. The final mesh of one quarter of the geometry shown in Fig. 1 consisted of approximately 23000 linear tetrahedral elements.

Fig. 1 Geometrical model of tissue and electrodes. The length (x), the width (y) and the thickness (z) of the tissue segment: 32 mm × 32 mm × 16 mm. Needle diameter: 0.7 mm; interelectrode distance: 8 mm; insertion depth of electrodes: 7 mm. Dashed lines represent symmetry planes.

C. Tissue properties and boundary conditions

Physical properties of the tissue (rat liver) and the electrodes (stainless steel) were taken from the literature [7]. The nominal electrical conductivity of rat liver used in this study was 0.126 S/m. Measurements of electrical conductivity during high voltage pulses indicate that tissue electrical conductivity increases due to electroporation [8]. To analyze how this affects the temperature rise in the tissue, we also ran the simulations for two and four times the nominal conductivity (i.e. 0.252 S/m and 0.504 S/m). For all boundaries we set appropriate electrical and thermal boundary conditions. We modeled the driving of electrodes with pulse trains by setting time dependent boundary conditions at the electrodes. Two different protocols were simulated: a train of 8 short high voltage (HV) pulses (voltage-to-distance ratio 1500 V/cm, pulse duration 100 μs) and a train of 8 longer (50 ms) low voltage (LV) pulses (250 V/cm). The pulse repetition frequency was 1 Hz in all cases. The first protocol is typical for electrochemotherapy, while the second is effective for gene transfer. Actual
voltage amplitudes were calculated for the given interelectrode distance (8 mm) to obtain the desired voltage-to-distance ratios (i.e. 1200 V corresponds to 1500 V/cm and 200 V to 250 V/cm). These particular voltage-to-distance ratios were chosen as in the experimental studies [2, 8] and in our previous modeling study for plate electrodes [5]. It is important to notice that, in contrast to parallel plate electrodes, the voltage-to-distance ratio is not a good estimate of the local electric field in the tissue around needle electrodes.
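The arithmetic behind the stated amplitudes is simply the desired ratio times the interelectrode distance (a nominal figure only for needles, per the caveat above); the function name is hypothetical:

```python
def applied_voltage(ratio_v_per_cm, distance_cm):
    """Electrode voltage for a desired nominal voltage-to-distance ratio."""
    return ratio_v_per_cm * distance_cm

print(applied_voltage(1500, 0.8))  # 1200.0 V -> HV (electrochemotherapy) protocol
print(applied_voltage(250, 0.8))   # 200.0 V  -> LV (electrophoretic) protocol
```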
We selected three characteristic points in the x-y plane to track the time course of temperature during pulse delivery: 1 – in the tissue, exactly in the middle between the electrodes, at a depth of 3.5 mm from the tissue surface; 2 – in the tissue, 2 mm from the electrode surface (in the direction of the opposite electrode), at the same depth of 3.5 mm; and 3 – exactly at the contact between electrode and tissue, again at a depth of 3.5 mm. The points are marked in Fig. 2.

III. RESULTS

The comparison of the calculated temperature distributions immediately after the last pulse in the train is shown in Fig. 2. HV pulses and LV pulses are compared, and the influence of tissue electrical conductivity (0.126 S/m – nominal vs. 0.504 S/m – increased) is shown. Due to the symmetry, solutions in other parts of the geometry are easily obtained by mirroring.

Fig. 2 Temperature distribution immediately after the last pulse in the train of eight voltage pulses for two values of tissue electrical conductivity (0.126 S/m and 0.504 S/m), for short HV pulses (1500 V/cm, 8×100 μs, 1 Hz) and long LV pulses (250 V/cm, 8×50 ms, 1 Hz). These temperature distributions correspond to the time step indicated with a marker (↑) in Fig. 3. Note the different temperature scales.

Fig. 3 Time course of temperature in the selected points during a train of HV pulses (1500 V/cm, 8×100 μs, 1 Hz) and a train of LV pulses (250 V/cm, 8×50 ms, 1 Hz) for two values of tissue electrical conductivity (0.126 S/m and 0.504 S/m). Marker (↑) indicates the time step corresponding to the temperature distributions shown in Fig. 2. The location of the selected points is shown in Fig. 2. Note the different temperature scales.
The time course of temperature at these three points is shown in Fig. 3. Again HV and LV pulse trains are compared, for the nominal electrical conductivity of liver (solid lines) and for the four times higher conductivity (dashed lines).

IV. DISCUSSION AND CONCLUSIONS

The results for both pulsing protocols show highly localized tissue heating, with the maximal temperature rise in the bulk near the electrode tip. The region of increased temperature forms around the needles, where the electric field and current density are the highest. This region overlaps with the regions where electroporation occurs [6, 8]. For the electrochemotherapy protocol (HV pulses: 1500 V/cm, 8×100 μs, 1 Hz) tissue heating exists but is not critical (Fig. 3), even if we assume considerably (4×) increased tissue conductivity. We found the same result in the numerical study for plate electrodes [5]. Both studies are in agreement with experiments and confirm the safety of the standard electrochemotherapy pulsing protocol (up to 1500 V/cm, 8×100 μs, 1 Hz) with respect to tissue thermal damage. The results of the present study also show that successive HV pulses cumulatively increase the bulk tissue temperature; there is no cooling of the bulk tissue between the pulses, even though the pulses are much shorter than the pauses between them (100 μs vs. 1 s). We see from the results that Joule heating strongly depends on tissue electrical conductivity and on the parameters of the electric pulses - pulse duration, pulse amplitude and the number of pulses. Interestingly, the repetition frequency of electropermeabilization pulses can safely be increased without fear of overheating. This result was validated in the most recent electrochemotherapy study on patients where, for the first time, a pulse repetition frequency of 5 kHz was used and proved to be safe and even more effective in tumor treatment than the standard repetition frequency of 1 Hz [9].
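As a rough zero-dimensional cross-check of these trends, one can drop the conduction, perfusion and metabolic terms from Eq. (2) and accumulate the Joule heating per pulse adiabatically (a worst-case bound), updating σ with Eq. (4). The liver ρ, c and α values below are typical literature figures assumed for illustration, not the exact model parameters:

```python
RHO_C = 1060 * 3600   # assumed density * heat capacity of liver, J/(m^3*K)
ALPHA = 0.015         # assumed temperature coefficient of conductivity, 1/K

def adiabatic_rise(sigma0, e_field, pulse_width, n_pulses):
    """Worst-case temperature rise: per pulse, dT = sigma*E^2*t / (rho*c),
    with sigma updated via Eq. (4) as the tissue warms."""
    d_temp = 0.0
    for _ in range(n_pulses):
        sigma = sigma0 * (1 + ALPHA * d_temp)   # Eq. (4)
        d_temp += sigma * e_field ** 2 * pulse_width / RHO_C
    return d_temp

hv = adiabatic_rise(0.126, 1.5e5, 100e-6, 8)  # 1500 V/cm, 8 x 100 us
lv = adiabatic_rise(0.126, 2.5e4, 50e-3, 8)   # 250 V/cm, 8 x 50 ms
print(f"HV rise ~ {hv:.2f} K, LV rise ~ {lv:.2f} K")
```

Even this upper bound keeps the HV train well below 1 K, while the LV train reaches several kelvin, consistent with the trends in Fig. 3.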
The simulated train of electrophoretic pulses (LV pulses: 250 V/cm, 8×50 ms, 1 Hz) is likely to cause localized thermal damage, especially if we assume highly conductive tissue (Fig. 3). This numerical result can explain the tissue damage observed in some in vivo electrotransfection studies. For the success of electrotransfection and the minimization of thermal damage, optimization of the pulsing protocol is necessary. This is particularly important if needle electrodes are to be used.
To conclude, compared with parallel plate electrodes, with a pair of needle electrodes it is harder to avoid tissue heating and to achieve, in a larger tissue volume, a local electric field between the reversible and irreversible permeabilization thresholds. This is an important guideline for clinicians when considering the appropriate electrode type for cancer electrochemotherapy or electro gene transfer.
ACKNOWLEDGMENT This work was funded within the program of bilateral scientific cooperation between the Republic of Croatia and the Republic of Slovenia, and by national research grants.
REFERENCES
1. Neumann E, Sowers A E, Jordan C A (1989) Electroporation and electrofusion in cell biology. Plenum Press, New York
2. Mir L M (2000) Therapeutic perspectives of in vivo cell electropermeabilization. Bioelectrochem 53:1-10
3. Davalos R V, Rubinsky B, Mir L M (2003) Theoretical analysis of the thermal effects during in vivo tissue electroporation. Bioelectrochem 61:99-107, doi:10.1016/j.bioelechem.2003.07.001
4. Pliquett U (2003) Joule heating during solid tissue electroporation. Med Biol Eng Comput 41:215-219
5. Lackovic I, Magjarevic R, Miklavcic D (2005) Analysis of tissue heating during electroporation based therapy: A 3D FEM model for plate electrodes. IFMBE Proc, vol. 8, Tsukuba, Japan, 2005
6. Miklavcic D, Semrov D, Mekid H, Mir L M (2000) A validated model of in vivo electric field distribution in tissues for electrochemotherapy and for DNA electrotransfer for gene therapy. Biochim Biophys Acta 1523:233-239
7. Duck F A (1990) Physical properties of tissue: A comprehensive reference book. Academic Press, London
8. Sel D, Cukjati D, Batiuskaite D, Slivnik T, Mir L M, Miklavcic D (2005) Sequential finite element model of tissue electropermeabilization. IEEE Trans Biomed Eng 52:816-827, doi:10.1109/TBME.2005.845212
9. Marty M, Sersa G, Garbay J R, Gehl J, Collins C G, Snoj M, Billard V, Geertsen P F, Larkin J O, Miklavcic D, Pavlovic I, Paulin-Kosir S M, Cemazar M, Morsli N, Soden D M, Rudolf Z, Robert C, O'Sullivan G C, Mir L M (2006) Electrochemotherapy – An easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: Results of ESOPE (European Standard Operating Procedures of Electrochemotherapy) study. Eur J Cancer Suppl 4:3-13, doi:10.1016/j.ejcsup.2006.08.002

Author: Igor Lackovic
Institute: University of Zagreb, Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Antitumor effectiveness of electrotransfer of p53 into murine sarcomas alone or combined with electrochemotherapy using cisplatin

M. Cemazar1, A. Grosel1, S. Kranjc1 and G. Sersa1

1 Institute of Oncology, Department of Experimental Oncology, Zaloska 2, Ljubljana, Slovenia
Abstract— The aim of our study was to evaluate the feasibility and therapeutic potential of electrotransfer of p53, alone or combined with electrochemotherapy using cisplatin, in two murine sarcomas with different p53 status. The antitumor effectiveness of three consecutive electrotransfers of p53 was higher in the p53 wild-type LPB tumors than in the p53-mutated SA-1 tumors, resulting in 21.4% of tumor cures in LPB tumors and 12.5% in SA-1 tumors. Pretreatment of tumors with electrotransfer of p53 enhanced the chemosensitivity of both tumor models to electrochemotherapy with cisplatin. After only one application of this treatment combination in the LPB tumor model, the specific tumor growth delay was prolonged in the combined treatment group compared to electrotransfer of p53 or electrochemotherapy with cisplatin alone, whereas in SA-1 tumors this treatment combination resulted in 31.6% of cured animals. The results of our study show that electrotransfer of p53, alone or combined with electrochemotherapy, is a feasible and effective treatment of tumors. The combination of electrotransfer and electrochemotherapy after only one application resulted in complete regression of tumors.

Keywords— electrotransfer, solid tumors, p53, electrochemotherapy, cisplatin.
I. INTRODUCTION

Electroporation of various tissues facilitates delivery of naked DNA, resulting in increased transfection efficiency of either reporter or therapeutic genes compared to injection of plasmid DNA alone [1,2]. Based on the results of preclinical studies of electrogene therapy in the treatment of cancer and other diseases, it can be foreseen as an alternative to viral gene therapy in humans. Several studies have demonstrated that reintroduction of wild-type p53 induces regression of tumors of different histologies in vitro and in vivo [3,4]. Based on these results, several clinical studies of gene therapy with p53 in different types of tumors have been initiated, with limited success [5]. In order to improve efficacy, gene therapy with p53 has been combined with other cytotoxic therapies that induce apoptotic cell death, predominantly cisplatin and ionizing radiation [6]. Electroporation can be used both for gene therapy, as in electrogene therapy with p53, and for drug delivery, as in electrochemotherapy with cisplatin [7-9]. Our hypothesis was that, given the increased
DNA damage due to the increased cellular concentration of cisplatin achieved by electroporation (i.e. electrochemotherapy), pretreatment of these tumors by electrotransfer of p53 could result in an improved antitumor effect. We therefore tested the effectiveness of the combined therapy in two murine tumor models with different p53 status.

II. MATERIALS AND METHODS

A. Plasmids and drug
Plasmid DNA (pDNA), the pCMV Neo-Bam vector (pCMV) and pC53-SN3, human wild-type p53 cDNA inserted in the pCMV Neo-Bam vector (p53), were a gift from B. Vogelstein (Johns Hopkins University, Baltimore, MD, USA). pDNAs were isolated from bacterial host strains with the Qiagen Endo-free Maxi Kit (Qiagen, Hilden, Germany). Cisplatin (Cisplatyl, Aventis, France) was dissolved in 0.9% NaCl solution at a concentration of 0.5 mg/ml and stored in aliquots at -20°C until use.

B. Animals and tumor models
C57Bl/6 and A/J mice (8-10 weeks old, both sexes) were purchased from the Institute of Pathology (Medical Faculty, University of Ljubljana, Slovenia). All animal experiments were carried out in accordance with the guidelines for animal experiments of the EU directives, and permission was obtained from the Ministry of Agriculture, Forestry and Food of the Republic of Slovenia (approval No. 323-245/2002). Two murine fibrosarcomas, LPB and SA-1, were used. Subcutaneous tumors were induced by injecting a suspension of 1.3 × 10^6 LPB cells/0.1 ml or 5 × 10^5 SA-1 cells/0.1 ml dorsolaterally into C57Bl/6 or A/J mice, respectively. When the tumors reached a volume of approximately 40 mm³, mice were randomly divided into experimental groups of at least 6 mice.

C. Electrotransfer of p53 alone and combined with electrochemotherapy
Plasmid DNA was injected intratumorally (50 μg/tumor). One minute after injection, eight square-wave electric
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 582–585, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
pulses of 600 V/cm (amplitude to electrode distance ratio) and 5 ms duration at a repetition frequency of 1 Hz were delivered by a Jouan GHT 1287 generator (Jouan, St. Herblain, France). Electrotransfer was performed three times, every 48 h. In the case of combined treatment, electrotransfer was performed 24 h before electrochemotherapy. For treatment of tumors with electrochemotherapy, cisplatin was injected intravenously (i.v., 4 mg/kg in 50 μl/10 g) three minutes prior to delivery of the electric pulses. Eight electric pulses of 1300 V/cm and 100 μs duration at a repetition frequency of 1 Hz were delivered in the same manner as in the electrotransfer protocol. The response of tumors to the different therapies was followed by measuring three orthogonal diameters with a Vernier caliper; tumor volume was calculated by the formula for an ellipsoid. Animals with tumors in regression were checked for tumor regrowth at 4-5 day intervals for up to 100 days; if no regrowth occurred by then, they were considered cured (complete response). Experimental data were analyzed by one-way ANOVA followed by the Holm-Sidak method for multiple comparisons using SigmaStat software (SPSS Inc., Chicago, USA).

III. RESULTS

A. Antitumor effectiveness of electrotransfer of p53
Electrotransfer of p53 was an effective therapy in both tumor models, as demonstrated by tumor growth curves and tumor curability (Fig. 1). In LPB tumors electrotransfer of p53 (EGTp53) delayed tumor growth by 6.8 days compared to the control group, and in SA-1 tumors by 9.7 days. Electrotransfer of p53 resulted in cures of 21.4% of LPB and 12.5% of SA-1 tumors. Analysis of the interaction of p53 gene therapy and electroporation of tumors demonstrated that p53 gene therapy was synergistically potentiated by electroporation in LPB tumors, whereas an additive interaction was observed in SA-1 tumors.

B.
Antitumor effectiveness of electrotransfer of p53 combined with electrochemotherapy
Pretreatment of LPB tumors with electrotransfer of p53 enhanced the chemosensitivity of tumors to electrochemotherapy, resulting in a 6-fold potentiation of the effect: from 0.5 days specific tumor growth delay in tumors treated by p53 injected i.t. plus electrochemotherapy, to 2.9 days in the combined treatment with electrotransfer and electrochemotherapy (Fig. 2a, Fig. 3). Furthermore, the combination of electrotransfer of p53 and electrochemotherapy resulted in a 2-fold potentiation compared to treatment with electric pulses
Fig. 1. Tumor growth curves of LPB (a) and SA-1 (b) tumors treated by electrogene therapy with p53 or pCMV. The number of tumor-free animals is given in parentheses.
alone and electrochemotherapy (EP(EGT)ECT), and less than 2-fold potentiation compared to electrotransfer of p53 alone. In SA-1 tumors there was no effect of electrotransfer of p53 compared to therapy with p53 combined with electrochemotherapy, as treatment with electrochemotherapy alone was already very effective (21.7 days specific growth delay) (Fig. 2b, Fig. 3). However, electrotransfer of p53 combined with electrochemotherapy resulted in a 3-fold potentiation compared to treatment with electric pulses alone and electrochemotherapy (EP(EGT)ECT), and more than 6-fold potentiation compared to electrotransfer of p53 alone. In addition, the combined treatment with electrotransfer and electrochemotherapy, as well as the combined treatment with gene therapy and electrochemotherapy, resulted in tumor cures in SA-1 tumors (31.6% and 26.3%, respectively), while in LPB tumors no complete responses were observed. Furthermore, in SA-1 tumors treated by p53 injected i.t. the specific growth delay was increased (2.34),
Fig.3. Specific tumor growth delay of LPB and SA-1 tumors treated by combined therapy of electrogene therapy with p53 and electrochemotherapy with cisplatin.
Fig.2. Tumor growth curves of LPB (a) and SA-1 (b) tumors treated by combined therapy of electrogene therapy with p53 and electrochemotherapy with cisplatin.
while in LPB tumors this treatment did not result in any antitumor effectiveness, indicating that SA-1 tumors are more susceptible to gene therapy with p53.

IV. DISCUSSION

Electrically assisted gene delivery for the treatment of cancer has already been employed for delivery of various therapeutic genes, such as Stat3, endostatin, IL-2, IL-12, IL-18, IFN-α, HSVtk/GCV, GM-CSF and p53 (reviewed in [2]). These studies demonstrated the feasibility of this approach as well as antitumor effectiveness in different tumors and metastases. Antitumor effectiveness of electrotransfer of p53 has so far been tested in two human xenografts, with suppression of tumor growth during the course of the treatment [9]. We and others showed that electrotransfer of p53 has an antitumor effect in PC-3 tumors grown in nude mice. However, due to differences in the timing of DNA injection with regard to the application of electric pulses, our study
resulted in better antitumor effectiveness. Intratumoral injection of the wild-type p53 gene into p53-mutated esophageal xenografts followed by electroporation also suppressed tumor growth [8]. In the present study electrotransfer of p53 delayed the growth of LPB tumors by 5 days and of SA-1 tumors by 3.3 days compared to tumors treated with electrically assisted delivery of pCMV. This study therefore demonstrated that introduction of the human p53 gene into both LPB and SA-1 murine tumors using electrically assisted gene delivery has a pronounced antitumor effect. Several other studies using the human p53 gene delivered by adenoviral vectors or liposomes to rodent tumors demonstrated the same effect [10-13]. Electrotransfer of p53 resulted in a synergistic interaction in LPB tumors, while in SA-1 tumors it was only additive. Nevertheless, treatment with intratumoral injection of p53 alone resulted in pronounced antitumor effectiveness in SA-1 tumors, indicating that SA-1 tumors are more susceptible to gene therapy. The status of the endogenous p53 protein differs between the two tumors: it is wild type in LPB tumors and mutated (homozygous) in SA-1 tumors. We can speculate that the reason for the synergistic effect of electroporation and intratumoral p53 injection in LPB tumors is that the electroporation-mediated addition of exogenous wild-type p53 to the endogenous protein contributed to increased cell death.

Besides demonstrating the feasibility and antitumor effectiveness of electrotransfer, our aim was to evaluate the feasibility and antitumor effectiveness of electrotransfer of p53 in combination with electrochemotherapy with cisplatin. Numerous studies at the preclinical and clinical level have already combined gene therapy with p53 with different anticancer drugs [13]. Two mutually exclusive views on the effect of p53 on chemosensitivity exist: p53 could increase chemosensitivity due to apoptosis, or decrease chemosensitivity because of growth arrest and DNA repair [15]. In our study the combination of electrotransfer of p53 with electrochemotherapy with cisplatin resulted in increased antitumor effectiveness, especially in the SA-1 sarcoma, where tumor cures were achieved after a single treatment. Electrotransfer of other therapeutic genes (IL-12, GM-CSF, IL-2, MBD2) has already been tested with electrochemotherapy using bleomycin in different tumor models [15-17]. Combination of electrochemotherapy with bleomycin or nedaplatin and electrogene therapy with p53 synergistically suppressed tumor growth of esophageal xenografts [8]. Our results also demonstrated that the antitumor effect was highest when both therapies were combined, resulting in 31.6% tumor cures in SA-1 tumors. In LPB tumors, however, tumor growth delay was significantly prolonged compared to electrotransfer of p53, cisplatin or electrochemotherapy alone, but no tumor cures were obtained. This indicates variability in the response of different tumor types to this treatment combination, which might be related to the status of endogenous p53. As already mentioned, p53 can either increase or decrease chemosensitivity. In our experiments increased chemosensitivity was observed in both tumor models. In the case of SA-1 tumors (mutant p53), due to the higher susceptibility of these tumors to gene therapy as well as to electrochemotherapy, the increase in chemosensitivity was more pronounced and resulted in tumor cures after a single combined treatment. In LPB (wild-type p53) tumors increased sensitivity after one treatment was not observed; the reason is probably the low amount of wild-type p53 (endogenous and exogenous) present in the tumors, since with repetitive treatment electrotransfer resulted in tumor cures.
In conclusion, this study demonstrates the feasibility and good antitumor effect of the combined therapy of electrotransfer and electrochemotherapy. Electrotransfer is on the verge of entering clinical trials, especially because electrically mediated gene delivery has proven to be safe in the clinical environment.
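The quantitative endpoints used above — tumor volume computed from three orthogonal caliper diameters with the ellipsoid formula, and growth delay between treated and control groups — can be sketched as follows. The linear interpolation to a reference volume and all numbers are illustrative assumptions, not data from the study:

```python
import math

def tumor_volume(d1, d2, d3):
    """Ellipsoid volume (mm^3) from three orthogonal diameters (mm): V = pi*a*b*c/6."""
    return math.pi * d1 * d2 * d3 / 6.0

def time_to_volume(days, volumes, threshold):
    """Day at which a growth curve first reaches `threshold` (linear interpolation)."""
    for (t0, v0), (t1, v1) in zip(zip(days, volumes), zip(days[1:], volumes[1:])):
        if v0 <= threshold <= v1:
            return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)
    raise ValueError("threshold volume not reached")

# Hypothetical growth curves (mm^3), illustrative only
days = [0, 4, 8, 12, 16]
control = [40, 90, 180, 350, 640]
treated = [40, 55, 80, 150, 290]

# Growth delay: extra time the treated tumors need to reach a reference volume
delay = time_to_volume(days, treated, 250) - time_to_volume(days, control, 250)
print(f"volume of a 4.0 x 3.5 x 3.0 mm tumor: {tumor_volume(4.0, 3.5, 3.0):.1f} mm^3")
print(f"growth delay at 250 mm^3: {delay:.1f} days")
```

With real measurements this would be applied per animal and averaged per group, as done for the growth delay values reported in the Results.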
ACKNOWLEDGMENT

The authors acknowledge the financial support from the state budget by the Slovenian Research Agency (project No. J3-7044 and P3-0003).

REFERENCES

1. Cemazar M, Golzio M, Rols MP, Sersa G, Teissie J (2006) Electrically-assisted nucleic acid delivery in vivo: Where do we stand? Review. Current Pharmaceutical Design 12(29):3817-3825
2. Andre F, Mir LM (2004) DNA electrotransfer: its principles and an updated review of its therapeutic applications. Gene Ther 11 Suppl 1:S33-S42
3. Baker SJ, Markowitz S, Fearon ER, et al. (1990) Suppression of human colorectal carcinoma cell growth by wild-type p53. Science 249:912-915
4. Roth JA, Swisher SG, Meyn RE (1999) p53 tumor suppressor gene therapy for cancer. Oncology (Huntingt) 13:148-154
5. Roth JA, Nguyen D, Lawrence DD, et al. (1996) Retrovirus-mediated wild-type p53 gene transfer to tumors of patients with lung cancer. Nat Med 2:985-991
6. Nguyen D, Spitz F, Yen N, Cristiano R, Roth J (1996) Gene therapy for lung cancer: Enhancement of tumor suppression by a combination of sequential systemic cisplatin and adenovirus-mediated p53 gene transfer. J Thorac Cardiovasc Surg 112:1372-1377
7. Cemazar M, Grosel A, Glavac D, et al. (2003) Effects of electrogenetherapy with p53wt combined with cisplatin on survival of human tumor cell lines with different p53 status. DNA Cell Biol 22:765-775
8. Matsubara H, Maeda T, Gunji Y, et al. (2001) Combinatory antitumor effects of electroporation-mediated chemotherapy and wild-type p53 gene transfer to human esophageal cancer cells. Int J Oncol 18:825-829
9. Mikata K, Uemura H, Ohuchi H, et al. (2002) Inhibition of growth of human prostate cancer xenograft by transfection of p53 gene: Gene transfer by electroporation. Mol Cancer Ther 1:247-252
10. Kralj M, Pavelic J (2003) p21WAF1/CIP1 is more effective than p53 in growth suppression of mouse renal carcinoma cell line Renca in vitro and in vivo. J Cancer Res Clin Oncol 129:463-471
11. Li Z, Rakkar A, Katayose Y, et al. (1998) Efficacy of multiple administrations of a recombinant adenovirus expressing wild-type p53 in an immune-competent mouse tumor model. Gene Ther 5:605-613
12. Hsiao M, Tse V, Carmel J, et al. (1997) Intracavitary liposome-mediated p53 gene transfer into glioblastoma with endogenous wild-type p53 in vivo results in tumor suppression and long-term survival. Biochem Biophys Res Commun 233:359-364
13. Horowitz J (1999) Adenovirus-mediated p53 gene therapy: overview of preclinical studies and potential clinical applications. Curr Opin Mol Ther 1:500-509
14. Blagosklonny MV, El-Deiry WS (1998) Acute overexpression of wt p53 facilitates anticancer drug-induced death of cancer and normal cells. Int J Cancer 75:933-940
15. Kishida T, Asada H, Itokawa Y, et al. (2003) Electrochemo-gene therapy of cancer: intratumoral delivery of interleukin-12 gene and bleomycin synergistically induced therapeutic immunity and suppressed subcutaneous and metastatic melanomas in mice. Mol Ther 8:738-745
16. Heller L, Pottinger C, Jaroszeski MJ, Gilbert R, Heller R (2000) In vivo electroporation of plasmids encoding GM-CSF or interleukin-2 into existing B16 melanomas combined with electrochemotherapy induces long-term antitumour immunity. Melanoma Res 10:577-583
17. Ivanov MA, Lamrihi B, Szyf M, Scherman D, Bigey P (2003) Enhanced antitumor activity of a combination of MBD2-antisense electrotransfer gene therapy and bleomycin electrochemotherapy. J Gene Med 5:893-899

Author: Maja Cemazar
Institute: Institute of Oncology Ljubljana
Street: Zaloska 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Bases and rationale of the electrochemotherapy
L.M. Mir1,2
1 CNRS UMR 8121, Institut Gustave-Roussy, Villejuif, France
2 Univ Paris-Sud, UMR 8121
Abstract— Electrochemotherapy (ECT) is a non-thermal tumour ablation modality. ECT is safe and effective on any type of solid tumour. ECT is based on the achievement of in vivo tumour cell electropermeabilization by means of electric pulses delivered locally to the tumours. ECT also relies on the use of non-permeant drugs possessing high intrinsic cytotoxicity (such as bleomycin), or low-permeant drugs with known efficacy (such as cisplatin), which act directly on the cellular DNA. These drugs have to be injected before delivery of the electric pulses. Cell electropermeabilization, a physical procedure that affects all tumour cell types, allows these anticancer drugs to enter the cells, thus magnifying their cytotoxicity by orders of magnitude. There are, however, further reasons why this treatment is simple, safe and efficient. Selectivity towards the dividing tumour cells and the safety of the procedure are due to the fact that, at least for bleomycin injected intravenously, the treatment causes a mitotic cell death that rapidly kills the dividing tumour cells and spares the neighbouring non-dividing normal cells. Safety is also due to the vascular effects of the electric pulses: ECT provokes a transient vascular lock which prevents further bleeding, and even stops previous bleeding in the case of haemorrhagic nodules. As for efficacy, ECT efficacy is also sustained by a response of the host immune system, probably due to the type of cell death caused by the ECT. This explains why ECT is very well tolerated by the patients and why its efficacy is very high on the treated nodules, whatever the tumour's histological origin. Its use is presently standardized for skin and subcutaneous localisations. Keywords— Electropermeabilization, electroporation, bleomycin, cisplatin, antitumor treatment.
ACKNOWLEDGMENTS The author thanks the EU commission for funding the projects Cliniporator (QLK3-1999-00484) and ESOPE (QLK3-2002-02003) in the frame of the 5th FP. The author also thanks his colleagues within and outside these two projects. Work was also supported by grants of the CNRS, the IGR and the AFM, and by bilateral exchanges (PICS, Proteus …).
Author: Lluis M. Mir
Institute: UMR 8121 CNRS, Institut Gustave-Roussy, Univ Paris-Sud
Street: 39 rue C. Desmoulins
City: Villejuif, F-94805
Country: France
Email:
[email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 622, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Cell membrane fluidity at different temperatures in relation to electroporation effectiveness of cell line V79
Masa Kanduser1, Marjeta Sentjurc2 and Damijan Miklavcic1
1 University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, Ljubljana, Slovenia
2 Institute Jozef Stefan, Jamova 39, Ljubljana, Slovenia
Abstract— When a cell is exposed to short electric pulses of high amplitude, its membrane is transiently permeabilized. The characteristics of the cell play an important role in this process. In the present study the effect of cell membrane fluidity on electroporation was investigated. To obtain significant differences in cell membrane fluidity, the cell suspension was exposed to different temperatures for five minutes before and during pulse application. To exclude the effect of temperature on cell membrane resealing, only a small droplet of cell suspension was used for the cell membrane permeabilization assay, as it reached room temperature within a few seconds of removal from the electroporated sample. We found that the decrease in cell membrane fluidity caused by exposing cells to low temperature during electric pulse application significantly reduces the electroporation effectiveness of cell line V79. Keywords— electroporation, in vitro, cell membrane fluidity, order parameter, temperature.
I. INTRODUCTION

Application of short electric pulses with high amplitudes causes transient cell membrane permeabilization of the treated cell. This method, known as electroporation, is widely used in medicine and biotechnology for the introduction of non-permeant molecules into the cell interior [1, 2, 3]. For the best efficiency of the method, the electric pulse parameters should be chosen so as to permeabilize the cell membrane while preserving cell viability. Besides the electric pulse parameters, characteristics of the cell exposed to the electric field play an important role in the process [4, 5]. Among the cell characteristics that affect electroporation effectiveness, membrane fluidity could play an important role. Its effect on electroporation was studied previously using two different experimental approaches: cell membrane fluidity was altered by addition of different chemical substances [6], or different cell lines were used and the differences in their cell membrane fluidity were related to electroporation effectiveness [7]. In both cases cell membrane fluidity could not be clearly related to electroporation. When membrane fluidity is altered by addition of chemical substances [6], other cell processes can also be affected, while when comparing different cell lines [7] the differences in cell membrane fluidity were not pronounced enough, and it should be borne in mind that other biological factors may affect the behavior of different cell lines. These were the main reasons why we decided to further investigate the influence of cell membrane fluidity on electroporation in a single cell line.

The aim of the present study was to establish conditions under which differences in cell membrane fluidity are pronounced and to relate them to electroporation effectiveness. To obtain significant differences in cell membrane fluidity, the cell suspension was exposed to different temperatures for five minutes before and during electric pulse application. It was shown previously that the incubation temperature of the cell suspension before electroporation has no effect on the process. To exclude the effect of temperature on cell membrane resealing, only a small droplet of cell suspension was used for the cell membrane permeabilization assay, as it reached room temperature within a few seconds of removal from the electroporated sample. This experimental design allowed us to study the effect of cell membrane fluidity, as set by the temperature during electric pulse application, on electroporation effectiveness.

II. MATERIALS AND METHODS

The Chinese hamster lung fibroblast cell line V79 was grown in Eagle's minimum essential medium supplemented with 10% fetal bovine serum and antibiotics at 37°C and 5% CO2 for three to four days to reach confluence. Cells were then harvested by trypsinization and resuspended in electroporation medium at a concentration of 2 × 10^7 cells/ml. The Spinner modification of Eagle's minimum essential medium (pH 7.4), which does not contain calcium, was used as the electroporation medium. The cell suspension was exposed to a train of eight rectangular electric pulses with a repetition frequency of 1 Hz and amplitudes of 500, 700 and 900 V/cm in electroporation cuvettes with incorporated aluminum electrodes. Electric pulses were generated with a Cliniporator device.
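The pulse amplitudes above are specified as field strengths (V/cm), i.e. the generator voltage divided by the electrode distance, E = U/d. A minimal sketch of the corresponding generator settings; the 2 mm cuvette gap used here is an assumption for illustration only, as the paper does not state the electrode distance:

```python
# Pulse-train parameters from the Methods: 8 rectangular pulses at 1 Hz,
# nominal field strengths of 500, 700 and 900 V/cm.
PULSES = 8
REPETITION_HZ = 1.0
GAP_CM = 0.2  # ASSUMED electrode distance (2 mm); not stated in the paper

def generator_voltage(field_v_per_cm, gap_cm=GAP_CM):
    """Voltage to set on the generator for a desired field: U = E * d."""
    return field_v_per_cm * gap_cm

for field in (500, 700, 900):
    print(f"{field} V/cm -> {generator_voltage(field):.0f} V across a 2 mm gap")
```

For a 2 mm gap the three fields correspond to 100 V, 140 V and 180 V; a different cuvette gap scales these voltages proportionally.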
The volume of the cell suspension was 400 μl; this volume was required to maintain the desired temperature of the sample during pulse application. The desired temperature was reached within five minutes of incubation at 37°C, 25°C or 4°C, which took place immediately before electric field application. The temperature of the cell suspension did not change significantly during pulse application. In order to avoid any effect of temperature on cell membrane resealing after pulse application, the electroporated cell suspension was immediately placed at room temperature: a 50 μl droplet of electroporated cells was placed on multiwell plates and diluted with 950 μl of electroporation medium at 25°C. Cell membrane permeabilization was determined by the bleomycin method described previously in detail by Kotnik et al. The method is based on the effect of 5 nM bleomycin, which is cytotoxic only if it gains access to the cell interior, where it provokes double-strand breaks in the DNA molecule; in our case this occurred only when the cell membrane was electroporated. The cytotoxic effect of bleomycin resulting from cell membrane electroporation was determined by a clonogenic test [8].

Cell membrane fluidity was measured by electron paramagnetic resonance (EPR) spectroscopy using the methyl ester of 5-doxylpalmitate (MeFASL(10,3)) as a spin probe. For this purpose MeFASL(10,3) was prepared as a thin film on a glass tube. The cell suspension was incubated with the spin probe for 15 minutes at room temperature under constant shaking. The cell pellet was obtained by centrifugation and measured at 4°C, 25°C and 37°C with an X-band EPR spectrometer (Bruker ESP 300). Data on cell membrane fluidity were obtained by computer simulation of the experimental EPR spectra with the EPRsim 2.6 program [9, 10].
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 570–573, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
III. RESULTS

Cell membrane electroporation, determined as the percentage of bleomycin uptake at different temperatures, is presented in Fig. 1. At 37°C and 25°C cell membrane electroporation increases with increasing pulse amplitude; when 900 V/cm is applied, 80% membrane electroporation is achieved at both temperatures. When the temperature is reduced to 4°C, cell membrane electroporation is drastically reduced, and at 900 V/cm only 25% of cells are electroporated (Fig. 1).

Figure 1: Electroporation of the cell membrane at 37°C, 25°C and 4°C determined by bleomycin uptake. Eight rectangular electric pulses with repetition frequency 1 Hz, duration 100 μs and amplitudes 500, 700 and 900 V/cm were applied. Immediately after the pulse application electroporated cells were maintained at room temperature. The values are means of at least four independent experiments ± standard deviation.

The data on cell membrane fluidity were obtained by computer simulation of the experimental EPR spectra. The best fit of the calculated spectra to the experimental ones is presented in Fig. 2. Fig. 2 shows only the spectrum measured at 37°C, since the spectra obtained at 25°C and 4°C were processed in the same manner. To obtain a good fit it must be taken into account that each spectrum is a superimposition of three spectral components, which characterize three types of coexisting membrane domains (domains 1, 2 and 3) with different ordering and dynamics (Fig. 2).

Figure 2: Experimental spectrum of the V79 cell membrane recorded at 37°C and its best fit obtained by computer simulation with EPRsim 2.6, together with the corresponding spectral components characterizing the three types of membrane domains. MeFASL(10,3) was used as a spin probe that incorporates into the cell membrane, and cell pellets were measured at 37°C. The spectra obtained at 25°C and 4°C were processed in the same way.

The data obtained by computer simulation of the experimental EPR spectra show that cell membrane fluidity is determined primarily by the order parameter and the proportion of each type of membrane domain. In membranes of cell line V79 at 37°C and room temperature the order parameters of the three coexisting domains are comparable (Tab. 1), while the proportion of the two ordered domains increases only slightly when the temperature is decreased from 37°C to 25°C. However, when the temperature is further decreased to 4°C, a pronounced increase in the order parameters of all three domain types is observed. At the same time the proportion of the most ordered domain type (domain 3) increases: domain 3 represents 48% of all membrane domains at 37°C, 56% at 25°C, and finally predominates, representing 85%, in cells exposed to 4°C (Tab. 1). This means that the average membrane fluidity decreases with temperature, with a pronounced decrease at 4°C in comparison to 25°C and 37°C.

Table 1: Order parameters of the three coexisting types of membrane domains and the proportion of each domain type at 37°C, 25°C and 4°C in cell line V79. Data obtained by computer simulation of experimental spectra using EPRsim 2.6.

                        37°C           25°C           4°C
Order parameter
  domain 1              0.14 ± 0.1     0.12 ± 0.01    0.36 ± 0.9
  domain 2              0.28 ± 0.3     0.33 ± 0.01    0.38 ± 0.2
  domain 3              0.52 ± 0.4     0.59 ± 0.01    0.72 ± 0.1
Proportion of domain
  domain 1              36%            27%            3%
  domain 2              16%            17%            12%
  domain 3              48%            56%            85%
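The conclusion that average membrane fluidity decreases with temperature can be illustrated directly from Table 1 by weighting each domain's order parameter by its proportion (a higher mean order parameter means lower fluidity; the proportion-weighted mean is our own aggregation, not a quantity computed in the paper):

```python
# Order parameters and domain proportions from Table 1 (cell line V79),
# indexed as (37 degC, 25 degC, 4 degC)
order = {1: (0.14, 0.12, 0.36),
         2: (0.28, 0.33, 0.38),
         3: (0.52, 0.59, 0.72)}
proportion = {1: (0.36, 0.27, 0.03),
              2: (0.16, 0.17, 0.12),
              3: (0.48, 0.56, 0.85)}

for i, temp in enumerate(("37 degC", "25 degC", "4 degC")):
    # proportion-weighted mean order parameter at this temperature
    mean_s = sum(proportion[d][i] * order[d][i] for d in order)
    print(f"{temp}: mean order parameter = {mean_s:.2f}")
```

The weighted mean rises from about 0.34 at 37°C to 0.42 at 25°C and 0.67 at 4°C, mirroring the pronounced loss of fluidity at 4°C described above.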
IV. DISCUSSION

Cell membrane fluidity was related to electroporation of the cell membrane in cell line V79. The changes in cell membrane fluidity, reflected in an increase in the order parameter of all three domain types and an increased proportion of the domain with the highest order parameter, have as a consequence a significant reduction in cell membrane electroporation, i.e. from 80% at 37°C to 25% at 4°C (Fig. 1). This effect is not due to reduced bleomycin uptake caused by slower diffusion of molecules at lower temperatures during pulse application, as the cell suspension was placed at room temperature a few seconds after pulse application. Cell membrane resealing in this cell line reaches 50% twenty minutes after pulse application [7], and in the time available after pulse application, with cells maintained at room temperature, bleomycin is still able to enter the cytosol. The changes in cell membrane fluidity induced by temperature were a consequence of the changed order parameters and proportions of the coexisting domains in the cell membrane. It has been shown by EPR spectroscopy and computer simulation of EPR spectra that spin probes incorporated into the cell membranes of living cells experience two
[11], or more [7, 12], types of environment with different ordering and dynamics. It should be stressed that the lateral motion of the spin probe is slow on the time scale of EPR spectroscopy. Therefore an EPR spectral component describes only the properties of a spin label's immediate environment, which corresponds to domains on the nm scale. All the domains with the same mode of lipid motion define a certain domain type and are reflected in one spectral component. The number of coexisting domain types measured by EPR can depend on the spin probe used, but also on the cell line. The position of the spin probe used in our experiments (MeFASL(10,3)) is less well defined than that of the spin-labeled phosphatidylcholine used in the work of Swamy et al. [11], which could be the reason for the different number of domain types registered by the spin probes. The different approximations used in the computer simulation procedures [9, 11] could also produce discrepancies. Irrespective of that, the order parameters obtained with spin-labeled phosphatidylcholine with the doxyl group attached to different carbon atoms on the acyl chains were from 0.1 to 0.25 for the liquid disordered phase and from 0.3 to 0.6 for the liquid ordered phase [11], and seem comparable to our results; i.e., the liquid disordered phase would correspond to domain 1 and the liquid ordered phase to domain 3 at 37°C and 25°C (Tab. 1). At 4°C the proportion of domain 1 decreases to only 3% and its order parameter increases to 0.36 (Tab. 1), indicating the disappearance of the liquid disordered phase. Besides, the order parameter of the liquid ordered phase (domain 3) increases significantly. This could be explained by a lipid phase transition from a less ordered to a more ordered lipid phase and the corresponding decrease in membrane fluidity. A phase transition in this temperature range would be in good agreement with the results obtained in human platelets.
It was reported that the temperature at which the phase transition occurs is in the range from 1°C to 16°C [13]. Nevertheless, the disappearance of the liquid disordered phase at 4°C could be a good reason for the drastic reduction in cell membrane electroporation (Fig. 1). In conclusion, the decreased cell membrane fluidity caused by low temperature, which was a consequence of the disappearance of the liquid disordered domain and the increased order parameters, decreased membrane electroporation of the cell line V79 cultured in vitro. This is in accordance with the observation in the alga Valonia, where the voltage needed for electroporation increases with decreasing temperature [14].
ACKNOWLEDGMENT

This research was supported by the Slovenian Research Agency.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Cell membrane fluidity at different temperatures in relation to electroporation effectiveness of cell line V79
REFERENCES

1. Neumann E, Kakorin S, Toensing K (1999) Fundamentals of electroporative delivery of drugs and genes. Bioelectroch Bioenerg 48: 3-16.
2. Mir LM (2000) Therapeutic perspectives of in vivo electropermeabilization. Bioelectrochemistry 53: 1-10.
3. Teissie J, Eynard N, Vernhes MC et al. (2002) Recent biotechnological developments of electropulsation. A prospective review. Bioelectrochemistry 55: 107-112.
4. Rols MP, Teissié J (1992) Experimental evidence for the involvement of the cytoskeleton in mammalian cell electropermeabilization. Biochim Biophys Acta 1111: 45-50.
5. Cemazar M, Jarm T, Miklavcic D et al. (1998) Effect of electric-field intensity on electropermeabilization and electrosensitivity of various tumor cell lines. Electromagnetobiol 17: 263-272.
6. Rols MP, Dahhou F, Mishra KP, Teissié J (1990) Control of electric field induced cell membrane permeabilization by membrane order. Biochemistry 29: 2960-2966.
7. Kanduser M, Sentjurc M, Miklavcic D (2006) Cell membrane fluidity related to electroporation and resealing. Eur Biophys J 35: 196-204.
8. Kotnik T, Macek-Lebar A, Miklavcic D et al. (2000) Evaluation of cell membrane electropermeabilization by means of a non-permeant cytotoxic agent. Biotechniques 28: 921-926.
9. Filipic B, Strancar J (2001) Tuning EPR spectral parameters with genetic algorithm. Applied Soft Computing 1: 83-90.
10. Strancar J, Sentjurc M, Schara M (2000) Fast and accurate characterization of biological membranes by EPR spectral simulation of nitroxides. J Magn Reson 142: 254-265.
11. Swamy MJ, Ciani L, Ge M et al. (2006) Coexisting domains in the plasma membrane of live cells characterized by spin-label ESR spectroscopy. Biophys J 90: 4452-4465.
12. Koklic T, Pirs M, Zeisig R et al. (2005) Cell density influences on lateral domain structure of tumor cell membranes. J Chem Inf Model 45: 1701-1707.
13. Crowe JH, Tablin F, Tsvetkova et al. (1999) Are lipid phase transitions responsible for chilling damage in human platelets? Cryobiology 38: 180-191.
14. Zimmermann U (1982) Electric field-mediated fusion and related electrical phenomena. Biochim Biophys Acta 694: 227-277.

Corresponding author:

Author: Masa Kanduser
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Tržaška 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Electrochemotherapy in treatment of solid tumours in cancer patients

G. Sersa¹ for the ESOPE group

¹ Institute of Oncology Ljubljana, Department of Experimental Oncology, Zaloška 2, Ljubljana, Slovenia
Abstract— Electrochemotherapy consists of chemotherapy followed by local application of electric pulses to the tumour to increase drug delivery into cells. Drug uptake can be increased by electroporation only for those drugs whose transport through the plasma membrane is impeded. Among the many drugs that have been tested so far, only bleomycin and cisplatin have found their way from preclinical testing to clinical trials. This local drug delivery approach is aimed at the palliative treatment of cutaneous and subcutaneous tumour nodules of different histology. In clinical studies electrochemotherapy has proved to be a highly effective and safe approach for the treatment of cutaneous and subcutaneous tumour nodules. The treatment response for various tumours was 75% complete and 10% partial responses of the treated nodules. The main advantages of electrochemotherapy are its high effectiveness on tumours of different histology, simple application, minimal side effects and the possibility of effective repetitive treatment. Therefore, electrochemotherapy can provide clinical benefit for patients with advanced cutaneous and subcutaneous metastases, as an alternative to standard treatment approaches such as surgical excision.

Keywords— electrochemotherapy, bleomycin, cisplatin, electroporation, cutaneous tumours
I. INTRODUCTION

The first clinical study on electrochemotherapy was published in 1991, reporting good treatment effectiveness of electrochemotherapy on cutaneous tumour nodules of head and neck tumours [1]. The results of this study by the group from the Institut Gustave Roussy stimulated other groups to initiate their own clinical studies. The first clinical centres to perform electrochemotherapy were Villejuif and Toulouse in France, the group in Tampa in the USA, and our group at the Institute of Oncology Ljubljana in Slovenia. Recently, new centres have also reported clinical experience with electrochemotherapy, e.g. Copenhagen in Denmark, Mexico City in Mexico, Chicago in the USA, Vienna in Austria, Matsumoto and Yamagata in Japan, Sydney in Australia and Cork in Ireland [2]. Across all clinical studies, 288 patients were included; 782 tumour nodules were treated by electrochemotherapy with bleomycin and 398 tumour nodules by electrochemotherapy with cisplatin. The majority were malignant melanoma patients, but patients with metastases in the head and neck region, mammary carcinoma, skin cancer,
ovarian cancer, Kaposi sarcoma and chondrosarcoma were also treated by electrochemotherapy. The results of these studies can be summarized as supporting good antitumour effectiveness of electrochemotherapy with either bleomycin or cisplatin, resulting in ~80% objective responses of the treated tumour nodules [2,3]. Based on these results, a European project aimed at developing and producing an electric pulse generator was launched. In the CLINIPORATOR project, this electric pulse generator was developed and is now commercially available for those who would like to perform electrochemotherapy. This generator, bearing the same name as the project - CLINIPORATOR™ (IGEA S.r.l., Carpi, Italy) - is certified and therefore appropriate for clinical use. Along with the electric pulse generator, plate and needle electrodes were also developed. In the latest clinical study, published by a consortium of four cancer centres gathered in the ESOPE project funded under the European Commission's 5th Framework Programme, a common protocol was used to evaluate the treatment response after electrochemotherapy according to tumour type, the drug used, its route of administration and the type of electrodes used [4]. In addition, Standard Operating Procedures (SOP) for electrochemotherapy were prepared [5]. This was a prerequisite step to bring electrochemotherapy into standard clinical practice.

II. TREATMENT PROCEDURE

The treatment procedure is as follows: based on the SOP, tumour nodules can be treated by electrochemotherapy with bleomycin injected intravenously or intratumourally, or by electrochemotherapy with cisplatin given intratumourally [5]. The choice of the chemotherapeutic drug is not based on tumour histology, but depends on the number and size of the nodules. After drug injection the tumour nodules are exposed to electric pulses.
The interval between intravenous drug injection and application of electric pulses is 8-28 min; after intratumoural injection, the pulses are applied as soon as possible. Different sets of electrodes are available: plate electrodes for smaller tumour nodules, and needle electrodes for the treatment of larger (3 cm) and thicker tumour nodules. The treatment can be performed in one session or can be repeated in case of new emerging
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 614–617, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
nodules or on nodules that relapsed in regions not well treated in the first session. Electrochemotherapy does not induce the side effects of the chemotherapeutic drugs, since the drug dosage is very low. However, the application of electric pulses to the tumours induces contraction of the underlying muscles. For electroporation, square-wave electric pulses with an amplitude-to-distance ratio of 1000-1300 V/cm, a duration of 100 µs and a repetition frequency of 1 Hz or 5 kHz are used. The muscle contractions are painful, but the pain dissipates immediately after electric pulse application. Nevertheless, the SOP also describe procedures for alleviating the pain by local anaesthesia or, in the case of treating multiple nodules, by general anaesthesia.

III. TREATMENT RESULTS

The results of this study confirmed the results of all the previous studies [2-4] (Table 1) and can be summarised as follows:

Table 1 Clinical response to electrochemotherapy in the ESOPE clinical trial and in previous clinical studies

                     Patients  Nodules       Response (%)
                                        PD   NC   PR   CR   OR
Before ESOPE study      247     1009     6   11   19   64   83
ESOPE study              41      171     5   10   11   74   85
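As a quick arithmetic check of Table 1 (a sketch added here for illustration, not part of the original paper): the objective response (OR) column is simply the sum of the complete (CR) and partial (PR) response columns.

```python
# Sketch: response-rate bookkeeping for Table 1.
# OR (objective response) = CR (complete) + PR (partial); together with
# PD (progressive disease) and NC (no change) each row covers ~100% of
# the treated nodules.
rows = {
    "Before ESOPE": {"PD": 6, "NC": 11, "PR": 19, "CR": 64},
    "ESOPE":        {"PD": 5, "NC": 10, "PR": 11, "CR": 74},
}
for study, r in rows.items():
    or_rate = r["CR"] + r["PR"]
    print(f"{study}: OR = {or_rate}%")  # 83% and 85%, matching the OR column
```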
• An objective response rate of 85% (73.7% complete response rate) was achieved for electrochemotherapy-treated tumour nodules, regardless of tumour histology and the drug or route of administration used.
• At 150 days after treatment, the local tumour control rate for electrochemotherapy was 88% with bleomycin given intravenously, 73% with bleomycin given intratumourally and 75% with cisplatin given intratumourally, demonstrating that all three approaches were equally effective in local tumour control.
• Electrochemotherapy was equally effective regardless of tumour type and the size of the nodules treated.
• Side effects of electrochemotherapy were minor and tolerable (muscle contractions and pain sensation).

IV. DISCUSSION
Electrochemotherapy is used for the treatment of cutaneous and subcutaneous tumour nodules of any type of malignancy. The treatment advantages and clinical uses of electrochemotherapy can be summarized:

A. Effectiveness in tumour nodules of different histologies

Melanoma was the predominant tumour type in the clinical studies [4,6]; however, there are several, more sporadic, reports demonstrating effectiveness of electrochemotherapy on other types of recurrent tumours or metastases, such as breast carcinoma, head and neck tumours, squamous cell carcinoma and basal cell carcinoma [2]. The objective response rate of non-melanoma tumours was the same as that of melanoma tumours (81%), which indicates equal effectiveness of electrochemotherapy on different tumour types [4]. The rationale for these results is simple: both bleomycin and cisplatin exert their cytotoxic action when a sufficient amount of the drug reaches their intracellular targets. Since electric pulses induce electropermeabilization of the cells, in electrochemotherapy more drug is able to reach its intracellular target, the cell DNA, which explains the higher efficacy of these drugs in association with application of electric pulses to the tumours. Besides this principal underlying mechanism of the antitumour effectiveness of electrochemotherapy, other mechanisms are also involved. Based on preclinical and clinical data, electrochemotherapy can be used for the treatment of single or multiple tumour nodules of different histology in the cutaneous and subcutaneous tissue [7].

B. Minimal side effects

Electrochemotherapy is easy and quick (~25 min) to perform, in the majority of cases on an out-patient basis (as clearly stated in the SOP). It therefore places minimal burden on the patients, since in most cases its effectiveness was demonstrated after a single treatment. However, it can be repeated with equal antitumour effectiveness if the tumour nodules recur or if new tumour nodules emerge. After treatment, no specific care or dressing of the treated nodules is required.
All these aspects, and the fact that electrochemotherapy can also be performed in patients with contraindications for surgical treatment or radiation therapy and in elderly patients, provide evidence that electrochemotherapy has a substantial impact on the quality of life of cancer patients with progressive disease. Furthermore, electrochemotherapy is performed with low doses of bleomycin or cisplatin; therefore no systemic side effects were observed [4].

C. Simple application

Electrochemotherapy can be performed under general or local anaesthesia. Either way it is a simple procedure that, in the case of local anaesthesia, can be performed on an out-patient basis. No extra technical skills are needed to perform the treatment; a one-day training session is sufficient to perform the treatment according to the prepared SOP [5]. Therefore, it is a procedure that can also be performed in developing countries and small hospitals, where other standard treatments are not readily available. In comparison with the complexity of other local or regional treatments such as radiotherapy and isolated extremity perfusion and infusion, it is much simpler.

D. Repetitive treatment

Electrochemotherapy is effective when a sufficient drug concentration is obtained in the tumour nodules and the whole tumour volume is adequately covered by the applied electric field, so that most of the tumour cells are electroporated. In the case of bigger tumour nodules that are not covered by the electric field in a single run of electric pulse application, several applications of electric pulses are required. In such cases viable tumour cells may remain; therefore recurrent or remaining tumour mass must be retreated. Electrochemotherapy is very effective in repetitive treatments, as demonstrated in several clinical cases [6,8]. The treatment can be repeated at 3-6 week intervals with the same treatment effectiveness as in the previous treatment. In our study, in the case of a squamous cell carcinoma in the neck region, the exophytic part of the tumour was treated in 8 sessions over a three-week period. Six months after the start of the treatment, a complete response of the treated nodule was obtained and cytologically confirmed. This also indicates that even after eight sessions, electrochemotherapy with cisplatin did not result in acquired cisplatin resistance in the treated area. Therefore, these results demonstrate that electrochemotherapy can be repeated with good antitumour effectiveness and without development of resistance to chemotherapeutic drugs.
E. Effectiveness in tumours emerging in pre-treated areas

Electrochemotherapy has so far been tested in patients with progressive disease, in whom other standard treatment procedures had failed or were exhausted. In some cases the recurrent tumour nodules were in previously irradiated areas or in the surgical field of previously removed nodules [4]. Clinical data demonstrated effectiveness in most such cases, regardless of whether the nodules were in previously irradiated areas, in previously resected areas, or in a skin flap. In such cases standard interventions are no longer possible and electrochemotherapy provides the treatment of choice for these tumours.

F. Palliative treatment

In cancer patients with progressive disease (stage IV melanoma), in the absence of a suitable treatment that would prolong overall survival, the objective of treatment should be improving quality of life during the terminal phase. Especially for in-transit metastases in the extremities, surgical treatment sometimes requires amputation of the limb, causing severe disability and a psychological burden to the patient. Radiotherapy and chemotherapy options are also absent or very limited because of the number of metastases, low effectiveness or previous treatment. Palliative treatments in these cases are very scarce, mainly isolated limb perfusion and infusion. Both of these treatments require highly skilled surgeons as well as adequate facilities, and cannot easily be transferred to other cancer centres where this experience does not exist. Therefore, electrochemotherapy might represent an alternative to standard treatments such as surgery and radiation therapy, owing to its simplicity, the short duration of the treatment and tissue preservation [2,4].

G. Neoadjuvant treatment

Electrochemotherapy can be used as neoadjuvant treatment in the form of cytoreductive therapy before conventional treatment. A case of anal melanoma was reported in which two repetitive electrochemotherapy treatments were used as cytoreductive treatment, enabling surgical resection of the anal melanoma with an organ- and function-sparing effect [9]. In addition, treatment of a digital chondrosarcoma demonstrated that electrochemotherapy enabled resection of the tumour and bone grafting to fill the bone defect, rescuing the finger from amputation [10]. So far, these are the only cases reported in the literature, but several similar cases can be foreseen in which electrochemotherapy could be used as cytoreductive treatment before conventional treatment.

H. Organ and function sparing treatment

Electrochemotherapy can be performed on all parts of the body, including the skull, face, oral cavity and anal sphincter. In certain parts of the body, surgery or radiation therapy cannot be performed with an organ- and function-sparing effect. Reports demonstrated that treatment of basal cell carcinoma of the skin on the face, especially on the ears, nose and lips, has a good antitumour effect and is less disfiguring than excisional surgery, therefore being a tissue-preserving procedure [9]. A report on the treatment of recurrent perineal melanoma demonstrated that electrochemotherapy provides a means for organ sparing, instead of surgical urethrectomy, which would have had to be performed with
urinary diversion. All these reports, and other indications that may be foreseen, demonstrate that electrochemotherapy can be used as an organ sparing and function saving treatment.

I. Treatment of haemorrhagic and painful tumour nodules

The application of electric pulses to the tissues induces a transient and reversible reduction of blood flow, as reported in preclinical studies. The restoration of blood flow in normal tissue is much faster than in tumours. As a result, drug action in the tumours is prolonged and bleeding is prevented. The latter was reported in the treatment of haemorrhagic nodules of malignant melanoma, where electrochemotherapy was suggested as the treatment of choice for the palliation of haemorrhaging skin metastases. Furthermore, it was demonstrated that electrochemotherapy alleviated the pain around the tumour location (squamous cell carcinoma of the supraglottis) to the extent that the patient no longer required analgesics. Another report, on skin metastases originating from bladder cancer, found that electrochemotherapy alleviated pain in painful nodules besides preventing their bleeding. Besides these reports, similar observations were also made in the ESOPE study [4].
ACKNOWLEDGMENT The authors acknowledge the financial support from the EU funded project ESOPE (QLK-2002-02003) and the state budget of the Slovenian Research Agency (programme No. P3-0003; project No. J3-7044).
REFERENCES

1. Mir LM, Belehradek M, Domenge C, et al. (1991) Electrochemotherapy, a new antitumor treatment: first clinical trial. C R Acad Sci III 313: 613-618.
2. Sersa G (2006) The state-of-the-art of electrochemotherapy before the ESOPE study; advantages and clinical uses. EJC Suppl 4: 52-59.
3. Byrne CM, Thompson JF (2006) Role of electrochemotherapy in the treatment of metastatic melanoma and other metastatic and primary skin tumors. Expert Rev Anticancer Ther 6: 671-678.
4. Marty M, Sersa G, Garbay JR, et al. (2006) Electrochemotherapy – An easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: Results of ESOPE (European Standard Operating Procedures of Electrochemotherapy) study. EJC Suppl 4: 3-13.
5. Mir LM, Gehl J, Sersa G, et al. (2006) Standard operating procedures of the electrochemotherapy: Instructions for the use of bleomycin or cisplatin administered either systemically or locally and electric pulses delivered by the Cliniporator™ by means of invasive or non-invasive electrodes. EJC Suppl 4: 14-25.
6. Byrne CM, Thompson JF, Johnston H, et al. (2005) Treatment of metastatic melanoma using electroporation therapy with bleomycin (electrochemotherapy). Melanoma Res 15: 45-51.
7. Mir LM (2006) Bases and rationale of the electrochemotherapy. EJC Suppl 4: 38-44.
8. Sersa G, Stabuc B, Cemazar M, Miklavcic D, Rudolf Z (2000) Electrochemotherapy with cisplatin: clinical experience in malignant melanoma patients. Clin Cancer Res 6: 863-867.
9. Snoj M, Rudolf Z, Cemazar M, Jancar B, Sersa G (2005) Successful sphincter-saving treatment of anorectal malignant melanoma with electrochemotherapy, local excision and adjuvant brachytherapy. Anti-Cancer Drugs 16: 345-348.
10. Shimizu T, Nikaido T, Gomyo H, et al. (2003) Electrochemotherapy of digital chondrosarcoma. J Orthop Sci 8: 248-251.
Author: Gregor Serša
Institute: Institute of Oncology Ljubljana
Street: Zaloška 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Electrochemotherapy in veterinary medicine

Natasa Tozon¹ and Maja Cemazar²

¹ University of Ljubljana, Veterinary Faculty, Small Animal Clinic, Cesta v Mestni log 47, SI-1000 Ljubljana, Slovenia
² Institute of Oncology Ljubljana, Department of Experimental Oncology, Zaloška 2, SI-1000 Ljubljana, Slovenia
Abstract— Electrochemotherapy is a treatment that combines electroporation, i.e. the application of electric pulses to the tumor, which under suitable conditions induces reversible permeabilization of the cell membrane, with the administration of non-permeant or poorly permeant chemotherapeutic drugs with intracellular targets, whose entry into the cells is facilitated by electroporation. In veterinary medicine, the predominant chemotherapeutic drug used in electrochemotherapy is cisplatin, followed by bleomycin. In this review, the results of the studies performed at the University of Ljubljana, Veterinary Faculty are presented. Spontaneous tumors of different origin in dogs, cats and horses were treated with electrochemotherapy using either cisplatin or bleomycin. Different electroporation protocols were used and the results on antitumor effectiveness compared. The results demonstrated that electrochemotherapy is a highly effective and safe local treatment regardless of tumor histology, chemotherapeutic drug or electroporation protocol used. The advantages of this therapy are its simplicity, the short duration of treatment sessions, low chemotherapeutic doses and insignificant side effects, as well as the fact that the patient does not have to stay in the hospital.

Keywords— veterinary medicine, electroporation, dogs, cats, horses, cisplatin.
The incidence of malignant tumours in companion animals is constantly increasing. As in human cancer treatment, standard treatment strategies in veterinary medicine do not always give satisfactory results. Therefore, new treatment modalities are introduced and tested for more effective treatment of companion animals' malignancies. One of the promising treatments is electrochemotherapy, which has already been tested and proved efficient in the treatment of human tumours of different histologies. Based on preclinical as well as clinical studies on electrochemotherapy, this treatment was used in veterinary medicine as early as 1997. Mir and colleagues used electrochemotherapy with bleomycin for the treatment of cats with large soft-tissue sarcomas that had relapsed after treatment with conventional therapies. Electric pulses were delivered after intravenous injection of bleomycin, using external surface electrodes as well as needle-shaped electrodes designed to be inserted into the tumours for a more effective electric field distribution in the
tissue. The cats' lifespan increased significantly compared to a control group of eleven untreated cats [1]. Our group started to apply electrochemotherapy in veterinary medicine in 1999. The objective was to introduce electrochemotherapy with cisplatin into veterinary medicine, where there is a need for inexpensive and effective treatment of cutaneous and subcutaneous tumours of various histological types. In the first study, the response to electrochemotherapy was assessed on tumour nodules in three cats with mammary adenocarcinoma and fibrosarcoma, and in seven dogs with mammary adenocarcinoma, cutaneous mast cell tumour, hemangioma, hemangiosarcoma, adenocarcinoma glandulae paranalis and neurofibroma. Altogether, twenty-four tumour nodules of different size were treated: five with cisplatin injected intratumourally alone and nineteen with electrochemotherapy with intratumoural administration of cisplatin. Square-wave electric pulses of 100 μs duration, 910 V amplitude (amplitude-to-electrode-distance ratio 1300 V/cm) and 1 Hz frequency were delivered through two parallel stainless steel electrodes (thickness 1 mm; width 7 mm; length 8 mm, with rounded tips and an inner distance between them of 7 mm) with an electropulsator Jouan GHT 1287 (Jouan, France). Each run of electric pulses was delivered in two trains of four pulses, with 1 s intervals, in two perpendicular directions. Good contact between the electrodes and the skin was assured by depilation and application of a conductive gel to the treatment area.
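The quoted field strength is simply the pulse amplitude divided by the electrode spacing; a quick check of the figures in the text (a sketch added for illustration, not from the paper):

```python
# Sketch: the nominal field strength ("V/cm" figure) quoted in
# electroporation protocols is the amplitude-to-electrode-distance ratio.
def nominal_field_v_per_cm(amplitude_v: float, electrode_gap_cm: float) -> float:
    """Pulse amplitude divided by electrode separation."""
    return amplitude_v / electrode_gap_cm

# Protocol values from the text: 910 V across plate electrodes with an
# inner distance of 7 mm (0.7 cm).
field = nominal_field_v_per_cm(910, 0.7)
print(round(field))  # 1300, matching the quoted 1300 V/cm ratio
```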
Fig. 1 Antitumor effectiveness of electrochemotherapy (ECT) with intratumorally injected cisplatin in a locally invasive squamous cell carcinoma of the ear in a cat. Square-wave electric pulses of 100 μs duration, 910 V amplitude and 1 Hz frequency were delivered through two parallel stainless steel plate electrodes. Each run of electric pulses was delivered in two trains of four pulses, with 1 s intervals, in two perpendicular directions. The cat was treated only once by electrochemotherapy. The tumour fell off only 1 week after treatment. A good cosmetic effect was obtained, with no recurrence of the tumour for one year.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 586–588, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Electrochemotherapy with cisplatin had a good antitumour effect on all tumours treated (Fig. 1). Their average size four weeks after treatment (0.01 cm³) was greatly reduced compared to those treated by intratumoural cisplatin injection alone (3.0 cm³). Collectively, electrochemotherapy-treated tumours responded with 84% objective responses, whereas only one tumour partially responded to cisplatin treatment alone. Evaluated by a contingency table, the response to electrochemotherapy was significantly better than that of the cisplatin-treated group (p = 0.014). Furthermore, there was a significant prolongation of the duration of response in electrochemotherapy-treated tumours (p = 0.046) [2]. There is also a study from the group in Toulouse, France, using electrochemotherapy with intratumourally injected cisplatin for the treatment of large sarcoids in horses. Rols and colleagues showed 100% objective responses of these sarcoids. All lesions disappeared after only 2 to 3 electrochemotherapy sessions and no regrowth was observed in the 18-month follow-up period. Our group is also treating sarcoids in horses using electrochemotherapy with cisplatin [3]. Our results are comparable to those of Rols, Tamzali and Teissie. In our latest study, between March 2000 and September 2006, electrochemotherapy was performed on 28 male dogs with perianal tumours, of different age (median 12 years; range 8-14), different breeds and different histological type: 31 (40.8%) nodules were classified as perianal adenocarcinoma, 35 (46%) as adenoma, and 10 (13.2%) tumour nodules were not histologically verified. We elaborated further on electrochemotherapy with intratumoural injection of cisplatin and also tested electrochemotherapy with intratumourally injected bleomycin. This was a prospective non-randomized study, conducted in accordance with a protocol for electrochemotherapy based on previous experience in human and animal clinical studies.
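The quoted p = 0.014 can be reproduced with a two-sided Fisher's exact test on a 2x2 contingency table. The counts below are a reconstruction from the text (16 of 19 ECT-treated nodules responding, consistent with the 84% figure, versus 1 of 5 cisplatin-only nodules); the hand-rolled implementation is for illustration only and is not the paper's statistical software.

```python
from math import comb

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]]
    (responders / non-responders by treatment group), computed from the
    hypergeometric distribution: sum the probabilities of all tables with
    the same margins that are no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p(k: int) -> float:
        # probability of a table with k responders in the first group
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs + 1e-12)

# Reconstructed counts: 16/19 ECT nodules vs 1/5 cisplatin-only nodules
# with an objective response.
print(round(fisher_exact_2x2(16, 3, 1, 4), 3))  # 0.014, as reported
```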
Two different electroporation protocols were used in the study. Protocol 1 comprised 8 electric pulses of 100 μs duration, amplitude-to-electrode-distance ratio 1300 V/cm and frequency 1 Hz. The electric pulses were generated by an electric pulse generator Jouan GHT 1287 (Jouan, St Herblain, France) and delivered through two parallel stainless steel plate electrodes (IGEA S.r.l., Carpi, Italy; thickness 1 mm; width 7 mm; length 8 mm, with rounded tips and an inner distance between them of 7 mm). Each run of electric pulses was delivered in two trains of four pulses, with 1 s intervals, in two perpendicular directions. Good contact between the electrodes and the skin was assured by depilation and application of a conductive gel to the treatment area. Protocol 2 comprised 8 electric pulses of 100 μs duration, amplitude-to-electrode-distance ratio 1000
V/cm and frequency 5000 Hz. The electric pulses were delivered through needle electrodes (4 needles in a row, 2 rows, 4 mm apart; IGEA S.r.l., Carpi, Italy). The use of the second electroporation protocol was enabled by the newly designed electric pulse generator CLINIPORATOR™ (IGEA S.r.l., Carpi, Italy), which was developed within the EU Commission funded project for clinical use and is CE labelled. In most cases one electrochemotherapy session was needed to obtain a good antitumour response; in only 14 cases was more than one session performed (2 sessions in 7 nodules and 3 sessions in 7 nodules). Overall treatment results on the patients that completed evaluation of the response demonstrated good antitumour effectiveness. Four weeks after electrochemotherapy an objective response (OR) was achieved in 59 of the 65 nodules available for evaluation (90.7%), with 75.3% CR, 15.4% PR and 9.3% NC. At the end of the observation period the objective response (OR) rate, obtained in 68 of 73 evaluated nodules, improved to 93.2%, with a few PR (6.8%) and CR being the prevalent response (86.4%), compared to the response rate at 4 weeks after electrochemotherapy (Fig. 2). A negative response was observed in a low percentage of the treated nodules, with a few NC (6.8%), but none of the treated nodules progressed (PD). One patient was euthanized on the owner's decision and was therefore not evaluated at the end of the observation time. According to histological type, there was no significant difference in OR at the end of the observation time (adenocarcinoma 96.7%, adenoma 91.4%, "other" 85.7%; p = 0.21). At the end of the observation period, nodules in dogs previously castrated (26 tumour nodules in 10 dogs) responded with a 100% OR rate (88.8% CR and 11.2% PR) and nodules in dogs that had not been castrated (50 nodules in 18 dogs) responded with an 89.1% OR rate (84.7% CR, 4.4% PR and 10.9% NC).
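A practical difference between the two electroporation protocols described above is how long the pulse train takes to deliver. The sketch below (added for illustration; it uses the standard interpretation of the stated repetition frequencies and, as an assumption, ignores protocol 1's split into two perpendicular trains) shows that the 5 kHz protocol finishes in about a millisecond:

```python
# Sketch: time spanned by a train of pulses at a given repetition rate.
# n pulses are separated by (n - 1) inter-pulse intervals; the 100 us
# pulse width itself is negligible at these rates.
def delivery_time_s(pulses: int, freq_hz: float) -> float:
    return (pulses - 1) / freq_hz

print(delivery_time_s(8, 1))     # 7.0 s for the 1 Hz protocol
print(delivery_time_s(8, 5000))  # 0.0014 s for the 5 kHz protocol
```

The much shorter delivery at 5 kHz is commonly cited in the electroporation literature as reducing the number of unpleasant muscle contractions, since the whole train falls within a single muscle twitch.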
No statistically significant difference in OR rate with respect to previous castration was observed (p = 0.528). A statistically significant difference between the responses to electrochemotherapy according to nodule size was found (p = 0.033): at the end of the observation period, nodules smaller than 3 cm² responded better to the treatment, with a 95.3% OR rate and a high CR rate (89.2%), compared to nodules larger than 3 cm², which responded with a 75% OR rate and a CR rate of 62.5%. According to the drug used, when the nodules were treated with a single chemotherapeutic drug, the OR rate of the tumor nodules at the end of the observation period showed no statistically significant difference between electrochemotherapy with bleomycin (92.3%) and electrochemotherapy with cisplatin (95.2%; p = 0.728). Among the five nodules that were treated with both,
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Natasa Tozon and Maja Cemazar
cisplatin and bleomycin, 2 nodules responded with CR (40%), a PR (40%) was obtained in another 2, and 1 nodule remained unchanged (NC, 20%). Finally, statistical analysis of the treatment response according to the type of electroporation protocol did not show differences in the responsiveness of tumor nodules to electrochemotherapy (p = 0.968): at the end of the observation period the OR rate was 87% with Protocol 1 and 96% with Protocol 2. It is worth mentioning that treatment with cisplatin or bleomycin given intratumourally, alone or combined with electric pulses, did not result in any local or systemic toxicity. We noticed partial necrosis of the tumours after a week and exulceration with the formation of a superficial scab, which fell off within 5 weeks. After treatment, none of the animals suffered from a local or systemic infection. In addition, the treatment had no effect on the blood tests or biochemistry of the treated animals. The results of that study are in agreement with the results of our previous study performed on perianal tumors [4]. In that study the histology of the treated tumors was not performed and only one electroporation protocol was used. Nevertheless, the results of the latest study confirmed that electrochemotherapy is a highly effective treatment regardless of the tumor histology, chemotherapeutic drug or electroporation protocol used, and that it results in long-term complete responses lasting up to 77 months.

Fig. 2 Antitumor effectiveness of electrochemotherapy (ECT) with intratumourally injected cisplatin in a perianal tumour in a dog. The dog was treated by electrochemotherapy with intratumoural cisplatin twice, at a 4-week interval. Square-wave electric pulses of 100 μs duration, 910 V amplitude and 1 Hz frequency were delivered through two parallel stainless steel plate electrodes. Each run of electric pulses was delivered in two trains of four pulses, with 1 s intervals, in two perpendicular directions. Good local control was obtained 8 weeks after the first electrochemotherapy session.

CONCLUSIONS
In conclusion, the results of the studies performed on different tumours in different animal species demonstrated that electrochemotherapy with cisplatin, or with bleomycin in tumours unresponsive to electrochemotherapy with cisplatin, is an effective and safe local treatment. The advantages of this therapy are its simplicity, the short duration of treatment sessions, low chemotherapeutic doses and insignificant side effects, as well as the fact that the patient does not have to stay in the hospital.
ACKNOWLEDGMENT
The authors acknowledge the financial support from the state budget through the Slovenian Research Agency (Projects No. P3-0003, J3-7044 and P4-0053).
REFERENCES
1. Mir L.M., Devauchelle P., Quintin-Colonna F., Delisle F., Doliger S., Fradelizi D., Belehradek J. Jr., Orlowski S., First clinical trial of cat soft-tissue sarcomas treatment by electrochemotherapy, Brit. J. Cancer 76 (12) (1997) 1617-1622.
2. Tozon N., Sersa G., Cemazar M., Electrochemotherapy: Potentiation of local antitumour effectiveness of cisplatin in dogs and cats, Anticancer Res. 21 (2001) 2483-2486.
3. Rols M.P., Tamzali Y., Teissie J., Electrochemotherapy of horses. A preliminary clinical report, Bioelectrochemistry 55 (2002) 101-105.
4. Tozon N., Kodre V., Sersa G., Cemazar M., Effective treatment of perianal tumors in dogs with electrochemotherapy, Anticancer Res. 25 (2005) 839-846.
Author: Nataša Tozon
Institute: University of Ljubljana, Veterinary Faculty, Small Animal Clinic
Street: Cesta v mestni log 47
City: Ljubljana
Country: Slovenia
Email: [email protected]
Electrochemotherapy of equids cutaneous tumors: a 57 case retrospective study 1999-2005
Y. Tamzali¹, J. Teissie², M. Golzio² and M. P. Rols²
¹ Equine Internal Medicine, Ecole Nationale Veterinaire, Toulouse, France
² IPBS CNRS, Toulouse cedex, France
Abstract— Electrochemotherapy is a new anticancer therapy in which the transient permeabilization of cells by electric field pulses induces a significant increase in antitumoral drug concentration and toxicity in tumor cells. It has been successfully applied to the treatment of tumors in animals and humans using antimitotic drugs. This report describes its first use in the treatment of equid skin tumors, mainly sarcoids. 57 equids were enrolled, totalling 248 tumors located at different body sites. Treatment was performed under short-duration general anesthesia. Intratumoral injections of cisplatin were followed by short and intense electric pulses applied directly to the skin at the tumor sites. Two to four successive treatments were applied at two-week intervals. Objective antitumour responses were obtained in 94.7% of the treated lesions. All horses tolerated the treatment well. No adverse effect from the electric pulses was observed, even with a high number of pulses or when several consecutive treatments were applied. Keywords— electrochemotherapy, horses, cisplatin.
I. INTRODUCTION

Cutaneous tumours are frequent in equids; sarcoids represent more than 50% of them. Among the conservative treatments, chemotherapy using cisplatin is the most widely used method, although it is limited to small tumors (less than 5 cm in diameter). This is due to its ease of use, its rather low cost and its high efficacy (up to 90% for sarcoids and 70 to 90% for carcinomas) [1]. Its main disadvantage, however, is the poor diffusion of the hydrophilic drug into the tumors; cisplatin is therefore mixed with sesame oil in order to increase its persistence at the injection point [2]. It has been shown that in vitro electropermeabilization of cells potentiates the cytotoxicity of bleomycin several hundredfold and that of cisplatin up to 70-fold. In vivo, electropermeabilization of cells potentiates the antitumor effectiveness of cisplatin by a factor of 20 [3]. This method, called electrochemotherapy (ECT), introduced in the 1990s, has already been successfully applied to a large variety of tumors in mice and rats [4]. Clinical trials have been performed in humans, including small nodes of head and neck squamous cell carcinoma, melanoma, basal cell carcinoma
and adenocarcinoma [5, 6, 7, 8, 9, 11, 13, 14]. To date, very few data are available on domestic animals [10, 12, 15, 16, 17, 18, 19]. Increasing the cisplatin concentration in sarcoids by using ECT should therefore enhance the cytotoxic effect and thereby increase treatment effectiveness. That was the aim of this study, horse sarcoids representing an interesting clinical model owing to their high occurrence and their specific localization to the skin.

II. MATERIALS AND METHODS

A. Horse and tumor characteristics

From October 1999 to December 2004, 57 equids were treated: 39 horses, 3 ponies, 14 donkeys and 1 mule. The cutaneous tumor type was diagnosed by histology. The oldest cases have now completed a 3.5-year post-treatment surveillance period. Equids were of both sexes and were 7.7 ± 3.9 years old on average (from 2 to 19 years old). Most of them had previously been treated by surgery but had relapsed.

B. Methods

The animals were treated under general anesthesia of short duration. The antimitotic drug was injected intratumorally and at 1 cm into the skin margins of the tumour (one more cm being added at the second treatment). Within 5 min after drug injection, the electrical treatment was applied by bringing the electrodes into contact with the skin (Fig. 1). Cisplatin (cis-platinum (II)-diammine dichloride, P4394, SIGMA, St. Louis) was prepared in sterile 0.15 M NaCl at 1 mg/ml concentration. It was then injected intratumorally in a standardized manner (0.2 to 0.3 ml every 0.6 cm) using “luer-lock” or automatic syringes [2]. A specially designed set of wire contact electrodes was built; the distance between the electrodes was 0.9 cm and their length was 0.9 cm. A PS15 Jouan Electropulsator was used to deliver 8 pulses of 0.1 ms at a 1 Hz frequency with a 1.3 kV voltage (Fig. 2). The pulse duration and current intensity were selected to take into account the recommendations of “the Commission de l'Electricite
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 610–613, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Industrielle” concerning the fibrillation risks (for a pulse duration of 0.1 ms, the intensity must be less than 5 A). Contact between the electrodes and the skin was obtained with a conductive paste. Multiple electrotreatments were applied by moving the electrodes over adjacent positions on the tumor surface, which allowed the whole tumor surface to be treated. Several successive treatments were performed at two-week intervals.

Fig. 1. ECT procedures. Intratumoral cisplatin injections (A) and electrodes in contact with skin (B).

Fig. 2. ECT equipment – electropulsator (A) and electrodes (B).

C. Treatment response monitoring

During and immediately following the ECT treatment, horses were carefully monitored to determine immediate effects. They were examined 2 weeks after ECT to determine treatment responses. Pictures were taken prior to ECT treatment and every 2 weeks at each ECT session. Lesions were measured using a caliper. Responses were scored as follows: no response (NR); partial response (PR: > 50% reduction in tumor volume); complete response (CR: absence of any trace of tumor); and relapse [5]. A post-treatment surveillance period of 2 years is required to close each case.

III. RESULTS
A total of 248 tumours from 57 animals were treated by ECT. Histological diagnosis showed 233 sarcoids (52 cases), 10 neurofibrosarcomas (3 cases), 1 SCC (1 case) and 4 anaplastic sarcomas (1 case). Among the 233 sarcoids, 20 flat or occult, 93 verrucous, 78 nodular or fibroblastic and 42 mixed forms were treated by ECT. The anatomical locations of the cutaneous tumours were the head (33 tumours), neck (5), trunk (77), limbs (74) and genital or paragenital area (59). 69 tumours (67 sarcoids, 1 neurofibrosarcoma and 1 SCC) in 28 cases were relapses after previous treatments such as ligature (16 tumours), surgery (36), cryosurgery (5), immunotherapy with BCG (1), intratumoral chemotherapy with bleomycin (7) or local caustic treatment (4). Previous treatments had been performed 1 to 36 months before (mean 8.2 ± 9.4 months; median 6) and the observed relapse time was 1 to 14 months (mean 4.7 ± 4.2 months; median 5). 54 cases showed an OR (94.7 ± 5.8%), of which 50 were CR (87.7 ± 8.5%), after one to 13 ECT sessions (mean 3.58 ± 2.02; median 3) and one to 77 months of follow-up (mean 31.9 ± 17.8 months; median 30 months). ECT treatment was interrupted by the owners in 5 cases (3 sarcoids, 1 neurofibrosarcoma and 1 anaplastic sarcoma) after one to 5 sessions (3 PR and 2 NR). Concerning the 52 cases of sarcoids, ECT alone or in association with surgery resulted in 50 OR (96.2 ± 5.2%) with 47 CR (90.4 ± 8%) after 1 to 13 ECT sessions and one to 77 months of follow-up (mean 33.5 ± 17.7 months; median 32.5 months). All equids bearing another tumour type were subjected to a combined surgery-ECT treatment. The one SCC showed a CR after 3 sessions and 12 months of follow-up. In the 3 cases of neurofibrosarcoma, ECT resulted in 2 OR, both CR, after 2 to 3 sessions and 12 to 35 months of follow-up. Finally, the one case of anaplastic sarcoma responded partially to 3 ECT sessions after 5 months of follow-up.
No adverse reaction was elicited by the delivery of repeated pulses except the classical, expected muscle contractions. Skin integrity was preserved. On the day after ECT treatment and the following days, a slightly edematous reaction was noticed in some horses for lesions located on thin-skinned regions (Figs. 3, 4 and 5).
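As a numerical aside, the electrical and drug-dose parameters from the Methods can be combined in a short sketch (the function names are ours, and the 10-point injection grid is a hypothetical example, not a figure from the study):

```python
def field_strength_kv_per_cm(voltage_kv: float, gap_cm: float) -> float:
    """Nominal field between the wire electrodes (applied voltage / gap)."""
    return voltage_kv / gap_cm

def cisplatin_dose_mg(n_points: int, ml_per_point: float,
                      mg_per_ml: float = 1.0) -> float:
    """Total drug per session for a grid of injection points at 1 mg/ml."""
    return n_points * ml_per_point * mg_per_ml

# 1.3 kV across the 0.9 cm electrode gap used in this study
print(round(field_strength_kv_per_cm(1.3, 0.9), 2))  # 1.44 kV/cm
# hypothetical 10-point grid at 0.25 ml per point
print(cisplatin_dose_mg(10, 0.25))                   # 2.5 mg
```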
Fig. 3. Multiple fibroblastic and mixed sarcoids – before ECT (A) and two years after ECT (B).
Fig. 5. Nodular sarcoid - before ECT (A) and two years after ECT (B).
IV. CONCLUSION

The combination of intratumoral drug injection with electropermeabilization (ECT) enhances the effectiveness of chemotherapy, as it leads to significantly higher intracellular antimitotic drug concentrations. The antitumor effect appears to be long-lived, as shown by the stabilization of the treated lesions observed 2 years after ECT. Because ECT proved to be a safe method, the results of this study on horse sarcoids are encouraging.
Fig. 4. Mixed sarcoid, a relapse after prior surgical excision – before ECT (A) and two years after ECT (B).

REFERENCES
1. Theon P (1998) Intralesional and topical chemotherapy and immunotherapy. Vet Clin North Am Eq Pract 14:659-671
2. Theon AP, Pascoe JR, Carlson GP et al (1993) Intratumoral chemotherapy with cisplatin in oily emulsion in horses. J Am Vet Med Assoc 202:261-267
3. Cemazar M, Miklavcic D, Scancar J et al (1999) Increased platinum accumulation in SA-1 tumour cells after in vivo electrochemotherapy with cisplatin. Br J Cancer 79:1386-1391
4. Cemazar M, Milacic R, Miklavcic D et al (1998) Intratumoral cisplatin administration in electrochemotherapy: antitumor effectiveness, sequence dependence and platinum content. Anticancer Drugs 9:525-530
5. Giraud P, Bachaud J-M, Teissie J, Rols M-P (1996) Effects of electrochemotherapy on cutaneous metastases of human malignant melanoma. Int J Radiation Oncology Biol Phys 36:1285-1286
6. Glass LF, Pepine ML, Fenske NA et al (1996) Bleomycin-mediated electrochemotherapy of metastatic melanoma. Arch Dermatol 132:1353-1357
7. Heller R, Jaroszeski MJ, Glass LF et al (1996) Phase I/II trial for the treatment of cutaneous and subcutaneous tumors using electrochemotherapy. Cancer 77:964-971
8. Heller R, Jaroszeski MJ, Reintgen DS et al (1998) Treatment of cutaneous and subcutaneous tumors with electrochemotherapy using intralesional bleomycin. Cancer 83:148-157
9. Mir LM, Glass LF, Sersa G, Teissie J et al (1998) Effective treatment of cutaneous and subcutaneous malignant tumours by electrochemotherapy. Br J Cancer 77:2336-2342
10. Mir LM, Devauchelle P, Quintin-Colonna F et al (1997) First clinical trial of cat soft-tissue sarcomas treatment by electrochemotherapy. Br J Cancer 76:1617-1622
11. Rols MP, Bachaud J-M, Giraud P, Chevreau C, Roche H, Teissie J (2000) Electrochemotherapy of cutaneous metastases in malignant melanoma. Melanoma Res 10:468-474
12. Rols MP, Tamzali Y, Teissie J (2002) Electrochemotherapy of horses. A preliminary clinical report. Bioelectrochemistry 55:101-106
13. Sersa G, Stabuc B, Cemazar M et al (1998) Electrochemotherapy with cisplatin: potentiation of local cisplatin antitumour effectiveness by application of electric pulses in cancer patients. Eur J Cancer 34:1213-1218
14. Sersa G, Stabuc B, Cemazar M et al (2000) Electrochemotherapy with cisplatin: clinical experience in malignant melanoma patients. Clin Cancer Res 6:863-867
15. Spugnini EP, Porrello A (2003) Potentiation of chemotherapy in companion animals with spontaneous large neoplasms by application of biphasic electric pulses. J Exp Clin Cancer Res 22:571-580
16. Tamzali Y, Teissie J, Rols MP (2001) Cutaneous tumor treatment by electrochemotherapy: preliminary clinical results in horse sarcoids. Rev Med Vet 152:605-609
17. Tamzali Y, Teissie J, Rols MP (2003) First horse sarcoid treatment by electrochemotherapy: preliminary experimental results. 49th Annual Convention of the AAEP, New Orleans, Louisiana, 2003
18. Tozon N, Sersa G, Cemazar M (2001) Electrochemotherapy: potentiation of local antitumour effectiveness of cisplatin in dogs and cats. Anticancer Res 21:2483-2488
19. Tozon N, Kodre V, Sersa G, Cemazar M (2005) Effective treatment of perianal tumors in dogs with electrochemotherapy. Anticancer Res 25:839-845

Author: Youssef Tamzali
Institute: Ecole Veterinaire de Toulouse
Street: 23 chemin des Capelles
City: 31076 Toulouse
Country: France
Email: [email protected]
Electropulsation, a biophysical delivery method for therapy
J. Teissie¹ and M. Cemazar²
¹ Institut de Pharmacologie et de Biologie structurale du CNRS, UMR UPS 5089, Toulouse, France
² Institute of Oncology Ljubljana, Ljubljana, Slovenia
Abstract— Electropulsation is the direct delivery of electric pulses to cells to bring about targeted permeabilization of the membrane. It can be used in vivo by delivering the field directly to the patient following injection of the drug or nucleic acid, and it appears to be a promising biophysical method for gene therapy. Limits and unknowns are outlined in this short review. Keywords— delivery, electropermeabilization, electroporation, electrochemotherapy, electrogenotherapy.
I. INTRODUCTION

Electropulsation is the direct delivery of electric pulses to cells. Under controlled conditions, it brings about targeted permeabilization of the cell membrane. This holds not only for cells in culture: it can also be used in vivo, by direct field delivery to the organ or across the skin of the patient. It has therefore gained much attention within the last 15 years as an emerging way to increase the delivery of chemotherapeutic drugs into the cells of different types of tumors in vivo. Application of electric pulses to tissues causes a transient permeabilization of the plasma membrane and thus allows exogenous polar molecules to enter the cells. In the case of electrochemotherapy, i.e. the combination of electric pulse application with chemotherapeutic drug administration, this strategy for the treatment of cutaneous tumors of different histological types has already reached the clinical stage [1-5]. The enhancement by electropulsation of the transfer of naked plasmid DNA into cells in culture, and of the expression of the gene of interest, was first described 25 years ago [6]. Over the last 10 years, this approach has been developed in different tissues by pulsing them just after injection of the plasmid into the target tissue [7-11]. However, the transfection efficiency of this methodology in vivo remains lower than with viral vectors. The lack of immunogenicity of the method, the ease of preparing large quantities of endotoxin-free plasmid DNA, and the control and reproducibility of the method give electrically assisted nucleic-acid delivery great potential for clinical application. A promising development is to use the expression of the foreign protein to trigger an immune response in a safe way [12].
There is a need to critically discuss the main limitations and obstacles associated with electrically assisted delivery for therapy, as well as its failures and problems [13].

II. ELECTROPERMEABILIZATION

A. Descriptive rules

1-1 Theory of membrane potential difference modulation

An external electric field modulates the membrane potential difference. The transmembrane potential difference induced by the electric field, ΔΨi, is a complex function g(λ) of the specific conductivities of the membrane (λm), the pulsing buffer (λo) and the cytoplasm (λi), the membrane thickness and the cell size. Thus,

ΔΨi = f · g(λ) · r · E · cos θ   (1)

in which θ designates the angle between the field direction and the normal to the membrane at the considered point on the cell surface, E the field intensity, r the radius of the cell and f a shape factor (a cell being a spheroid). Therefore, ΔΨi is not uniform over the cell surface: it is maximal at the positions of the cell facing the electrodes. These physical predictions were checked experimentally by videomicroscopy using potential-difference-sensitive fluorescent probes. When the resulting transmembrane potential difference ΔΨ (i.e. the sum of the resting value of the cell membrane, ΔΨo, and the electroinduced value, ΔΨi) reaches threshold values close to 250 mV, membranes become permeable [14]. Permeabilization is controlled by the field strength: a field intensity larger than a critical value (Ep) must be applied to the cell. From Eq. (1), permeabilization is first obtained for θ close to 0 or π. Ep is such that:

ΔΨperm = f · g(λ) · r · Ep   (2)
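Eqs. (1) and (2) can be evaluated numerically; the sketch below assumes a spherical cell (f = 1.5), takes g(λ) ≈ 1 for a poorly conducting membrane, and uses an illustrative 10 μm radius and a 250 mV threshold:

```python
import math

F_SHAPE = 1.5   # shape factor f for a sphere
G_LAMBDA = 1.0  # g(lambda) ~ 1 when the membrane conductivity is much lower
                # than that of the buffer and cytoplasm (an assumption here)

def induced_potential(r_m: float, e_v_per_m: float, theta: float) -> float:
    """Eq. (1): delta_psi_i = f * g(lambda) * r * E * cos(theta), in volts."""
    return F_SHAPE * G_LAMBDA * r_m * e_v_per_m * math.cos(theta)

def critical_field(r_m: float, dpsi_perm: float = 0.25) -> float:
    """Eq. (2): field Ep (V/m) at which the threshold is reached at theta = 0."""
    return dpsi_perm / (F_SHAPE * G_LAMBDA * r_m)

r = 10e-6  # illustrative 10 um cell radius
print(induced_potential(r, 1e5, 0.0))  # ~1.5 V at the pole for a 1 kV/cm field
print(critical_field(r))               # ~1.7e4 V/m, i.e. ~0.17 kV/cm
```

Note that Ep scales as 1/r, so larger cells are predicted to permeabilize at lower applied fields.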
Parts of the cell surface facing the electrodes are affected. The extent of the permeabilized surface of a spherical cell, Aperm, is given by:

Aperm = Atot (1 - Ep/E)/2   (3)
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 618–621, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
where Atot is the cell surface area and E is the applied field intensity. Increasing the field strength increases the part of the cell surface that is brought to the electropermeabilized state. Permeabilization, due to structural alterations of the membrane, remains restricted to a cap on the cell surface [15]. The area affected by the electric field also depends on the cell shape (spheroid), on the orientation of the cell with respect to the electric field lines and on the cell density [16, 17]. The density of these alterations is strongly controlled by the pulse duration. An increase in the number of pulses first leads to an increase in the local permeabilization level, but it then gives permeabilization on both sides of the cell facing the electrodes. Electropermeabilization allows a post-pulse free-like diffusion of small molecules (up to 4 kDa) whatever their chemical nature: polar compounds cross the membrane easily. The most important feature is that this membrane organisation is long-lived in cells: diffusion is observed during the seconds and minutes following the ms pulse, and most of the exchange takes place after the pulse. Resealing of the membrane defects, and of the induced permeabilization, is a first-order process that appears to be controlled by protein reorganisation. Molecular transfer of small molecules (< 4 kDa) across the permeabilized area is mostly driven by the concentration gradient across the membrane; the electrophoretic contribution during the pulse remains negligible. Free diffusion of low-molecular-weight polar molecules after the pulse can be described by applying the Fick equation to the electropermeabilized part of the cell surface [18]. This gives the following expression for a given molecule S and a cell of radius r:

Φ(S) = 2πr² Ps ΔS X(N, T) (1 - Ep/E) exp(-k(N, T) t)   (4)

where Φ(S) is the flow at time t after the N pulses of duration T (the delay between the pulses being short compared to t), Ps is the permeability coefficient of S across the permeabilized membrane and ΔS is the concentration gradient of S across the membrane. Ep depends on r (size). For a given cell, the resealing time (the reciprocal of k) is a function of the pulse duration but not of the field intensity, as checked by digitized videomicroscopy [18].
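A minimal numerical sketch of Eqs. (3) and (4) follows; since the functional forms of X(N, T) and k(N, T) are not given above, they are kept as illustrative constants:

```python
import math

def permeabilized_fraction(e: float, ep: float) -> float:
    """Eq. (3): Aperm / Atot = (1 - Ep/E) / 2, clamped to 0 below threshold."""
    return max(0.0, (1.0 - ep / e) / 2.0)

def flow(t: float, e: float, ep: float, r: float = 10e-6, ps: float = 1e-6,
         delta_s: float = 1.0, x_nt: float = 1.0, k_nt: float = 0.1) -> float:
    """Eq. (4): Phi(S) = 2*pi*r^2 * Ps * dS * X(N,T) * (1 - Ep/E) * exp(-k(N,T) t).
    All default parameter values here are illustrative, not measured."""
    return (2 * math.pi * r ** 2 * ps * delta_s * x_nt
            * (1 - ep / e) * math.exp(-k_nt * t))

print(permeabilized_fraction(2.0, 1.0))            # 0.25: a quarter of the surface at E = 2 Ep
print(flow(0.0, 2.0, 1.0) > flow(10.0, 2.0, 1.0))  # True: the flow decays as the membrane reseals
```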
III. DNA ELECTROTRANSFER IN VITRO

Gene expression is obtained after applying electric pulses to a cell-DNA mixture. No transfected cells were detected in the absence of electric field, in the absence of DNA, or when DNA was added after the pulses. Electrotransfection was only detected for electric field values leading to permeabilization. Transfection threshold values were the same as those for cell permeabilization when pulses lasting milliseconds were applied. Field strength is thus observed to play a critical role: the cell membrane must be permeabilized for the plasmid-membrane interaction to occur. Plasmids interact only with the permeabilized cell surface, where they accumulate under the field-associated electrophoretic drag, as shown by fluorescence microscopy. But no free plasmid diffusion into the cytoplasm is detected, although this was proposed in older works. No plasmid-membrane interaction occurs if the nucleic acids are added after the cells have been electropermeabilized, contrary to earlier proposals. Negatively charged DNA molecules migrate when submitted to an electric field, but electrophoretic DNA accumulation by itself is not enough to bring about transmembrane transfer and gene expression [19]. Pulse duration plays a critical role in the formation of the plasmid-cell complex: the complex between the plasmid and the cell surface is detected only when the pulse duration is at least 1 ms. This interpretation is further supported by the observation that the DNA content of the complex, determined from the local fluorescence emission, and its size are under the control of the field strength and the pulse duration [19]. This contribution of the pulse duration to the plasmid-membrane interaction has already been illustrated by a complex dependence of gene expression. The associated gene expression Expr is shown to obey the following equation:

Expr = K N T^2.3 (1 - Ep/E) f(ADN)   (5)

as long as the cell viability is not affected to a large extent by the electrical treatment [19]. All parameters are as described above, K being a constant. The dependence on the plasmid concentration (ADN) is rather complex, as high levels of plasmid appear to be toxic. A recent on-line videomicroscopy study showed that plasmid DNA was trapped in the electropermeabilized membrane, where it forms aggregates [19]. The practical conclusion is that in vitro an effective transfer is obtained by using long pulses, in order to drive the DNA towards the permeabilized area of the membrane, but a low field strength, in order to preserve the cell viability.
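The scaling in Eq. (5) can be illustrated with a toy calculation (K and f(ADN) are unknown prefactors, so only ratios between conditions are meaningful here):

```python
def expression(n: int, t_ms: float, e: float, ep: float,
               k: float = 1.0, f_adn: float = 1.0) -> float:
    """Eq. (5): Expr = K * N * T^2.3 * (1 - Ep/E) * f(ADN), zero below threshold."""
    if e <= ep:
        return 0.0
    return k * n * t_ms ** 2.3 * (1 - ep / e) * f_adn

# Doubling the pulse duration raises expression by a factor of 2^2.3 ~ 4.9,
# which is why long pulses at moderate fields are favoured for DNA transfer.
ratio = expression(8, 10.0, 2.0, 1.0) / expression(8, 5.0, 2.0, 1.0)
print(round(ratio, 2))  # 4.92
```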
IV. ELECTROPULSATION IN TISSUE

The behaviour of cells in a tissue is rather similar to that described for cells in suspension [20], but a key parameter is the local field in the tissue [21]. The field results from the voltage applied between the two electrodes and depends on the design of the electrodes [22]. A tissue is an
assembly of cells, which can be experimentally approximated by a cell pellet [23]. These results showed that the description obtained from observations on cells in suspension is not strictly obeyed by cells in tissue. A more theoretical investigation showed that the induced potential change is strongly dependent on the cell density [24]: stronger external fields are needed to reach an effective potential at the cell level and trigger permeabilization. The organization of the tissue is altered by the permeabilization process, giving a change in the field distribution and in its associated effects [25]. One main problem is that it is very difficult to obtain a direct on-line follow-up of the effect of the field on the cells in a tissue; only indirect approaches are available. The conclusion is nevertheless that strong control is obtained through the design of the pulse generators and of the electrodes.

V. PULSE GENERATORS

As the field is linked to the ratio between the voltage delivered by the pulse generator and the distance between the electrodes, sharp control of the voltage pulse is needed. This was recently critically reviewed [26]. Constant voltages (square pulses) must be delivered with electronic control of the pulse duration, although it has been suggested that the pulse shape is not very important [27]. It has also been suggested that, as the conductance of the tissue is strongly affected by the permeabilization process, a constant-current generator may be more suitable [28]. Again, this is not obvious, owing to the complexity of the field and current distribution in an organization as complex as a tissue.

VI. ELECTRODE TYPES

A key parameter is the local electric field strength. As the field results from a voltage applied between two electrodes, the electrode configuration clearly controls the field distribution and, as such, the effectiveness of transfection [22].
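How strongly the electrode configuration shapes the field distribution can be illustrated with a small finite-difference sketch (Jacobi relaxation of Laplace's equation); the homogeneous-tissue assumption, the two-needle geometry and all grid parameters are ours, for illustration only:

```python
def solve_potential(n: int = 31, v: float = 1000.0, iters: int = 400):
    """Jacobi relaxation of Laplace's equation on an n x n grid with a
    grounded boundary and two fixed-potential 'needle' electrodes."""
    phi = [[0.0] * n for _ in range(n)]
    anode = (n // 2, n // 3)        # held at +v/2
    cathode = (n // 2, 2 * n // 3)  # held at -v/2
    for _ in range(iters):
        new = [row[:] for row in phi]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                    + phi[i][j - 1] + phi[i][j + 1])
        new[anode[0]][anode[1]] = v / 2
        new[cathode[0]][cathode[1]] = -v / 2
        phi = new
    return phi

phi = solve_potential()
# The potential (and hence the field, its gradient) is concentrated around the
# needles: a step of a few grid cells near an electrode changes phi far more
# than the same step taken far from both electrodes.
```

This toy model already shows the non-uniformity that makes electrode choice and numerical field modeling important in practice.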
Electrode configurations for therapeutic purposes include parallel plate, wire and contact plate electrodes as well as needle electrodes and needle arrays [29, 30]. The electrode configuration affects the electric field distribution in tissue. But, due to its anatomy and its electrical properties, tissue reacts to the applied external electric field, making the choice of the optimal electrode configuration and pulse parameters for a particular target tissue difficult. Empirical design methods are frequently developed [31-33]. A safer approach is to compute the electric field distribution in tissue in advance by means of modeling [34]. This is demanding, owing to the heterogeneous material properties of tissue and to its shape. In most cases numerical modeling techniques were used, mostly finite element and finite difference methods. Simulations and their experimental validations were obtained either with parallel plate electrodes or with needle arrays, which are the most popular set-ups.

VII. TISSUE PRETREATMENT

A tissue is more complex than a cell pellet. The extracellular matrix contains different components, including collagen, proteoglycans and glycosaminoglycans, which all together form a structured gel [35]. To improve the distribution of the plasmid in the muscle prior to electropulsation, and its electrophoretically driven drift to the target cells, pre-treatment with hyaluronidase (an enzyme that breaks down hyaluronan, a component of the extracellular matrix) appeared to be a positive approach [36-38]. This kind of treatment needs to be analyzed in more detail. A recent investigation showed that the methodology used to inject the nucleic acid into the target tissue strongly controls the level of expression; a fast injection is more efficient, maybe because it adds a hydrodynamic stress to the cells [39]. Another problem is that the electroinjected plasmid is the target of the degrading effect of DNases that are present in the tissue. There is therefore an urgent need for protecting agents such as polyglutamate [40], and the design of more efficient protective agents must be considered.

VIII. CONCLUSIONS

Electropulsation is a very promising approach for the targeted, safe delivery of drugs (clinical trials are under development) and nucleic acids (siRNA as well as plasmids). Its optimization is still to come, as many basic as well as engineering problems remain to be solved, as we have tried to outline in this short review.
ACKNOWLEDGMENT
This work was supported by a Slovenian-CNRS PICS program, by grants from the Région Midi-Pyrénées and by the Association française de lutte contre les Myopathies (AFM). The authors wish to thank their colleagues and coworkers Prof. Neumann (Bielefeld), Profs. Miklavcic and Sersa (Ljubljana), Dr Mir (Villejuif), and Drs Rols and Golzio (Toulouse) for many discussions.
REFERENCES

1. Gehl J (2003) Electroporation: theory and methods, perspectives for drug delivery, gene therapy and research. Acta Physiol Scand 177:437-47
2. Gothelf A, Mir LM, Gehl J (2003) Electrochemotherapy: results of cancer treatment using enhanced delivery of bleomycin by electroporation. Cancer Treat Rev 29:371-87
3. Orlowski S, Mir LM (1993) Cell electropermeabilization: a new tool for biochemical and pharmacological studies. Biochim Biophys Acta 1154:51-63
4. Mir LM, Orlowski S (1999) Mechanisms of electrochemotherapy. Adv Drug Deliver Rev 35:107-118
5. Mir LM et al (1998) Effective treatment of cutaneous and subcutaneous malignant tumours by electrochemotherapy. Br J Cancer 77:2336-42
6. Neumann E, Schaefer-Ridder M, Wang Y et al (1982) Gene transfer into mouse lyoma cells by electroporation in high electric fields. EMBO J 1:841-5
7. Rols MP et al (1998) In vivo electrically mediated protein and gene transfer in murine melanoma. Nat Biotechnol 16:168-71
8. Aihara H, Miyazaki J-I (1998) Gene transfer into muscle by electroporation in vivo. Nat Biotechnol 16:867-870
9. Mir LM, Moller PH, Andre F, Gehl J (2005) Electric pulse-mediated gene delivery to various animal tissues. Adv Genet 54:83-114
10. McMahon JM, Wells DJ (2004) Electroporation for gene transfer to skeletal muscles: current status. BioDrugs 18:155-65
11. Wells DJ (2004) Gene therapy progress and prospects: electroporation and other physical methods. Gene Ther 11:1363-9
12. Prud'homme GJ, Glinka Y, Khan AS et al (2006) Electroporation-enhanced nonviral gene transfer for the prevention or treatment of immunological, endocrine and neoplastic diseases. Curr Gene Ther 6:243-73
13. Cemazar M, Golzio M, Sersa G et al (2006) Electrically-assisted nucleic acids delivery to tissues in vivo: where do we stand? Curr Pharm Des 12:3817-25
14. Teissié J, Eynard N, Gabriel B et al (1999) Electropermeabilization of cell membranes. Adv Drug Deliver Rev 35:3-19
15. Hibino M et al (1991) Membrane conductance of an electroporated cell analyzed by submicrosecond imaging of transmembrane potential. Biophys J 59:209-20
16. Valic B, Golzio M, Pavlin M et al (2003) Effect of electric field induced transmembrane potential on spheroidal cells: theory and experiments. Eur Biophys J 32:519-528
17. Pucihar G, Kotnik T, Valic B, Miklavcic D (2006) Numerical determination of transmembrane voltage induced on irregularly shaped cells. Ann Biomed Eng 34:642-52
18. Rols MP, Teissie J (1990) Electropermeabilization of mammalian cells: quantitative analysis of the phenomenon. Biophys J 58:1089-1098
19. Golzio M, Teissie J, Rols MP (2002) Direct visualization at the single-cell level of electrically mediated gene delivery. Proc Natl Acad Sci U S A 99:1292-7
20. Canatella PJ, Black MM, Bonnichsen DM et al (2004) Tissue electroporation: quantification and analysis of heterogeneous transport in multicellular environments. Biophys J 86:3260-3268
21. Miklavcic D, Beravs K, Semrov D et al (1998) The importance of electric field distribution for effective in vivo electroporation of tissues. Biophys J 74:2152-8
22. Gehl J, Sorensen TH, Nielsen K et al (1999) In vivo electroporation of skeletal muscle: threshold, efficacy and relation to electric field distribution. Biochim Biophys Acta 1428:233-40
23. Schmeer M, Seipp T, Pliquett U et al (2004) Mechanism for the conductivity changes caused by membrane electroporation of CHO cell-pellets. PCCP 6:5564-5574
24. Susil R, Semrov D, Miklavcic D (1998) Electric field induced transmembrane potential depends on cell density and organization. Electro Magnetobiol 17:391-399
25. Pavlin M, Kanduser M, Rebersek M et al (2005) Effect of cell electroporation on the conductivity of a cell suspension. Biophys J 88:4378-4390
26. Puc M, Corovic S, Flisar K et al (2004) Techniques of signal generation required for electropermeabilization. Survey of electropermeabilization devices. Bioelectrochemistry 64:113-24
27. Pliquett U, Elez R, Piiper A et al (2004) Electroporation of subcutaneous mouse tumors by rectangular and trapezium high voltage pulses. Bioelectrochemistry 62:83-93
28. Khan AS, Pope MA, Draghia-Akli R (2005) Highly efficient constant-current electroporation increases in vivo plasmid expression. DNA Cell Biol 24:810-8
29. Heller LC, Jaroszeski MJ, Coppola D et al (2007) Optimization of cutaneous electrically mediated plasmid DNA delivery using novel electrode. Gene Ther 14:275-80
30. Soden D, Larkin J, Collins C et al (2004) The development of novel flexible electrode arrays for the electrochemotherapy of solid tumour tissue (potential for endoscopic treatment of inaccessible cancers). Conf Proc IEEE Eng Med Biol Soc 5:3547-3550
31. Spugnini EP, Citro G, Porrello A (2005) Rational design of new electrodes for electrochemotherapy. J Exp Clin Cancer Res 24:245-254
32. Liu F, Huang L (2002) A syringe electrode device for simultaneous injection of DNA and electrotransfer. Mol Ther 5:323-8
33. Tjelle TE, Salte R, Mathiesen I, Kjeken R (2006) A novel electroporation device for gene delivery in large animals and humans. Vaccine 24:4667-70
34. Sel D, Mazeres S, Teissie J, Miklavcic D (2003) Finite-element modeling of needle electrodes in tissue from the perspective of frequent model computation. IEEE Trans Biomed Eng 50:1221-32
35. Zaharoff DA, Barr RC, Li CY, Yuan F (2002) Electromobility of plasmid DNA in tumor tissues during electric field-mediated gene delivery. Gene Ther 9:1286-90
36. McMahon JM, Signori E, Wells KE et al (2001) Optimisation of electrotransfer of plasmid into skeletal muscle by pretreatment with hyaluronidase - increased expression with reduced muscle damage. Gene Ther 8:1264-70
37. Mennuni C, Calvaruso F, Zampaglione I et al (2002) Hyaluronidase increases electrogene transfer efficiency in skeletal muscle. Hum Gene Ther 13:355-65
38. Schertzer JD, Plant DR, Lynch GS (2006) Optimizing plasmid-based gene transfer for investigating skeletal muscle structure and function. Mol Ther 13:795-803
39. Andre FM, Cournil-Henrionnet C, Vernerey D et al (2006) Variability of naked DNA expression after direct local injection: the influence of the injection speed. Gene Ther 13:1619-27
40. Nicol F, Wong M, MacLaughlin FC et al (2002) Poly-L-glutamate, an anionic polymer, enhances transgene expression for plasmids delivered by intramuscular injection with in vivo electroporation. Gene Ther 9:1351-8

Author: Justin Teissié
Institute: IPBS CNRS UMR UPS 5089
Street: 205 Route de Narbonne
City: 31077 Toulouse
Country: France
Email: [email protected]
Equine Cutaneous Tumors Treatment by Electro-chemo-immuno-geno-therapy Y. Tamzali1, B. Couderc2, M.P. Rols3, M. Golzio3 and J. Teissie3 1
Equine Internal Medicine, Ecole Nationale Veterinaire, Toulouse, France 2 Institute Claudius Regaud, Toulouse, France 3 IPBS CNRS, Toulouse cedex, France
Kishida showed that in mice bearing B16 tumors, the simultaneous administration of bleomycin and of a cDNA coding for interleukin-12 (IL-12) by electropermeabilization (electro-chemo-immuno-geno-therapy: ECIGT) induces an immune response resulting in a decrease of tumor volume as well as a decrease of metastatic lesions. Our work represents the first application in equine clinical practice of this approach coupling electrochemotherapy (ECT) and electro-immuno-geno-therapy (EIGT): the co-electrotransfer of cisplatin and of a plasmid coding for IL-12 into equine cutaneous tumors. The equine sarcoid represents a spontaneous tumor model critical in equine oncology, and in many cases a real therapeutic challenge. Electric pulses (1300 V, 0.1 ms) are applied on anesthetized animals for ECT and modified (500 V, 5 ms) for EIGT; 8 pulses are delivered. The horses selected for the protocol had several sarcoids and carried a poor prognosis even with an ECT treatment. They received a dose of cisplatin depending on the tumor volume to be treated, then 300 µg of plasmid CMV IL-12. The two injections were followed successively by an ECT and an EIGT treatment. Horses were monitored for 24 hours in the clinic and evaluated post-treatment for the anti-tumor immune response (level of IFN-γ mRNA after extraction of lymphocytes, and immunohistochemistry on biopsies) and for the aspect of the tumor lesions. Six horses have been treated with ECIGT to date. This represents 24 sessions at two-week intervals under general anesthesia. The results are encouraging on the basis of the clinical and immune responses (RT-PCR assay of the mRNA coding for gamma interferon showed a 20- to 40-fold increase 48 hours after the second ECIGT treatment compared to the basal level). Immunohistochemistry showed that the anti-tumor effect induces a recruitment of CD4 and CD8 lymphocyte sub-populations at the tumor level.

Author: Youssef Tamzali
Institute: Ecole Veterinaire de Toulouse
Street: 23 chemin des Capelles
City: 31076 Toulouse
Country: France
Email: [email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 630, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
In vivo imaging of siRNA electrotransfer and silencing in different organs A. Paganin-Gioanni1, J.M. Escoffre1, L. Mazzolini2, M.P. Rols1, J. Teissié1and M. Golzio1 1
IPBS-CNRS, UMR5089 , Toulouse, France 2 ISTMT, CNRS, Toulouse, France
Abstract— RNA interference (RNAi)-mediated gene silencing approaches appear very promising for therapies based on the targeted inhibition of disease-relevant genes. The major hurdle to the therapeutic development of RNAi strategies remains, however, the efficient delivery of the RNAi-inducing molecules, the short interfering RNAs (siRNAs) and short hairpin RNAs (shRNAs), to the target tissue. In this study we have investigated the contribution of electrically-mediated delivery of siRNA and/or shRNA into muscles or tumors stably expressing a green fluorescent protein (EGFP) target reporter gene. The silencing of EGFP gene expression was quantified over time by fluorescence imaging in the living animal. Our study indicates that electric fields can be used as an efficient method for RNAi delivery and associated gene silencing in cells of muscle and solid tumors in vivo.

Keywords— Electroporation, RNA interference, imaging, siRNA, shRNA.
I. INTRODUCTION

Since its discovery [1], RNA interference has been described and extensively characterized in a number of organisms [2-4]. The identification of the short interfering RNAs (siRNAs) involved in this process, and their use for sequence-specific gene silencing, has offered a new approach for molecular therapeutics by taking advantage of the progress in genomics [5, 6]. This development requires, however, new safe and efficient in vivo siRNA delivery methods [7]. siRNAs appear as very promising new therapeutic agents, but besides the problem of delivery, an unanswered question is how long their effect lasts after a single-dose delivery [8]. Most recently published in vivo results were obtained by "hydrodynamic transfection". This stringent approach (injection within a few seconds, in the tail vein, of a volume one tenth the mass of the animal) appears to bring siRNA (and DNA) delivery mainly targeted to the liver [9-13]. Other methods were described where a systemic or a localized (portal vein injection) delivery was obtained by adding different chemical compounds to the siRNA solution [14-17]. siRNA gene silencing could be obtained in vivo on reporter as well as endogenous genes. Delivery remains a critical issue for the development of siRNA as an effective therapeutic [7]. More recently, fluorescently labelled siRNAs were injected IV or IP in mice, complexed with cationic lipid liposomes [16]. Rather high amounts (100 μg) were injected in this experiment. Clearly, a delivery method suitable for targeting a broad range of tissues is still needed. The demonstration in 1998 of drug and plasmid electrotransfer and gene expression in tumours [18, 19] led to the proposal that in vivo electropulsation was a promising tool for exogenous agent delivery [20]. Furthermore, it was observed that a very efficient in vivo electroloading of large molecules other than plasmids was obtained for proteins [19], dextran [21], and antisense oligonucleotides [22]. Electrically mediated gene transfer had been shown to be effective on many tissues: liver [23], skin [24], muscle [21, 25] and heart [26]. Delivery is targeted to the volume where the field pulse is applied, i.e. it is under the control of the electrode localization. This technology allows delivery to almost all tissues, after minor surgery when needed. Impressive results were described in the case of muscles, where treatment with non-invasive contact electrodes brought long-lasting expression of therapeutic genes [27]. Recent developments in optical imaging provide continuous monitoring of gene delivery and expression in living animals [28]. Indeed, reporter gene activity can be accurately followed on the same animal as a function of time, with no adverse effects either on the reporter gene product or on the animal itself. This increases the statistical relevance of a study while decreasing the number of animals required. Exogenous gene expression of fluorescent reporter proteins such as GFP can be detected by the associated emission using a highly sensitive CCD camera.

II. MATERIALS AND METHODS

A. siRNAs

All siRNAs were purchased from Qiagen Xeragon (Germantown, MD, USA). The egfp22 siRNA (sense: 5' r(GCA AGC UGA CCC UGA AGU UCA U), antisense: 5' r(GAA CUU CAG GGU CAG CUU GCC G)) is directed against GFP mRNA.
The P76 siRNA (sense: 5’r (GCG GAG UGG CCU GCA GGU A)dTT, antisense: 5’ r(UAC CUG CAG GCC ACU CCG C)dTT) is directed against an unrelated human mRNA and shows no significant homology to mouse transcripts. It was used as a control for specificity of
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 624–627, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
the siRNA construct. shRNAs were provided by Cayla InvivoGen.

B. In vivo Electropulsation

Saline solution (i.e. PBS containing 40 units of the RNase inhibitor RNasin (Promega, Madison, WI)) and either egfp22 or p76 siRNA were slowly injected (over about 15 s) with a Hamilton syringe through a 26G needle (Hamilton, Bonaduz, Switzerland) into the muscle or the tumor, under anesthesia (with or without plasmid pEGFPC1). In control conditions, the volume of siRNA was replaced by the siRNA suspension buffer to keep the injection conditions similar. 30 s after injection, electric pulses were applied. Parallel plate electrodes (length 1 cm, width 0.6 mm, interelectrode distance 6 mm) (IGEA, Carpi, Italy) were fitted around the muscle or the tumor. The skin was previously shaved with a cream (Veet). A good electric contact was obtained between the skin and the electrodes using a conducting paste (Eko-gel, Egna, Italy). The tissue was electropulsed with selected parameters using the PS 10 CNRS Electropulsator (Jouan, St Herblain, France). Square-wave pulses were delivered. Voltage, pulse duration and frequency of pulses were all pre-set on the Electropulsator depending on the tissue (see legends of the figures). All parameters were monitored on line with an oscilloscope (Metrix, Annecy, France).
C. Non-invasive stereomicroscopy fluorescence imaging in live animals

GFP expression in the tissue cells was detected directly through the skin on the anaesthetized animal by digitized fluorescence stereomicroscopy. This procedure allowed the observation of GFP expression on the same animal over several days. High-magnification images were obtained with an epifluorescence stereomicroscope at 0.8x magnification (Leica MZFL III, Germany) and a cooled CCD camera (Roper Coolsnap fx). A 15-mm2 part of the animal was observed, which was suitable to observe the treated tissue. The camera was driven by the MetaVue 2.6 software (Universal, USA) from a Dell computer. The exposure time was set at 1 s with no binning. The fluorescence excitation was obtained with a mercury arc lamp (HBO, Osram, Germany) and a GFP filter (Leica) for emission. The GFP fluorescence was quantitatively evaluated at different days. A transmission picture and a picture with the GFP filter were taken of each tissue. From the transmission picture, the tissue was located and manually gated to give the region of interest (ROI). On the picture taken with the GFP filter, the mean fluorescence in the gated area was quantitatively estimated. Another ROI was defined next to the non-treated part of the tissue to determine the mean fluorescence background on the skin and normal tissue. Numerical values obtained by the MetaVue analysis software were transferred to Microsoft Excel and used for data analysis. To quantify the knockdown induced by the delivery of the siRNAs, the fluorescence intensity of the tissue on day 0 (i.e. before the treatment) was set as the 100% value for each animal, and the "relative fluorescence on day X" values were expressed in reference to this initial value. This eliminated the observed variations in the initial GFP fluorescence of the tissue between the different animals.

III. RESULTS

A. Electrotransfer of siRNA in muscles

Fig. 2: RNA interference in 9-week-old C57Bl/6 mouse leg muscle. Representative images of the GFP fluorescence from the mouse leg (each image is 1 cm wide). A- GFP expression resulting from electrotransfer of the plasmid alone, as observed on days 7 and 23 in the same leg. B- GFP expression silencing as observed on days 7 and 23 in the same leg when the plasmid was co-transferred with the specific egfp22 siRNA. C- GFP expression remained unaffected when an unrelated siRNA (p76) was co-transferred with the plasmid. D- Changes in the mean fluorescence emission with time. Sample standard deviations are shown. (n=4)
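The per-animal normalization described in the Methods (background-corrected ROI mean, with day 0 set to 100%) can be sketched as follows; the function name and the numerical values are illustrative, not taken from the paper:

```python
def relative_fluorescence(roi_means, background_means):
    """Background-correct the per-day ROI mean intensities for one animal
    (day 0 first), then express each day relative to day 0 (set to 100%)."""
    signal = [roi - bg for roi, bg in zip(roi_means, background_means)]
    day0 = signal[0]
    return [100.0 * s / day0 for s in signal]

# Illustrative values for one animal on days 0, 7 and 23
rel = relative_fluorescence([850.0, 400.0, 300.0], [50.0, 50.0, 50.0])
# rel[0] is 100.0 by construction; later values quantify the knockdown
```

Normalizing each animal to its own day-0 signal, as the authors do, removes the animal-to-animal spread in initial GFP fluorescence before group means are compared.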
Plasmid electrotransfer and expression in muscles are known to be very efficient. Up to 70-80% of the fibers can be transfected after injection of 25 μg of a GFP-coding plasmid and by using adequate electrical conditions. Emission of the green protein was high 7 days after the electrical treatment. When the specific siRNA was electrotransferred (DNA + AntiGFP), a significant decrease of the GFP expression was observed. Our fluorescence analyses also led us to conclude that the inhibition of gene expression lasts more than 11 days. siRNA delivery therefore occurs in almost all fibers.

B. Electrotransfer of shRNA in muscles

Inhibition of GFP expression was observed through the decrease of fluorescence emission in the muscle after co-transfection of a plasmid encoding GFP and the plasmid encoding shRNA. The use of shRNA leads to a stronger inhibition of the GFP expression (more than 50 days) than synthetic siRNA.

Fig. 3: RNA interference in 9-week-old C57Bl/6 mouse leg muscle. Representative images of the GFP fluorescence from the mouse leg (each image is 1 cm wide). A- GFP expression resulting from electrotransfer of the plasmid alone, as observed on day 13. GFP expression silencing was observed when the plasmid pEGFPC1 was co-transferred with the specific pCpG 62H1 EGFP_v1 shRNA (ratio 1/10, respectively). GFP expression remained unaffected when an unrelated pCpG 62H1 EGFP_v1 was co-transferred with the plasmid. B- Changes in the mean fluorescence emission with time. Sample standard deviations are shown. (n=4)

C. siRNA transfer in tumors

In vivo imaging was used to follow fluorescent tumors and fluorescence intensity as a function of time. GFP fluorescence emission was quantified by digital imaging. Tumor growth was not affected by the treatment (neither by the siRNA intra-tumoral injection nor by the electric pulses, i.e. EP). When the specific siRNA was electro-transferred (AntiGFP + EP), a significant decrease of the GFP expression was observed within 2 days following the treatment. As shown in the pictures of figures 1 and 2, the fluorescence associated with the tumor disappeared in the treated group (AntiGFP + EP), whereas the fluorescence remained the same in the different control groups.

Fig. 4: Time dependence of the mean fluorescence intensity of B16F10 GFP tumors, in control untreated tumors and in tumors treated with: electric field alone (PBS + EP), unrelated siRNA (p76 + EP), 12 µg of egfp22 siRNA alone (AntiGFP - EP), and electro-transferred egfp22 siRNA (AntiGFP + EP). On each animal, the mean fluorescence of the tumor was evaluated on a relative scale using the observation just before the treatment as a reference. (N = number of mice per group; for days 1 and 2: 7
IV. CONCLUSIONS

In this study, we defined an experimental system allowing us to visualize and quantify the down-regulation of an expressed EGFP reporter gene in muscles and in subcutaneous B16-F10 mouse melanoma tumors using non-invasive in vivo fluorescence imaging. This allowed us to follow in situ the kinetics of siRNA-mediated inhibition of gene expression and the topology of the effect, and to test the impact
of electrical treatment on the establishment of gene knockdown.
ACKNOWLEDGMENT This work was supported by grants from the CNRS (“Imagerie du petit animal” program), the EU cliniporator project, the Region Midi-Pyrenées, the AFM and the ARC (Association pour la Recherche sur le Cancer).
REFERENCES

1. Fire A et al (1998) Potent and specific genetic interference by double-stranded RNA in Caenorhabditis elegans. Nature 391:806-811
2. Denli AM, Hannon GJ (2003) RNAi: an ever-growing puzzle. Trends Biochem Sci 28:196-201
3. Hutvagner G, Zamore PD (2002) RNAi: nature abhors a double-strand. Curr Opin Genet Dev 12:225-232
4. Hannon GJ (2002) RNA interference. Nature 418:244-251
5. McManus MT, Sharp PA (2002) Gene silencing in mammals by small interfering RNAs. Nat Rev Genet 3:737-747
6. Sioud M (2004) Therapeutic siRNAs. Trends Pharmacol Sci 25:22-28
7. Zamore PD, Aronin N (2003) siRNAs knock down hepatitis. Nat Med 9:266-267
8. Herweijer H, Wolff JA (2003) Progress and prospects: naked DNA gene transfer and therapy. Gene Ther 10:453-458
9. Song E et al (2003) RNA interference targeting Fas protects mice from fulminant hepatitis. Nat Med 9:347-351
10. McCaffrey AP et al (2002) RNA interference in adult mice. Nature 418:38-39
11. McCaffrey AP et al (2003) Inhibition of hepatitis B virus in mice by RNA interference. Nat Biotechnol 21:639-644
12. Lewis DL et al (2002) Efficient delivery of siRNA for inhibition of gene expression in postnatal mice. Nat Genet 32:107-108
13. Zender L et al (2003) Caspase 8 small interfering RNA prevents acute liver failure in mice. Proc Natl Acad Sci U S A 100:7797-7802
14. Verma UN et al (2003) Small interfering RNAs directed against beta-catenin inhibit the in vitro and in vivo growth of colon cancer cells. Clin Cancer Res 9:1291-1300
15. Sorensen DR, Leirdal M, Sioud M (2003) Gene silencing by systemic delivery of synthetic siRNAs in adult mice. J Mol Biol 327:761-766
16. Sioud M, Sorensen DR (2003) Cationic liposome-mediated delivery of siRNAs in adult mice. Biochem Biophys Res Commun 312:1220-1225
17. Bertrand JR et al (2002) Comparison of antisense oligonucleotides and siRNAs in cell culture and in vivo. Biochem Biophys Res Commun 296:1000-1004
18. Mir LM et al (1998) Effective treatment of cutaneous and subcutaneous malignant tumours by electrochemotherapy. Br J Cancer 77:2336-2342
19. Rols MP et al (1998) In vivo electrically mediated protein and gene transfer in murine melanoma. Nat Biotechnol 16:168-171
20. Potts RO, Chizmadzhev YA (1998) Opening doors for exogenous agents. Nat Biotechnol 16:135
21. Mathiesen I (1999) Electropermeabilization of skeletal muscle enhances gene transfer in vivo. Gene Ther 6:508-514
22. Faria M et al (2001) Phosphoramidate oligonucleotides as potent antisense molecules in cells and in vivo. Nat Biotechnol 19:40-44
23. Heller R et al (1996) In vivo gene electroinjection and expression in rat liver. FEBS Lett 389:225-228
24. Glasspool-Malone J, Somiari S, Drabick JJ, Malone RW (2000) Efficient nonviral cutaneous transfection. Mol Ther 2:140-146
25. Aihara H, Miyazaki J (1998) Gene transfer into muscle by electroporation in vivo. Nat Biotechnol 16:867-870
26. Harrison RL, Byrne BJ, Tung L (1998) Electroporation-mediated gene transfer in cardiac tissue. FEBS Lett 435:1-5
27. Rizzuto G et al (1999) Efficient and regulated erythropoietin production by naked DNA injection and muscle electroporation. Proc Natl Acad Sci U S A 96:6417-6422
28. Yang M et al (2001) Whole-body and intravital optical imaging of angiogenesis in orthotopically implanted tumors. Proc Natl Acad Sci U S A 98:2616-2621
29. Takahashi Y et al (2005) Gene silencing in primary and metastatic tumors by small interfering RNA delivery in mice: quantitative analysis using melanoma cells expressing firefly and sea pansy luciferases. J Control Release 105:332-43
30. Akaneya Y, Jiang B, Tsumoto T (2005) RNAi-induced gene silencing by local electroporation in targeting brain region. J Neurophysiol 93:594-602
31. Golzio M et al (2005) Inhibition of gene expression in mice muscle by in vivo electrically mediated siRNA delivery. Gene Ther 12:246-51
32. Shoji M et al (2005) RNA interference during spermatogenesis in mice. Dev Biol 282:524-34
33. Tao J et al (2005) Inhibiting the growth of malignant melanoma by blocking the expression of vascular endothelial growth factor using an RNA interference approach. Br J Dermatol 153:715-24

Author: Muriel Golzio
Institute: IPBS - CNRS
Street: 205 route de Narbonne
City: Toulouse
Country: France
Email: [email protected]
Quantification of ion transport during cell electroporation – theoretical and experimental analysis of transient and stable pores during cell electroporation M. Pavlin1, and D. Miklavcic1 1
University of Ljubljana/Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— An increased permeability of the cell membrane during the application of high-voltage pulses results in increased transmembrane transport of molecules which otherwise cannot enter the cell. This process, known as electroporation or electropermeabilization, is used in many biomedical applications, including transfer of genes and electrochemotherapy of tumors. The induced transmembrane voltage presumably leads to the formation of structural changes (pores) in the cell membrane; however, the molecular mechanisms of pore formation and stabilization are not fully explained. In this study we analyze together the transient conductivity changes during the pulses and the increased membrane permeability for ions and molecules after the pulses. The conductivity of a cell suspension was measured during the application of electrical pulses. By quantifying ion diffusion, efflux coefficients k due to the "transport" pores are obtained. We present a simple model, which assumes a quadratic dependence of kN on E in the area where U > Uc, and which describes the experimental values very accurately. The results show that the fraction of the transport pores increases with higher electric field, due to the larger permeabilized area and due to the higher energy available for the formation of pores. kN also increases with the number of pulses, which suggests that each pulse contributes to the formation of more stable transport pores.

Keywords— cell electroporation, ion diffusion, pore stabilization, theory
I. INTRODUCTION

High-voltage electric pulses cause structural changes in the cell membrane, leading to increased membrane permeability for ions and molecules. This process, known as electroporation or electropermeabilization, is used in biomedical applications including electrochemotherapy and transfer of genes. It is believed that the induced transmembrane voltage causes structural changes in the cell membrane; however, the exact molecular mechanisms are not completely understood [1]. In this study we analyzed the transient conductivity changes during the electric pulses and the increased membrane permeability for ions and molecules after the pulses, in order to determine which parameters affect the stabilization of pores and to analyze the relation between transient pores and long-lived pores. To achieve this we measured the conductivity of a cell suspension during the application of electrical pulses as well as after pulse application, using several test pulses. In parallel, experiments of molecular uptake were performed.

II. METHODS

A. Electroporation and current-voltage measurements

The experimental setup consisted of a generator that delivered square pulses, an oscilloscope and a current probe. Two high-voltage generators were used: for the protocol with 8 × 100 μs pulses, a prototype developed at the University of Ljubljana, Faculty of Electrical Engineering; for the second protocol, the Cliniporator™ (IGEA s.r.l., Carpi, Modena, Italy) device, which allowed us to deliver two sets of pulses (high-voltage and low-voltage pulses) with a given delay in between. During the pulses the electric current was measured with a current probe (LeCroy AP015, New York, USA) and the applied voltage with a high-voltage probe (Tektronix P5100, Beaverton, USA). Both current and voltage were stored on the oscilloscope (LeCroy 9310 C Dual 400 MHz, New York, USA). In the first type of experiments we used a train of eight square pulses of 100 μs duration with 1 Hz repetition frequency (8 × 100 μs protocol). Parallel aluminum plate electrodes (Eppendorf cuvettes) with d = 2 mm distance between the electrodes were used. Pulse amplitudes were varied to produce an applied electric field E0 = U/d between 0.4 kV/cm and 1.8 kV/cm. In the second type of experiments, first eight high-voltage (HV) 100 μs pulses with 1 Hz repetition frequency were delivered, and after a given delay (0.1 ms - 4200 ms) one 1 ms low-voltage (LV) test pulse was delivered (HV+LV protocol). The applied voltages were UHV = 200 V and ULV = 40 V, which gave applied electric fields EHV = 1 kV/cm and ELV = 0.2 kV/cm.
For every set of parameters a reference measurement on medium with no cells was also performed.
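From the recorded current and voltage, the bulk conductivity of the suspension follows from the plate-parallel cuvette geometry as sigma = (I/U)·(d/A); a minimal sketch, where the 1 cm² electrode area and the 1 A measured current are illustrative assumptions (only U = 200 V and d = 2 mm appear in the text):

```python
def suspension_conductivity(current_a, voltage_v, distance_m, area_m2):
    """Bulk conductivity (S/m) of the sample between plate-parallel
    electrodes, sigma = (I/U) * (d/A), assuming a homogeneous field."""
    return (current_a / voltage_v) * (distance_m / area_m2)

# HV pulse of the second protocol: U = 200 V over d = 2 mm, with an
# assumed electrode area of 1 cm^2 and an assumed measured current of 1 A
sigma = suspension_conductivity(1.0, 200.0, 2e-3, 1e-4)

# The applied field of the paper, E0 = U/d: 200 V over 0.2 cm gives
# 1000 V/cm, i.e. the stated 1 kV/cm
field_v_per_cm = 200.0 / 0.2
```

Comparing such conductivity values pulse by pulse against the reference measurement on cell-free medium is what isolates the contribution of the permeabilized membranes.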
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 593–596, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
B. Cells and medium

The mouse melanoma cell line B16F1 was used in the experiments. Cells were grown in Eagle's minimum essential medium supplemented with 10% fetal bovine serum (Sigma-Aldrich Chemie GmbH, Deisenhofen, Germany) at 37°C in a humidified 5% CO2 atmosphere in the incubator (WTB Binder Labortechnik GmbH, Germany). For all experiments the cell suspension was prepared from confluent cultures with a 0.05% trypsin solution containing 0.02% EDTA (Sigma-Aldrich Chemie GmbH, Deisenhofen, Germany). Trypsin and growth medium were removed from the obtained cell suspension by centrifugation at 1000 rpm at 4°C (Sigma, Germany), and the resulting pellet was resuspended in medium and centrifuged again. For electroporation, a low-conductivity medium containing phosphate buffer (PB) and 250 mM sucrose was used; the medium contained no K+ ions, so that the diffusion of K+ ions out of the cells could be observed. In all experiments dense cell suspensions with cell volume fraction F = 0.3 (1×10^8 cells/ml) were used.
Fig. 1 A two-dimensional representation of the spherical cell exposed to the external electric field. The bright shaded part represents the area exposed to above-threshold transmembrane voltage |Um| > Uc, i.e. the permeabilized region, which depends on the applied electric field: Sc = S0 (1 − Ec/E).

III. RESULTS

A. Diffusion through the permeabilized membrane

In general, the increase in membrane permeability can be described as the fraction of the permeable surface of the cell membrane, which can also be defined as the fraction of all "transport" pores:

fper = Str / Stot = Spor / S0 ,    (1)

where Spor represents the area of the pores of one cell, S0 the total area of one cell, Stot the total area of N cells, and fper the fraction of pores which are large enough to contribute to increased permeability; here the term permeability denotes increased diffusion of ions and molecules through the cell membrane. Different studies showed that diffusion of ions and molecules occurs only through the permeabilized area Sc = S0 (1 − Ec/E), i.e. through the area which is exposed to an above-critical voltage [1,2]. We can therefore derive a diffusion equation which describes the flux of a given molecule due to the concentration gradient across the permeable membrane:

dne(t)/dt = − [ce(t) − ci(t)] / d · D fpc(E, tE, N) (1 − Ec/E) S0 ,    (2)

where fper = fpc (1 − Ec/E), and fpc = Spor/Sc represents the fraction of pores in the permeabilized region. The above equation is a good approximation of ion and molecular diffusion, since diffusion is a relatively slow process which occurs mainly after the pulse application. If we assume that the volume fraction F and the surface area of the pores Str are approximately constant, the solution of Eq. 2 is an exponential increase to the maximum cemax = F ci0 (ci0 is the initial internal concentration):

ce(t) = cemax [1 − exp(−t/τ)] ,    (3)

with the time constant τ and permeability coefficient k depending on the fraction of transport pores fper:

τ = (1/fper) · d R F (1 − F) / (3D) ,    k = 1/τ .    (4)
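The uptake model of Eqs. 3–4, and its inversion (the form used later in Eq. 8), can be illustrated with a short numeric sketch. All parameter values below (pore fraction, membrane thickness, cell radius, diffusion constant, internal concentration) are illustrative assumptions, not measured values from this work:

```python
import math

def tau_and_k(f_per, d, R, F, D):
    """Eq. 4: tau = d*R*F*(1 - F) / (3*D*f_per), k = 1/tau."""
    tau = d * R * F * (1.0 - F) / (3.0 * D * f_per)
    return tau, 1.0 / tau

def f_per_from_k(k, d, R, F, D):
    """Inverse of Eq. 4 (the relation exploited in Eq. 8, here
    without the in-pore correction D' = D*exp(-0.43*w0))."""
    return k * d * R * F * (1.0 - F) / (3.0 * D)

def c_external(t, ce_max, tau):
    """Eq. 3: exponential rise of the external concentration."""
    return ce_max * (1.0 - math.exp(-t / tau))

# Illustrative values: membrane thickness d = 5 nm, cell radius
# R = 8.5 um, volume fraction F = 0.3 (as in the paper), K+ bulk
# diffusion constant D ~ 2e-9 m^2/s, assumed pore fraction 1e-5.
tau, k = tau_and_k(1e-5, 5e-9, 8.5e-6, 0.3, 2e-9)
ce_max = 0.3 * 142.0   # F * c_i0, with an assumed c_i0 = 142 mM
```

Round-tripping k through `f_per_from_k` recovers the assumed pore fraction, which is exactly how Eq. 8 is applied to the measured kN.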
By measuring current and voltage during the train of successive pulses we obtain the change of the initial level of the conductivity, i.e. the conductivity increase due to the ion efflux. Thus we can calculate the relative change in the conductivity due to the ion efflux:

Δσe(t)/σ0 = A [1 − exp(−kt)] ,    A = uK+ ZK+ F cemax / σ0 .    (5)
If we consider a general case where the permeability coefficient depends on the number of pulses applied, kN = k(N-th pulse), the time-dependent Δσ is a sum of the terms between the pulses:

Δσ(t)/σ0 = ΣN AN [1 − exp(−kN t)] .    (6)
From this it follows that the permeability coefficient after the N-th pulse can be determined from the measured conductivity at the N-th pulse (ΔσN) and at the (N+1)-th pulse (ΔσN+1) [3]:

kN = (1/ΔtN) ln[ (1 − ΔσN/Δσmax) / (1 − ΔσN+1/Δσmax) ] ,    (7)
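As a consistency check, Eq. 7 should recover the rate constant from two consecutive samples of the saturating exponential in Eqs. 5–6. A short Python sketch with purely illustrative numbers (k, Δσmax and Δt are assumptions, not the measured values):

```python
import math

def delta_sigma(t, d_max, k):
    """Single-term model of Eq. 5: d_max * (1 - exp(-k t))."""
    return d_max * (1.0 - math.exp(-k * t))

def k_from_samples(ds_N, ds_N1, ds_max, dt_N):
    """Eq. 7: recover k from the levels at the N-th and (N+1)-th pulse."""
    return (1.0 / dt_N) * math.log(
        (1.0 - ds_N / ds_max) / (1.0 - ds_N1 / ds_max))

# Simulate conductivity levels at 1 Hz pulsing (Delta t = 1 s)
# with an assumed k = 0.25 1/s and saturation 0.4 (arbitrary units).
k_true, ds_max, dt = 0.25, 0.4, 1.0
ds = [delta_sigma(n * dt, ds_max, k_true) for n in range(9)]
k_est = k_from_samples(ds[3], ds[4], ds_max, dt)
```

Because 1 − ΔσN/Δσmax = exp(−k tN), the log of the ratio of consecutive samples divided by Δt returns k exactly, which is what makes Eq. 7 usable on measured levels.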
Quantification of ion transport during cell electroporation – theoretical and experimental analysis of transient and stable pores
where ΔtN is the time difference between the N-th and (N+1)-th pulse, and Δσmax is the maximum value of the conductivity, i.e. the saturation point when the concentrations inside and outside the cell are equal. From the permeability coefficient kN the fraction of pores can be estimated using Eq. 4:

fperN ≈ kN · d R F (1 − F) / (3D′) ,    D′ = D exp(−0.43 w0) ,    (8)
where we take into account that the effective diffusion constant D′ of K+ ions inside the pore differs from that in the bulk [4,5].

B. Electric field dependent permeability – a theoretical model

In the previous sections we obtained the equation which determines how the electric field governs the area of the cell membrane that is exposed to the above-critical transmembrane voltage Uc and has increased permeability: Sc(E) = S0 (1 − Ec/E). Furthermore, we can assume that pore formation in the area where U > Uc is governed by the free energy of the pore, where the electrostatic term also includes the square of the electric field, ΔWe = aE² [1,5]. Based on this, the most simplified equation describing the field-dependent permeability can be written as [3]:

kN(E) = CN (1 − Ec/E) E² ,    (9)

where the CN are constants that depend on the size of the pores and their growth, and thus also on the number of pulses. The above equation takes into account the increase of the area of the cell exposed to the above-critical voltage and the quadratic field dependence in the permeabilized region.

Fig. 2 Transient conductivity changes during the N-th pulse of the train of 8×100 μs pulses. ΔσtranN is normalized to the initial conductivity. Solid line – cells in medium; dotted line – reference measurement on medium without cells during the first pulse.

C. Experimental results

From current-voltage measurements the change in the conductivity during the N-th pulse was obtained as ΔσtranN = σN − σN0; σ0 denotes the initial conductivity at the start of the first pulse. Results are presented in terms of the local electric field E rather than the applied electric field E0 = U/d, since at the high cell density used the local field experienced by each cell is smaller than the applied field due to the interaction between the cells. The ratio E/E0 was taken from our previous study for volume fraction F = 0.3 [6]. Fig. 2 shows the transient conductivity changes ΔσtranN/σ0 during the N-th pulse for E0 = 0.4–1.8 kV/cm. An increase in the transient conductivity changes is observed above 0.5 kV/cm, in agreement with the threshold for permeabilization of B16 cells obtained for molecular uptake. Interestingly, the number of pulses does not influence the transient conductivity [3,4].

Fig. 3 Relative conductivity changes between the pulses, Δσ/σ0 = (σN0 − σ0)/σ0, where σN0 is the initial level at the start of the N-th pulse. 8×100 μs pulses were used with repetition frequency 1 Hz.

In Fig. 3 the relative changes of the initial level of conductivity at the start of the N-th pulse, Δσ/σ0 = (σN0 − σ0)/σ0, are shown for consecutive pulses. The results are corrected for colloid osmotic swelling [4]. Similarly as in Fig. 2, the initial level starts to increase for E > 0.5 kV/cm, which can be explained by the efflux of ions (mostly K+) from the cytoplasm through membrane pores. For higher electric fields the ion efflux increases up to 1.6 kV/cm, depending also on the number of applied pulses. We further used the conductivity changes between the pulses to obtain the permeability coefficients kN(E). Using Eq. 7 the permeability coefficient was calculated for different field strengths and numbers of pulses, as shown in Fig. 4. We compare the measured values (symbols) with the
Fig. 4 The permeability coefficients kN [s−1] (k1–k7) after the N-th pulse as a function of E [kV/cm], obtained from the conductivity changes (see Fig. 3) using Eq. 7, compared to the prediction of the model according to Eq. 9 (lines).
permeabilization for several pulses. The relaxation time of the transient changes is a few ms, whereas the resealing (long-lived pores) lasts for minutes after the pulses. By analyzing the diffusion of ions, the permeability coefficient and the fraction of long-lived pores were obtained. A simple model where kN = CN (1 − Ec/E) E² can describe the field dependence, suggesting that long-lived pores are formed in proportion to the square of the electric field in the area where Um > Uc. Therefore the fraction of long-lived pores increases with higher electric field, due both to the larger permeabilized area and to the higher energy available for pore formation. It also increases with the number of pulses, which suggests that each pulse contributes to the formation of more and/or larger stable transport pores, whereas the number of transient pores does not depend on the number of pulses.
ACKNOWLEDGMENT This research was supported by the Ministry of Higher Education, Science and Technology of the Republic of Slovenia under the grants J2-9770-1538 and P2-0249.
REFERENCES
Fig. 5 Conductivity change between the HV and LV pulse, Δσ = σHVmax − σLV0, for two experiments (circles and squares); t is the delay between HV and LV.
prediction of the theoretical model (lines) calculated using Eq. 9. The kN's, and with them the fraction of "transport" pores, increase approximately linearly with N and increase also with the electric field strength; the theoretical model is in good agreement with the experiments. In Fig. 5 the conductivity change between the end of the HV pulse and the beginning of the test LV pulse, Δσ = σHVmax − σLV0, is shown for different delays t between the two pulses. We can see that 10 μs after the pulse the conductivity has already dropped by 0.03 S/m, and the drop further increases to 0.053 S/m after 100 ms. Then the effect is reversed due to ion efflux, so that a few seconds after the HV pulse the conductivity is increased.
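The model curves in Fig. 4 follow Eq. 9 and are straightforward to evaluate. A hedged sketch: the constant CN below is chosen arbitrarily for illustration, with Ec = 0.5 kV/cm consistent with the permeabilization threshold reported in the text:

```python
def k_model(E, C_N, E_c=0.5):
    """Eq. 9: k_N(E) = C_N * (1 - E_c/E) * E^2 for E > E_c, else 0.
    E and E_c in kV/cm; C_N is a fitted constant (illustrative here)."""
    if E <= E_c:
        return 0.0
    return C_N * (1.0 - E_c / E) * E * E

# Illustrative constant C_N = 0.2; sample the curve over 0-1.8 kV/cm.
curve = [(E / 10.0, k_model(E / 10.0, 0.2)) for E in range(0, 19)]
```

Below the threshold the predicted coefficient is zero, and well above it the quadratic term dominates, reproducing the shape of the model lines in Fig. 4.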
CONCLUSIONS
1. Weaver JC, Chizmadzhev YA (1996) Theory of electroporation: A review. Bioelectrochem. Bioenerg. 41:135-160
2. Rols MP, Teissie J (1990) Electropermeabilization of mammalian cells: quantitative analysis of the phenomenon. Biophys. J. 58:1089-1098
3. Pavlin M, Leben V, Miklavcic D (2007) Electroporation in dense cell suspension—Theoretical and experimental analysis of ion diffusion and cell permeabilization. Biochim. Biophys. Acta 1770:12-23
4. Pavlin M, Kanduser M, Rebersek M et al. (2005) Effect of cell electroporation on the conductivity of a cell suspension. Biophys. J. 88:4378-4390
5. Glaser RW, Leikin SL, Chernomordik LV et al. (1988) Reversible electrical breakdown of lipid bilayers: formation and evolution of pores. Biochim. Biophys. Acta 940:275-287
6. Pavlin M, Pavselj N, Miklavcic D (2002) Dependence of induced transmembrane potential on cell density, arrangement, and cell position inside a cell system. IEEE Trans. Biomed. Eng. 49:605-612
Address of the corresponding author:
Author: Mojca Pavlin
Institute: University of Ljubljana
Street: Trzaska 25
City: 1000 Ljubljana
Country: Slovenia
Email: [email protected]
Transient conductivity changes are similar for a single pulse or several pulses, in contrast to the considerable increase in
Real time electroporation control for accurate and safe in vivo electrogene therapy

David Cukjati1,2, Danute Batiuskaite1,3, Damijan Miklavčič2, Lluis M. Mir1

1 UMR 8121 CNRS, Institute Gustave-Roussy, Villejuif, France
2 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
3 Vytautas Magnus University, Faculty of Natural Sciences, Department of Biology, Kaunas, Lithuania
Abstract— In vivo cell electroporation is the basis of DNA electrotransfer, an efficient method for non-viral gene therapy using naked DNA. The electric pulses have two roles: to permeabilize the target cell plasma membrane and to transport the DNA towards or across the permeabilized membrane by electrophoresis. For efficient electrotransfer, reversible, undamaging target cell permeabilization is mandatory. We report the possibility to monitor in vivo cell electroporation during pulse delivery, and to adjust the electric field strength in real time, within a few microseconds after the beginning of the pulse, to ensure the efficacy and safety of the procedure. A control algorithm was elaborated, implemented in a prototype device and tested ex vivo. Controlled pulses resulted in protection of the tissue, whereas uncorrected excessive applied voltages led to intense tissue damage and consecutive loss of gene transfer expression. Keywords— DNA electrotransfer, gene therapy, electropermeabilization, electroporation
I. INTRODUCTION Biotechnological and biomedical applications of in vivo delivery of short high-voltage pulses, like in vivo DNA electrotransfer, also termed electrogene therapy, are rapidly developing [1]. For efficient in vivo gene transfer, it is necessary to inject DNA into the tissue and to achieve cell plasma membrane permeabilization [2]. Increased membrane permeability results from supraphysiological transmembrane voltages induced by external electric pulses [3]. The two key steps of DNA electrotransfer in vivo are the permeabilization of the target cells' plasma membrane by electroporation, also termed electropermeabilization, and the electrophoresis of the DNA within the tissue. These two effects can be obtained separately using an appropriate sequence of electric pulses: short (100 µs) square-wave high-voltage pulses (HV) that permeabilize the cells without substantial DNA transport into the cells, and long (100 ms) low-voltage pulses (LV) that are instrumental in facilitating the DNA transfer into the cells. For safe gene transfer, electropermeabilization must be reversible, that is, not excessive, in order to avoid permanent cell damage. Optimal parameters for in vivo electroporation can be determined using in vivo tests for cell permeabilization [4] after the
pulse, like the one based on 51Cr-EDTA uptake [5, 6], and by using mathematical modeling to determine the electric field distribution [7]. However, it would be much better to control cell permeabilization during the pulse delivery. Here, we report that in vivo electroporation can be precisely computer-controlled to ascertain that permeabilization will be achieved at the end of the pulse, while at the same time permanent cell damage is prevented. We demonstrate that the temporal progression of tissue electroporation can be detected in real time at the beginning of the pulse on the basis of current and voltage measurements made during pulse delivery to tissues. Then, adjustment of the pulse voltage in real time ensures reversible cell membrane permeabilization. Using optimized LV parameters, real-time control of HV, as reported here, then results in safe and efficient gene transfer. II. MATERIALS AND METHODS Female Wistar rats (Janvier) were anesthetized and pulses were delivered directly to rat skeletal muscle and liver through two parallel metal plate electrodes, separated by 5.7 mm for skeletal muscles and by 4.4 mm for liver. The electric field distribution in the tissue encompassed by the electrodes was calculated by the finite element method (EMAS, Ansoft, USA) and found to be sufficiently homogeneous to refer to the electric field in the tissue by the value of the ratio of the applied voltage to the electrode distance. One experiment per rat extremity (triceps brachii muscle of the forelimb and gastrocnemius medialis muscle of the hind limb) was performed. Electrodes were applied directly on the skeletal muscles after incision of the skin in the back part of the limb. Liver tissue was accessed by a midventral incision in the abdominal wall. Four separate sites were exposed to the electric pulses in each rat liver. In the experiments reported in Fig. 1 and Fig. 2, 8 square-wave pulses of 100 μs duration were delivered at a repetition frequency of 1 Hz by a PS 15 electropulsator (Jouan). The rise time of the pulses was 0.6 to 2.1 µs while the fall time was 2 μs. A high-voltage probe PK 2kV (LeCroy), a wide-band current transformer model 5124 (Pearson Electronics) and a digital oscilloscope (Waverunner, LeCroy) were used to
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 606–609, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
of 51Cr-EDTA) per gram of the tissue exposed to the electric pulses. The 51Cr-EDTA uptakes were used to calculate mean values of uptake as a function of the ratio of the applied voltage to the electrode distance, in the rat skeletal muscle and in the liver. Solid (muscle), dashed (liver) and dotted (muscle, transcutaneous pulses) lines in Fig. 2a present linear regressions for the field intensities corresponding to low uptake values, increasing uptake values, and decreasing uptake values. Electric field threshold values of reversible and irreversible electroporation were determined as the field intensities corresponding to the intersections of consecutive linear regressions. For the demonstration of the efficacy of the proposed algorithm for real-time electroporation control, appropriate software was developed and installed in a Cliniporator™ (IGEA) instrument, which was then used to deliver voltage pulses (amplitude 600 V, duration 100 µs) to rabbit skeletal muscle ex vivo through two plate electrodes separated by 6 mm. The Cliniporator™ not only delivers pulses but also measures current and voltage during the pulse at a sampling rate of 10 MS/s and processes the measured data in real time. III. RESULTS
Fig. 1 (a) Voltage and current traces for below reversible (dashed line) and reversible electroporation settings (solid line). (b) Conductance (g) of muscle and liver tissue during the pulse. The electric fields reported in the abscissa are the ratio of the applied voltage to electrodes distance.
measure and store voltage and current for 200 µs after the beginning of each of the eight pulses, at a sampling rate between 25 and 100 MS/s (Fig. 1a). Each current sample was divided by the corresponding voltage sample to yield the conductance (g). Conductance was defined only when the voltage was nonzero (Fig. 1b). For muscle, the amplitude of the voltage pulses was varied from 50 to 320 V and for liver from 50 to 550 V. 200 μl of 51Cr-EDTA (Amersham) with a specific activity of 3.7 MBq/ml was injected intravenously, 5 or 4 minutes before the electric pulse delivery to muscle or liver, respectively. The injected 51Cr-EDTA distributes freely in the vascular and extracellular compartments, but does not enter the intracellular compartments unless access is provided, e.g. by electroporation. Animals were sacrificed 24 hours after the 51Cr-EDTA injection. Tissues exposed to electric pulses were taken out, weighed and counted in a Cobra 5002 gamma counter (Packard Instruments). The net 51Cr-EDTA uptake as a result of electropermeabilization was calculated as the measured activity (converted to nanomoles
The electrical current response to a low-voltage pulse (no electropermeabilization) applied to the tissue consists of a rapid initial current increase followed by an exponential decrease, as "capacitors" are charging, and a constant level of current after the "capacitors" are fully charged, suggesting that the "capacitors" were charged in a few microseconds (Fig. 1a). When the pulse voltage was high enough to permeabilize the tissue, the current was found to further increase during the pulse, as the conductivity of the permeabilized tissue increased [8]. Indeed, unless cells are permeabilized all current passes around the cells, while permeabilization provides additional current paths so the total current increases. The time dynamics of the current increase during the pulse depends on the pulse voltage; however, at our pulse length the current always reaches a constant level before the pulse ends. In order to analyze these changes independently of the pulse voltage, the conductance (g), which is the ratio current/voltage, was calculated (Fig. 1b). Tissue permeabilization was quantified using the 51Cr–EDTA uptake method [5] on the same samples in which current and voltage were recorded, after we demonstrated that 51Cr–EDTA injection did not modify the time course of g (data not shown). Under control conditions (no electric field applied) the average uptake in the liver was 0.073±0.010 nmol/g (mean±std.dev.), while in muscle it was 0.003±0.002 nmol/g, indicating that almost all molecules were washed out from the skeletal muscle in 24 h, while the liver tissue was still retaining some 51Cr–EDTA. The 51Cr–EDTA enters the cells and remains entrapped inside the cells only if the cells are reversibly permeabilized (if cells are irreversibly permeabilized, the 51Cr–EDTA leaks out of the cells). In rat skeletal muscle directly exposed to the electric pulses, 51Cr–EDTA retention 24 h after the injection significantly increased at field intensities above 220 V/cm (Fig. 2a). Uptake increased with increasing electric field intensity up to 430 V/cm. When the field intensity was further increased, uptake was significantly reduced, which reflects the onset of irreversible permeabilization. For transcutaneous pulses, uptake in the skeletal muscle showed the same pattern but at higher field strengths (Fig. 2a). Uptake results in rat liver show a clear increase of uptake at field intensities above 350 V/cm, with maximum uptake at 600 V/cm and decreasing uptake at higher field intensities. Time courses of g for rat muscle and liver at various electric field intensities are presented in Fig. 1b. At permeabilizing electric field intensities it can be seen that, after a very fast rise and a fast decrease (a transient resulting from the charging of the tissue "capacitors"), g then increases during the rest of the pulse delivery; furthermore, it increases faster and reaches higher levels at higher electric field intensities. We decided to relate the dynamics of g during the pulse to the level of tissue electropermeabilization and to seek reliable parameters linking the dynamics of g to the tissue electropermeabilization level. Detailed mathematical analysis of the g time courses was performed and many parameters of these curves were plotted as a function of the applied field. The following parameters (Fig. 2b, c and d) showed a clear relationship with permeabilization levels and could be used for real-time pulse control: (1) the time elapsed from the pulse beginning to the minimal value of g (t(gmin) in µs) (Fig. 2b), (2) the slope of g versus time normalized to the g value at the time of calculation at t > t(gmin) (dg/dt in %/µs) (Fig. 2c), (3) the total change in g values, describing the percent difference between g at the end of the pulse and the minimal g (Δg in % of g at the end of the pulse) (Fig. 2d). As for the minimum, the following control algorithm can be proposed: (1) if the minimum is detected at time periods shorter than 1.8 µs and 6.2 µs after pulse application to rat skeletal muscle and liver, respectively (Fig. 2b), the electric field intensity is too high and should be lowered in order to ensure no irreversible tissue damage related to irreversible membrane permeabilization; (2) if the minimum is detected at time periods longer than 6.2 µs in muscle and 25 µs in liver after the pulse beginning, the electric field intensity has to be increased to obtain tissue permeabilization.
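The checks above can be condensed into a small decision routine. A hedged Python sketch: the numeric thresholds are the ones quoted in the text for rat muscle and liver, while the function names and the synthetic conductance trace are our own illustration, not the device firmware:

```python
# t(g_min) windows in us: below the first value the field is too high,
# above the second it is too low (values quoted in the text).
T_GMIN_US = {"muscle": (1.8, 6.2), "liver": (6.2, 25.0)}
# Upper bound of the reversible normalized-slope range, in %/us.
SLOPE_MAX = {"muscle": 3.30, "liver": 0.28}

def t_gmin(t_us, g):
    """Time (us) at which the sampled conductance g is minimal."""
    i = min(range(len(g)), key=lambda j: g[j])
    return t_us[i]

def check_minimum(tissue, t_min_us):
    """First check: is t(g_min) inside the reversible window?"""
    lo, hi = T_GMIN_US[tissue]
    if t_min_us < lo:
        return "decrease voltage"   # field too high: irreversible damage
    if t_min_us > hi:
        return "increase voltage"   # field too low: no permeabilization
    return "ok"

def check_slope(tissue, g_prev, g_curr, dt_us, voltage):
    """Second check: halve the voltage if the slope of g, normalized
    to the current g value (in %/us), exceeds the reversibility bound."""
    slope = 100.0 * (g_curr - g_prev) / (g_curr * dt_us)
    return voltage / 2.0 if slope > SLOPE_MAX[tissue] else voltage

# Synthetic V-shaped conductance trace with its minimum at 4 us:
t = [0.5 * i for i in range(41)]
g = [abs(ti - 4.0) + 1.0 for ti in t]
action = check_minimum("muscle", t_gmin(t, g))    # 4 us lies in (1.8, 6.2)
v = check_slope("muscle", 1.00, 1.10, 1.0, 600.0)  # ~9.1 %/us -> halve
```

The same 4 µs minimum that is acceptable for muscle would already signal an excessive field for liver, mirroring the tissue-specific windows in the text.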
Fig. 2 (a) Mean values of 51Cr-EDTA uptake (±SEM) as a function of the ratio of the applied voltage to electrodes distance, in the rat skeletal muscle without the skin (closed circles) and liver (open circles). The 51Cr-EDTA uptake results for the rat skeletal muscle with the skin are presented with closed squares. The reversible electroporation range is graphically displayed in each of the panels, respectively, for the liver and the rat muscle without the skin. Parameters for detection of tissue permeabilization are: (b) Time elapsed from pulse beginning to minimal conductance as a function of electric field, for rat muscle and liver. (c) Conductance maximal slope versus time. (d) Conductance change during the pulse.
If the minimum is found within the expected time interval, the electroporation process can also be monitored later. Indeed, the slope of g versus time, calculated in real time after the minimum, can be used to verify the tissue electropermeabilization and that the process is reversible. The algorithm should then consider that a slope for rat skeletal muscle between 0.30 and 3.30 %/µs assures that the tissue will be reversibly permeabilized with eight 100 µs long pulses (Fig. 2c). The slope should fall within a range of 0.08 to 0.28 %/µs for liver. If the slope is not in the expected range, pulse delivery should be readjusted. Thus, combinations of controls based on the time of the minimum and on the slope measurement can be used for real-time pulse adjustment. Of course, after the first pulse is completely delivered, an a posteriori validation can be performed. A g change during the pulse in the range between 9.6 % and 31.8 % for muscle and between 2.9 % and 9.0 % for liver assures that the tissue was reversibly electropermeabilized (Fig. 2d). Lower values indicate insufficient permeabilization, whereas higher values indicate irreversible damage. Our findings were already implemented in a prototype device and demonstrated ex vivo on rabbit muscle tissue. The prototype device was programmed to find the minimal conductance and, when found, to calculate the conductance slope. The algorithm was set to decrease the output voltage to half immediately after the conductance slope exceeds 3.3 %/µs. In the ex vivo experiment the prototype found the minimal conductance 2.4 µs after the pulse beginning. With the algorithm deactivated, the maximal conductance slope was 6.2 %/μs. The slope exceeded the threshold value of 3.3 %/µs at 3.95 µs after the pulse beginning. Only 0.7 µs later the microprocessor corrected the output voltage, setting it to 300 V. IV. CONCLUSIONS In our study we described and demonstrated a real-time detection method of tissue permeabilization. Current and voltage recording during the pulse is easy and can be analyzed in real time. To our knowledge, no other available approach can be used to adjust the pulse voltage in real time in order to ascertain the achievement of reversible tissue permeabilization. To achieve the expected electroporation level it is sufficient to measure the parameters of the first pulse, to adjust them in real time as proposed above, and then to administer seven additional pulses.
Alternatively, the high-voltage pulse can be followed by low-voltage electrophoretic pulses for DNA transfection. One of the advantages of being able to check the electropermeabilization level after one pulse instead of after a full sequence is that it could minimize discomfort to the patient. The electric field threshold values for reversible and irreversible permeabilization are in agreement with previously reported knowledge [6]. Threshold values are considerably lower in rat muscle than in rat liver (Fig. 2a). This can be explained by the fact that the smaller liver cells must be exposed to a higher electric field intensity than the larger skeletal muscle cells to induce the same transmembrane voltage, in agreement with Schwan's equation as well as with more recent determinations of the transmembrane potential changes on spheroidal cells. In fact, our parallel determinations in rats using transcutaneous pulses demonstrate that the thresholds in skeletal muscle with skin are almost identical in rats (550 V/cm) and in mice (530 V/cm). We can conclude that the differences between tissues are much larger than the differences between species, at least among the common laboratory animals. Thus it can be speculated that the algorithm, with minor adaptations, could also operate in larger animals and even in humans. As we demonstrated, it is possible to achieve safe delivery of electric pulses by real-time voltage control. Thus safe gene electrotransfer can be achieved without the need for previous precise tuning of the experimental conditions, a situation that can be very useful in clinical settings.
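The cell-size argument above can be made concrete with the steady-state Schwan relation for a spherical cell, ΔΦ = 1.5 E R cos θ (a textbook formula; the paper itself points to spheroidal refinements). The radii below are illustrative assumptions, not values from this work:

```python
def schwan_tmp(E, R, cos_theta=1.0):
    """Steady-state Schwan equation for a spherical cell:
    induced TMP = 1.5 * E * R * cos(theta)  (E in V/m, R in m)."""
    return 1.5 * E * R * cos_theta

def field_for_tmp(tmp_target, R):
    """Field (V/m) needed to induce tmp_target at the pole."""
    return tmp_target / (1.5 * R)

# Assumed effective radii: 20 um for the large muscle cells,
# 10 um for the smaller hepatocytes; target induced TMP 0.5 V.
E_muscle = field_for_tmp(0.5, 20e-6)
E_liver = field_for_tmp(0.5, 10e-6)
```

Halving the radius doubles the required field, which is the qualitative reason the liver thresholds exceed the muscle ones.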
ACKNOWLEDGMENT This research was supported by the CNRS, the Institut Gustave-Roussy and the Cliniporator project (FP5, Contract No. QLK3-1999-00484) of the European Community.
REFERENCES
1. André F, Mir LM (2004) DNA electrotransfer: its principles and an updated review of its therapeutic applications. Gene Ther 11:S33-S42
2. Satkauskas S, Bureau MF, Puc M et al. (2002) Mechanisms of in vivo DNA electrotransfer: respective contributions of cell electropermeabilization and DNA electrophoresis. Mol Ther 5:133-140
3. Teissie J, Tsong TY (1981) Electric-field induced transient pores in phospholipid-bilayer vesicles. Biochemistry 20:1548-1554
4. Gehl J, Mir LM (1999) Determination of optimal parameters for in vivo gene transfer by electroporation, using a rapid in vivo test for cell permeabilization. Biochem Biophys Res Commun 261:377-380
5. Gehl J, Sorensen TH, Nielsen K et al. (1999) In vivo electroporation of skeletal muscle: threshold, efficacy and relation to electric field distribution. Biochim Biophys Acta 1428:233-240
6. Miklavčič D, Šemrov D, Mekid H et al. (2000) A validated model of in vivo electric field distribution in tissues for electrochemotherapy and for DNA electrotransfer for gene therapy. Biochim Biophys Acta 1523:73-83
7. Šel D, Cukjati D, Batiuskaite D et al. (2004) Sequential finite element model of tissue electropermeabilisation. IEEE Trans Biomed Eng 52:816-827
8. Pavlin M, Miklavčič D (2003) Effective conductivity of a suspension of permeabilized cells: A theoretical analysis. Biophys J 85:719-729

Address of the corresponding author:
Author: David Cukjati
Institute: Faculty of Electrical Engineering
Street: Tržaška 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
The effects of irreversible electroporation on tissue, in vivo

Boris Rubinsky

School of Computer Science and Engineering, Center for Bioengineering in the Service of Humanity and Society, The Hebrew University of Jerusalem, Jerusalem Givat Ram Campus, 91904 Israel

Abstract— Irreversible electroporation (IRE) is a new tissue ablation technique in which micro- to millisecond electrical pulses are delivered to undesirable tissue to produce cell necrosis through irreversible cell membrane permeabilization, which leads to changes in cell homeostasis. A unique attribute of IRE is that it affects only the cell membrane and no other structure in the tissue. This report summarizes findings on the IRE tissue ablation methodology in the pig liver, provides short results on the short- and long-term histopathology of IRE-ablated tissue, and discusses the clinical implications of the findings. Among the major findings is the observation that cell ablation occurs to the margin of the treated lesion with a resolution of a few cell thicknesses. There appears to be complete ablation to the margin of blood vessels without compromising the functionality of the blood vessels, which suggests that IRE is a promising method for the treatment of tumors near blood vessels (a significant challenge for current ablation methods). Consistent with the mechanism of action of IRE on the cell membrane only, we show that the structure of bile ducts, blood vessels and connective tissue remains intact after IRE. We report extremely rapid resolution of lesions, within two weeks, which is consistent with the retention of vasculature. We also document tentative evidence for an immunological response to the ablated tissue. Last, we show that mathematical predictions with the Laplace equation can be used in treatment planning. The IRE tissue ablation technique, as characterized in this report, may become an important new tool in the surgeon's armamentarium.

Keywords— irreversible electroporation, minimally invasive surgery, imaging monitored, ultrasound, liver, cancer.
REFERENCES
1. Davalos R, Mir L, Rubinsky B (2005) Annals Biomed. Eng. 33:223-231
2. Miller L, Leor J, Rubinsky B (2005) Technology in Cancer Research and Treatment 4:699-705
3. Edd JF, Horowitz L, Davalos RV, Mir LM, Rubinsky B (2006) IEEE Trans Biomed Eng 53:1409-1415
4. Rubinsky B, Onik G, Mikus P (2007) Technology in Cancer Research and Treatment 6:37-48

Author: Boris Rubinsky
Institute: Center for Bioengineering in the Service of Humanity and Society, The Hebrew University of Jerusalem
Street: Jerusalem, Givat Ram campus
City: Jerusalem
Country: Israel
Email:
[email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 629, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The induced transmembrane potential and effective conductivity of cells in dense cell system

M. Pavlin and D. Miklavcic
University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, Ljubljana, Slovenia
Abstract— Studying the electric potential distribution on the cell membrane and the electric conductivity gives us an insight into the effects of the electric field on cells and tissues. Since cells are always surrounded by other cells, we studied how their interactions influence the induced transmembrane potential (TMP) and the effective conductivity. We studied numerically and analytically the effect of cell arrangement for cases where cells were organized into simple-cubic, body-centered cubic and face-centered cubic lattices. We show that, in contrast to some reported results, the phenomenological effective medium theory (EMT) equations cannot be used to determine the local electric field and the induced transmembrane potential in dense systems, whereas the effective conductivity of biological cells in dense systems can be analyzed with the Maxwell or Bruggeman EMT equations. We also derive a zero-order approximation for the induced TMP in dense suspensions, where the dominant factors governing the change in the local field are the cell volume fraction and the number of nearest neighbors. The presented analysis demonstrates that the local electric field and the induced TMP in dense systems have to be calculated numerically, whereas EMT equations are useful only for estimating effective (bulk) values of a given physical property such as the dielectric constant or conductivity.
Keywords— transmembrane potential, effective conductivity, numerical methods, dense system, effective medium

I. INTRODUCTION

The induced transmembrane potential and the electric conductivity of biological cells exposed to electromagnetic fields are of interest in a variety of applications, such as gene electrotransfer, electrochemotherapy, the study of forces on cells undergoing fusion, models of cardiac tissue response to defibrillating currents, and the study of potential health effects of electric and magnetic fields [1]. Therefore, investigation of the induced potential distribution on the cell membrane is important in studying the effects of the electric field on biological cells. In order to understand how electric fields interact with cells, we first have to evaluate and analyze how the electric fields couple into cells. For a single cell exposed to an external (applied) electric field, analytical calculations provide very good estimates for cells of spherical or, more generally, ellipsoidal shape. By solving the Laplace equation for a single spherical cell, a general equation can be derived which describes the dependence of the induced TMP on the frequency and on the electrical and geometrical properties of the cell and the external medium [1]. The next step in applying these results to real systems is to calculate and analyze the dependence of the induced TMP and the effective conductivity of cells in dense suspensions or tissues. Theoretically this amounts to solving the Laplace equation for a system of many interacting constituents (cells), for which an analytical solution cannot be obtained [1]-[4]. For this reason we set out to solve the problem numerically using the finite-element method (FEM). In this paper we present calculations of the transmembrane potential induced on the cell membrane and of the effective conductivity of cells in dense systems such as cell suspensions, aggregates and tissue. The results are compared to an analytical approximation for the induced transmembrane potential and to different analytical effective medium equations for the effective conductivity.

II. THEORY

A. Induced transmembrane potential

When an electric field is applied to a cell or a cell system, a non-uniform transmembrane potential (TMP) is induced on the exposed cells. The potential distribution on the surface of a cell placed in an electric field can be calculated analytically or numerically. Even though analytical solutions are possible only for some analytically defined shapes such as spheroids, they give us a rough picture of the dependence of the induced TMP on the electric and geometric parameters. An idealized model of a biological cell is a sphere consisting of a cell cytoplasm (i) surrounded by a very thin, low-conducting membrane (m), placed in a conductive medium (e), as shown in Fig. 1, where d denotes the membrane thickness, R the cell radius and θ the angle measured with respect to the electric field direction. The analytical solution for the static case of the induced TMP is given by the Schwan equation [1]:

ΔΨ = TMP = g(λ) E0 R cos θ,   (1)

where ΔΨ represents the potential drop across the cell membrane.
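As a small numerical illustration of Eq. 1, here is a sketch assuming the common static limit g(λ) ≈ 3/2 for a thin, low-conductivity membrane; the field strength and cell radius are made-up example values:

```python
import numpy as np

# Sketch of the Schwan equation (Eq. 1) for a single spherical cell.
# In the static limit with a thin, low-conductivity membrane, g(lambda) ~ 3/2.

def schwan_tmp(E0, R, theta, g=1.5):
    """Induced transmembrane potential (V). E0 in V/m, R in m, theta in rad."""
    return g * E0 * R * np.cos(theta)

E0 = 1e5            # applied field: 1 kV/cm = 1e5 V/m (assumed example value)
R = 10e-6           # cell radius: 10 micrometres (assumed example value)
theta = np.linspace(0.0, np.pi, 181)
tmp = schwan_tmp(E0, R, theta)

# Maximum at the pole facing the field (theta = 0): 1.5 * 1e5 * 1e-5 = 1.5 V
print(f"max |TMP| = {abs(tmp).max():.2f} V")
```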
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 635–638, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Schematic representation of a spherical cell, where σe, σi and σm represent the specific conductivities of the external medium, the internal medium and the cell membrane, respectively, θ is the angle measured with respect to the electric field direction (E0), R denotes the cell radius and d the membrane thickness.

Factor g(λ) is a function of the cell parameters and E0 is the applied electric field. For physiological conditions, where d << R and the membrane conductivity is much lower than the conductivities of the cytoplasm and the external medium, g(λ) ≈ 3/2 and Eq. 1 reduces to:

ΔΨ = 1.5 E0 R cos θ.   (2)

B. Effective conductivity

The calculation of the effective (bulk) conductivity of an inhomogeneous medium is theoretically a complex problem due to the mutual interactions between the particles. The effective medium theories use an average field and neglect local field effects [4] to obtain approximate analytical solutions. Maxwell was the first to derive an equation for the effective conductivity σ of a dilute suspension. He assumed that the potential due to N spheres placed in the external field, having conductivity σp and dispersed in a medium having conductivity σe (Fig. 2a), is equal to the potential of an equivalent sphere having the effective conductivity σ (Fig. 2b). From this one can derive:

(σe − σ) / (2σe + σ) = f (σe − σp) / (2σe + σp),   f = N R³ / D³,   (3)

where f is the volume fraction of the particles dispersed in the medium and D denotes the radius of the equivalent sphere. The Maxwell equation was extended to concentrated suspensions by Bruggeman, whose mathematical procedure yields the result known as the Bruggeman formula:

[(σ − σp) / (σe − σp)] (σe / σ)^(1/3) = 1 − f.   (4)

For the special case of a heterogeneous medium with spherical particles arranged in a simple-cubic lattice, Rayleigh calculated the approximate result [4]:

σ = σe [1 + 3f / ((σp + 2σe)/(σp − σe) − f − a ((σp − σe)/(σp + (4/3)σe)) f^(10/3))],   (5)

where a is a numerical factor, which according to Rayleigh is 1.65. Later, Tobias and Meredith obtained the same formula with the value of the numerical factor a being 0.523 instead of 1.65 [4]. All these effective medium theories (EMT) are exact only for dilute suspensions; for higher volume fractions they give only approximate values.

Fig. 2 Maxwell's derivation of the conductivity of a dilute suspension of particles: a) N spheres having conductivity σp dispersed in a medium with σe induce the same potential in the external field E as b) one sphere of radius D having the effective conductivity σ.

III. METHODS

Numerical calculations were performed with the finite element modeling software Femlab (Comsol, Sweden). A DC current flow analysis was chosen to calculate the electric potential and current density distribution. Biological cells were modeled as non-conductive spheres, since under normal conditions the membrane conductivity is many orders of magnitude smaller than that of the external medium. Cells were organized either into a simple-cubic (sc), body-centered cubic (bcc) or face-centered cubic (fcc) lattice, as shown in Fig. 3. Using the symmetry of cubic lattices and applying appropriate boundary conditions, we were able to model infinite cubic lattices with a model of a primitive cell [2].

Fig. 3 Unit cells for a) simple cubic (sc), b) body-centered cubic (bcc) and c) face-centered cubic (fcc) lattices. A is the length of the unit cell side shown in fig. a).
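To make the EMT formulas of Eqs. 3–5 concrete, here is a small sketch (not from the paper) evaluating them for the non-conductive spheres (σp = 0) used in the FEM model; in that limit the Maxwell and Bruggeman relations have the closed forms noted in the comments.

```python
# Sketch comparing the EMT formulas for non-conductive spheres (sigma_p = 0),
# the model used for cells in the paper's FEM calculations. All conductivities
# are normalized to the external medium, i.e. the functions return sigma/sigma_e.

def maxwell(f):
    # Eq. 3 with sigma_p = 0 reduces to sigma/sigma_e = 2(1 - f) / (2 + f)
    return 2.0 * (1.0 - f) / (2.0 + f)

def bruggeman(f):
    # Eq. 4 with sigma_p = 0 reduces to sigma/sigma_e = (1 - f)**(3/2)
    return (1.0 - f) ** 1.5

def rayleigh(f, a=1.65):
    # Eq. 5 with sigma_p = 0 (a = 0.523 gives the Tobias-Meredith variant)
    return 1.0 + 3.0 * f / (-2.0 - f + 0.75 * a * f ** (10.0 / 3.0))

for f in (0.1, 0.3, 0.5):
    print(f"f={f}: Maxwell {maxwell(f):.3f}, "
          f"Bruggeman {bruggeman(f):.3f}, Rayleigh {rayleigh(f):.3f}")
```

As the paper notes, the three theories agree at small f and diverge as the volume fraction grows.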
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
IV. RESULTS
A. Analytical calculations

Here we present an analytical approximation for the potential induced in the membrane of spherical cells ordered in cubic lattices at a given volume fraction f. Following the derivation of Qin et al. [5], one obtains the solution for the induced potential on the cell membrane:

ΔΨ = 1.5 [F1(f) / F2(f, N)] E0 R cos θ,   (6)

where F1, according to [5], reflects the relation between the effective field E and the external field E0, depends on the cell volume fraction, and can be derived from the Maxwell EMT equation (Eq. 3):

F1 = 1 + f (σe − σp)/(2σe + σp),   σp ≈ 0 ⇒ F1 = 1 + 0.5 f.   (7)

The factor F2 in Eq. 6 incorporates the change in the potential due to the effect of neighboring cells and depends on the cell arrangement and the volume fraction:

F2 = 1 + (3f / (4Nπ))^(1/3).   (8)

However, the derivation in [5] was based on the assumption that the potential in a dense system is affected both by the neighboring cells (factor F2) and through a change in the local electric field derived from the Maxwell EMT equation (factor F1). This is not correct, since the change in the effective field is already taken into account by factor F2, and factor F1 can be omitted:

F1 = 1.   (9)

This can also be understood by analyzing the limiting case of a very dense system (f > 0.7). There the factor F1 according to Eq. 7 would increase towards 1.5, meaning that the potential in a very dense system would be the same as for a single cell, which is in contrast with the decrease in the induced TMP observed in dense cell suspensions and tissue [2].

B. Numerical calculations

We used the finite-element method (FEM) to calculate the induced transmembrane potential and the effective conductivity of cells arranged in infinite cubic lattices, which represent a model of dense cell systems such as cell suspensions, aggregates and tissue.

Fig. 4 The normalized effective conductivity σ/σe of a cell suspension for different volume fractions f: numerical FEM results for the sc, bcc and fcc lattices (symbols) and analytical EMT solutions of Maxwell, Rayleigh, Tobias and Bruggeman (lines).

In Fig. 4 it can be seen that the analytical theories are exact for smaller values of f, whereas for higher volume fractions deviations from the numerical values can be observed. In general, our FEM results show good agreement with experimental results obtained on different model systems [2],[4]. In the case of ordered spheres of uniform size, the experimental results fit best to the Maxwell and Tobias equations.

In Fig. 5 numerical calculations of the normalized maximal transmembrane potential (θ = 0°) for the sc, bcc and fcc lattices are shown. It can be seen that the induced TMP depends on the volume fraction and, strongly, on the cell arrangement.

Fig. 5 Comparison of numerical calculations for the normalized maximal transmembrane potential (θ = 0°) for three different cubic lattices (simple cubic - sc, body-centered cubic - bcc and face-centered cubic - fcc).
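The corrected zero-order approximation (Eq. 6 with F1 = 1 from Eq. 9, and F2 from Eq. 8) can be evaluated in a few lines; the sketch below (not from the paper) prints the normalized maximal TMP for the three lattices.

```python
# Sketch of the zero-order analytical approximation: with F1 = 1 (Eq. 9),
# the normalized maximal TMP is 1.5 / F2(f, N), where N is the number of
# nearest neighbors (6 for sc, 8 for bcc, 12 for fcc lattices).
from math import pi

def f2(f, n):
    # Eq. 8: correction due to neighboring cells
    return 1.0 + (3.0 * f / (4.0 * n * pi)) ** (1.0 / 3.0)

def tmp_max_normalized(f, n):
    """Maximal induced TMP (theta = 0) divided by E0*R."""
    return 1.5 / f2(f, n)

for lattice, n in (("sc", 6), ("bcc", 8), ("fcc", 12)):
    vals = ", ".join(f"{tmp_max_normalized(f, n):.3f}" for f in (0.1, 0.4, 0.7))
    print(f"{lattice}: TMPmax/(E0*R) at f = 0.1, 0.4, 0.7 -> {vals}")
```

Consistent with the numerical results, the value starts at 1.5 in the dilute limit and decreases with f, with the fcc lattice (largest N) decreasing least.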
For the sc and bcc lattices the induced TMP falls approximately linearly with increasing volume fraction, from 1.5 to 1 and to 1.15, respectively, whereas for the fcc lattice it drops more slowly, from 1.5 to 1.27. The difference between the lattices is expected, since the sphere arrangement affects the mutual interactions between the cells through the different numbers of nearest neighbors: 6 (sc), 8 (bcc) and 12 (fcc).

In Fig. 6 a comparison of the normalized maximal induced TMP calculated using our analytical approximation (Eqs. 6 and 9) with the calculations of Qin et al. is shown. It can be seen that according to reference [5] the potential should increase above a volume fraction of 0.1, which is non-physical and not in agreement with the numerical calculations (see Fig. 5). Our analytical approximation shown in Fig. 6 only approximately follows the numerical results for the fcc lattice, but in essence cannot accurately predict the induced TMP in dense systems.

Fig. 6 Comparison of analytical calculations for the normalized maximal transmembrane potential (θ = 0°) for three cubic lattices: our analytical approximation (solid lines) and the calculations of Qin et al. (dashed lines).

V. CONCLUSIONS

In this paper we present analytical and numerical calculations of the transmembrane potential induced on the cell membrane and of the effective conductivity of cells in dense systems such as cell suspensions, aggregates and tissue. As already demonstrated in several studies [2],[4], we find that the analytical effective medium equations describe the effective conductivity very accurately even at high cell densities. The effective conductivity also does not depend significantly on the arrangement of the cells. In contrast, the induced transmembrane potential strongly depends on the cell ordering, with the fcc lattice being the most realistic representation of the random ordering of cells in a suspension or tissue. We furthermore show with numerical calculations of the induced TMP that, in contrast to some reported results, the phenomenological EMT equations cannot be used to calculate analytically the local electric field and the induced TMP in dense systems, and if used could introduce large errors. The presented analysis demonstrates that the local electric field and the induced TMP in dense systems have to be calculated numerically, whereas the analytical EMT equations are useful only for estimating effective (bulk) values of a given physical property such as the dielectric constant or conductivity.
ACKNOWLEDGMENT

This research was supported by the Ministry of Higher Education, Science and Technology of the Republic of Slovenia under grants J2-9770-1538 and P2-0249.
REFERENCES

1. Neumann E, Sowers AE, Jordan CA (1989) Electroporation and Electrofusion in Cell Biology. Plenum Press, New York
2. Pavlin M, Slivnik T, Miklavcic D (2002) Effective conductivity of cell suspensions. IEEE Trans Biomed Eng 49:77-80
3. Pavlin M, Pavselj N, Miklavcic D (2002) Dependence of induced transmembrane potential on cell density, arrangement, and cell position inside a cell system. IEEE Trans Biomed Eng 49:605-612
4. Takashima S (1989) Electrical Properties of Biopolymers and Membranes. Adam Hilger, Bristol
5. Qin Y, Lai S, Jiang Y et al. (2005) Transmembrane voltage induced on a cell membrane in suspensions exposed to an alternating field: A theoretical analysis. Bioelectrochemistry 67:57-65

Address of the corresponding author:

Author: Mojca Pavlin
Institute: University of Ljubljana
Street: Trzaska 25
City: 1000 Ljubljana
Country: Slovenia
Email:
[email protected]
Tumor blood flow modifying and vascular disrupting effect of electrochemotherapy

G. Sersa1, M. Cemazar1, S. Kranjc1 and D. Miklavcic2
1 Institute of Oncology Ljubljana, Department of Experimental Oncology, Zaloska 2, Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, Ljubljana, Slovenia
Abstract— The aim of this study was to determine the tumor blood flow modifying, and potential vascular disrupting, effect of electrochemotherapy with bleomycin or cisplatin. Electrochemotherapy was performed by application of short intense electric pulses to the tumors after systemic administration of bleomycin or cisplatin. We evaluated the antitumor effectiveness of electrochemotherapy by tumor measurement, its tumor blood flow modifying effect by the Patent blue staining technique, and the sensitivity of endothelial and tumor cells to the drugs and to electrochemotherapy by clonogenicity assay. Electrochemotherapy was effective in the treatment of SA-1 tumors in A/J mice, resulting in substantial tumor growth delay and also in tumor cures. The tumor blood flow reduction following electrochemotherapy correlated well with its antitumor effectiveness. A virtually complete shutdown of tumor blood flow was observed as early as 24 h after electrochemotherapy with bleomycin, whereas only a 50% reduction was observed after electrochemotherapy with cisplatin. The sensitivity of human endothelial HMEC-1 cells to electrochemotherapy suggests a vascular targeted effect of electrochemotherapy in vivo with bleomycin as well as with cisplatin. These results show that, in addition to direct electroporation of tumor cells, other vascular targeted mechanisms are involved in electrochemotherapy with bleomycin or cisplatin, potentially mediated by tumor blood flow reduction and by enhanced tumor cell death as a result of endothelial damage caused by electrochemotherapy.

Keywords— sarcoma experimental – drug therapy – blood supply, bleomycin, cisplatin, electroporation, drug delivery systems.
I. INTRODUCTION Enhanced delivery of chemotherapeutic drugs into tumor cells by electroporation is termed electrochemotherapy [1]. A local increase in plasma membrane permeability, after exposure of tumor nodules to electric pulses (electroporation), results in increased uptake of chemotherapeutic drugs into the tumor cells. Electrochemotherapy has been shown to be successful for drugs such as bleomycin and cisplatin, which normally exhibit impeded transport through the plasma membrane. The increased antitumor effectiveness of bleomycin and cisplatin combined with electroporation has already been demonstrated in experimental and clinical studies although the underlying mechanisms remain to be clarified [1-4].
In addition to increasing drug delivery into the cells, application of electric pulses to tumors was found to exert a tumor blood flow modifying effect [5,6]. Electric pulses as used in preclinical and clinical studies were found to reduce tumor blood flow. A transient reduction in tumor blood flow down to 30% of the control value was found, which recovered to almost the pre-treatment level within 24 hours [5]. Application of electric pulses to solid tumors would not be expected to selectively electroporate tumor cells alone: all cells in all areas where the electric field exceeds the critical threshold level are electroporated [7]. Therefore endothelial cells are also potential targets for electroporation. Since during electroporation the concentration of the drug is highest in the tumor blood vessels, electrochemotherapy is probably effective on the endothelial cells of the tumor blood vessels. This may lead to severe damage to the tumor vasculature and consequently induce a secondary cascade of tumor cell death, e.g. by abrogating the oxygen supply to the cells. This phenomenon, described as vascular targeted therapy, has been exploited in several studies [8]. The aim of this study was to elucidate the tumor blood flow modifying and vascular targeted effects of electrochemotherapy with bleomycin or cisplatin by measuring tumor perfusion, and the cell survival of endothelial cells, in relation to their antitumor effectiveness.

II. MATERIALS AND METHODS

A. Animals, tumors and cell lines

A/J mice of both sexes, purchased from the Institute Rudjer Boskovic, Zagreb, Croatia, were used. Subcutaneous murine fibrosarcoma SA-1 tumors (The Jackson Laboratory, Bar Harbor, ME) were implanted by injecting 0.1 ml NaCl (0.9%) containing 5 x 10^5 viable tumor cells under the skin on the rear dorsum. Six to 8 days after implantation, when tumors reached approximately 40 mm3 in volume (6 mm in diameter), mice were randomly divided into experimental groups consisting of at least 6 mice.
Treatment protocols were approved by the Ministry of Agriculture, Forestry and Food of the Republic of Slovenia No. 323-02237/01.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 602–605, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Human dermal microvascular endothelial cells (HMEC-1) were generously provided by Dr. F.J. Candal (Centers for Disease Control, Atlanta, USA). Cells were grown as a monolayer in D-MEM supplemented with 10% fetal calf serum (FCS, Sigma, USA) in a humidified incubator at atmospheric oxygen supplemented with 5% CO2 at 37°C. They were routinely subcultured twice per week.

B. Electrochemotherapy protocol

Bleomycin (Bleomycin, Mack, Germany) was dissolved in phosphate buffered saline and a dose of 5 mg/kg in a 0.2 ml volume was injected intravenously. The bleomycin solution was prepared freshly for each day's injections. cis-Diamminedichloroplatinum (II) (cisplatin) was obtained from Bristol-Myers Squibb (Austria) as a crystalline powder, and a stock solution was prepared in sterile water at a concentration of 1 mg/ml. The final cisplatin solution (4 mg/kg in 0.2 ml) was freshly prepared in 0.9% NaCl each day and was injected intravenously. Electric pulses were delivered by two flat, parallel stainless steel electrodes 8 mm apart (two stainless steel strips: length 35 mm, width 7 mm, with rounded corners), which were placed percutaneously at opposite margins of the tumor. Good contact between the electrodes and the skin was assured by means of a conductive gel (Parker Laboratories, New York, USA). Eight square-wave pulses of 1040 V amplitude (amplitude-to-distance ratio 1300 V/cm), with a pulse width of 100 μs and a repetition frequency of 1 Hz, were generated by a Jouan GHT 1287 electroporator (Saint Herblaine, France). In the electrochemotherapy protocol, tumors were exposed to electric pulses 3 minutes after bleomycin or cisplatin injection. Treatments were performed without anesthesia and were well tolerated by the animals.

Tumor growth was followed by measuring three mutually orthogonal tumor diameters (e1, e2 and e3) with a vernier caliper on each consecutive day following treatment. Tumor volumes were calculated by the formula V = π × e1 × e2 × e3 / 6. From the calculated volumes, the arithmetic mean and SE were calculated for each experimental group. Tumor growth delay was calculated for each individual tumor by subtracting the mean doubling time of the control group from the doubling time of that tumor, and was then averaged for each experimental group.
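The tumor response measures described above can be sketched in a few lines; the diameter and doubling-time values below are made-up examples, not data from the study.

```python
# Sketch of the tumor response measures: volume from three orthogonal
# diameters, and growth delay as each tumor's doubling time minus the
# control-group mean doubling time. All numeric inputs are invented examples.
from math import pi

def tumor_volume(e1, e2, e3):
    """Volume (mm^3) from three mutually orthogonal diameters (mm)."""
    return pi * e1 * e2 * e3 / 6.0

def growth_delay(doubling_times, control_mean):
    """Mean growth delay (days) over a treatment group."""
    delays = [t - control_mean for t in doubling_times]
    return sum(delays) / len(delays)

print(f"{tumor_volume(6.0, 5.0, 4.0):.1f} mm^3")   # ellipsoid volume formula
print(f"{growth_delay([34.0, 36.0, 33.5], 1.8):.2f} days")
```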
C. Assessment of tumor staining by Patent blue

Patent blue (Byk Gulden, Switzerland) was used to estimate tumor perfusion. Patent blue (1.25%), diluted in 0.2 ml of 0.9% NaCl, was injected into the tail vein of the animals after tumor treatment. The dye was distributed evenly through the blood within approximately 1 minute; thereafter the animals were sacrificed and the tumors were carefully dissected. Tumors were cut along their largest diameter and the stained versus non-stained tissue per cross-section was immediately estimated visually by two persons. The mean of both estimates was used as an indicator of tumor perfusion.

D. Cytotoxicity assay for SA-1 and HMEC-1 cells treated by electrochemotherapy

The sensitivity of the SA-1 and HMEC-1 cells to combined treatment with bleomycin or cisplatin and electric pulses (electrochemotherapy) was determined by an in vitro colony forming assay. The cells (2.2 × 10^7 cells/ml) were mixed with bleomycin or cisplatin. One half of the mixture was exposed to 8 electric pulses (electric field intensity 1400 V/cm, pulse duration 100 μs, frequency 1 Hz) and the other half served as a control for bleomycin or cisplatin treatment alone. The bleomycin concentrations used ranged from 0.1 nM to 100 μM and the cisplatin concentrations from 16.7 to 670 μM. The cells were incubated with each drug for 5 min. The survival of cells treated with electrochemotherapy was normalized to that of cells treated with electric pulses alone. The IC50 values (the drug concentration that reduced cell survival to 50% of control) were determined for each treatment group.

E. Statistical analysis

The significance of differences between the mean values of the groups was evaluated by a modified t-test (Newman-Keuls test) after a one-way analysis of variance was performed and its assumptions fulfilled. The SigmaStat statistical package (SPSS, USA) was used for the statistical analysis. P values less than 0.05 were taken as statistically significant.

III. RESULTS

A. Antitumor effectiveness

Electrochemotherapy with either bleomycin or cisplatin was effective in inducing cytotoxicity in subcutaneous SA-1
Table 1. Antitumor effectiveness of electrochemotherapy on SA-1 tumors in mice (* P < 0.05)

Therapy | n | Tumor doubling time (days, AM±SE) | Tumor growth delay (days, AM±SE) | Cures
Control | 20 | 1.8 ± 0.05 | – | 0%
Electric pulses | 17 | 3.1 ± 0.2* | 1.3 ± 0.2 | 0%
Bleomycin (5 mg/kg) | 20 | 1.9 ± 0.1 | 0.1 ± 0.1 | 0%
Electrochemotherapy with bleomycin | 17 | 34.5 ± 2.9* | 32.7 ± 2.9 | 70%
Cisplatin (4 mg/kg) | 10 | 3.7 ± 0.4* | 1.9 ± 0.4 | 0%
Electrochemotherapy with cisplatin | 10 | 12.1 ± 1.6* | 10.3 ± 1.6 | 0%

tumors (Table 1). Treatment of tumors with electric pulses alone had a minor effect on tumor growth. Treatment of mice with bleomycin or cisplatin alone also had minor effects on tumor growth, bleomycin having none, whereas cisplatin induced a 1.9-day tumor growth delay. When bleomycin was used in electrochemotherapy, a highly significant growth delay of 32.7 days was achieved and 70% of the animals were cured (tumor free 100 days after the treatment). The animals tolerated the treatment well, without scarring of the treatment area. Electrochemotherapy with cisplatin also resulted in a good antitumor effect, with a reduction in tumor size at three days after the treatment and regrowth after 8 days; however, no tumor cures were achieved. The tumor growth delay was 10.3 days, which was highly significant compared to the antitumor effectiveness of either single modality.

B. Tumor blood flow changes

Electrochemotherapy, either with bleomycin or with cisplatin, induced a substantial reduction of tumor blood flow. Untreated SA-1 tumors showed a very low incidence of necrosis, with approx. 90% of the tumor area stained with Patent blue. When electric pulses were applied to a tumor, a reduction in tumor staining was observed (Figure 1). By 1 hour after the application of electric pulses the percentage of unstained tumor section had increased to 45%, and after 8 hours it had further increased to 65%; however, tumor blood flow recovered almost completely within 24 hours. Treatment with bleomycin alone did not induce changes in tumor blood flow. However, electrochemotherapy with bleomycin produced a substantial increase in the unstained tumor area at 8 hours after treatment, and a virtually complete shutdown of tumor perfusion at 24 hours after therapy compared to electric pulses alone (Figure 1). Treatment with cisplatin alone had a minimal tumor blood flow modifying effect. However, electrochemotherapy with cisplatin produced a greatly increased unstained tumor area at 8 hours after treatment, which remained significantly higher at 24 hours after the treatment compared to treatment with electric pulses alone (Figure 1).

Fig. 1 Changes in tumor blood flow (% of unstained tumor area at 1, 8 and 24 hours after therapy) after electrochemotherapy (ECT) with bleomycin (BLM) or cisplatin (CDDP), and after electric pulses (EP) alone, measured by Patent blue staining. Mean values ± SE of the mean of at least 6 mice per point.
C. Cytotoxicity of electrochemotherapy to tumor and endothelial cells

The sensitivity of SA-1 tumor cells and of human endothelial HMEC-1 cells to bleomycin and cisplatin, as well as to electrochemotherapy, was evaluated by an in vitro colony forming assay (Table 2). Endothelial cells were more sensitive to bleomycin than tumor cells. The potentiation of bleomycin cytotoxicity by electroporation was ~5000-fold for endothelial cells and ~2400-fold for tumor cells. Electrochemotherapy with cisplatin was less effective on endothelial than on tumor cells, but the potentiation of cisplatin cytotoxicity by electroporation was greater for endothelial cells (~10-fold) than for tumor cells (~8-fold).

Table 2. Cytotoxicity of electrochemotherapy to human endothelial HMEC-1 and mouse tumor SA-1 cells in vitro

Cell line / Group | HMEC-1 (IC50; μM) | SA-1 (IC50; μM)
Bleomycin | 20.0 | 60.0
Electrochemotherapy with bleomycin | 0.004 | 0.025
Cisplatin | 380.0 | 166.0
Electrochemotherapy with cisplatin | 40.0 | 20.0

IV. DISCUSSION

This study shows the tumor blood flow modifying and vascular disrupting effect of electrochemotherapy with bleomycin as well as with cisplatin. The sensitivity of endothelial cells to electrochemotherapy with either bleomycin or cisplatin correlates well with the enhanced reduction of tumor blood flow induced by electrochemotherapy in vivo and with its antitumor effectiveness. As many preclinical and clinical studies have shown, electrochemotherapy with either bleomycin or cisplatin leads to a high percentage of tumor cures on the many tumor types tested so far [1-4]. Electroporation was shown to significantly increase drug accumulation in the tumor cells [1,9]. In view of our previous study, in which electrochemotherapy with cisplatin induced a more than 20-fold increase in cell kill compared with cisplatin treatment alone, we proposed that, in addition to direct electroporation of tumor cells, other mechanisms may be involved in the antitumor effectiveness of electrochemotherapy [9]. The direct blood flow modifying effect of electric pulses applied to the tumors has now been established. Application of electric pulses reduces blood flow selectively at the site of application, i.e. within the tumor, without modifying flow in normal tissues [5,6]. Recently, a new method based on staining of tumors with Patent blue was evaluated, giving data on tumor blood flow in support of those obtained with the 86RbCl extraction technique [5]. Since the two methods correlated well, the Patent blue staining technique was preferred in this study because of its simplicity. The present study confirms the results of our previous study that application of electric pulses to the tumors induces a transient reduction in tumor blood flow.
The tumor blood flow modifying effect of electrochemotherapy was greater than that of electric pulses alone. This effect was especially dramatic for electrochemotherapy with bleomycin, and present to a lesser extent after electrochemotherapy with cisplatin. Tumor blood flow after electrochemotherapy with bleomycin was completely shut down as early as 24 hours after therapy, indicating that the tumor vasculature was irreversibly damaged [10]. Since HMEC-1 endothelial cells were more sensitive to electrochemotherapy with bleomycin in vitro than SA-1 tumor cells, this vascular shutdown may be ascribed in large part to the death of endothelial cells. In contrast, endothelial cells were less sensitive to electrochemotherapy with cisplatin in vitro, which was also reflected in the less severe tumor blood flow
changes induced by this therapy, with flow partly restored after 24 hours [10,11]. In summary, several mechanisms are involved in the antitumor effectiveness of electrochemotherapy. Electroporation of the tumors increases the delivery of cytotoxic drugs into the tumor cells, potentiating their cytotoxicity. Additionally, the current study demonstrates that the nonselective electroporation of solid tumors extends the cytotoxic action of electrochemotherapy to endothelial cells and enhances tumor cytotoxicity through a vascular disrupting mechanism.
ACKNOWLEDGMENT The authors gratefully acknowledge financial support from the state budget of the Slovenian Research Agency.
REFERENCES

1. Mir LM (2006) Bases and rationale of the electrochemotherapy. EJC Suppl 4: 38-44.
2. Sersa G (2006) The state-of-the-art of electrochemotherapy before the ESOPE study; advantages and clinical uses. EJC Suppl 4: 52-59.
3. Sersa G, Stabuc B, Cemazar M et al (2000) Electrochemotherapy with cisplatin: clinical experience in malignant melanoma patients. Clin Cancer Res 6: 863-867.
4. Marty M, Sersa G, Garbay JR et al (2006) Electrochemotherapy – An easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: Results of ESOPE (European Standard Operating Procedures of Electrochemotherapy) study. EJC Suppl 4: 3-13.
5. Sersa G, Cemazar M, Parkins CS, Chaplin DJ (1999) Tumour blood flow changes induced by application of electric pulses. Eur J Cancer 35: 672-677.
6. Gehl J, Skovsgaard T, Mir LM (2002) Vascular reactions to in vivo electroporation: characterization and consequences for drug and gene delivery. Biochim Biophys Acta 1569: 51-58.
7. Miklavcic D, Corovic S, Pucihar G, Pavselj N (2006) Importance of tumour coverage by sufficiently high local electric field for effective electrochemotherapy. EJC Suppl 4: 45-51.
8. Chaplin DJ, Hill SA, Bell KM, Tozer GM (1998) Modification of tumor blood flow: Current status and future direction. Sem Radiat Oncol 8: 151-163.
9. Cemazar M, Miklavcic D, Scancar J et al (1999) Increased platinum accumulation in SA-1 tumour cells after in vivo electrochemotherapy with cisplatin. Br J Cancer 79: 1386-1391.
10. Cemazar M, Parkins CS, Holder AL et al (2001) Electroporation of human microvascular endothelial cells: evidence for an anti-vascular mechanism of electrochemotherapy. Br J Cancer 84: 565-570.
11. Sersa G, Cemazar M, Miklavcic D, Chaplin DJ (1999) Tumor blood flow modifying effect of electrochemotherapy with bleomycin. Anticancer Res 19 (5B): 4017-4022.

Author: Sersa Gregor
Institute: Institute of Oncology Ljubljana
Street: Zaloska 2
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Tumor electrotransfection progress and prospects: the impact of knowledge about tumor histology S. Mesojednik1, D. Pavlin2, G. Sersa1, A. Coer3, S. Kranjc1, A. Grosel1, G. Tevz1, M. Cemazar1 1
Institute of Oncology Ljubljana, Dept. of Experimental Oncology, Zaloska 2, SI-1000 Ljubljana, Slovenia; 2 University of Ljubljana, Veterinary Faculty, Cesta v Mestni log 47, SI-1000 Ljubljana, Slovenia; 3 University of Ljubljana, Medical Faculty, Korytkova 2, SI-1000 Ljubljana, Slovenia
Abstract— In order to improve the transfection efficiency of gene electrotransfer to solid tumors in vivo, we explored how tumor histological properties affect transfection efficiency. In four different tumor types (B16F1, EAT, SA-1, LPB), the content of proteoglycans and collagen was analyzed morphometrically, and cell size as well as cell density were determined. To demonstrate the influence of the histological properties of solid tumors on electrotransfer, the correlation between histological properties and transfection efficiency was determined with regard to the time interval between DNA injection and electroporation. Our data demonstrate that soft tumors with larger, spherical cells, low proteoglycan and collagen content and low cell density (B16F1, EAT) are more effectively transfected than rigid tumors with high proteoglycan and collagen content, small, spindle-shaped cells and high cell density (LPB and SA-1). Furthermore, an optimal time interval for increased transfection, around 5 to 15 min, exists only in soft tumors. Keywords— electroporation, solid tumors, luciferase, extracellular matrix components.
I. INTRODUCTION

Electroporation is one of the most promising non-viral delivery systems for use in clinical gene therapy of cancer, since the efficacy of electrochemotherapy, i.e. enhanced delivery of chemotherapeutic drugs to tumor cells, has been proven in the treatment of various subcutaneous and cutaneous tumors, both in clinical trials and in veterinary oncology [1,2]. The use of electrogene therapy for tumor treatment is currently at the preclinical and early clinical stage. In these studies muscles are more frequently used as targets than tumors, because only a minority of studies delivering reporter or therapeutic genes into tumors by electroporation achieved sufficient electrotransfection. Moreover, studies on different tumor models have shown variable transfection efficiencies [3-6]. There is a substantial volume of literature attributing the variable transfection efficiencies to variations in optimal experimental conditions, therapeutic genes, plasmid DNA backbone and mouse strain, while only a few publications drew attention to the tumor microenvironment and to the problems related to the transport of DNA through the tumor tissue [7,8]. These specific tumor properties may have an equal or greater influence on transfection efficiency in tumors than methodological factors. This hypothesis is supported by preclinical and clinical studies on anticancer drug penetration through tumor tissue [7,9]. Limited penetration of anticancer drugs through tumor tissue has been proposed as a potential cause of resistance of solid tumors to anticancer drugs [7,10]. These studies have led to novel treatment strategies that might improve cell kill by improving drug penetration through the tumor tissue. Therefore, an important challenge for efficient gene therapy of solid tumors is to identify the structural and physiological properties of tumor tissue. A better understanding of tissue barriers might be useful for predicting the transgene expression pattern in certain tumor types in vivo, as well as for further optimization of gene delivery systems. The aim of our study was to determine the effect of the histological properties of solid tumors on the transfection efficiency of electrically-assisted gene delivery. The emphasis was on microscopic analysis of tumor tissue, determining cell size, cell density and the amount of the extracellular matrix components proteoglycans and collagen. To demonstrate the influence of the histological properties of solid tumors on electrically-assisted gene delivery, the correlation between histological properties and transfection efficiency was determined with regard to the time interval between DNA injection and electroporation.

II. MATERIALS AND METHODS

A. Experimental animals, tumor models and plasmids

In this study, four different murine tumor cell lines were used: SA-1 fibrosarcoma syngeneic to A/J mice, LPB fibrosarcoma and B16F1 melanoma syngeneic to C57Bl/6 mice, and EAT mammary carcinoma syngeneic to CBA mice.
For tumor induction, 1.3×10^6 LPB, 5.0×10^5 SA-1, 3.0×10^6 EAT and 1.0×10^6 B16F1 cells were prepared in 0.1 ml of physiological saline solution. Solid tumors were initiated by subcutaneous injection of the cell suspension into the shaved
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 589–592, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
right flank of the mice. When the tumors reached approximately 6 mm in diameter, the animals were randomly divided into experimental groups and subjected to specific experimental protocols. pCMVLuc (encoding luciferase) was prepared using the Endo-Free Maxi kit (Qiagen, Hilden, Germany) and diluted in H2O to a concentration of 1 mg/ml.

B. Tumor electrotransfection protocol, histological analysis and determination of transfection efficiency

Electrically-assisted gene delivery was performed by intratumoral injection of plasmid DNA (50 μg/tumor) with subsequent electroporation of the tumor. Electric pulses were delivered through two parallel stainless steel electrodes with a 6 mm distance between them. Eight square-wave electric pulses were delivered in two sets of 4 pulses in perpendicular directions at a repetition frequency of 1 Hz, an amplitude-to-distance ratio of 600 V/cm and a duration of 5 ms. Electric pulses were generated by a Jouan GHT 1287 electroporator (Jouan, St. Herblain, France). Electrodes were placed percutaneously at the opposite margins of the tumor. Tumors were excised 48 h post-transfection. Groups consisted of 6 tumors per group. For histology, when tumors reached 6 mm in diameter they were excised, fixed in formalin, embedded in paraffin and cut. Microsections were stained with Masson's trichrome for collagen, with PAS for proteoglycans and with haematoxylin-eosin (H&E) for cell density estimation. The tumor slides were observed by transmission microscopy with a 60× objective, and images were taken with a CCD camera. Cell density was determined by cell counting and cell size by measuring cell diameter. To determine the proteoglycan and collagen content of the tumors, the M-100 multipurpose test system was used. Areal densities of proteoglycans and collagen were normalized to the extracellular matrix area. For the luciferase assay, tumors were weighed, immediately frozen in liquid nitrogen and stored at -80°C until further processing.
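As a quick arithmetic check of the pulse protocol above, the generator amplitude implied by the stated amplitude-to-distance ratio and electrode spacing can be computed directly (a sketch; the 360 V value is derived from the stated parameters, not quoted from the paper):

```python
# Pulse amplitude implied by the protocol: an amplitude-to-distance ratio
# of 600 V/cm applied across electrodes spaced 6 mm (0.6 cm) apart.
ratio_v_per_cm = 600.0           # V/cm, from the protocol
distance_cm = 0.6                # electrode spacing, cm
amplitude_v = ratio_v_per_cm * distance_cm
print(amplitude_v)               # -> 360.0

# Total energized time: 2 sets of 4 square-wave pulses, 5 ms each
total_pulse_ms = 2 * 4 * 5.0
print(total_pulse_ms)            # -> 40.0
```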
Thawed tumors were homogenized in 1 ml of Glo Lysis Reagent (Promega, Madison, WI, USA) and centrifuged, and the supernatant was stored at -80°C. Luciferase activity was measured with a Genios luminometer (Tecan, Zurich, Switzerland). Luciferase activity was quantified in relative light units and then converted to pg luciferase/mg tumor tissue using a pre-prepared calibration curve of known quantities of luciferase (Promega). Differences between experimental groups were evaluated by one-way analysis of variance (ANOVA) followed by the Holm-Sidak test for multiple comparisons. Correlation between the groups was determined by Pearson's correlation statistics. Statistical analysis was done using SigmaStat software (SPSS Inc., Chicago, USA).
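A minimal sketch of this analysis pipeline in Python (SciPy in place of SigmaStat; the luciferase values and group labels below are invented for illustration, not data from the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical luciferase activities (pg luciferase/mg tumor tissue) for one
# tumor model at three DNA-injection-to-electroporation intervals.
groups = {
    "0 min":  [1.2, 0.9, 1.5, 1.1, 1.3, 1.0],
    "10 min": [3.4, 2.8, 3.9, 3.1, 3.6, 3.0],
    "15 min": [3.1, 2.6, 3.5, 2.9, 3.3, 2.7],
}

# One-way ANOVA across experimental groups (a Holm-Sidak post hoc step would
# follow, e.g. via statsmodels' multipletests with method="holm-sidak").
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

# Pearson correlation, e.g. collagen content vs. mean transfection level
# across four tumor models (again, illustrative numbers only).
collagen = np.array([0.02, 0.18, 0.20, 0.21])
mean_luc = np.array([3.5, 1.9, 0.6, 0.5])
r, p_corr = stats.pearsonr(collagen, mean_luc)
print(f"Pearson r = {r:.2f}, p = {p_corr:.2g}")
```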
III. RESULTS

A. Transfection efficacy of electrically-assisted gene delivery in solid tumors

The optimal time interval between DNA injection and electroporation for achieving the highest transfection efficiency varied depending on the tumor model. The highest level of luciferase activity was detected in B16F1 melanoma if plasmid DNA was injected between 5 and 15 min before electroporation. Similar results were obtained in the case of EAT carcinoma, with significantly improved transfection detected at time intervals of 10 min and 15 min. In LPB and SA-1 tumors, the highest luciferase activity was obtained when plasmid DNA was injected 10 or 15 minutes before electroporation of the tumors. However, statistical analysis showed no significant difference between any of the time intervals tested in LPB and SA-1 tumors (Fig. 1).

Fig. 1 Transfection efficiency in different tumor models with regard to the time interval between DNA injection and electroporation of tumors.

B. Histological properties of tumors

Cell density as well as cell size differed statistically significantly between the tumor models. The highest cell density was observed in LPB and SA-1 fibrosarcomas, lower in EAT carcinoma and the lowest in B16F1 melanoma. A negative relationship existed between cell size and cell density. B16F1 melanoma was composed of the biggest cells and LPB of the smallest. The content of proteoglycans and collagen also differed between B16F1 melanoma and the other three tumor models. LPB, SA-1 and EAT tumors had a high proteoglycan content, while in B16F1 proteoglycans were observed only in traces. The content of collagen was at approximately the same level in LPB, SA-1 and EAT (~0.20). Only traces of collagen were found in B16F1 melanoma (Fig. 2).

Fig. 2 Cell size, cell density, proteoglycans and collagen content in 4 tumor models with respect to the transfection efficiency.

IV. DISCUSSION
In the present study we have shown that histological properties of the tumors, such as cell density, cell size and the content of proteoglycans and collagen, influence the transfection efficiency of electrically-assisted gene delivery to tumors. Theoretical models and in vitro experimental results showed that permeabilization is not only a function of electric field intensity and cell size, but also of cell shape and orientation, as well as of cell density and cell organization within a specific model of a multicellular structure. Furthermore, mathematical models suggested that a multicellular environment decreases the effects of electroporation on cell membrane permeability and that the presence of more cells around a given cell decreases the effect even further. Therefore, a multicellular environment results in reduced uptake of molecules into the cells [11,12]. These results support the findings of our present and previous in vivo studies on solid tumors, in which cell density correlated with transfection efficiency [3]. Another factor that might affect DNA distribution in tumors, and ultimately the uptake of DNA into tumor cells, is the composition, structure and amount of the extracellular matrix in tumors, which determines the transport properties of plasmid DNA injected into tumors [1]. In the case of electrically-assisted gene delivery this physiological resistance can be partly reduced by the electric field induced in the tumors, which can facilitate DNA distribution. Zaharoff et al. showed that the average plasmid DNA movement under pulsed electric fields was 4.2-fold farther in B16F1 melanoma than in 4T1 sarcoma. The plasmid DNA mobility was inversely related to the tumor collagen content, which was approximately 8 times greater in 4T1 than in B16F1 tumors [13]. This is in agreement with the results of our study, where histological analysis of tumors showed a 2-3 fold greater content of proteoglycans and collagen in fibrosarcomas than in B16F1 melanoma.
Besides the above-mentioned cellular properties of tumors, the difference in extracellular matrix composition also translated into a difference in transfection efficiency, which was the highest in B16F1 melanoma with the lowest proteoglycan and collagen content. Furthermore, to evaluate and confirm the effect of extracellular matrix composition on the mobility and distribution of DNA in the tumors, different time intervals between DNA injection and electroporation of tumors were tested, to demonstrate that DNA needs a longer time to distribute in tumors with a high proteoglycan and collagen content than in tumors with a low proteoglycan and collagen content. We found that B16F1 melanoma, with the lowest content of proteoglycans and collagen, had higher transfection efficiencies, and at shorter time intervals, than the other three tumor models, especially the fibrosarcomas. These three tumor models should have the same mobility of DNA, but due to the differences in cell size, cell shape and cell density, electrically-assisted gene delivery in EAT carcinoma yielded better transfection efficiency. The highest transfection efficiency was obtained at slightly longer time intervals between DNA injection and electroporation of the tumors than in the case of B16F1 melanoma. LPB and SA-1 fibrosarcomas, with a high content of proteoglycans and collagen, high cell density, spindle-like cell shape and small cell size, yielded low transfection efficiencies at all tested time intervals between DNA injection and electroporation. Finally, the unique biological properties of different tumor types may also be responsible for the differences in transfection efficiency. It is well known that tumors are biologically heterogeneous systems with differences in growth rate, antigenicity, immunogenicity, morphology, histology, genotype and biochemical properties. So far, it has not been well defined which major determinants of tumor heterogeneity are responsible for the different levels of electrogene transfection efficiency. In our previous study, as well as in the present one, we propose that electrogene transfection efficiency is at least in part dependent on tumor type. Namely, we found that the highest transfection efficiency was obtained in melanoma (B16F1), followed by carcinomas (EAT, T24), with the lowest transfection efficiency obtained in sarcomas or carcinosarcomas (LPB, SA-1, SaF, P22) [3,14]. Which biological properties of cells in solid tumors contribute to the differences in transfection efficiency, besides the already known physical parameters of the applied electric pulses and the histological properties of tissues, is currently not known, but molecular biology techniques such as genomics and proteomics could answer this question.

V. CONCLUSIONS

Collectively, the results of our study point out the importance of the histological properties of tumors for effective electrogene therapy.
Tumors with larger, spherical cells, low cell density and low proteoglycan and collagen content are more effectively transfected than tumors with a high proteoglycan and collagen content and small, spindle-shaped cells of high density. Furthermore, only in those tumors that had larger, spherical cells does an optimal time interval for transfection exist. As electrogene therapy is already being tested in Phase I-II clinical trials, knowledge of tumor histology can assist in the planning of electrogene therapy, with respect to the time interval between DNA injection and electroporation as well as the selection of electrical parameters to obtain a sufficient electric field distribution in the tumors.
ACKNOWLEDGMENT The authors acknowledge the financial support from the state budget through the Slovenian Research Agency (Projects No. P3-0003 and J3-7044).
REFERENCES

1. Cemazar M, Golzio M, Sersa G, Rols MP, Teissie J. Electrically-assisted nucleic acids delivery to tissues in vivo: where do we stand? Curr Pharm Des 2006;12(29):3817-25.
2. Andre F, Mir LM. DNA electrotransfer: its principles and an updated review of its therapeutic applications. Gene Ther 2004;11 Suppl 1:S33-S42.
3. Cemazar M, Sersa G, Wilson J, Tozer GM, Hart SL, Grosel A et al. Effective gene transfer to solid tumors using different nonviral gene delivery techniques: electroporation, liposomes, and integrin-targeted vector. Cancer Gene Ther 2002;9(4):399-406.
4. Rols MP, Delteil C, Golzio M, Dumond P, Cros S, Teissie J. In vivo electrically mediated protein and gene transfer in murine melanoma. Nat Biotechnol 1998;16(2):168-71.
5. Heller L, Jaroszeski MJ, Coppola D, Pottinger C, Gilbert R, Heller R. Electrically mediated plasmid DNA delivery to hepatocellular carcinomas in vivo. Gene Ther 2000;7(10):826-9.
6. Wells JM, Li LH, Sen A, Jahreis GP, Hui SW. Electroporation-enhanced gene delivery in mammary tumors. Gene Ther 2000;7(7):541-7.
7. Baumgartner G. The impact of extracellular matrix on chemoresistance of solid tumors--experimental and clinical results of hyaluronidase as additive to cytostatic chemotherapy. Cancer Lett 1998;131(1):1-2.
8. Smrekar B, Wightman L, Wolschek MF, Lichtenberger C, Ruzicka R, Ogris M et al. Tissue-dependent factors affect gene delivery to tumors in vivo. Gene Ther 2003;10(13):1079-88.
9. Minchinton AI, Tannock IF. Drug penetration in solid tumours. Nat Rev Cancer 2006;6(8):583-92.
10. Tannock IF, Lee CM, Tunggal JK, Cowan DS, Egorin MJ. Limited penetration of anticancer drugs through tumor tissue: a potential cause of resistance of solid tumors to chemotherapy. Clin Cancer Res 2002;8(3):878-84.
11. Canatella PJ, Black MM, Bonnichsen DM, McKenna C, Prausnitz MR. Tissue electroporation: quantification and analysis of heterogeneous transport in multicellular environments. Biophys J 2004;86(5):3260-8.
12. Susil R, Semrov D, Miklavcic D. Electric field-induced transmembrane potential depends on cell density and organization. Electro- and Magnetobiology 1998;17(3):391-9.
13. Zaharoff DA, Barr RC, Li CY, Yuan F. Electromobility of plasmid DNA in tumor tissues during electric field-mediated gene delivery. Gene Ther 2002;9(19):1286-90.
14. Cemazar M, Pavlin D, Kranjc S, Grosel A, Mesojednik S, Sersa G. Sequence and time dependence of transfection efficiency of electrically-assisted gene delivery to tumors in mice. Curr Drug Deliv 2006;3(1):77-81.

Author: Suzana Mesojednik
Institute: Institute of Oncology Ljubljana
Street: Zaloska 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Voltage breakdown measurement of planar lipid bilayer mixtures P. Kramar, D. Miklavcic and A. Macek Lebar 1
University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— Electroporation is characterized by the formation of structural changes within the cell membrane, caused by the presence of an electric field. It is stipulated that pores are mostly formed by the rearrangement of lipid molecules in the lipid bilayer structure; if so, the planar lipid bilayer is a good model for experimental and theoretical studies. In this study three different mixtures of lipid molecules, POPC, POPC + 1 µM C12E8 and POPS:POPC (3:7), were used to form planar lipid bilayers. Their breakdown voltage was measured by means of a linear rising signal. The results of this study confirmed our expectations. At almost all slopes of the linear rising voltage signal a significant difference in breakdown voltage was found between POPC and POPC + 1 µM C12E8 planar lipid bilayers, while Ubr of POPC and POPS:POPC (3:7) planar lipid bilayers did not differ significantly. Keywords— planar lipid bilayer, voltage breakdown, linear rising signal.
I. INTRODUCTION

Electroporation is characterized by the formation of structural changes within the cell membrane, caused by the presence of an electric field. These changes, named "pores", increase the plasma membrane permeability and enable ions and molecules to enter the cell [1]. Reversible electroporation is used to introduce various substances into the cell and has many practical applications such as gene electrotransfer, transdermal drug delivery and electrochemotherapy [2-5]. If the cell membrane collapses due to a too-high electric field, the electroporation becomes irreversible. This form of the phenomenon can be used for liquid food and water preservation [6,7]. The applicability of electroporation is broad: from biotechnology and biology to medicine. Each application has its own optimal electrical parameters [8], which have to be determined beforehand [9]. It is stipulated that pores are mostly formed by the rearrangement of lipid molecules in the lipid bilayer structure; if so, the planar lipid bilayer is a good model for experimental and theoretical studies [10]. The planar lipid bilayer can be considered as a small part of the total cell membrane. A lipid bilayer as an artificial model of the cell membrane can be made of only one type of lipid molecule, of mixtures of lipids, or even of lipids
and proteins [11]. Lipid bilayers of different compositions have different electrical properties that, due to their influence on membrane stability in an electric field, are important for the use of electroporation. Planar lipid bilayer stability in an electric field, and consequently the voltage that causes bilayer rupture, is one of the most important properties of planar lipid bilayers. The breakdown voltage of the lipid bilayer is usually determined by a rectangular voltage pulse. The amplitude of the voltage pulse is incremented in small steps until the breakdown of the bilayer is obtained [12]. Using such a protocol, however, the number of applied voltage pulses is not known in advance and each bilayer is exposed to voltage stress many times. Such pretreatment of the lipid bilayer affects its stability and consequently the determined breakdown voltage of the lipid bilayer [13]. To avoid this inconsistency, an approach using a linear rising voltage signal was suggested [14], which allows the breakdown voltage to be determined by a single voltage exposure. In this study three different mixtures of molecules were used to form planar lipid bilayers. Their breakdown voltage was measured by means of a linear rising signal.

II. MATERIALS AND METHODS

A. Electrical setup

Our system for following electroporation of planar lipid bilayers consists of a signal generator, a Teflon chamber and a device used for measurements of membrane current and voltage (Fig. 1). The signal generator is an arbitrary voltage generator that provides voltage amplitudes from -5 V to +5 V. It is controlled by custom-written software (Genpyrrha), specially designed for drawing the voltage signal used for membrane electroporation. The last part of the signal generator is an analogue switch. The switch disconnects the output of the signal generator and switches to a 1 MΩ resistor. The switch is fast: it turns off the signal generator in 2 ns.
In this way the system discharge voltage is measured and consequently the capacitance of the lipid bilayer can be determined.
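The discharge measurement can be sketched numerically: once the switch routes the charged system onto the 1 MΩ resistor, the voltage decays exponentially with time constant τ = RC, so the capacitance follows from the fitted τ. The trace below is synthetic, and the capacitance value is an assumption for illustration only:

```python
import numpy as np

R = 1e6                       # discharge resistor from the setup, ohm
C_true = 200e-12              # assumed (illustrative) system capacitance, F
U0 = 0.1                      # initial voltage at switch-off, V (illustrative)

# Synthetic discharge trace: u(t) = U0 * exp(-t / (R*C))
t = np.linspace(0.0, 1e-3, 1000)
u = U0 * np.exp(-t / (R * C_true))

# Estimate the time constant from a linear fit to log(u), then C = tau / R
slope = np.polyfit(t, np.log(u), 1)[0]
tau = -1.0 / slope
C_est = tau / R
print(f"estimated C = {C_est * 1e12:.0f} pF")   # -> estimated C = 200 pF
```

On real oscilloscope records the same fit would be applied to the measured decay, once with and once without the bilayer, and the bilayer capacitance obtained as the difference.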
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 578–581, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 System for electroporation of planar lipid bilayers. 1. The microprocessor board with an MCF5204 processor and two modules; one module generates arbitrary signals and the other, realized in Xilinx, is used for frequency extension. 2. Chamber for forming lipid bilayers and two Ag-AgCl electrodes. 3. Modules for current and voltage amplification. 4. Digital oscilloscope for data storage.

Two Ag-AgCl electrodes, one on each side of the planar lipid bilayer, were plunged into the salt solution. Transmembrane voltage was measured via a LeCroy 1822 differential amplifier. The same electrodes were used to measure transmembrane current. Both signals were stored on a LeCroy Waverunner-2 354M oscilloscope in Matlab format. All signals were processed offline. The chamber is made of Teflon. It consists of two cubic reservoirs with a volume of 5.3 cm3 each. In the hole between the two reservoirs a thin Teflon sheet with a round hole (105 µm diameter) is inserted. The lipid bilayer is formed by the Montal-Muller method [15].

B. Chemical setup

The salt solution was prepared from 0.1 M KCl and 0.01 M Hepes in equal proportions. Some droplets of 1 M NaOH were added to obtain pH 4.7. The lipids used were POPC (1-Palmitoyl-2-Oleoyl-sn-Glycero-3-Phosphocholine) and POPS (1-Palmitoyl-2-Oleoyl-sn-Glycero-3-[Phospho-L-Serine]) in powder form (Avanti Polar Lipids Inc., USA). The POPC and POPS were dissolved in a solution of hexane and ethanol (Riedel-de Haën, Germany) in a 9:1 ratio. Octaethyleneglycol mono n-dodecyl ether (C12E8) was obtained in crystal form (Fluka, Switzerland). Deionized water, filtered to remove organic impurities, with a resistivity of 18 MΩ, was used to make the solutions. A mixture of hexadecane and pentane (Merck, Germany) in a 3:7 ratio was used for torus forming. Three types of planar lipid bilayers were studied: pure POPC, the lipid mixture POPS:POPC (3:7) and the lipid-surfactant mixture POPC + 1 µM C12E8. POPC and POPS were mixed before application into the chamber. For the surfactant experiments, after membrane stability and threshold reproducibility were ensured, 15 ml of C12E8 solution at 100 times the desired concentration (1 µM) was injected into one of the compartments of the chamber, which contained 1.5 ml of salt solution.

C. Measurement protocol

The measurement protocol consisted of two parts: capacitance measurement (Fig. 2A) and lipid bilayer breakdown voltage measurement (Fig. 2B). The capacitance and the breakdown voltage were determined for each lipid bilayer. The capacitance of each planar lipid bilayer was measured by the discharge method [14,16]. We determined the breakdown voltage (Ubr) of the lipid bilayer by the linear rising signal. The slope of the linear rising signal (k) and the peak voltage of the signal have to be selected in advance. Seven different slopes were selected. Breakdown voltage was defined as the voltage at the moment tbr when a sudden increase of transmembrane current was observed. The time of breakdown tbr was defined as the lifetime of the lipid bilayer at the chosen slope of the linear rising signal (Fig. 2).

D. Statistics

To compare breakdown voltages of the planar lipid bilayers exposed to voltage signals of different slopes, the Kruskal-Wallis one-way analysis of variance on ranks was used. The same test was used to compare breakdown voltages of the planar lipid bilayer mixtures at each slope. In both comparisons, all pairwise multiple comparisons were made by Tukey's test. Descriptive statistics include mean value and standard deviation. Using nonlinear regression, a two-parameter curve was fitted to the data
U = a / (1 - e^(-t/b)) ,    (1)
where U is Ubr measured at different slopes, t is the corresponding tbr, and a and b are parameters. Parameter a is an
Fig. 2 Measurement protocol: A) the capacitance of the lipid bilayer was measured in two steps: in the first step we measured the capacitance of the electronic system without the lipid bilayer; in the second step we measured the capacitance of the electronic system with the lipid bilayer and salt solution. B) voltage breakdown measurement with the linear rising signal.
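The breakdown-detection rule in part B (Ubr is the value of the linear rising signal at the instant tbr of the sudden current jump) can be sketched on a synthetic trace; the slope, sampling rate, thresholds and jump time below are illustrative assumptions, not recorded data:

```python
import numpy as np

k = 4.8e3                     # slope of the linear rising signal, V/s
fs = 10e6                     # sampling frequency, Hz (assumed)
t = np.arange(0.0, 150e-6, 1.0 / fs)      # time axis, s
u = k * t                                  # applied linear rising voltage

# Synthetic transmembrane current: small baseline, sudden jump at breakdown
i = np.full_like(t, 1e-9)
i[t >= 104e-6] = 1e-6         # assumed breakdown at tbr = 104 us

# tbr = first sample where the current exceeds a threshold; Ubr = u(tbr)
threshold = 1e-7
idx = int(np.argmax(i > threshold))
tbr = t[idx]
ubr = k * tbr
print(f"tbr = {tbr * 1e6:.0f} us, Ubr = {ubr:.2f} V")
```

With these assumed numbers the detected Ubr comes out near 0.50 V, of the same order as the POPC values reported in Table 1.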
asymptote of the curve, which corresponds to the minimal breakdown voltage UbrMIN for the specific bilayer. Parameter b governs the inclination of the curve.

III. RESULTS

Mean breakdown voltages Ubr of the POPC, POPC + 1 µM C12E8 and POPS:POPC (3:7) planar lipid bilayers with their standard deviations for seven different slopes k are given in Table 1, while their mean specific membrane capacitances are listed in Table 2.

Table 1 Breakdown voltages (Ubr) and lifetimes (tbr) for three different lipid bilayer mixtures exposed to linear rising voltage signals of different slopes (k). Values given are mean ± standard deviation. N denotes the number of measurements in each experimental group.

          POPC                     POPC + 1 µM C12E8        POPS:POPC (3:7)
k (kV/s)  Ubr(V)     tbr(µs)  N    Ubr(V)     tbr(µs)  N    Ubr(V)     tbr(µs)  N
4.8       0.51±0.02  104±4    18   0.55±0.02  113±3    3    0.53±0.05  108±10   8
5.5       0.48±0.01  86±3     6    0.55±0.01  98±3     3    0.53±0.01  95±2     8
7.8       0.49±0.01  61±2     4    0.58±0.03  74±4     8    0.54±0.02  68±3     7
11.5      -          -        -    0.62±0.04  54±3     5    0.59±0.02  51±2     4
16.7      0.57±0.01  34±1     3    0.68±0.04  40±2     7    0.63±0.07  37±4     4
21.6      0.59±0.02  27±1     5    0.69±0.03  31±2     5    0.63±0.02  29±1     5
48.1      0.74±0.02  15±1     3    0.83±0.03  18±1     6    0.78±0.09  17±2     6

The breakdown voltage Ubr of all planar lipid bilayer mixtures increased with increasing slope of the linear rising voltage signal. Ubr measured at slope 4.8 kV/s is not statistically different from Ubr measured at slope 5.5 kV/s, nor from Ubr measured at slope 7.8 kV/s. Ubr measured at slope 5.5 kV/s is not statistically different from Ubr measured at slope 7.8 kV/s. Also Ubr
measured at slopes 16.7 kV/s and 21.6 kV/s are not statistically different. All other pairwise multiple comparisons showed significant differences; that is, Ubr of all other experimental groups differs significantly. At slopes 4.8 kV/s and 48.1 kV/s no significant difference was found between Ubr of the three planar lipid bilayer mixtures (P=0.106 and P=0.129, respectively). At all other slopes a significant difference was found between the POPC and POPC + 1 µM C12E8 planar lipid bilayer mixtures. The data are also presented in graphical form (Fig. 3). The parameters a and b of curve (1) for each planar lipid bilayer mixture are presented in Table 3. The asymptote of the curve (a) is considered the minimal breakdown voltage UbrMIN for planar lipid bilayers made of POPC, POPC + 1 µM C12E8 and POPS:POPC (3:7).

IV. CONCLUSIONS

Although a planar lipid bilayer differs in a number of characteristics from the biological membrane, it is believed that the general picture of electroporation is the same [17]; pores are formed in the planar lipid bilayer structure. When biomedical and biotechnological applications of electroporation are under consideration, the breakdown voltage is one of the most important properties of a lipid bilayer because it assures stable, long-lasting pores in biological membranes. The aim of this study was to determine the breakdown voltages of planar lipid bilayers composed of different lipid mixtures by a linear rising voltage signal. Seven different slopes of the linear rising signal were selected due to the already known experimental evidence that the lipid bilayer lifetime is dependent on the applied voltage [12] and that the
Table 2  Specific membrane capacitances (c) for each planar lipid bilayer mixture. Values given are mean ± standard deviation; the number of measurements (N) in each experimental group is given in the third column.

Mixture               c (μF/cm2)   N
POPC                  0.5±0.1      33
POPC + 1 µM C12E8     0.21±0.02    37
POPS:POPC (3:7)       0.29±0.07    42

Table 3  Computed parameters (a) and (b) of the two-parameter curve fitted to the measured data for each planar lipid bilayer mixture.

Mixture               a (V)   b (µs)
POPC                  0.49    14.77
POPC + 1 µM C12E8     0.58    16.03
POPS:POPC (3:7)       0.55    14.56
Fig. 3 The breakdown voltage (Ubr) (dots) of lipid bilayers as a function of the lifetime tbr. The gray lines show the seven different slopes (k) of the applied linearly rising voltage signal. Red, green and blue curves represent the two-parameter curve fitted to the data (Equation 1). Asymptotes of the curves (a) correspond to the minimal breakdown voltage UbrMIN for lipid bilayers made of POPC, POPC + 1 µM C12E8 and POPS:POPC (3:7) (Table 3).
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
lipid bilayer breakdown voltage depends on the lipid bilayer pre-treatment [13]. Our previous study showed that the lifetime of a lipid bilayer depends on the slope of the linearly rising voltage signal and that the breakdown voltage is also a function of this slope; it increases with increasing slope [14]. A two-parameter strength-duration curve (Equation 1) can be fitted to the experimental data, and its asymptote corresponds to the minimal breakdown voltage UbrMIN of a specific lipid bilayer mixture. In this study three different planar lipid bilayer mixtures were selected. The mixture POPC + 1 µM C12E8 was selected on the basis of already known experimental evidence that the addition of C12E8 affects the electroporation process in planar lipid bilayers [12] as well as in cells [18]. POPS lipid molecules were selected due to their specific molecular properties; namely, POPS molecules by themselves cannot build a stable planar lipid bilayer [16]. Therefore we prepared a mixture of POPS and POPC molecules in the ratio 3:7. Due to the significant fraction of POPC molecules in the mixture we expected that POPC and POPS:POPC (3:7) lipid bilayers would not differ much during the electroporation process. On the other hand, it was already shown that the addition of C12E8 to a POPC planar lipid bilayer changes the bilayer properties and the electroporation process as well [12,18]. The results of this study confirmed these expectations. At almost all slopes of the linearly rising voltage signal a significant difference in Ubr was found between POPC and POPC + 1 µM C12E8 planar lipid bilayers, while Ubr of POPC and POPS:POPC (3:7) planar lipid bilayers did not differ significantly.
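The strength-duration fit can be illustrated in code. The exact functional form of Equation 1 is given earlier in the paper; the sketch below assumes a Lapicque-type form Ubr(t) = a / (1 − exp(−t/b)), whose asymptote for long lifetimes is a = UbrMIN, using the fitted POPC parameters from Table 3 (a minimal illustrative sketch, not the authors' code):

```python
import math

def ubr(t_us, a_volts, b_us):
    """Two-parameter strength-duration curve, here assumed to be of the
    Lapicque type: Ubr(t) = a / (1 - exp(-t / b)).  For long lifetimes t
    the curve approaches its asymptote a, i.e. UbrMIN."""
    return a_volts / (1.0 - math.exp(-t_us / b_us))

# Fitted POPC parameters from Table 3: a = 0.49 V, b = 14.77 us.
a, b = 0.49, 14.77

short = ubr(15, a, b)    # short lifetime -> Ubr well above the asymptote
long_ = ubr(104, a, b)   # long lifetime  -> Ubr close to a = UbrMIN
```

With these parameters the assumed curve predicts a breakdown voltage close to a = 0.49 V at long lifetimes and a noticeably higher value at short lifetimes; since the functional form is an assumption here, the sketch is illustrative only.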
REFERENCES

1. Neumann E, Kakorin S, Toensing K (1999) Fundamentals of electroporative delivery of drugs and genes. Bioelectrochemistry and Bioenergetics 48:3-16
2. Denet A R, Preat V (2003) Transdermal delivery of timolol by electroporation through human skin. Journal of Controlled Release 88:253-262
3. Ferber D (2001) Gene therapy: safer and virus-free? Science 294:1638-1642
4. Mir L M, Orlowski S (1999) Mechanisms of electrochemotherapy. Advanced Drug Delivery Reviews 35:107-118
5. Cemazar M, Wilson I, Dachs G U, et al. (2001) Direct visualisation of electroporation-assisted in vivo gene delivery to tumors using intravital microscopy - spatial and time dependent distribution. BMC Cancer 4:81-87
6. Gould G W (1995) Biodeterioration of foods and an overview of preservation in the food and dairy industries. International Biodeterioration & Biodegradation 36:267-277
7. Vernhes M C, Benichou A, Pernin P, et al. (2002) Elimination of free-living amoebae in fresh water with pulsed electric fields. Water Research 36:3429-3438
8. Puc M, Corovic S, Flisar K, et al. (2004) Techniques of signal generation required for electropermeabilisation. Survey of electropermeabilisation devices. Bioelectrochemistry 64:113-124
9. Macek Lebar A, Sersa G, Kranjc S, et al. (2002) Optimisation of pulse parameters in vitro for in vivo electrochemotherapy. Anticancer Research 22:1731-1736
10. Fosnaric M, Kralj-Iglic V, Bohinc K, et al. (2003) Stabilization of pores in lipid bilayers by anisotropic inclusions. J Phys Chem B 107:12519-12526
11. Tien H T, Ottova A (2003) Planar lipid bilayers (BLMs) and their applications. Elsevier, New York
12. Troiano G C, Tung L, Sharma V, et al. (1998) The reduction in electroporation voltages by the addition of a surfactant to planar lipid bilayers. Biophysical Journal 75:880-888
13. Abidor I G, Arakelyan V B, Chernomordik L V, et al. (1979) Electric breakdown of bilayer lipid membranes I. The main experimental facts and their qualitative discussion. Bioelectrochemistry and Bioenergetics 6:37-52
14. Kramar P, Miklavcic D, Macek-Lebar A (2007) Determination of the lipid bilayer breakdown voltage by means of a linear rising signal. Bioelectrochemistry 70:23-27
15. Montal M, Mueller P (1972) Formation of bimolecular membranes from lipid monolayers and a study of their electrical properties. Proc Natl Acad Sci USA 69:3561-3566
16. Diedrich A, Gunther B, Winterhalter M (1998) Influence of surface charges on the rupture of black lipid membranes. Physical Review E 58:4883-4889
17. Tarek M (2005) Membrane electroporation: a molecular dynamics simulation. Biophysical Journal 88:4045-4053
18. Kanduser M, Fosnaric M, Sentjurc M, et al. (2003) Effect of surfactant polyoxyethylene glycol (C12E8) on electroporation of cell line DC3F. Colloids and Surfaces A 214:205-217

Address of the corresponding author:
Author: Alenka Macek Lebar
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Voltage commutator for multiple electrodes in gene electrotransfer of skin cells

M. Kranjc, P. Kramar, M. Rebersek and D. Miklavcic
Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, 1000 Ljubljana, Slovenia

Abstract— Gene electrotransfer is a promising nonviral method for transferring genes into cells. The method is based on electroporation and has been proven successful in both in vivo and in vitro conditions. The phenomenon occurs when cells are exposed to an electric field established by high and low voltage pulses. The first, high voltage pulse results in a high level of cell permeabilization (permeabilization pulse), while the second, low voltage pulse provides a driving force for the transport of DNA into cells (electrophoretic pulse). The efficiency and success of gene electrotransfer significantly depend on the electrical devices in use. A voltage commutator is one of the most important electrical components in bipolar or multi-electrode devices. Its main function is commutating the high and low voltage pulses, which are delivered through the microelectrodes to the skin cells. Even though gene electrotransfer is based on electroporation, our previous voltage commutator for electroporation is not appropriate for gene electrotransfer because of its slow switching between voltage pulses. The aim of this study was to develop and test a new voltage commutator with multiple outputs and faster switching capabilities. We achieved this with high speed MOSFET drivers, which can withstand voltages up to 600 V with a commutation delay time of less than 170 ns. We examined and confirmed the voltage commutator switching capability by measuring the deviation of the output signal in the frequency range of gene electrotransfer. In addition, we found that the voltage commutator is capable of driving voltage pulses at even higher frequencies.
We also made each of the ten voltage commutator outputs independently programmable to deliver its own sequence of high and low voltage pulses. We demonstrated this by commutating a typical gene electrotransfer pulse sequence, consisting of a high voltage pulse followed by a low voltage pulse. Due to the voltage commutator's scalability, the number of outputs can be increased for future requirements. Keywords— voltage commutator, gene electrotransfer, high speed power MOSFET driver, multiple electrodes
I. INTRODUCTION

Gene electrotransfer is a method using electric pulses to temporarily and reversibly permeabilize the cell membrane and to drive the DNA into the cell electrophoretically [1]. Recent experiments on gene electrotransfer suggested the use of a pulsing protocol consisting of a high voltage, permeabilizing pulse, followed by a low voltage, electrophoretic pulse [2-6]. Skin is an attractive target tissue for gene therapy because of its size and accessibility for in vivo gene
delivery. It is also an excellent target organ for DNA immunization because of the large number of potent antigen presenting cells, critical to an effective immune response [7-8]. It can also be used for treatments of skin disorders and for treatment of diseases of other organs through a systemic response. If the applied voltage on the skin is not too high, partial or full recovery of the skin resistance can be observed within a period of microseconds up to several hours [9]. Unpleasant sensations for patients during voltage pulse delivery can be reduced by increasing the frequency of the voltage pulses. Studies have confirmed that pulse frequencies higher than the frequency of tetanic contraction (> 100 Hz) reduce the number of individual contractions to a single muscle contraction [10-11]. The voltage commutator is a significant electrical component in the realization of successful gene electrotransfer. Its main function is to commutate the high and low voltage pulses, which are delivered through the microelectrodes to the skin cells. The final amplitude of the voltage pulse is determined by the voltage commutator, as the last component between the microelectrodes and the rest of the electrical components (Fig. 1). It is important that the voltage commutator does not add any distortion to the signal or change it in any way, except for switching the polarity when needed. With alternating polarity of the output signals and a correct array of microelectrodes we can create an alternating electric field. Studies have shown a higher percentage of transfected cells when an alternating electric field was applied compared to a direct electric field [12]. To achieve this, a voltage commutator must have multiple outputs for driving an array of microelectrodes. Even though gene electrotransfer is based on electroporation, our previous voltage commutator used for electroporation does not meet all gene electrotransfer requirements to deliver voltage pulses with alternating polarity and high frequency.
Commutation between output signals with the previous voltage commutator is based on relays; therefore the minimum switching time is limited to 12 ms [13]. The aim of our study was to develop a
Fig. 1 Block scheme of electrical components required for successful gene electrotransfer.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 574–577, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
new multiple output voltage commutator with a faster switching time.

II. ELECTRICAL SPECIFICATION OF VOLTAGE COMMUTATOR

For successful gene electrotransfer it is important that a voltage commutator meets the following requirements:
• Driving rectangular high voltage pulses with an amplitude up to 200 V and duration ranging between 20 µs and 20 ms.
• Driving rectangular low voltage pulses with an amplitude up to 20 V and duration ranging between 10 ms and 500 ms.
• Commutating between pulses at frequencies ranging from 1 Hz to 5 kHz.
• Capability of switching between polarities of pulses.
• At least 10 individual outputs for microelectrodes.
Figure 2 shows the basic system design of the voltage commutator. The voltage commutator is based on high speed power MOSFET drivers (IR2104, International Rectifier, El Segundo, California, USA), which are responsible for commutating between output voltage signals. They can withstand voltages up to 600 V and have a turn-on time of less than 170 ns. A single output of the voltage commutator is driven by two MOSFET drivers. Because of their small dimensions, we used 20 drivers and developed a voltage commutator with 10 outputs. Each output of the voltage commutator can be in one of three states: positive, negative or high impedance. In addition, each output can be independently programmed to deliver its own sequence of high and low voltage pulses. It is also important that the MOSFET drivers are galvanically isolated with optocouplers from the digital logic part of the commutator, due to the voltage and current differences between the high/low voltage
signals and digital signals. The control of the voltage commutator is set with appropriate digital signals, which are generated by an external computer or a microcontroller. We placed two different connectors for driving digital signals, DIN 64_AC and MSTBA 13 x 2; our voltage commutator can thus be controlled through either one of them.

III. PERFORMANCE AND EXPERIMENTAL RESULTS

To demonstrate the frequency performance of the voltage commutator we designed an experiment in which we measured the voltage on the output at various frequencies. For the driving signal we used a rectangular 10 V signal with frequency increasing from 1 Hz to 5 kHz. We measured the amplitude of the output signal and examined its deviation from the input signal with a Bode diagram (Fig. 3), which is based on equation (1).
H = 20 · log(Uoutput signal / Uinput signal)    (1)
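Equation (1) is a plain amplitude-ratio measure in decibels; a minimal transcription (the helper name is hypothetical):

```python
import math

def deviation_db(u_output, u_input):
    """Deviation H of the output amplitude from the input amplitude in dB,
    as in Equation (1): H = 20 * log10(U_output / U_input)."""
    return 20.0 * math.log10(u_output / u_input)

# A deviation of -0.03 dB (the reported maximum) corresponds to an amplitude
# ratio of 10**(-0.03/20), i.e. the output stays within about 0.35 % of the
# input amplitude.
ratio = 10.0 ** (-0.03 / 20.0)
```

An ideal, distortion-free commutator gives H = 0 dB at every frequency; any attenuation yields a small negative H.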
During this measurement we did not observe any substantial changes in the amplitude of the output signal. The maximum deviation of 0.03 dB does not affect successful realization of gene electrotransfer. In addition, we did not observe any significant changes in the amplitude of the output signal at frequencies as high as 500 kHz. The ability of the voltage commutator to commutate voltage pulses with the required duration and amplitude was verified with the following experiment. We measured the voltage commutator's performance in driving voltage pulses of maximum duration at a certain amplitude of
Fig. 2 Block diagram of voltage commutator.

Fig. 3 Deviation between input and output signal of voltage commutator presented in a Bode diagram. The frequency area of gene electrotransfer is between the hatched vertical lines. Each symbol represents an average value ± standard deviation.
the pulse. At the beginning, we used one rectangular voltage pulse with an amplitude of 600 V as the input signal. Due to the limited capacitance of the voltage commutator capacitors, the amplitude of the output signal started to decrease after a certain time. Therefore, when we decreased the amplitude of the input signal, the duration of the output signal was prolonged. We measured the maximum duration of the output signal at amplitudes decreasing from 600 V towards 0 V. The results are shown in figure 4. Comparing the voltage commutator performance area with the demanded high and low voltage pulse area, we can observe that the voltage commutator is capable of meeting the pulse duration and amplitude requirements of gene electrotransfer. To demonstrate the ability of the voltage commutator to deliver a sequence of high and low voltage signals, we performed the following experiment. For the input signal sequence we used two 250 µs rectangular high voltage pulses with an amplitude of 100 V and a 1 ms rectangular low voltage pulse with an amplitude of 5 V. The pause between the high and low voltage pulses was 1 ms. After each sequence we changed the polarities. We placed a load with 100 Ω resistance between two outputs of the voltage commutator. As shown in figure 5, the voltage commutator successfully delivered the output signal without any distortion. The polarity of the output signal changed successfully after each sequence.

Fig. 5 Voltage commutator performance in driving a sequence of low and high voltage pulses. The input signal (signal 3) was composed of two 250 µs high voltage pulses with an amplitude of 100 V and one 1 ms low voltage pulse with an amplitude of 5 V. The output signal with alternating polarity is shown as signal 1. We can also observe the current (signal 2) through the 100 Ω load placed on the output of the voltage commutator. The measurements were performed using a LeCroy LT3544 digital oscilloscope, LeCroy ADP305 voltage probes and a LeCroy AP015 current probe.

IV. CONCLUSION

The results of our study show that the voltage commutator is capable of meeting all the electrical requirements of gene electrotransfer of skin cells. It is considerably important which electrical devices are used for this nonviral method of transferring genes into cells. One of the most significant electrical components of bipolar or multi-electrode devices is a voltage commutator, whose function is the commutation of high and low voltage pulses. The principle of commutation between voltage signals is based on high speed MOSFET drivers, which assure driving of voltage signals in the whole frequency range of gene electrotransfer. They can withstand voltages up to 600 V, which is three times higher than the maximum amplitude of the required high voltage pulse. The ten outputs of the voltage commutator allow us to connect an array of microelectrodes to establish an alternating electric field on the skin surface. The voltage commutator can also drive high and low voltage pulses with durations that exceed our current gene electrotransfer needs. Therefore, our voltage commutator meets all the gene electrotransfer requirements; moreover, it is ready for commutation of voltage pulses with even higher frequency, amplitude and duration.

ACKNOWLEDGMENT
Fig. 4 Voltage commutator performance area in comparison with the required high and low voltage area of gene electrotransfer pulses.
This research was supported by the European project Angioskin (LSHB-CT-2005-512127) under the 6th framework program of the European Commission and by the Slovenian Research Agency.
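The HV/LV pulse sequence used in the commutation experiment (two 250 µs, 100 V high voltage pulses and one 1 ms, 5 V low voltage pulse, with polarity flipped after each sequence) can be sketched as a small generator. The function names, the segment representation and the exact placement of the 1 ms pauses between consecutive pulses are illustrative assumptions:

```python
def gene_electrotransfer_sequence(polarity):
    """One pulse sequence as in the experiment: two 250 us / 100 V high
    voltage pulses and one 1 ms / 5 V low voltage pulse.  Each segment is
    (duration_us, amplitude_V); 0 V encodes a pause.  Placing a 1 ms pause
    between consecutive pulses is an assumption.  polarity is +1 or -1."""
    hv = 100.0 * polarity       # high voltage amplitude
    lv = 5.0 * polarity         # low voltage amplitude
    pause = (1000, 0.0)         # 1 ms pause
    return [(250, hv), pause, (250, hv), pause, (1000, lv)]

def alternating_sequences(n):
    """n consecutive sequences, flipping the polarity after each sequence."""
    return [gene_electrotransfer_sequence(1 if i % 2 == 0 else -1)
            for i in range(n)]

seqs = alternating_sequences(2)
```

Running `alternating_sequences(2)` yields one positive-polarity and one negative-polarity sequence of five segments each, mirroring the alternating output observed in Fig. 5.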
REFERENCES

1. Somiari S, Glasspool-Malone J, Drabick J J, Gilbert R A, Heller R, Jaroszeski M, Malone R W (2000) Theory and in vivo application of electroporative gene delivery. Molecular Therapy 2(3):178-187
2. Bureau M F, Gehl J, Deleuze V, Mir L M, Scherman D (2000) Importance of association between permeabilization and electrophoretic forces for intramuscular DNA electrotransfer. Biochimica et Biophysica Acta 1474:353-359
3. Mir L M, Bureau M F, Rangara R, Rouy D, Caillaud J-M, Delaere P, Branellec D, Schwartz B, Scherman D (1999) High-efficiency gene transfer into skeletal muscle mediated by electric pulses. Proceedings of the National Academy of Sciences 96:4262-4267
4. Satkauskas S, Bureau M F, Puc M, Mahfoudi A, Scherman D, Miklavcic D, Mir L M (2002) Mechanisms of in vivo DNA electrotransfer: respective contributions of cell electropermeabilization and DNA electrophoresis. Molecular Therapy 5(2):133-140
5. Satkauskas S, Andre F, Bureau M F, Scherman D, Miklavcic D, Mir L M (2005) Electrophoretic component of electric pulses determines the efficacy of in vivo DNA electrotransfer. Human Gene Therapy 16:1194-1201
6. Pavselj N, Preat V (2005) DNA electrotransfer into the skin using a combination of one high- and one low-voltage pulse. Journal of Controlled Release 106:407-415
7. Drabick J J, Glasspool-Malone J, King A, Malone R W (2001) Cutaneous transfection and immune response to intradermal nucleic acid vaccination are significantly enhanced by in vivo electropermeabilization. Molecular Therapy 3(2):249-255
8. Zhang L, Widera G, Rabussay D (2004) Enhancement of the effectiveness of electroporation-augmented cutaneous DNA vaccination by a particulate adjuvant. Bioelectrochemistry 63:369-373
9. Vanbever R, Preat V (1999) In vivo efficacy and safety of skin electroporation. Advanced Drug Delivery Reviews 35:77-88
10. Miklavcic D, Pucihar G, Pavlovec M, Ribaric S, Mali M, Macek-Lebar A, Petkovsek M, Nastran J, Kranjc S, Cemazar M, Sersa G (2005) The effect of high frequency electric pulses on muscle contractions and antitumor efficiency in vivo for potential use in clinical electrochemotherapy. Bioelectrochemistry 65:121-128
11. Zupanic A, Ribaric S, Miklavcic D (2007) Increasing the repetition frequency of electric pulse delivery reduces unpleasant sensations that occur in electrochemotherapy. Neoplasma, in press
12. Faurie C, Phez E, Golzio M, Vossen C, Lesbordes J C, Delteil C, Teissie J, Rols M P (2004) Effect of electric field vectoriality on electrically mediated gene delivery in mammalian cells. Biochimica et Biophysica Acta 1665:92-100
13. Rebersek M, Corovic S, Grosel A, Sersa G, Miklavcic D (2003) Electronic design of electrode commutation control for multiple electrodes in electrochemotherapy and corresponding electric field distribution. IEEE Region 8 EUROCON 2003, Ljubljana, 2003, pp 193-196
Address of the corresponding author:
Author: Peter Kramar
Institute: Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
An Experimental Test of Fuzzy Controller Based on Cycle-to-Cycle Control for FES-induced Gait: Knee Joint Control with Neurologically Intact Subjects

T. Watanabe (1), A. Arifin (2), T. Masuko (3) and M. Yoshizawa (1)

(1) Information Synergy Center, Tohoku University, Sendai, Japan
(2) Dept. Electrical Eng., Institute of Technology "Sepuluh Nopember" (ITS), Surabaya, Indonesia
(3) Dept. Electrical and Communication Eng., Tohoku University, Sendai, Japan
Abstract— Functional Electrical Stimulation (FES) can be effective in assisting or restoring paralyzed motor functions caused by spinal cord injury or cerebrovascular disease. The purpose of this study was to develop a control method for gait induced by FES. We previously proposed a fuzzy control system based on cycle-to-cycle control for controlling the hip, knee and ankle joints during the swing phase of FES-induced gait and evaluated it in computer simulation studies. In this report, the fuzzy controller was tested experimentally in controlling the maximum knee extension angle, stimulating the vastus muscles of neurologically intact subjects using surface electrodes. The fuzzy controller worked properly in regulating the stimulation burst duration time, and the maximum knee extension angle was controlled well. The experimental results suggested that the fuzzy controller would be practical in clinical applications for the control of FES-induced gait. However, it was also suggested that electrical stimulation with a large burst duration time or muscle fatigue caused a change in the muscle response. Automatic tuning of controller parameters during gait control and further experimental tests are necessary for practical applications. Keywords— functional electrical stimulation (FES), cycle-to-cycle control, fuzzy controller, knee joint.
I. INTRODUCTION

Functional Electrical Stimulation (FES) can be an effective method of assisting or restoring paralyzed motor functions caused by spinal cord injury or cerebrovascular disease. However, an appropriate FES control strategy is required to restore paralyzed gait, because the movements of the lower limbs during gait are complex multi-joint movements. Controlling paralyzed limbs using FES is a difficult problem because of the nonlinearity, time-varying properties and significant time delays in the responses of the musculoskeletal system to electrical stimulation. Cycle-to-cycle control is a control method for restoring paralyzed gait using FES [1, 2]. Cycle-to-cycle control implemented with a proportional-integral-derivative (PID) controller was experimentally tested in controlling the knee extension angle [1] or the hip flexion angle range [2]. However, the PID controller showed deterioration in compensating for muscle fatigue of the hip flexors [2].
We proposed a fuzzy control system based on the cycle-to-cycle control method for controlling the swing phase of hemiplegic gait induced by FES [3]. A computer simulation study showed that the fuzzy control system could control multi-joint (hip, knee and ankle) movements of the swing phase of FES-induced hemiplegic gait with better performance than the PID controller. In this report, an experimental test of the fuzzy controller based on the cycle-to-cycle control was performed. The maximum knee extension angle was controlled by stimulating the vastus muscles of neurologically intact subjects.

II. METHODS

A. Outline of Cycle-to-Cycle Control

In the cycle-to-cycle control, each muscle is stimulated by a single burst of stimulation pulses with constant pulse amplitude, pulse width and frequency to induce a joint movement reaching the target joint angle (such as the maximum joint angle of normal gait). The method therefore differs from traditional closed-loop control such as tracking control of a desired angle trajectory. Fig. 1 shows the outline of the cycle-to-cycle control used in maximum knee extension control as an example. The controlled maximum joint angle of the previous cycle is delivered as the feedback signal. The error is defined as the difference between the target joint angle and the obtained one.

Fig. 1 Conceptual diagram of cycle-to-cycle control (error = θtarget − θmax; the fuzzy controller outputs the burst duration TB).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 647–650, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The burst duration of the stimulation pulses of the current cycle is regulated based on the error of the previous cycle to ensure that the joint angle reaches the target angle at every cycle. The control algorithm regulating the stimulation burst duration of the current cycle, TB[n], is shown in (1):

TB[n] = TB[n−1] + ΔTB[n]    (1)

where TB[n−1] is the stimulation burst duration of the cycle just before the current one and ΔTB[n] is the output of the controller.

B. Fuzzy Controller

The structure, membership functions and rule sets of the fuzzy controller for the vastus muscles (the vastus medialis and lateralis) were based on our previous work [4]. The input variables were the error of the previous cycle and the desired range of joint angle. The desired range was the joint angle range between the angle at the stimulation onset and the target angle. Although it would be possible to control using only 'error' as the input variable, 'desired range' was also used considering clinical applications; the 'desired range' input makes it possible to realize appropriate control under different joint angles at the beginning of stimulation. The input membership functions were expressed as triangular and trapezoidal fuzzy sets. The membership function of 'error' comprised seven linguistic terms and that of 'desired range' consisted of three linguistic terms, as shown in Table 1. The output variable was ΔTB*, corresponding to ΔTB in (1). The membership function of the output variable was expressed as fuzzy singletons. The fuzzy rules directed the control action to compensate the error by increasing TB when the error was negative and decreasing TB when the error was positive. In linguistic expression, increasing TB was expressed by taking a positive value of ΔTB*, and decreasing TB by taking a negative ΔTB*. The fuzzy rules of the controller were formulated as logic combinations of the two inputs, as shown in Table 1. An example of a rule is:

If error is NS (negative small) and range is S (small) Then ΔTB* is PS (positive small)

where NS is the linguistic value of the input variable 'error' of the premise, S is that of the input variable 'desired range', and PS is that of the variable ΔTB* of the consequent. The fuzzy inference was accomplished using the Mamdani method. The defuzzification process converts the fuzzy inference outputs ΔTBk*, resulting from the kth rule, into a crisp value ΔTB. The center of gravity (COG) method was used for defuzzification, as shown in (2):

ΔTB = Σk μ(ΔTBk*) ⋅ ΔTBk* / Σk μ(ΔTBk*),  k = 1, 2, …, N, where N is the number of rules    (2)

Table 1  Fuzzy rules of controller

           desired range
error      S     M     L
NL         PL    PL1   PL2
NM         PM    PL1   PL2
NS         PS    PM    PL1
Z          Z     Z     Z
PS         NS    NM    NL
PM         NM    NL    NL1
PL         NL    NL1   NL2

S: small, M: medium, L: large, NL2: negative large 2, NL1: negative large 1, NL: negative large, NM: negative medium, NS: negative small, Z: zero, PS: positive small, PM: positive medium, PL: positive large, PL1: positive large 1, PL2: positive large 2
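The inference scheme of Section II.B (input membership functions, the rules of Table 1, singleton outputs and COG defuzzification) can be sketched as follows. The rule table is transcribed from Table 1, but the membership breakpoints and singleton values are illustrative assumptions; the paper's actual values were tuned by trial and error:

```python
def tri(x, left, peak, right):
    """Triangular membership function on [left, right] with apex at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Input membership functions (degrees).  Breakpoints are illustrative
# assumptions; the paper also used trapezoidal sets.
ERROR_MF = {
    'NL': lambda e: tri(e, -45.0, -30.0, -15.0),
    'NM': lambda e: tri(e, -30.0, -15.0, -5.0),
    'NS': lambda e: tri(e, -15.0, -5.0, 0.0),
    'Z':  lambda e: tri(e, -5.0, 0.0, 5.0),
    'PS': lambda e: tri(e, 0.0, 5.0, 15.0),
    'PM': lambda e: tri(e, 5.0, 15.0, 30.0),
    'PL': lambda e: tri(e, 15.0, 30.0, 45.0),
}
RANGE_MF = {
    'S': lambda r: tri(r, 0.0, 10.0, 30.0),
    'M': lambda r: tri(r, 10.0, 30.0, 50.0),
    'L': lambda r: tri(r, 30.0, 50.0, 70.0),
}

# Rule table transcribed from Table 1: RULES[error][desired range] -> output.
RULES = {
    'NL': {'S': 'PL', 'M': 'PL1', 'L': 'PL2'},
    'NM': {'S': 'PM', 'M': 'PL1', 'L': 'PL2'},
    'NS': {'S': 'PS', 'M': 'PM',  'L': 'PL1'},
    'Z':  {'S': 'Z',  'M': 'Z',   'L': 'Z'},
    'PS': {'S': 'NS', 'M': 'NM',  'L': 'NL'},
    'PM': {'S': 'NM', 'M': 'NL',  'L': 'NL1'},
    'PL': {'S': 'NL', 'M': 'NL1', 'L': 'NL2'},
}

# Output singletons for delta-TB* (seconds) -- illustrative values only.
SINGLETON = {'NL2': -0.4, 'NL1': -0.3, 'NL': -0.2, 'NM': -0.1, 'NS': -0.05,
             'Z': 0.0, 'PS': 0.05, 'PM': 0.1, 'PL': 0.2, 'PL1': 0.3, 'PL2': 0.4}

def delta_tb(error, desired_range):
    """Mamdani inference (min for AND) with singleton outputs and COG
    defuzzification, i.e. the weighted average of Equation (2)."""
    num = den = 0.0
    for e_term, row in RULES.items():
        for r_term, out_term in row.items():
            w = min(ERROR_MF[e_term](error), RANGE_MF[r_term](desired_range))
            num += w * SINGLETON[out_term]
            den += w
    return num / den if den else 0.0

def next_tb(tb_prev, error, desired_range):
    """Cycle-to-cycle update of Equation (1): TB[n] = TB[n-1] + delta_TB[n]."""
    return tb_prev + delta_tb(error, desired_range)
```

With singleton output sets, the COG of Equation (2) reduces to the firing-strength-weighted average computed in `delta_tb`; a negative error increases TB and a positive error decreases it, as the rules prescribe.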
C. Experimental Method

In order to perform the experimental tests, the algorithms for detecting the maximum extension angle and the stimulation onset time at each cycle were tested before the control trials. The maximum extension angle was defined as the peak value detected by comparing three consecutive sampled values of the joint angle. The stimulation onset time was defined as the time when the changes of the knee joint angle over the 10 most recently sampled consecutive data (sampling interval 50 ms) were less than or equal to 0.3 deg. The knee joint is extended from the resting position by the applied electrical stimulation. When the electrical stimulation is stopped, the knee joint is flexed by gravity and oscillation occurs. Control of the next cycle is started by detecting the stimulation onset time when the oscillation stops. The values of the fuzzy output membership function were adjusted by trial and error in a preliminary experiment before the control trials and were fixed during the control experiments. The knee joint angle was controlled by stimulating the vastus muscles of the left leg of two neurologically intact subjects using surface electrodes (F-150, Nihon Kohden). Pulse width and pulse frequency were fixed at 200 μs and 20 Hz, respectively. The knee joint angle was measured with an electric goniometer (M180, Penny & Giles). The subject sat on a desk so that his legs did not reach the ground, and relaxed his legs during the experiments. Before each control trial, the stimulus pulse amplitude was determined in order to obtain a sufficient control range without pain. The target value of the maximum knee extension angle was set at 30 deg (0 deg
Fig. 2 A result of the cycle-to-cycle control of maximum knee extension angle by fuzzy controller (subject A, 2nd trial): (a) stimulation burst duration time; (b) obtained maximum knee extension angle and target.

Fig. 4 A result of the cycle-to-cycle control of maximum knee extension angle by fuzzy controller (subject B, 3rd trial): (a) stimulation burst duration time; (b) obtained maximum knee extension angle and target.
means full knee extension). The capabilities of the controller were tested in automatic generation of the stimulation burst duration; that is, the cycle-to-cycle control was started with a burst duration time of 0 s. The control was performed three times with a time interval of between 15 min and 20 min.

III. RESULTS

The algorithms for detecting the maximum extension angle and the stimulation onset time worked appropriately during the control trials. One trial from subject B was removed from the data analysis because of a data acquisition error. Figure 2 shows the obtained maximum extension angle and the regulated stimulation burst duration time as an example of the control results. The maximum knee extension angle reached the target at about the 10th cycle and was controlled well after that. In the first and third trials of subject A, however, the maximum knee joint angle was not controlled appropriately after stimulation with large TB, as seen in Fig. 3. In Fig. 3, the value of TB was regulated and
Fig.3 A result of the cycle-to-cycle control of maximum knee extension angle by fuzzy controller (subject A, 1st trial): (a) stimulation burst duration time; (b) obtained maximum knee extension angle and target
the joint angle was controlled appropriately until the 28th cycle, probably compensating for muscle fatigue. After electrical stimulation with large TB was applied to the muscle (about 1.9 s at the 29th cycle, in this case), the muscle produced larger force in the cycles after the 30th cycle than in the previous cycles. Although the fuzzy controller decreased TB, the developed joint angle was not reduced sufficiently, indicating larger muscle force production. Fig.4 shows an example of the control with another subject. After the 25th cycle, a slight oscillating response appeared, probably because of muscle fatigue.

IV. DISCUSSION

The results of the experiments suggested that the fuzzy controller used in the cycle-to-cycle control would be practical in clinical applications. Since only knee joint angle control was tested, stimulating one muscle group, control of multi-joint (hip, knee and ankle) movements remains to be tested.
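The cycle-to-cycle scheme described above — measure the maximum knee extension angle of one swing, then adjust the burst duration TB for the next swing — can be sketched as follows. This is an illustrative proportional update, not the authors' actual fuzzy rule base; the gain and the TB limit are assumed values.

```python
def cycle_to_cycle_update(tb, max_angle, target, gain=0.01, tb_max=2.0):
    """One cycle-to-cycle correction of burst duration TB [s].

    Illustrative proportional stand-in for the paper's fuzzy rules:
    if the maximum knee extension angle fell short of the target,
    lengthen the burst; if it overshot, shorten it.
    """
    error = target - max_angle             # [deg], positive = undershoot
    tb_next = tb + gain * error            # longer burst -> larger extension
    return min(max(tb_next, 0.0), tb_max)  # clip to the feasible range

# Control starts from TB = 0 s, as in the experiment.
tb = 0.0
target = 70.0
for measured in [10.0, 30.0, 50.0, 62.0, 68.0, 70.0]:
    tb = cycle_to_cycle_update(tb, measured, target)
```

With this toy sequence of measured angles, TB settles once the target is reached; the real controller additionally shapes the correction with fuzzy membership functions.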
T. Watanabe, A. Arifin, T. Masuko and M. Yoshizawa
Large TB was considered to cause a change in the muscle response to electrical stimulation. The gradual decrease of muscle force caused by muscle fatigue could be compensated by the fuzzy controller in our previous computer simulation study [3]. Muscle fatigue compensation by increasing TB was also found in the experimental results, as shown in Fig. 3. However, the sudden change of muscle force production ability seen in Fig. 3 was not modeled in our previous computer simulations. Muscle potentiation may be a reason for this phenomenon [5]. Stimulation with large burst duration times has to be studied further with several subjects. Force production ability after muscle fatigue may vary from cycle to cycle, as seen in Fig.4. It is necessary to study a method of dealing with time-variant properties such as those seen in Figs.3 and 4. For example, parameters of the fuzzy controller may have to be adjusted during gait control. The automatic parameter tuning by a fuzzy model, which was found to be effective in our previous computer simulation study [3], can be modified and applied to the controller.
V. CONCLUSION

In this report, the fuzzy controller based on the cycle-to-cycle control for FES-induced gait was tested experimentally in maximum knee extension angle control. The fuzzy controller worked properly in regulating stimulation burst duration time, and the maximum knee extension angle was controlled well. The experimental results suggested that the controller would be practical in clinical applications. However, it seemed that a change in muscle response was caused by electrical stimulation with large burst duration time or after muscle fatigue. Modification of the fuzzy controller, including automatic parameter tuning, and further experimental tests in multi-joint control with several subjects, including paralyzed subjects, are necessary for practical applications.

ACKNOWLEDGMENT

This study was partly supported by the Ministry of Education, Culture, Sports, Science and Technology of Japan under a Grant-in-Aid for Scientific Research, and by the Ministry of Health, Labour and Welfare under the Health and Labour Sciences Research Grants.

REFERENCES
1. Veltink PH (1991) Control of FES-induced cyclical movements of the lower leg. Med. & Biol. Eng. & Comput. 29:NS8-NS12
2. Franken HM, Veltink PH et al (1995) Cycle-to-cycle control of swing phase of paraplegic gait induced by surface electrical stimulation. Med. & Biol. Eng. & Comput. 33:440-451
3. Arifin A, Watanabe T et al (2006) Design of Fuzzy Controller of the Cycle-to-Cycle Control for Swing Phase of Hemiplegic Gait Induced by FES. IEICE Trans. Inf. and Syst. E89-D:1525-1533
4. Arifin A, Watanabe T et al (2003) Computer simulation study of the cycle-to-cycle control using fuzzy controllers for restoring swing phase of FES-induced hemiplegic gait. Proc. Symp. on Med. & Biol. Eng. 2003, Sapporo, Japan, pp 131-139
5. Eom GM, Watanabe T et al (2002) Gradual potentiation of isometric muscle force during constant electrical stimulation. Med. & Biol. Eng. & Comput. 40:137-143

Address of the corresponding author:
Author: Takashi Watanabe
Institute: Information Synergy Center, Tohoku University
Street: 6-6-05 Aramaki-Aza-Aoba, Aoba-ku
City: Sendai
Country: Japan
Email: [email protected]
FES treatment of lower extremities of patients with upper / lower motor neuron lesion: A comparison of rehabilitation strategies and stimulation equipment

M. Bijak1, M. Mödlin2, C. Hofer2, M. Rakos3, H. Kern2, W. Mayr1
1 Center for Biomedical Engineering and Physics, Medical University of Vienna, Austria
2 Department of Physical Medicine and Rehabilitation, Wilhelminenspital, Vienna, Austria
3 Otto Bock Healthcare Products GmbH, Vienna, Austria
Abstract— Functional Electrical Stimulation (FES) of lower extremities in patients suffering from paraplegia can be used to restore standing up from the wheelchair, standing, walking / stepping and sitting down. Usually only patients with an intact lower motor neuron (spastic paraplegia) can benefit, while patients with flaccid paralysis are excluded due to the nonexistent or very weak force response to electrical stimulation. The European Union (EU) supported project "RISE" investigated FES to recover long-term denervated degenerated muscles (DDM). It turned out that this patient group can achieve goals similar to those of spastic paraplegics, but requires a longer rehabilitation course and stimulation parameters beyond the current EU regulations.
Keywords— FES, lower extremities, denervated degenerated muscles
I. INTRODUCTION

Since Kantrowitz [5] demonstrated standing of paraplegic subjects by quadriceps stimulation in the early sixties, various groups and researchers have worked on rehabilitation strategies and technical equipment to restore lower limb function. Stimulation with either surface electrodes or implantable devices is state of the art. The quadriceps muscles are stimulated for knee extension, the gluteus muscles for hip stability, and the peroneal reflex is used for flexion functions. Other muscle groups are added according to the requirements and the technical possibilities, such as the available number of independent stimulation channels.
Practically all established clinical FES applications are based on direct excitation of neural structures and, in the case of muscle functions, indirect activation of the muscles. So patients who wanted to benefit from an FES program for lower extremities had to have an intact lower motor neuron. Individuals with conus cauda lesion have denervated lower limb muscles and suffer from severe muscle atrophy. After some years the major part of the muscle tissue is replaced by fat and connective tissue. The trophic situation of the paralyzed limbs worsens rapidly, causing problems like decubital ulcers, dysfunction of wound healing and osteoporosis.
Although early denervation has been widely studied, the long-term effects have received much less attention, since the general belief is that all myofibers disappear within several months of denervation. Thus FES was not seen as a proper tool for recovering and strengthening long-term denervated degenerated muscles (DDM).
For patients with intact lower motor neuron, a clinical trial is ongoing in Vienna to find a rehabilitation strategy to achieve standing up and walking by means of FES in a short time. The European Union (EU) Commission Shared Cost Project RISE, with 9 project partners, 3 additional partners and 6 subcontractors from 6 countries, started in November 2001 and was established to create a systematic body of basic scientific knowledge about the restorative effects of electrical stimulation of DDM and related topics.
In the following, the differences in the treatment of these two patient groups are summarized.

II. METHODS

Up to now, 20 patients with upper motor neuron lesion participated in the FES walking project. For this group of patients an eight-channel stimulator, mainly intended for stimulation of lower extremities, was developed [1] (Fig. 1). If, after a twelve-week FES training program with increasing intensity for muscle strengthening [3], the knee torque was above 30 Nm, standing-up, sitting-down and balancing training was implemented in a 6-days-per-week training regime. After 4-8 weeks, FES supported walking was practiced, first in a parallel bar frame with wheels and then with a walker. Stimulation parameters are biphasic rectangular pulses with a duration of 1 ms up to 2 ms and a frequency of 27 Hz. The voltage was adjusted to achieve strong contractions; over-stimulation was avoided to reduce fatigue. Stimulation timing was optimized for comfortable and smooth stepping in close cooperation between therapists and patient [2].
For reactivation of DDM, pilot studies showed that the technical requirements are completely different. Very long impulses and high currents beyond the allowed limits had to be used. A special allowance has been given to stimulate the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 658–660, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
twenty seven subjects participating in the RISE project with long impulses and high currents. Since appropriate stimulators are not available on the market, a custom device has been built [4]. In comparison to standard FES devices, the stimulator for DDM requires stronger batteries and a more powerful output stage, resulting in a bigger device (Fig. 1).

Fig. 1. Top: 2 channel stimulator for DDM; Bottom: 8 channel stimulator mounted on a belt

III. RESULTS

All patients with upper motor neuron lesion could perform standing up and do at least a few steps within 6 months if the described training regime was obeyed. The required stimulation amplitude was in the range of ±30V to ±60V. In comparison, patients from the RISE project required impulse durations between 10 and 150 ms, and after severe degeneration up to 200 ms. Consequently the amplitude values are also significantly higher than in nerve stimulation and require up to ±100V (±200mA), resulting in an energy of 4J per pulse delivered to the tissue. Since those energies are potentially dangerous for the skin, special care had to be taken to avoid burns. Initially, large conductive silicone rubber electrodes were applied to the skin via a wet sponge cloth and later, when the skin had adapted to the high currents, via gel. The flexible electrodes had to fit closely to the uneven skin surface to provide a homogeneous distribution of the electric field.
A four-phase rehabilitation program was worked out. Training started in the first phase with single twitch stimulation using 150ms-200ms pulses at 2Hz. The progressively increasing muscle excitability permitted an increase in daily training duration. During the following 3 months, pulse duration could be shortened to 80ms to 100ms (phase 2). After approximately half a year, in phase 3, twitch stimulation was replaced with burst stimulation - 40ms pulses delivered at 20Hz. In the fourth phase (months 9-12), force training sessions were introduced with 70 to 80% of maximum force. Initially knee stretching was performed against gravity and later with increasing load around the ankles of up to 5kg. The progressive FES training increased mass and force of the thigh muscles (Fig. 2), which allowed FES supported standing up and standing [8]. A comparison between the two groups of patients is summarized in table 1.

IV. DISCUSSION AND CONCLUSIONS

Subjects with upper motor neuron lesion could reach FES supported standing up and walking / stepping within 6 months. Further FES training improves walking distance, maintains muscle mass and skin trophism and improves general health. Stimulation parameters are within usual FES ranges. In the scope of the RISE project it could be demonstrated that patients with conus cauda lesion syndrome, even with long-term DDM, can also benefit from FES training. More details of the observed muscle regeneration are described in [6; 7]. Due to the absence of the neuromuscular junction and the decomposition of motor units, muscular contractions can only be elicited by depolarizing the cellular membrane of each single muscle fiber. The electrical membrane sensitivity strongly depends on the state of degeneration or recovery of the mus-
Fig. 2. CT scans of the right thighs 20 cm below trochanter major: (a) > 10 years denervated; (b) same subject as (a), 1 year stimulated; (c) 1.7 years denervated; (d) same subject as (c), 1 year stimulated.
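The pulse-energy figures reported for DDM stimulation can be cross-checked with simple arithmetic: treating a pulse as roughly rectangular at 100 V and 200 mA for 200 ms gives the 4 J quoted in the Results. The rectangular-pulse assumption is ours; the voltage, current and duration values come from the text.

```python
def pulse_energy(voltage_v, current_a, duration_s):
    """Energy of one (assumed rectangular) stimulation pulse: E = U * I * t."""
    return voltage_v * current_a * duration_s

# DDM stimulation: up to 100 V, 200 mA, 200 ms pulses -> about 4 J per pulse
e_ddm = pulse_energy(100.0, 0.200, 0.200)

# Nerve stimulation with 2 ms pulses at 60 V: orders of magnitude less energy,
# even assuming the same 200 mA current (illustrative assumption)
e_nerve = pulse_energy(60.0, 0.200, 0.002)
```

The three-orders-of-magnitude gap between the two results is what drives the skin-burn precautions and the regulatory issue discussed below.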
cle cell, but in any case it is much lower than the sensitivity of a nerve cell. To achieve muscle fiber activation, much longer stimulation impulses and higher currents than in patients with an intact lower motor neuron are required. The current EU regulations allow not more than 0.3J of energy per pulse delivered to the tissue, but up to 4J are required for DDM. To bring the benefits of FES for patients with lower motor neuron lesion to clinical practice, appropriate certified stimulators have to be commercially available. For that, however, the regulations regarding the output energy per pulse have to be revised. A related proposal, supported by the outcome of the RISE project, is in preparation and will be sent to the EU bodies.

Table 1: Comparison of some stimulation issues for lower extremities for patients with upper and lower motor neuron lesion

                              Upper motor neuron lesion    Lower motor neuron lesion
Training before standing up   3 months                     >1 year
Pulse duration                1..2ms                       10ms ... 150ms
Stimulation frequency         25Hz ... 70Hz                1Hz ... 20Hz
Stimulation intensity         Up to ±60V                   Up to ±100V (±200mA)
Energy per pulse              Up to 14mJ                   Up to 4J
Electrodes                    Hydrogel electrodes          Silicone rubber with wet sponge / gel
Risk of skin burn             Very low                     High
Flexion reflex                Yes                          No

ACKNOWLEDGMENT

This project is supported by the EU Commission Shared Cost Project RISE (Contract no. QLG5-CT-2001-02191), the Austrian Ministry of Science and Otto Bock Healthcare Products.

REFERENCES
1. Bijak, M., Mayr, W., Rakos, M. et al.: The Vienna functional electrical stimulation system for restoration of walking functions in spastic paraplegia. Artificial Organs, Vol. 26, No. 3, 2002, pp 224-227.
2. Bijak, M., Rakos, M., Hofer, C. et al.: Stimulation Parameter Optimization for FES Supported Standing up and Walking in SCI Patients. Journal of Artificial Organs, Vol. 29, No. 3, 2005, pp 220-223.
3. Bijak, M., Rakos, M., Hofer, C. et al.: From the wheelchair to walking with the aid of an eight channel stimulation system: a case study. 10th Annual Conference of the International Functional Electrical Stimulation Society, 2005, pp 270-272.
4. Hofer, C., Mayr, W., Stohr, H. et al.: A stimulator for functional activation of denervated muscles. Journal of Artificial Organs, Vol. 26, No. 3, 2002, pp 276-279.
5. Kantrowitz, A.: Electronic Physiologic Aids. New York: Maimonides Hospital, 1963.
6. Kern, H., Boncompagni, S., Rossini, K. et al.: Long-term denervation in humans causes degeneration of both contractile and excitation-contraction coupling apparatus, which is reversible by functional electrical stimulation (FES): a role for myofiber regeneration? J. Neuropathol. Exp. Neurol., Vol. 63, No. 9, 2004, pp 919-931.
7. Kern, H., Rossini, K., Carraro, U. et al.: Muscle biopsies show that FES of denervated muscles reverses human muscle degeneration from permanent spinal motoneuron lesion. J. Rehabil. Res. Dev., Vol. 42, No. 3 Suppl 1, 2005, pp 43-53.
8. Mödlin, M., Forstner, C., Hofer, C. et al.: Electrical stimulation of denervated muscles: first results of a clinical study. Journal of Artificial Organs, Vol. 29, No. 3, 2005, pp 203-206.
Author: Manfred Bijak
Institute: Medical University of Vienna, Center for Biomedical Engineering and Physics
Street: Waehringer Guertel 18-24/4L
City: Vienna
Country: Austria
Email: [email protected]
Magnetic Coils Design for Localized Stimulation

L. Cret, M. Plesa, D. Stet and R.V. Ciupa
Technical University of Cluj-Napoca / Department of Electrotechnics, Cluj-Napoca, Romania
Abstract— The technique of magnetic stimulation of nerve fibres represents a new direction of research in modern medicine. This study starts from a major limitation of the coils used for magnetic stimulation: their inability to stimulate the target tissue specifically, without activating the surrounding areas. The first goal of this study was to determine the optimal configuration of coils for some specific applications and to evaluate the distribution of the induced electric field. Once the coil configuration is established, we address other issues that need to be solved: achieving smaller coils, reducing power consumption (the low efficiency of power transfer from the coil to the tissue is a major drawback) and reducing coil heating.
Keywords— magnetic stimulation, coil design, half power region, localization
I. INTRODUCTION

The preoccupation with improving the quality of life of persons with different handicaps has led to extended research in the area of functional stimulation. Due to its advantages compared to electrical stimulation, magnetic stimulation of the human nervous system is now a common technique in modern medicine. A disadvantage consists, however, in the fact that the need for focal stimulation cannot always be fulfilled. This is why the design of coils with special geometries can help achieve this goal. Another drawback is the low efficiency of power transfer from the coil to the tissue.
The present paper starts by presenting the theoretical background of magnetic stimulation, referring to the mathematical model for the computation of the electric field, the electric circuit of the stimulator and the computation of the inductance of magnetic coils. Then, we present different designs of coils and we assess the localization of the electric field produced by these structures and their energy transfer parameters.

II. THEORETICAL BACKGROUND

The human central nervous system can be stimulated by strong magnetic field pulses that induce an electric field in the tissue, leading to excitation of neurons [1]. According to electromagnetic field theory, the electric field E can be computed as a function of the electric potential V and the magnetic vector potential A [2]:

E = −∂A/∂t − grad V    (1)

The first term of the equation, called the "primary electric field E_A", is determined by means of the magnetic vector potential. For coils of non-traditional shapes, one can compute A using an approximation method in which the contour of the coil is first divided into a variable number of equal segments, and the magnetic vector potential at the calculus point is obtained by adding the contribution of each segment to the final value [3].
Fig. 1 Notation for the computation of the magnetic vector potential produced by a conductor segment
Considering the notations in figure 1, and the fact that the current I(t) flows through the conductor, the magnetic vector potential created by the segment at point P can be written, using the vectors defined above, as:

A = (μ0·I(t)/(4π)) · ln[(|l − r| + l − (l·r)/l) / (r − (l·r)/l)] · l/l    (2)

where l denotes the segment vector, l = |l| its length, r the vector from the segment start to the field point, and r = |r|.
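Equation (2) can be evaluated numerically for one straight segment. The sketch below is our own translation of the formula (SI units; the segment vector l runs from p0 to p1, and r is taken from the segment start p0 to the observation point, matching the notation above); the example geometry is illustrative.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def segment_vector_potential(p0, p1, obs, current):
    """Magnetic vector potential A at point obs created by a straight
    segment from p0 to p1 carrying `current` [A] (equation (2)).
    Returns the (Ax, Ay, Az) components, directed along the segment."""
    l = [b - a for a, b in zip(p0, p1)]       # segment vector
    r = [b - a for a, b in zip(p0, obs)]      # segment start -> field point
    l_len = math.sqrt(sum(c * c for c in l))
    r_len = math.sqrt(sum(c * c for c in r))
    l_dot_r = sum(a * b for a, b in zip(l, r))
    lr_len = math.sqrt(sum((a - b) ** 2 for a, b in zip(l, r)))  # |l - r|
    log_arg = (lr_len + l_len - l_dot_r / l_len) / (r_len - l_dot_r / l_len)
    scale = MU0 * current / (4 * math.pi) * math.log(log_arg) / l_len
    return tuple(scale * c for c in l)

# 1 m segment along z carrying 1 A, observed 0.1 m from its start point:
A = segment_vector_potential((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.1, 0.0, 0.0), 1.0)
```

For this perpendicular geometry the result reduces to the textbook expression A_z = (μ0 I / 4π) · ln((L + sqrt(L² + d²)) / d), which is a convenient sanity check.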
The second term of equation (1) represents "the secondary electric field E_V". It depends on the geometry of the tissue-air boundary, considered a planar surface. This term is computed knowing that, on the tissue surface, the boundary condition to be fulfilled is n·E_A = −n·E_V. For a flat surface, the electrostatic potential V and E_V can be computed with an analytical formula.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 665–668, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

The current required to induce the electric field is delivered by a magnetic stimulator (an RLC circuit). The current waveform produced by discharging a capacitor with an initial voltage U0 into the coil is [4]:

I = (U0/(ωL)) · sin(ωt) · exp(−αt)    (3)

where α = R/(2L), ω = sqrt(1/(LC) − α²), C is the capacitance, and R and L are the resistance and inductance of the coil, respectively.

The inductance is evaluated by taking the line integral of the vector potential around the coil, for unit current [4]: L = ∮ A·dl. This formula permits the computation of the inductances of the special coils designed to improve focality (the ability of a coil to stimulate a small area of tissue). For circuits with complex forms, we apply a method consisting in dividing the circuit into several parts. The self-inductance of the circuit, divided into n parts, can be computed with the following formula [5]:

L = Σ(k=1..n) Lk + Σ(k=1..n) Σ(i=1..n) Mki,  for i ≠ k    (4)

The self-inductance of a short straight conductor with round cross-section, at low frequencies, is [5]:

L = (μ0·l/(2π)) · [ln(2l/r) − 3/4 − (128/(45π))·(r/l) + r²/(4l²)]    (5)

with l the length of the conductor and r the radius of its cross-section. The mutual inductance between two straight conductors converging into a point is evaluated as [5]:

M = (μ0/(4π)) · cos φ · [a·ln((a+b+c)/(c+a−b)) + b·ln((a+b+c)/(c+b−a))]    (6)

The given quantities are represented in figure 2, with a and b representing the lengths of the conductors and φ the angle between them.

Fig. 2 Computing the mutual inductance between two converging conductors

For the general case, we consider two conductor segments in space. The first segment is between the points of coordinates (xa, ya, za) and (xb, yb, zb), while the second segment is between the points (xc, yc, zc) and (xd, yd, zd), see figure 3. On the second segment, we consider a point of coordinates (x, y, z). The parametric equation of the second segment is:

x = xc + (xd − xc)·t
y = yc + (yd − yc)·t    (7)
z = zc + (zd − zc)·t

with t ∈ [0, 1].

Fig. 3 Two segments in space

With the above geometrical coordinates, we can find the mutual inductance between these segments (using the Neumann formula). For two circuits, Γ1 and Γ2, in a homogeneous medium with permeability μ, the mutual magnetic flux Φ21 is:

Φ21 = ∫(SΓ2) B21 · dS = ∮(Γ2) A21 · dl2    (8)

Since circuits Γ1 and Γ2 are shaped like two straight segments, the mutual flux can be evaluated by integrating the magnetic vector potential created by the first segment along the second one. Considering the magnetic vector potential generated by a conductor segment (see equation (2)), the mutual inductance can be computed using the following equation:

L21 = (μ0/(4π)) · ∮(Γ2) ln[(|l1 − r| + l1 − (l1·r)/l1) / (r − (l1·r)/l1)] · (l1·dl2)/l1    (9)

We focus on stimulators with a fixed rise time of the current I(t) from 0 to peak, which is sufficient for comparing relative figures of merit of the stimulators.
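The discharge waveform of equation (3) is straightforward to evaluate numerically. The sketch below uses the paper's 400 V charging voltage, but the R, L, C values are assumptions for illustration, not the paper's coil data; note that I(0) = 0 and the initial slope of the current equals U0/L.

```python
import math

def stimulator_current(t, u0, resistance, inductance, capacitance):
    """Underdamped RLC discharge current, equation (3):
    I(t) = U0/(omega*L) * sin(omega*t) * exp(-alpha*t),
    with alpha = R/(2L) and omega = sqrt(1/(LC) - alpha**2)."""
    alpha = resistance / (2.0 * inductance)
    omega = math.sqrt(1.0 / (inductance * capacitance) - alpha ** 2)
    return u0 / (omega * inductance) * math.sin(omega * t) * math.exp(-alpha * t)

# Illustrative values: 400 V capacitor, 4.2 uH coil, 20 mOhm, 0.6 mF
u0, R, L, C = 400.0, 0.02, 4.2e-6, 0.6e-3
i0 = stimulator_current(0.0, u0, R, L, C)        # current starts from zero
i_early = stimulator_current(1e-9, u0, R, L, C)  # slope here is close to U0/L
```

The U0/L initial slope is exactly the substitution used for dI/dt in the energy analysis that follows.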
Given the values of L and R, the capacitance C is obtained by requiring that the rise time of the current is fixed. Because of the same requirement, we may substitute dI/dt (from equation (1)) with dI/dt|t=0 = U0/L. Assuming that the activation of the nerve fiber occurs for a preset value of the electric field E, we obtain U0, the necessary initial voltage on the capacitor that would lead to activation. The energy dissipated in the circuit during one pulse of duration Δt is [4]:

WJ = R · ∫(0..Δt) I²(t) dt    (10)
The peak magnetic energy in the coil WB required to induce a given electric field is [4]:

WB = (1/2) · L · Ipeak²    (11)
The temperature rise in the coil after one pulse of duration Δt is (assuming there is no cooling) [4]:

ΔT = (η/(c·σ·A²)) · ∫(0..Δt) I²(t) dt    (12)
where η is the resistivity, σ the density, c the specific heat and A the cross-sectional area of the copper wire of the coil. These three quantities are evaluated to establish the parameters of energy transfer from the coil to the target tissue.
To assess quantitatively the localization characteristics of the coils, we used the half power region (HPR), defined in [1] as the extent within which the magnitude of the normalized induced field is larger than 1/√2. A more localized magnetic coil will generate a smaller HPR.

III. RESULTS AND DISCUSSIONS

In this paper we analyze three different forms of magnetic coils [7], designed to improve the focality of the electric field induced in the tissue during magnetic stimulation. The coils have the same number of turns (8), but these turns are positioned differently in space: the figure of 8 coil has four turns on each leaf; the "Slinky-3" coil has its 8 turns distributed over three directions in space, 3-2-3 turns per leaf; the "3-D differential coil" has 2 turns on each leaf of the initial Slinky-3 coil, and one turn on each of the two additional "wings". The radius of the leaf is 25 mm, therefore the length of the wire is the same for each coil.
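The HPR criterion can be applied directly to a sampled field profile; below is a minimal sketch on a synthetic (Lorentzian) profile. The profile and the 2 mm sampling step are illustrative assumptions, not the computed coil fields.

```python
def half_power_region(ys, field):
    """Width of the half power region (HPR): the extent along the axis
    where the normalized induced field exceeds 1/sqrt(2) (see [1])."""
    peak = max(field)
    inside = [y for y, e in zip(ys, field) if e / peak > 0.5 ** 0.5]
    return max(inside) - min(inside) if inside else 0

# Synthetic bell-shaped profile sampled every 2 mm along y (illustrative)
ys = list(range(-80, 81, 2))
field = [1.0 / (1.0 + (y / 20.0) ** 2) for y in ys]  # Lorentzian, peak at y = 0
hpr = half_power_region(ys, field)
```

A more focal coil produces a narrower profile and hence a smaller HPR value, which is exactly the comparison made for the three coils below.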
Fig. 4 Geometry of the three tested coils: Figure of eight, Slinky-3 and 3D differential
The first step of our work consisted in using the algorithm described above to compute the inductance of each coil. Then, we considered that each of these coils is part of the RLC series circuit that represents the magnetic stimulator. The capacitor is charged to an initial voltage of 400 V. The derivative of the transient current generated by the discharge of this capacitor is maximum at the beginning of the transient regime. Then, we evaluated the electric field generated by these coils and compared the results. All the computations are performed by implementing the algorithms described above in a Matlab routine.
Figure 4 renders the geometry of the three tested coils and the sense of the current through every loop. For the Slinky-3 coil, the current in each loop is directed so that the central leg of the coil carries the total current (N × I, where N = 8 represents the number of turns of the coil).
The induced electric field was evaluated, for all the structures, in a plane situated 15 mm below the coil. A conclusive result is the one given in figure 5, which represents a comparative plot of the induced electric field generated by the three coils along the x and y axes of the system of coordinates. The electric field in figure 5 is given in normalized values, and the half power region is clearly underlined. Since the electric field generated along the x axis is similar for the three coils, the difference in localization is given by the plot in figure 5. The largest y component of the half power region was evaluated to be y = 44 mm for the Slinky-3 coil, y = 36 mm for the butterfly (figure of 8) coil and y = 32 mm for the 3-D differential coil.
Finally, we assessed the energetic parameters of the tested coils, for a radius of the leaf of 38.1 mm. The results are listed and compared in table 1, considering that the coils induce an electric field equal to 60 V/m at the target point (activation threshold).
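The energetic assessment of equations (10)-(12) can be reproduced by direct numerical integration of the discharge current. The sketch below uses the 400 V charging voltage from the text, but the R, L, C values are assumed for illustration (they are not the coils in table 1), so the absolute numbers are illustrative only.

```python
import math

def pulse_figures(u0, resistance, inductance, capacitance, dt=1e-7, n_steps=2000):
    """Numerically evaluate eq. (10) (dissipated energy WJ) and eq. (11)
    (peak magnetic energy WB) for one underdamped RLC discharge pulse.
    Also returns the integral of I^2 needed for the temperature rise (12)."""
    alpha = resistance / (2.0 * inductance)
    omega = math.sqrt(1.0 / (inductance * capacitance) - alpha ** 2)

    def current(t):
        return u0 / (omega * inductance) * math.sin(omega * t) * math.exp(-alpha * t)

    i_sq_integral = sum(current(k * dt) ** 2 * dt for k in range(n_steps))
    i_peak = max(current(k * dt) for k in range(n_steps))
    wj = resistance * i_sq_integral        # eq. (10)
    wb = 0.5 * inductance * i_peak ** 2    # eq. (11)
    return wj, wb, i_sq_integral

# Illustrative circuit: 400 V, 20 mOhm, 4.2 uH, 0.6 mF
wj, wb, i2 = pulse_figures(400.0, 0.02, 4.2e-6, 0.6e-3)
```

Multiplying the returned I² integral by η/(c·σ·A²) for a given wire cross-section yields the per-pulse temperature rise of equation (12).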
Fig. 5 Total electric field induced by the three tested coils (figure of 8, Slinky-3, 3D differential) along the y axis (normalized values)

Table 1 Energetic parameters of tested coils

Coil         Figure of eight   Slinky-3   3D differential
L (μH)       4.2               2.2        2.8
WJ (J)       5.3487            8.6375     14.6014
WB (J)       2.8849            4.4256     5.375
ΔT (°C)      0.0921            0.1473     0.2441
Ipeak (A)    1172.1            1487.6     1959.4
C (mF)       0.596             0.63       1.1

One can observe that the coil shaped as a figure of 8 has the lowest energy consumption and also the lowest temperature rise in the coil after one pulse. We also wanted to establish the variation of these energetic parameters with the radius of the leaf. The energy consumption gets lower as the radius increases, for all the tested coils. Figure 6 is a plot of this variation for a figure of 8 coil, the most efficient one in the set.

IV. CONCLUSIONS
This paper analyses the localization of the electric field induced in the target tissue by three different coil designs during magnetic stimulation. The evaluation criteria were the focality of the electric field induced by these coils and the efficiency of the energy transfer from the coil to the target tissue. The main conclusion of the paper is that, although the largest induced electric field is generated by the Slinky-3 coil, the 3-D differential coil has a better focalization, which means that this coil can be more efficient in selectively activating a target neuronal structure. From the point of view of energy transfer from the coil to the target tissue, the figure of 8 coil is the most efficient of the set. Therefore, one can conclude that it is the application that sets the best design for the magnetic coil, and there is no universal solution suitable for all cases.
Fig. 6 Energetic parameters of a figure of 8 coil, varying with the radius of the leaf: (a) dissipated energy and magnetic energy [J]; (b) temperature variation per pulse [°C]

REFERENCES
1. Hsu K.H., Durand D. (2001) A 3-D Differential Coil Design for Localized Magnetic Stimulation. IEEE Transactions on Biomedical Engineering, vol. 48
2. Roth B.J., Basser P.J. (1990) A Model of the Stimulation of a Nerve Fiber by Electromagnetic Induction. IEEE Transactions on Biomedical Engineering, vol. 37
3. Cret L., Ciupa R. (2005) Remarks on the Optimal Design of Coils for Magnetic Stimulation. ISEM Proc., vol. 4, Bad Gastein, Austria, 2003, pp 352-354
4. Ruohonen J., Virtanen J. (1997) Coil Optimisation for Magnetic Brain Stimulation. Annals of Biomedical Engineering, vol. 25
5. Cret L., Plesa M. (2006) Magnetic Coils for Localized Stimulation of the Central Nervous System. Acta Electrotehnica, Cluj-Napoca, Romania, pp 114-117
Author: Laura CRET
Institute: Technical University of Cluj-Napoca
Street: 15 Ctin Daicoviciu
City: Cluj-Napoca
Country: Romania
Email: [email protected]
Optimal Control of Walking with Functional Electrical Stimulation: Inclusion of Physiological Constraints

Strahinja Dosen1, Dejan B. Popovic2
1 Center for Sensory Motor Interaction, Aalborg University, Aalborg, Denmark
2 Faculty of Electrical Engineering, University of Belgrade, Belgrade, Serbia
Abstract— Automatic sensory-driven control of functional electrical stimulation (FES) assistive systems is of interest for the neurorehabilitation of hemiplegic individuals. MEMS-based accelerometers and gyroscopes are likely candidates for sensors within a practical FES system. In this paper we demonstrate that static optimization in the space of angular velocities that incorporates physiological constraints is an effective method for the synthesis of the stimulation pattern, compared with the use of dynamic programming for optimization. The example presented in the paper uses the walking pattern from a healthy individual and parameters that are characteristic of a hemiplegic individual. The optimization was based on minimization of the tracking error from the desired trajectory, defined in the phase space of angular velocities of leg segments, and of the muscle effort. The evaluation of the applicability of the static optimization was based on the analysis of the tracking of joint angles. We found that the maximal tracking error was below 7 degrees, which is within the typical variation of the joint angles during normal walking of a healthy individual.
Keywords— Functional electrical stimulation, musculoskeletal model, optimal tracking, MEMS sensors
I. INTRODUCTION

Lack of efficient control limits the wider use of multichannel functional electrical stimulation (FES) systems for gait restoration. Most FES systems, in home or clinical applications, use a manually controlled open-loop strategy: the system is operated through a set of switches that activate preprogrammed stimulation sequences. The literature includes simulation and analysis of the applicability of sensory-driven FES systems, as well as real-time feedback control, yet typically limited to a single muscle or one joint [1]. Most of these studies are not practical enough to be accepted for the control of walking in a clinical or home environment. Current research considers more appropriate biomechanical modeling for FES [2], and the models try to incorporate the full complexity of the musculoskeletal system [3]. This approach has contributed greatly to a better understanding of motor control and of the role of central vs. reflex control in the sensory-motor mechanisms of walking within the central nervous system, but it also faces the problem of parameter identification. Moreover, it is very difficult to
estimate parameters in real time; hence, the model of the system becomes neither observable nor controllable.
We presented earlier a reduced model of walking that is customized to a potential user with disability [4]. The parameters of the model can be identified experimentally for each eventual user [5]. The optimization was based on dynamic programming (DP) in order to determine the muscle activations necessary to track a desired trajectory. In this way it was possible to analyze the feasibility of specific walking features and to determine the main reasons why a given walking pattern is not achievable. The calculated muscle activations are used as input to a rule-based controller as the initial pattern of stimulation for paraplegic individuals [6]. In our more recent work we developed a static optimization (SO) optimal control that replaces the DP optimal control [7]. The model developed for SO also includes the physiological constraints.
Among the problems in the implementation of sensory-driven and feedback control is the difficulty of reliable estimation of joint angles in real time. The use of goniometers and similar systems is neither practical nor reproducible. The development of MEMS-based technology has resulted in accelerometers and gyroscopes that are easy to mount, small, have low power consumption and, overall, provide reproducible and acceptably precise output [8]. However, it has been demonstrated that the estimation of joint angles with these sensors requires complex processing that might be difficult to apply in real time [9]. This led us to the question: is DP or SO applicable for optimal control when the sensory input comes from accelerometers or gyroscopes? This paper shows that DP is not applicable if the optimal control is applied to angular velocities, whereas the new SO method that includes physiological constraints works satisfactorily.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 661–664, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

II. METHODS

We use the planar biomechanical model of the leg developed in our previous studies [4]. It comprises leg segments modeled as rigid bodies connected by simple hinge joints. The model allows hip and knee flexion/extension and ankle plantar/dorsiflexion. Passive elastic properties of the joints are modeled as nonlinear resistive torques. Equivalent muscles are used as actuators: a pair of antagonistic muscles replaces all muscles acting around a joint in the plane of interest. A three-compartment multiplicative muscle model that takes into account the neural activation, torque-angle and torque-angular velocity profiles of the muscle is used. The model is presented in Figure 1.

Fig. 1 Model of the leg. φP, φT, φS, φF – angles of the pelvis, thigh, shank and foot from the horizontal; FX, FY – horizontal and vertical ground reaction forces; COP – distance of the center of pressure from the point OF on the sole of the foot; aHX, aHY – horizontal and vertical acceleration of the hip; MHF, MHE, MKF, MKE, MAF, MAE – torques produced by the equivalent hip, knee and ankle flexors/extensors.

The model can be analyzed as a triple pendulum with the hip as a moving hanging point. After a sequence of transformations, a discrete mathematical model in state-space form is obtained. The absolute angles φ and angular velocities ω of the leg segments, x1 = φF, x2 = ωF, x3 = φS, x4 = ωS, x5 = φT, x6 = ωT, are the state variables, whereas the muscle activations ui ∈ [0, 1], i = 1, 2, …, 6 are the control inputs (0 – muscle relaxed, 1 – muscle maximally contracted). After some additional manipulation and rearrangement, the segment angles x1, x3 and x5 in step k+2 and the segment angular velocities x2, x4 and x6 in step k+1 can be expressed as functions of the current system state X(k) = [x1(k) … x6(k)]T and the control inputs U(k) = [u1(k) … u6(k)]T:

xi(k+2) = fi(X(k), U(k)) = fi(k);  i = 1, 3, 5
xj(k+1) = fj(X(k), U(k)) = fj(k);  j = 2, 4, 6   (1)

We need to calculate the muscle activations that will force the model of the patient's leg to track a selected reference walking pattern recorded in a healthy subject. If ΩT, ΩS and ΩF are the measured angular velocities of the thigh, shank and foot, tracking of the reference trajectory in the phase space of angular velocities can be realized by minimizing the following cost function in each simulation step k:

min over U(k):  R(k) = RF(k) + RS(k) + RT(k) + E(k)
RF(k) = (ΩF(k+1) − ωF(k+1))² = (ΩF(k+1) − f2(k))²
RS(k) = (ΩS(k+1) − ωS(k+1))² = (ΩS(k+1) − f4(k))²
RT(k) = (ΩT(k+1) − ωT(k+1))² = (ΩT(k+1) − f6(k))²
E(k) = ∑i=1…6 λi ui²(k)   (2)

In Equation (2), ωT, ωS and ωF denote the angular velocities generated in the simulation. The terms RF, RS and RT represent the squared tracking errors in step k+1, whereas E(k) penalizes the total effort (energy) in step k. The muscle activations are bounded signals, and the segment and joint angles have to remain within physiologically acceptable ranges. Thus, the following set of constraints has to be enforced during the optimization in step k:

UMIN ≤ U(k) ≤ UMAX
ΩMIN ≤ [x2(k+1) x4(k+1) x6(k+1)]T = [f2(k) f4(k) f6(k)]T ≤ ΩMAX
ΦMIN ≤ [x1(k+2) x3(k+2) x5(k+2)]T = [f1(k) f3(k) f5(k)]T ≤ ΦMAX
φAMIN ≤ x1(k+2) − x3(k+2) = f1(k) − f3(k) ≤ φAMAX
φKMIN ≤ x5(k+2) − x3(k+2) = f5(k) − f3(k) ≤ φKMAX
φHMIN ≤ x5(k+2) − π − φP(k+2) = f5(k) − π − φP(k+2) ≤ φHMAX   (3)

ΩMIN, ΩMAX, ΦMIN, ΦMAX, UMIN, UMAX, φMIN and φMAX are the minimal and maximal values of the segment angular velocities, segment angles, muscle activations and joint angles, respectively. The optimization problem defined by Equations (2) and (3) can be solved numerically with any available nonlinear programming solver (e.g., fmincon in MATLAB). The optimal muscle activations calculated in step k are applied to the model, the system advances to the next state, and the optimization is repeated.

III. RESULTS

We used the software tool OptiWalk [7] for simulations in the phase space of angular velocities in order to illustrate the differences between the applications of DP and SO. The example relates to the walking pattern shown in Figure 2. This complete gait stride was recorded from a healthy female (1.57 m, 49 kg) walking at 1.25 m/s. The parameters of the model were estimated experimentally [5].
The DP-based simulation results in an activation pattern that requires more muscle strength than available (maximum contraction = 1) and in hyperextension of the knee joint for the selected pattern and subject (Figure 3). Maximal
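A per-step solve of the problem in Equations (2) and (3) can be sketched with a generic nonlinear programming solver. The Python sketch below uses scipy.optimize.minimize (SLSQP) in place of MATLAB's fmincon; predict() is a hypothetical linear stand-in for the leg dynamics f of Equation (1), not the actual triple-pendulum model, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the discrete leg dynamics f of Eq. (1): returns
# the predicted segment angular velocities [f2, f4, f6] (step k+1) and the
# segment angles [f1, f3, f5] (step k+2).  The real model is the triple
# pendulum of Fig. 1; this linear surrogate only illustrates the set-up.
def predict(X, U):
    gain = 0.5 * np.array([[1, -1, 0, 0, 0, 0],    # flexor minus extensor,
                           [0, 0, 1, -1, 0, 0],    # one row per joint
                           [0, 0, 0, 0, 1, -1]])
    omega_next = X[1::2] + gain @ U                # x2, x4, x6 at k+1
    phi_next = X[0::2] + 0.01 * omega_next         # x1, x3, x5 at k+2
    return omega_next, phi_next

def so_step(X, Omega_ref, lam=1e-3,
            omega_lim=(-10.0, 10.0), phi_lim=(-np.pi, np.pi)):
    """One static-optimization step: cost of Eq. (2) under Eq. (3) bounds."""
    def cost(U):
        omega_next, _ = predict(X, U)
        tracking = np.sum((Omega_ref - omega_next) ** 2)  # RF + RS + RT
        effort = lam * np.sum(U ** 2)                     # E(k)
        return tracking + effort

    cons = [  # SLSQP inequality constraints must evaluate to >= 0
        {"type": "ineq", "fun": lambda U: predict(X, U)[0] - omega_lim[0]},
        {"type": "ineq", "fun": lambda U: omega_lim[1] - predict(X, U)[0]},
        {"type": "ineq", "fun": lambda U: predict(X, U)[1] - phi_lim[0]},
        {"type": "ineq", "fun": lambda U: phi_lim[1] - predict(X, U)[1]},
    ]
    res = minimize(cost, x0=np.full(6, 0.5), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 6, constraints=cons)
    return res.x

X = np.zeros(6)   # current state [phi_F, w_F, phi_S, w_S, phi_T, w_T]
U_opt = so_step(X, Omega_ref=np.array([0.3, -0.4, 0.2]))
```

In a receding-horizon loop, U_opt would be applied to the model, the state advanced, and the optimization repeated for the next step, as described above.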
[Fig. 2 panels: pelvis angle [rad]; hip acceleration [m/s²] (horizontal, vertical); segment angular velocities [rad/s] (foot, shank, thigh); vertical and horizontal ground reaction forces; center of pressure – all vs. time [s]. Fig. 3 panels: 1) hip angle [deg], 2) knee angle [deg], 3) ankle angle [deg], 4) hip muscles, 5) knee muscles, 6) ankle muscles (flexor/extensor activations, reference vs. generated, SAT = saturation) – all vs. time [s].]
tracking errors are 10, 24 and 20 degrees for the hip, knee and ankle joints, respectively.

Fig. 2 Walking pattern used in simulations. Ground reaction forces are normalized by the subject's body weight. The position of the COP is normalized with respect to the length of the sole of the foot.

The results of the simulation with the SO-based algorithm are presented in Figure 4. The SO algorithm enforces the physiological constraints given in Equation (3). This time the results show no hyperextension, and the muscle activations are within physiological limits. The tracking error stays small throughout the gait cycle: the maximal tracking errors are 4, 5 and 7 degrees for the hip, knee and ankle, respectively.
In addition to the DP- and SO-based algorithms in the phase space of angular velocities, the DP algorithm in the phase space of angles was applied to the same trajectory and subject. This algorithm resulted in almost perfect tracking of the trajectory. The obtained muscle activations are used as the reference muscle activation patterns in Figures 3 and 4.

IV. DISCUSSION

The aim of this study was to test the applicability of our DP-based algorithm for tracking a reference trajectory defined in the phase space of angular velocities. The results of the simulation are given in Figure 3 and show significant tracking errors. The errors were small at
Fig. 3 Results for DP. Reference (desired) and calculated joint angles (plots 1, 2, 3) and reference and calculated muscle activities (plots 4, 5, 6, thick lines) are given. The box on plot 2 marks hyperextension of the knee. Arrows on plots 5 and 6 mark saturated muscle activations.
the beginning of the gait cycle and increase abruptly around 0.2 s. Plot 2 shows that tracking of the angular velocities forces the knee into hyperextension (although there is no hyperextension of the knee in the reference trajectory). This happens because DP does not enforce the physiological constraints on the joint angles. In order to hyperextend the joint, the knee extensor has to act strongly (plot 5); it finally saturates and, due to the high passive elastic torque that develops, a large tracking error is produced. Plots 4, 5 and 6 show that the calculated muscle activations differ significantly from the patterns that actually generate the selected reference walking pattern.
When analyzing Fig. 4, one can notice the following. Plots 4, 5 and 6 show that the calculated muscle activations resemble in shape the patterns that would produce perfect tracking of the trajectory. The differences between the reference and calculated muscle activations are due to the tracking error that still appears in all of the joints (plots 1, 2, 3).
We demonstrated, on one of the many examples we studied, that when applying DP one could conclude that the walking pattern is not feasible for a given individual since the muscles are not strong enough (Fig. 3). However, the use of SO with the same walking and model parameters shows that the
said walking pattern is feasible and the tracking errors are small enough (Fig. 4). The reason is that SO includes the physiological constraints. Although not shown here, the application of DP using joint angles, instead of angular velocities, leads to acceptable tracking errors and does not lead to hyperextension. Tracking in the space of angular velocities, in contrast, allows errors in the joint angles to appear even when the error in the velocities is held very low; this, as shown, can lead to violation of some physiological constraints, e.g., the minimal and/or maximal joint angles. Such a violation of the physiological constraints has global consequences and results in a significant tracking error over the whole trajectory.
Here we presented a new method based on static optimization. Static optimization allows the inclusion of all physiological constraints in the optimization. The application of the SO-based approach resulted in good tracking of the reference trajectory. Although the tracking is not perfect, the generated walking pattern is close enough to represent normal walking. Since they produce a normal walking pattern, the calculated muscle activations can be used for the development of a sensory-driven FES controller.
The algorithm can easily be extended to enable tracking in the phase space of accelerations as well. This finding is an important step toward the development of a FES system that can be used in the clinical environment for therapy, and possibly later at home as an orthotic device. The next phase of our research is to test the applicability of the system for functional electrical therapy (FET) in hemiplegic individuals, in whom a multichannel FES system uses up to four channels to control the hip, knee and ankle joints of the paretic side.

Fig. 4 Results for SO. The minimal knee angle is set to 1 degree. Reference (desired) and calculated joint angles (plots 1, 2, 3) and reference and calculated muscle activities (plots 4, 5, 6) are given.

ACKNOWLEDGMENT
This work was partly supported by the Danish Research Council and partly by the Ministry of Science and Environmental Protection of Serbia.

REFERENCES
1. Popovic, D. and Sinkjaer, T. (2003) External Control of Movement. Control of Movement for the Physically Disabled, Second ed. Academic Mind, Belgrade, Serbia.
2. Heilman, B. P., Audu, M. L., Kirsch, R. F., and Triolo, R. J. (2006) Selection of an optimal muscle set for a 16-channel standing neuroprosthesis using a human musculoskeletal model. J Rehab Res Dev 43(2):273-285.
3. Davoodi, R., Brown, I. E., and Loeb, G. E. (2003) Advanced modeling environment for developing and testing FES control systems. Med Eng Phys 25(1):3-9.
4. Popovic, D., Stein, R. B., Oguztoreli, M. N., Lebiedowska, M., and Jonic, S. (1999) Optimal control of walking with functional electrical stimulation: a computer simulation study. IEEE Trans Rehab Eng 7(1):69-79.
5. Stein, R. B., Zehr, E. P., Lebiedowska, M. K., Popovic, D. B., Scheiner, A., and Chizeck, H. J. (1996) Estimating mechanical parameters of leg segments in individuals with and without physical disabilities. IEEE Trans Rehab Eng 4(3):201-211.
6. Popovic, D. and Jonic, S. (1999) Control of bipedal locomotion assisted with functional electrical stimulation. Proc. American Control Conference, San Diego, California, pp. 1238-1242.
7. Dosen, S., Popovic, D., and Azevedo, C. (2007) OptiWalk – a new tool for designing of control of walking in individuals with disabilities. Journal Europeen des Systemes Automatises, vol. no. X - X/20 (in press).
8. Mayagoitia, R. E., Nene, A. V., and Veltink, P. H. (2002) Accelerometer and rate gyroscope measurement of kinematics: an inexpensive alternative to optical motion analysis systems. J Biomech 35(4):537-542.
9. Dejnabadi, H., Jolles, B. M., and Aminian, K. (2005) A new approach to accurate measurement of uniaxial joint angles based on a combination of accelerometers and gyroscopes. IEEE Trans Biomed Eng 52(8):1478-1484.

Author: Strahinja Dosen
Institute: Center for Sensory Motor Interaction, Aalborg University
Street: Fredrik Bajers Vej 7-D3
City: Aalborg
Country: Denmark
Email: [email protected]
The effect of afferent training on long-term neuroplastic changes in the human cerebral cortex

R.L.J. Meesen1,2, O. Levin2 and S.P. Swinnen2
1 REVAL - Rehabilitation and Health Care Research Center, Department of Health Care, University College of Limburg, Hasselt, Belgium
2 Motor Control Laboratory, Department of Biomedical Kinesiology, Group Biomedical Sciences, K.U. Leuven, Belgium
Abstract— In the present study we explored the effect of a long-term intervention protocol (3 weeks, 1 h/day) with sensory stimulation on neuroplastic changes in the human motor cortex. Interventions consisted of repetitive activation of the afferent pathways of the right abductor pollicis brevis (APB) muscle with tendon vibration (TV) or transcutaneous electrical nerve stimulation (TENS). The representations of the hand (APB, ADM) and forearm (FCR, ECR) muscles were mapped using transcranial magnetic stimulation (TMS) before and after the 3 weeks of sensory intervention (TV and TENS groups) or after similar periods of daily active training of the APB or rest (control). Our observations showed a significant increase in the motor cortical representation of all four muscles (as measured by changes in map size) for the TENS group. No such effects were observed in the tendon vibration, active training or control groups.
Keywords— Afferent stimulation, neuroplasticity, transcranial magnetic stimulation (TMS)
I. INTRODUCTION Sensorimotor reorganization within the human cerebral cortex occurs during development, as a result of practice and experience, and following brain damage [1]. Studies using functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) showed that repetitive proprioceptive stimulation activates large parts of motor networks both in the contralateral and ipsilateral hemispheres (in addition to the primary sensory area) [2]. This has recently been linked with the emergence of a delayed facilitation or depression in the excitability of cortical circuits during and/or immediately after the end of repetitive afferent stimulation [3-6]. Yet, the long-lasting effects of afferent stimulation on structural reorganization in the motor cortex remain largely unknown. In humans, representational cortical plasticity can be assessed at a regional level by means of transcranial magnetic stimulation (TMS) mapping of corticomotor representations [7-9]. The TMS mapping technique has been used extensively to address dynamic changes in corticomotor representations following various experimental and pathological conditions [10]. Single TMS
pulses are delivered via a focal figure-of-8 coil to scalp positions arranged in a coordinate system overlying the primary motor cortex (M1). By measuring the motor evoked potential (MEP) amplitude in the targeted muscle(s), 'maps' based upon spatial changes in MEP amplitude among multiple stimulation positions can be composed. In this way, a functional topographic map of the M1 projection to the hand and forearm muscles can be obtained. Motor output maps can be quantified by a number of variables, such as the optimal stimulation position and the map area and volume [8-9]. In the present study we explored the effect of a long-term intervention protocol (3 weeks, 1 hour/day) with sensory stimulation on neuroplasticity in the primary motor cortex of normal healthy volunteers. Previous studies have demonstrated that the TMS mapping technique is sensitive enough to detect changes in the motor representation following somatosensory stimulation paradigms. Consequently, we wondered whether a recently introduced type of interventional somatosensory stimulation, i.e., muscle tendon vibration, has the potential to drive changes in the organization of the human motor cortex. This question may be relevant in the search for interventional protocols that promote functional recovery after central nervous system injury [11].

II. METHODOLOGY

Subjects: A total of 48 neurologically healthy, right-handed volunteers participated in the present study (20 males, 28 females; mean age 27.6, SD 14.2, range 18-53 years). The participants were naive about the purpose of the experiment, were screened for potential risk of adverse events during TMS (Wassermann et al. 1998), and provided written informed consent prior to participation. The experimental procedures were approved by the local Ethics Committee for Biomedical Research at the Katholieke Universiteit Leuven, in accordance with the Declaration of Helsinki.
Intervention: Interventions consisted of repetitive activation of the afferent pathways of the right abductor pollicis
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 643–646, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
brevis (APB) muscle with tendon vibration (TV, n = 12), transcutaneous electrical nerve stimulation (TENS, n = 12), daily active training of the APB (n = 12) or no intervention (control, n = 12). Tendon vibration (80 Hz, 1 mm) was applied at the muscle belly of the right APB muscle by a purpose-built shaker constructed from a DC motor (Maxon 34EBA201A). TENS (100 Hz) was applied via an electrical stimulator (Chattanooga Digitens).
Transcranial Magnetic Stimulation: The representation areas of the hand (APB, ADM) and forearm (FCR, ECR) muscles were mapped using transcranial magnetic stimulation (TMS) before and after the 3-week intervention. Representation areas were mapped with a protocol modified from Wilson et al. [9]. Subjects wore a specifically built, tight-fitting cap with a 1 × 1 cm orthogonal coordinate system referenced to the vertex (Cz). The cap was positioned using cranial landmarks (nasion-inion) and the external auditory meatus as references. Single TMS pulses (interstimulus interval: 6 s) were applied in 1-cm steps in a clockwise spiral course beginning at the optimal stimulation position for the FCR. Each position was stimulated 8 times before moving to the adjacent grid point, until the border of the motor map of each target muscle had been defined. The total number of points in each mapping session covered between 100 (10 × 10) and 225 (15 × 15) positions. Single-pulse transcranial magnetic stimuli were delivered by means of a Dantec MagLite r-25 stimulator (Medtronic, Skovlunde, Denmark; maximal stimulator output: 1.5 Tesla) with a figure-of-eight coil (MC-B70 magnetic coil transducer, outer diameter: 50 mm). The magnetic stimulus had a biphasic pulse configuration with a pulse width of 280 μs. The coil was positioned tangentially to the scalp over the subjects' left hemisphere with the coil handle pointing backward and rotated 45º away from the midsagittal line.
The optimal stimulation position (hot-spot) for eliciting MEPs in each of the four muscles was marked with a soft-tip pen. The stimulation intensity for mapping the FCR and ECR M1 representations was initially set at 120 % of the FCR rest motor threshold (rMT). The rMT was determined at the optimal stimulation position as the lowest intensity needed to evoke MEPs of at least 50 μV amplitude in the relaxed FCR in five out of ten consecutive trials [12].
Data Analysis: The size of the APB, ADM, FCR and ECR MEPs was measured by calculating the peak-to-peak amplitude of the signal. The number of active positions in each map was determined as the points whose stimulation evoked a mean MEP in the target muscle with a peak-to-peak amplitude of at least 100 μV. The mean peak-to-peak amplitudes of the MEP waveforms obtained at each scalp site were plotted against antero-posterior and mediolateral
distance. 3D representations of the mean motor outputs for the four target muscles were composed by linear interpolation of the mean MEP amplitudes between adjacent stimulation positions (Matlab 6.5, MathWorks, Inc.). The mean MEP at each position was then normalized by the mean MEP score at the hot-spot. The motor representation area of each muscle was defined as the number of stimulus positions whose stimulation evoked a mean MEP in the target muscle with a magnitude of at least 10 % of its respective normalized peak. Map area referred to the contour of these active positions; map volume referred to the sum of the mean amplitudes at all active stimulation positions. Advanced linear model applications (STATISTICA 6.0, StatSoft Inc.) were used for the statistical analysis. The mapping variables were compared by means of a 2 × 4 × 4 (TEST × GROUP × MUSCLE) analysis of variance (ANOVA). The factor TEST consists of two levels, referring to the pre/post mapping sessions; the factor GROUP consists of four levels, referring to the TENS, TV, active and control groups; and MUSCLE consists of four levels, referring to the four tested muscles (APB, ADM, FCR and ECR). When significant effects were found, post hoc testing (Bonferroni) was conducted to identify the source of the differences.

III. RESULTS

Examples of individual maps are illustrated in Figure 1, while the group results are shown in Figure 2. Overall, we found large differences in the motor cortical representation of the hand muscles (APB and ADM) between the pre- and post-intervention maps in the TENS and TV groups, but not in the active training or control groups. This observation is largely confirmed by the significant TEST × GROUP × MUSCLE interaction with respect to both map area and volume (F9,99 > 3.43, p < 0.01).
However, a significant enhancement of the motor representation area and volume from the Pre to the Post mapping session was observed only in the TENS group (APB, ADM, FCR and ECR: all p < 0.01), whereas no such effects were observed in the subjects of the remaining groups.

IV. DISCUSSION

The present experiment shows for the first time that changes in the cortical representation of the hand muscles can be generated by repetitive activation of sensory afferents in the targeted muscle, with the largest effects observed in the TENS group. TENS is routinely applied as a proprioceptive stimulation technique in neurorehabilitation that has been shown to activate large parts of the sensori-
motor network as well as to induce facilitatory and/or inhibitory effects on the corticospinal motor representation of the targeted muscles when administered repeatedly [6]. The underlying mechanisms of those long-lasting effects are not yet completely understood. An increase in volume and/or area of the motor representations of the hand muscles is argued to indicate recruitment of a greater number of descending motor pathways in response to cortical stimulation with TMS. In general, the
size of MEP provides an indication of the level of excitability of the corticospinal pathways; the MEP peak-to-peak indicates the peak of simultaneous excitement of the descending pathways and map area reflects the total amount of excited motorneurons [13]. As stimulus intensity was kept at the same level in both the pre- and post-intervention sessions, we propose that the sustained increase in map area could signify a gradual increase in the number of active motor neuron as a result of the intervention.
Fig. 1 Representative map areas of the APB, ADM, FCR and ECR muscles before (Pre – left column) and after (Post – right column) the 3-week period of sensory intervention with TENS.
[Fig. 2 panels: AREA (cm²), PRE vs. POST, for the APB, ADM, FCR and ECR muscles in the TENS, TV, ACTIVE and CONTROL groups]
Fig. 2 Group data showing the motor representation area of the four muscles in the Pre and Post mapping sessions.
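The map area and volume measures reported above (cf. the Data Analysis section) can be sketched as follows. This is a minimal sketch: the grid values are invented, map_metrics is a hypothetical helper, and the 10 % activity threshold follows the description in the Data Analysis section:

```python
import numpy as np

def map_metrics(mep_grid, threshold_frac=0.10):
    """Map area and volume from a grid of mean MEP amplitudes.

    mep_grid: 2-D array of mean peak-to-peak MEP amplitudes, one value per
    stimulation position on the 1 x 1 cm scalp grid.  Amplitudes are
    normalized by the hot-spot value, and a position counts as 'active'
    when it reaches at least 10 % of that peak.
    """
    normalized = mep_grid / mep_grid.max()   # hot-spot normalization
    active = normalized >= threshold_frac
    area_cm2 = int(active.sum())             # one position = 1 cm^2
    volume = float(mep_grid[active].sum())   # sum of mean amplitudes
    return area_cm2, volume

# Hypothetical 5 x 5 excerpt of a motor map (amplitudes in mV)
grid = np.array([[0.00, 0.02, 0.05, 0.02, 0.00],
                 [0.02, 0.30, 0.80, 0.25, 0.02],
                 [0.05, 0.90, 1.50, 0.70, 0.05],
                 [0.02, 0.28, 0.75, 0.20, 0.02],
                 [0.00, 0.02, 0.04, 0.02, 0.00]])
area, volume = map_metrics(grid)
```

For this toy grid, nine positions reach the 10 % threshold, giving a map area of 9 cm²; the volume is the sum of the amplitudes at those nine positions.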
This phenomenon may have been mediated either by affecting the excitability of pre-synaptic axonal elements or by changing the efficiency of trans-synaptic interactions [7]. However, the most recent findings point to the involvement of a perceptual-to-motor transformation of the afferent-induced proprioceptive information, most likely occurring at the cortical level rather than being a purely spinal reflex mechanism [14]. Besides the critical importance of obtaining fundamental insights into the mechanisms that drive plasticity in the human brain, the current state of knowledge also highlights the potential of sensory training (transcutaneous electrical nerve stimulation) to serve as a useful complementary therapy in neurorehabilitation.
ACKNOWLEDGMENT Support for this study was provided through a grant from the Flanders Fund for Scientific Research (FWO Project G.0292.05).
REFERENCES
1. Donoghue JP (1995) Plasticity of adult sensorimotor representations. Curr Opin Neurobiol 5:749-754.
2. Nelles G, Jentzen W, Jueptner M et al. (2001) Arm training induced brain plasticity in stroke studied with serial positron emission tomography. NeuroImage 13:1146-1154.
3. McKay D, Brooker R, Giacomin P et al. (2002) Time course of induction of increased human motor cortex excitability by nerve stimulation. NeuroReport 13:1271-1273.
4. Steyvers M, Levin O, Van Baelen M et al. (2003) Corticospinal excitability changes following prolonged muscle tendon vibration. NeuroReport 14:1901-1905.
5. Steyvers M, Levin O, Verschueren SMP et al. (2003) Frequency-dependent effects of muscle tendon vibration on corticospinal excitability: a TMS study. Exp Brain Res 151:9-14.
6. Tinazzi M, Zarattini S, Valeriani M et al. (2005) Long-lasting modulation of human motor cortex following prolonged transcutaneous electrical nerve stimulation (TENS) of forearm muscles: evidence of reciprocal inhibition and facilitation. Exp Brain Res 161:457-464.
7. Siebner HR, Rothwell J (2003) Transcranial magnetic stimulation: new insights into representational cortical plasticity. Exp Brain Res 148:1-16.
8. Wassermann EM, McShane LM, Hallett M et al. (1992) Noninvasive mapping of muscle representations in human motor cortex. Electroencephalogr Clin Neurophysiol 85:1-8.
9. Wilson SA, Day BL, Thickbroom GW et al. (1996) Spatial differences in the sites of direct and indirect activation of corticospinal neurones by magnetic stimulation. Electroencephalogr Clin Neurophysiol 101:255-261.
10. Pascual-Leone A, Grafman J, Hallett M (1994) Modulation of cortical motor output maps during development of implicit and explicit knowledge. Science 263:1287-1289.
11. Fraser C, Power M, Hamdy S et al. (2002) Driving plasticity in human adult motor cortex is associated with improved motor function after brain injury. Neuron 34:831-840.
12. Rossini PM, Barker AT, Berardelli A et al. (1994) Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: basic principles and procedures for routine clinical application. Report of an IFCN committee. Electroencephalogr Clin Neurophysiol 91:79-92.
13. Ikoma K, Samii A, Mercuri B et al. (1996) Abnormal cortical motor excitability in dystonia. Neurology 46:1371-1376.
14. Swayne O, Rothwell J, Rosenkranz K (2006) Transcallosal sensorimotor integration: effects of sensory input on cortical projections to the contralateral hand. Clin Neurophysiol 117:855-863.
Author: Prof. Dr. Raf L.J. Meesen
Institute: REVAL - Rehabilitation and Health Care Research Center
Street: Guffenslaan 39
City: B-3500 Hasselt
Country: Belgium
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Treating drop-foot in hemiplegics: the role of matrix electrode

C. Azevedo-Coste1, G. Bijelic2, L. Schwirtlich3 and D.B. Popovic4,5

1 DEMAR, LIRMM / INRIA, Montpellier, France
2 Center for Multidisciplinary Studies of the Belgrade University, Serbia
3 Institute for Rehabilitation "Dr Miroslav Zotovic", Belgrade, Serbia
4 University of Belgrade Faculty of Electrical Engineering, Serbia
5 SMI, Department of Health Science and Technology, Aalborg University, Denmark
Abstract— We present the advantages of the "intelligent matrix electrode" for selective correction of drop-foot in hemiplegic individuals. The matrix electrode, which integrates stimulating and sensing parts, allows emulation of the appropriate electrode shape and size, and thereby selective stimulation that leads to functional movement, with on-line adaptation of the electrode during use. The need for selective stimulation follows from recent findings on the therapeutic effects of electrical stimulation in neurorehabilitation. The matrix electrode comprises small fields that can be made conductive and a controller that allows computerized selection of the conductive fields. Here we present results from a study in nine hemiplegic individuals. The matrix electrode was positioned over the peroneal nerve and the primary dorsiflexor muscles, and the movement of the foot was estimated by measuring the ankle joint angle. We found that the branched, tree-like shape and the size of the optimal electrode varied substantially across the individuals in the study when stimulating over the dorsiflexor muscles. We confirmed very high sensitivity to the position of a small electrode when stimulating over the nerve. This indicates that the "intelligent matrix electrode" is favorable compared with conventional electrodes, since it can adapt to the individual and ensure selective stimulation.
Keywords— Functional electrical therapy (FET), drop-foot, intelligent matrix electrode, hemiplegia, functional electrical stimulation (FES).

I. INTRODUCTION

Hemiplegia is a condition in which one side of the body is paretic or paralyzed; it is usually the consequence of a cerebrovascular accident. One of its main consequences is the drop-foot syndrome: due to the lack of control of the muscles that flex the ankle and toes, the foot drops downward and impedes normal walking. Today there are commercially available assistive systems (e.g., the Odstock stimulator [11]) that use surface electrodes and prevent drop-foot. The functionality of drop-foot stimulators depends on the timing of stimulation and on the quality of the elicited dorsiflexion. The alternative to surface stimulation is to use implanted systems [1, 2, 3, 4, 5]; however, these are indicated only in cases where the therapeutic effects of electrical stimulation do not lead to carry-over improvements [6].
The specific need for selective surface stimulation arises within the recently suggested Functional Electrical Therapy [12] programs of neurorehabilitation. Selective stimulation of the sensory-motor system is essential for achieving the joint rotations that lead to a desired function. A plausible solution for achieving selective stimulation is the application of matrix electrodes [7, 8, 9, 10, 13]. The idea behind surface matrix electrodes is that one can select the size, shape, and position of the electrode without physically relocating it or using multiple electrodes. The matrix electrode allows a selection of small fields to be made conductive, thereby defining the size, shape, and position of the electrode. If the selection of the conductive fields is done manually and off-line, the size and shape of the electrode cannot be changed during use. This limits applicability, because when standing up, walking, or rotating the leg, the once-selected conductive fields move relative to the sensory-motor structures responsible for activating the desired muscle.
Fig. 1 Principle of the “Intelligent Matrix Electrode” operation
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 654–657, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
We suggest that the Intelligent Matrix Electrode (IME) could resolve the problem of electrode positioning and thereby contribute to eliminating the impairment that leads to drop-foot. The IME comprises stimulating and sensing subsystems: 1) the stimulation subsystem integrates a matrix interface fabricated in the size and shape appropriate for the muscle to be stimulated, and a controller that allows computerized selection of one or more fields within the matrix electrode; 2) the controller receives information about the movement from the sensing subsystem. This assembly should automatically adjust the stimulation intensity and the shape and size of the conductive part of the matrix interface with the skin, based on the on-line estimate of the ankle joint movement (Fig. 1). In this paper we present results from the initial phase of the development of the IME, in which we tested the hypothesis that matrix electrodes can be used effectively for the control of drop-foot in individuals with hemiplegia.

II. MATERIAL AND METHODS

We used a matrix electrode with 24 fields [14] that can be made conductive by a microcontroller connected to the stimulator (Fig. 2).
One matrix electrode, with conductive fields of 1.2 cm diameter, was placed over the peroneal nerve as it passes over
the head of the fibula, causing movement of the foot when activated. A second matrix electrode, with conductive fields of 1.8 cm diameter, was placed over the tibialis anterior muscle. We used the UNA FES® multi-channel programmable current-controlled stimulator, which allows full control of the stimulation parameters (amplitude, frequency and pulse duration) and the delivery of programmed sequences of pulses following trigger signals. Each matrix electrode was connected to its own control box, and each control box was connected to one channel of the stimulator. For both channels the stimulation parameters were set to a frequency of 50 pulses per second and a pulse duration of 300 μs. The matrix electrodes were used as cathodes, and each of them was associated with an anode.
We used a Penny & Giles flexible goniometer (Biometrics, Gwent, U.K.) to record ankle dorsiflexion and eversion. The goniometer was connected to a laboratory acquisition system based on LabVIEW 6.1 and an NI DAQ 6024 PCMCIA card (National Instruments, Austin, Texas).
Nine volunteer hemiplegic individuals from the Institute for Rehabilitation "Dr Miroslav Zotović", Belgrade, participated in the study. The protocol was approved by the local ethics committee, and all subjects signed a consent form in accordance with the Declaration of Helsinki. The subjects were seated with the foot not touching the ground. In the first part of the experiment, called mapping, each of the fields was activated in turn and the current amplitude was increased until a movement appeared at the ankle; the stimulation intensity was then further increased until maximal dorsiflexion was observed. For each field the signal from the goniometer was recorded. In the second phase of the experiment, based on the observations made during the mapping stage, multi-field stimulation was performed involving the fields shown to induce dorsiflexion; different shapes were tested and the corresponding goniometer values were recorded.

III. RESULTS

We present the results obtained when stimulating over the peroneal nerve and over the ankle dorsiflexor muscles. In Figure 3 we have plotted the maximum value of the ankle flexion angle obtained by individually activating each of the matrix electrode fields placed over the nerve (mapping). In Figure 4 we did the same for the electrode placed over the muscle.
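The mapping stage described above lends itself to a simple automated sketch: activate one field at a time, ramp the current amplitude until ankle movement appears, and record the maximal dorsiflexion per field. The stimulator and goniometer interfaces below are hypothetical stand-ins replaced by a toy recruitment model, not the UNA FES® API; all thresholds and gains are illustrative only.

```python
# Sketch of the "mapping" stage: per-field amplitude ramp with angle recording.
# The response model, thresholds and gains are illustrative assumptions.

MOVEMENT_THRESHOLD_DEG = 2.0   # angle change taken as "a movement appeared"
MAX_AMPLITUDE_MA = 50.0        # safety limit for the amplitude ramp
AMPLITUDE_STEP_MA = 2.0

def simulated_ankle_response(field, amplitude_ma):
    """Toy recruitment model: each field has its own gain and threshold."""
    gains = {0: 0.5, 1: 0.1, 2: 0.3}          # deg of dorsiflexion per mA
    thresholds = {0: 10.0, 1: 30.0, 2: 20.0}  # mA below which nothing moves
    if amplitude_ma < thresholds[field]:
        return 0.0
    return gains[field] * (amplitude_ma - thresholds[field])

def map_fields(fields):
    """Return, per field, the max dorsiflexion reached within the amplitude limit."""
    results = {}
    for field in fields:
        best_angle = 0.0
        amplitude = 0.0
        while amplitude <= MAX_AMPLITUDE_MA:
            angle = simulated_ankle_response(field, amplitude)
            best_angle = max(best_angle, angle)
            amplitude += AMPLITUDE_STEP_MA
        results[field] = best_angle
    return results

mapping = map_fields([0, 1, 2])
best_field = max(mapping, key=mapping.get)
```

In the real system the inner loop would drive the stimulator hardware and read the goniometer; the outer structure (ramp, record, compare across fields) is the procedure the text describes.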
Fig. 2 Description of the experimental setup.
Fig. 3 Mapping: results obtained by stimulating the peroneal nerve with each of the matrix electrode fields independently in 9 patients.
The selection of electrode size and position in our case was manual, yet it demonstrated that, during automatic setting of the shape and size, the electrode needs to keep the same pattern for at least 500 ms (when stimulating at 50 pulses per second) in order for the movement to be detected before stimulation is delivered to a new set of conductive fields. The automatic selection of the optimal shape and size benefits greatly if the starting shape of the electrode follows the anatomy of the sensory-motor structures of the principal dorsiflexor muscles. The selection of the optimal size and shape in this case lasts less than 2 minutes. The optimal electrode size and shape change very little from day to day; hence, the daily setup lasts less than 1 minute. As expected from basic anatomy, the electrode shape that led to optimal contraction and functional dorsiflexion was in most cases a tree-like branched structure, not a regular ellipse or rectangle (Fig. 5). The size of the electrode varied greatly from one hemiplegic individual to the next. All this suggests that fabricating custom-shaped electrodes is not a realistic approach for achieving optimal dorsiflexion when stimulating over the muscle.
Fig. 4 Mapping: results obtained by stimulating the tibialis anterior muscle with each of the matrix electrode fields independently in 9 patients.
IV. DISCUSSION

An important result is that the response to stimulation is very sensitive to electrode position when stimulating over the peroneal nerve. We also found that small variations of the pulse charge easily lead to undesired movements that could compromise walking. The electrode positioned over the nerve is small, as already demonstrated and applied in most drop-foot stimulators. Another finding is that in individuals in whom it is difficult to generate movement by stimulating over the nerve, the same holds for the electrode placed over the muscle.

Fig. 5 Multi-field stimulation over the muscle: optimal configuration of the matrix electrode in terms of maximum ankle flexion.
V. CONCLUSION

The preliminary results of this study are encouraging and support our hypothesis that a matrix electrode which can be adapted on-line is needed. We presented results showing that the application of a matrix electrode is beneficial, and demonstrated that the shape and size of the electrode vary from individual to individual; hence, individual adjustments are required. In the same experiments we demonstrated that the position of the conductive fields that lead to optimal movement changes when standing up from the sitting position.
This study is the first step towards the development of an "intelligent" matrix electrode which could resolve the problem of electrode positioning for drop-foot treatment. The final system will comprise stimulating and sensing subsystems associated with a controller that adapts the selection of one or more fields within the matrix electrode. The emulation of the electrode shape will be based on the information from the sensing subsystem about the actual ankle movement. Aside from the active field combination, the system will also automatically adjust the stimulation intensity to produce the desired ankle joint movement (Fig. 1). The current development is being expanded to replace the joint angle sensor with MEMS-based accelerometers, which can also be used to trigger the stimulator and to better synchronize the stimulation with the walking cycle.
ACKNOWLEDGMENT

This project was partly supported by the French National Network of Health Technologies, the Danish Research Foundation, Denmark, and the Ministry for Science and Environmental Protection of Serbia. The work was partly supported by IMA, Vienna, Austria. We would also like to thank our volunteer subjects and the clinical staff of the Institute for Rehabilitation "Dr Miroslav Zotovic", Belgrade, Serbia.

REFERENCES

1. Guiraud D., Stieglitz T., Koch K.P., Divoux J.L. and Rabischong P. (2006) An implantable neuroprosthesis for standing and walking in paraplegia: 5-year patient follow-up. J. Neural Eng. 3:268-275.
2. Guiraud D., Stieglitz T., Taroni G. and Divoux J.L. (2006) Original electronic design to perform epimysial and neural stimulation in paraplegia. J. Neural Eng. 3:276-286.
3. O'Halloran T., Haugland M., Lyons G. and Sinkjaer T. (2003) Modified implanted drop foot stimulator system with graphical user interface for customised stimulation pulse-width profiles. Medical & Biological Engineering & Computing 41(6):701-709.
4. Hoffer J.A., Baru M., Bedard S., Calderon E., Desmoulin G., Dhawan P., Jenne G., Kerr J., Whittaker M. and Zwimpfer T.J. (2005) Initial results with fully implanted Neurostep FES system for foot drop. 10th Annual Conference of the International FES Society, Montreal, Canada, July, 53-55.
5. Burridge J., Haugland M., Larsen B., Svaneborg N., Iversen H., Brogger P., Pickering R. and Sinkjaer T. (2005) Long-term follow-up of patients using the ActiGait implanted drop-foot stimulator. 10th Annual Conference of the International FES Society, Montreal, Canada, July, 264-266.
6. Popovic D.B., Popovic M.B., Schwirtlich L., Grey M., Mazzaro N. and Sinkjaer T. (2005) Functional Electrical Therapy of walking: pilot study. 10th Annual Conference of the International FES Society, Montreal, Canada, July, 86-88.
7. Fujii T., Seki K. and Handa Y. (2004) Development of a new FES system with trained super-multichannel surface electrodes. 9th Annual Conference of the International FES Society, Bournemouth, UK, September.
8. Keller T., Lawrence M., Kuhn A. and Morari M. (2006) New multi-channel transcutaneous electrical stimulation technology for rehabilitation. 28th IEEE EMBS Annual International Conference, New York City, USA, August, 194-197.
9. Popovic M.R. (2006) Transcutaneous electrical stimulation technology for functional electrical therapy applications. 28th IEEE EMBS Annual International Conference, New York City, USA, August, 2142-2145.
10. O'Dwyer S.B., O'Keeffe D.T., Coote S. and Lyons G.M. (2006) An electrode configuration technique using an electrode matrix arrangement for FES-based upper arm rehabilitation systems. Med. Eng. Phys. 28(2):166-176.
11. Taylor P., Burridge J., Dunkerley A., Wood D., Norton J., Singleton C. and Swain I. (1999) Clinical audit of 5 years provision of the Odstock dropped foot stimulator. Artif Organs 23(5):440-442.
12. Popovic M.B., Popovic D.B., Schwirtlich L. and Sinkjaer T. (2004) Clinical evaluation of functional electrical therapy (FET) in chronic hemiplegic subjects. Neuromodulation 7(2):133-140.
13. WIPO Patent, International Publication Number WO 2005/075018 A1.
14. Popovic-Bijelic A., Bijelic G., Jorgovanovic N., Bojanic D., Popovic D.B., Popovic M.B. and Schwirtlich L. (2005) Multi-field surface electrode for selective electrical stimulation. Artif Organs 29(6):448-452.

Author: Christine Azevedo-Coste
Institute: DEMAR INRIA/LIRMM
Street: 161 rue Ada
City: 34392 Montpellier cedex 5
Country: France
Email: [email protected]
Troubleshooting for DBS patients by a non-invasive method with subsequent examination of the implantable device

H. Lanmüller1, J. Wernisch2 and F. Alesch3

1 Department of Biomedical Engineering and Physics, Medical University of Vienna, Vienna, Austria
2 Institute of Solid State Physics, Technical University of Vienna, Vienna, Austria
3 Department of Neurosurgery, Medical University of Vienna, Vienna, Austria
Abstract— Multichannel devices are used for deep brain stimulation in patients suffering from Parkinson's disease. A non-invasive method to inspect each single output of these devices was applied in 12 patients in whom the clinician programmer indicated electrode impedances beyond standard values, or in whom an unexplained loss of the therapeutic effect had occurred. A small device was developed to measure and display the stimulation pulse via surface electrodes. The measurements pointed to an incorrect measurement by the programmer in 9 cases, broken electrode leads in 2 patients, and an IPG failure in 1 patient. The faulty leads and the IPG were exchanged and inspected by light or electron microscopy, and every failure prognosis was confirmed by these examinations. The non-invasive measurement of the stimulation pulse via surface electrodes proved to be an easy and accurate method for the detection of partial IPG malfunctions.

Keywords— Deep brain stimulation, device failure, non-invasive inspection
I. INTRODUCTION

Deep brain stimulation (DBS) has proven to be an efficient therapy in patients with late complications of the medical treatment of Parkinson's disease. Battery-powered multichannel devices are used for this FES application, and their durability and lifetime have been increased to a high level. Nevertheless, a complete or partial loss of the therapeutic effect can occur, and it is essential to identify whether the cause is medical or technical. In some cases this distinction is not obvious, and the diagnostic features provided by the programmer or the implantable pulse generator (IPG) are not sufficient. In such instances additional technical examinations have to be carried out in cooperation between medical and technical experts. In this paper we present a non-invasive method to inspect each single output of the IPG, together with examples from the analysis of the explanted faulty components.

II. MATERIALS AND METHODS

The stimulation output of each channel can be inspected directly on the patient's skin. The voltage drop
driven by the IPG is proportional to the stimulation current and the tissue impedance. A small device was developed to measure and display this value via surface electrodes. The device consists of an instrumentation amplifier, a trigger unit, a sample & hold circuit, an A/D converter and an LCD panel. The voltage drop caused by the stimulation pulse is sampled 10 µs after the rising slope and averaged over 100 pulses. The complete time course can additionally be displayed and stored using a laptop computer.
To enable easy handling during ambulant or intraoperative measurements, the forehead above the nose and the cranial end of the sternum were selected as measuring points. The common reference electrode was optionally placed on the neck or shoulder. The functional test of one eight-channel IPG takes less than five minutes. Each output was activated against the implant case with an amplitude of 1 V, which was usually below the sensitivity level. If the results were faultless, only the single values were recorded for subsequent tests; the complete time course was stored if a failure was detected.
The examination of an explanted lead was done by light microscopy (model SZH10, Olympus, Hamburg, Germany). The technical examination of a presumably faulty IPG started with a functional test followed by a destructive inspection. If possible, the amplitude, pulse width and frequency of each output channel of the IPG were measured in the functional test. In a second step the titanium case of the IPG was opened by laser cutting; this technology was chosen to minimize additional mechanical stress during the opening procedure. An Nd:YAG laser (model LPM300, Lasag AG, Thun, Switzerland) was used with a pulse repetition rate of 60 Hz, a pulse width of 0.2 ms and a voltage of 450 V. The titanium case was cut in a semicircular shape above the electronic circuit; the semicircle was lifted and the metal sheet was bent back over the battery side of the IPG. The opened window allows a view of the thick-film hybrid, which holds the electronic components and the connection to the battery. Details could be inspected by light microscopy and electron microscopy (model XL 30 ESEM, Philips, Eindhoven, The Netherlands).
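The core of the measurement principle described above (triggering on the rising slope, sampling 10 µs later, and averaging over many pulses) can be sketched in software. The sampling rate, pulse shape and trigger threshold below are illustrative assumptions; the actual device does this with an analog trigger unit and a sample & hold circuit.

```python
# Sketch of the pulse-amplitude estimation: detect rising edges, sample the
# waveform 10 µs later, and average over up to 100 pulses.
# Acquisition is simulated; rate, pulse shape and threshold are assumptions.

SAMPLE_PERIOD_US = 1.0   # assumed 1 MS/s digitization of the surface signal
DELAY_US = 10.0          # sampling delay after the detected rising slope
N_PULSES = 100           # number of pulses to average over

def find_rising_edges(signal, threshold):
    """Indices where the signal crosses the threshold upward (trigger unit)."""
    return [i for i in range(1, len(signal))
            if signal[i - 1] < threshold <= signal[i]]

def averaged_pulse_amplitude(signal, threshold):
    """Mean of the samples taken DELAY_US after each detected rising edge."""
    delay_samples = int(DELAY_US / SAMPLE_PERIOD_US)
    edges = find_rising_edges(signal, threshold)[:N_PULSES]
    samples = [signal[i + delay_samples] for i in edges
               if i + delay_samples < len(signal)]
    return sum(samples) / len(samples)

# Synthetic train of rectangular pulses: 120 us wide, 40 mV high, 1 ms period.
period, width, amp = 1000, 120, 40.0
signal = [amp if (t % period) < width and t // period < N_PULSES else 0.0
          for t in range(N_PULSES * period)]
mean_mv = averaged_pulse_amplitude(signal, threshold=20.0)
```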
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 651–653, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
III. RESULTS

A non-invasive inspection of the IPG function was carried out whenever the clinician programmer indicated electrode impedances beyond standard values, or after a complete or partial loss of the therapeutic effect. Twelve patients have been examined to date. Broken electrode leads were found in 2 patients and an IPG failure in 1 patient; in all other cases an incorrect measurement by the programmer could be verified. The voltage measured between forehead and sternum ranged from 28 mV to 55 mV in these patients. In the cases of broken leads the voltage was below 8 mV and increased back to normal values after lead replacement. The subsequent examination of the leads by light microscopy confirmed the non-invasive measurements (see Fig. 2).
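The voltages reported above suggest a simple decision rule: intact systems produced 28-55 mV between forehead and sternum in this series, while broken leads gave below 8 mV. A minimal sketch follows; note that these thresholds come from this small 12-patient sample and are not validated diagnostic cut-offs.

```python
# Decision rule distilled from the reported voltages (illustrative only;
# thresholds are from a 12-patient series, not validated cut-off values).

NORMAL_RANGE_MV = (28.0, 55.0)      # surface voltages seen with intact systems
LEAD_FRACTURE_BELOW_MV = 8.0        # surface voltages seen with broken leads

def classify_surface_voltage(v_mv):
    """Classify a forehead-sternum voltage measured during IPG output test."""
    if v_mv < LEAD_FRACTURE_BELOW_MV:
        return "suspected lead fracture"
    if NORMAL_RANGE_MV[0] <= v_mv <= NORMAL_RANGE_MV[1]:
        return "output present (check programmer reading)"
    return "indeterminate - further examination needed"
```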
Fig. 1. Time course of the stimulation pulse measured via surface electrodes. Trace 3 (marked) indicates the failed channel. Left: output channels 0-3 before electrode replacement (amplitude 1 V, pulse duration 120 µs); right: output channels 0-3 after electrode replacement (amplitude 1 V, pulse duration 90 µs).
Fig. 2. Fracture of the lead by light microscopy
The examination of the failed IPG showed fractures in the battery connection. Two parallel bond wires per battery pole had been used for the connection to the thick-film hybrid. Hairline cracks were found in one bond of the minus pole and in both bonds of the plus pole. At higher magnification under electron microscopy, the cracks appeared to spread through the whole bond wire. In a bond pull test (model Micropull III, Unitek Eapro B.V., Helmond, The Netherlands) both wires from the plus pole could be lifted with 0 g force, while both wires from the minus pole passed the test at 5 g force. In summary, the connection to the minus pole was still functioning, but both wires to the plus pole turned out to be broken, and the electrical connection to the battery was maintained only by the elasticity of the wires.
Fig. 3. Electrical connections between implant battery and thick film hybrid
Fig. 4. Electron microscopy images showing the connection between the battery minus pole and the thick-film hybrid. Hairline cracks through the whole bond wire are marked by arrows.
IV. DISCUSSION

The function of each single output could be inspected, and each failure prognosis was confirmed by the subsequent IPG examination. In 12 patients, two broken leads and one IPG failure were verified. Interestingly, we could not find a description of this method in the literature; perhaps it has been used without being published, or mentioned only in passing without explanation. The non-invasive measurement of the stimulation pulse via surface electrodes proved to be an easy and accurate method for the detection of partial IPG malfunctions.
Further investigations are planned to locate the position of a lead interruption, so that a fracture of the lead or a contact failure in the connector can be identified. This will shorten and simplify the revision operation.

Fig. 5. Connection between the battery minus pole and the thick-film hybrid; hairline crack starting from the surface of the bond wire.

Author: Lanmüller Hermann
Institute: Department of Biomedical Engineering and Physics
Street: Währinger Gürtel 18-20
City: Vienna
Country: Austria
Email: [email protected]
A Study on Sensing System of Lower Limb Condition with Piezoelectric Gyroscopes: Measurements of Joint Angles and Gait Phases

Norio Furuse1 and Takashi Watanabe2

1 Miyagi National College of Technology, 48 Nodayama, Medeshima, Natori, Miyagi, Japan
2 Information Synergy Center, Tohoku University, 6-6-05, Aramaki-Aza-Aoba, Aoba-ku, Sendai, Japan
Abstract— Functional electrical stimulation (FES) training of paralyzed muscles is effective for incomplete spinal cord injured patients in the early period of the rehabilitation process. Information on lower limb joint angles and gait phases is very important for assisting walking and restoring motor function with FES. Small and inexpensive gyroscopes are considered useful for constructing a practical sensor system for clinical use. In this paper, we examined a method for the simultaneous measurement of lower limb joint angles and gait phases using gyroscopes. Walking experiments with healthy subjects indicated that the sensor system could measure the joint angles with sufficient accuracy and could reliably detect the swing and stance phases without error. Therefore, a sensing system of lower limb condition with accuracy appropriate for clinical use could be constructed compactly and inexpensively using gyroscopes.

Keywords— FES, gyroscope, gait phase, swing phase, stance phase
I. INTRODUCTION

Functional electrical stimulation (FES) training of paralyzed muscles is effective for the great majority of incomplete spinal cord injured patients in the early period of the rehabilitation process [1]. Information on lower limb joint angles and gait phases is very important for assisting walking with appropriate timing and for restoring motor function with FES. Moreover, it is important for evaluating the ability and stability of the patient's walking [1-2].
Acceleration sensors and gyroscopes have been used in the field of mechatronics and related areas in recent years. They are suitable for clinical use because they are small, inexpensive and easy to attach to the body. Many methods using these sensors for measuring walking speed and joint angles and for recognizing gait phases have been reported [2-4]. We have also shown the possibility of measuring the joint angles of the hip, knee and ankle during walking with appropriate accuracy using piezoelectric gyroscopes [5-7]. In addition, we showed the possibility of detecting the swing phase and the stance phase using the output of the gyroscope attached to the dorsum of the foot for the measurement of the ankle joint angle [8]. From these results, it was found that the joint angles and the gait phases could be measured with common sensors. However, the gait phases were occasionally detected incorrectly with the previous method.
In this paper, a detection method with a new algorithm was examined to improve the accuracy of gait phase detection. To construct a sensing system of lower limb condition with gyroscopes, we examined the possibility of simultaneously measuring the lower limb joint angles and the gait phases using the gyroscopes.

II. METHODS

A. System

Five piezoelectric gyroscopes (Murata, ENC-03J) were attached to the umbilicus (G1), lumbar region (G2), thigh (G3), shank (G4) and dorsum of the foot (G5), as shown in Fig. 1. The attachment positions of the sensors were chosen for joint angle measurement on the basis of past experimental examinations [7]. The positive directions of the angular velocities measured by the gyroscopes are shown by arrows in Fig. 1. The sensor signals were amplified, low-pass filtered (2nd order, 22.6 Hz, Q = 0.71) and sampled at 120 Hz.
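The preprocessing stage described above can be approximated digitally: a 2nd-order low-pass with Q = 0.71 is essentially a Butterworth response. A minimal sketch using the standard RBJ biquad coefficient formulas follows; the digital equivalent is an assumption for illustration, since the authors used an analog filter before sampling.

```python
# Digital approximation of the described analog stage: 2nd-order low-pass,
# cutoff 22.6 Hz, Q = 0.71, applied to data sampled at 120 Hz.
# Coefficients follow the standard RBJ audio-EQ biquad formulas.
import math

def lowpass_biquad(fc, fs, q):
    """Return normalized (b, a) coefficients of a 2nd-order low-pass."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    b = [(1 - cos_w0) / 2, 1 - cos_w0, (1 - cos_w0) / 2]
    a = [1 + alpha, -2 * cos_w0, 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def filter_signal(x, b, a):
    """Direct-form I difference equation (causal, sample by sample)."""
    y = []
    for n, xn in enumerate(x):
        x1 = x[n - 1] if n >= 1 else 0.0
        x2 = x[n - 2] if n >= 2 else 0.0
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y.append(b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2)
    return y

b, a = lowpass_biquad(fc=22.6, fs=120.0, q=0.71)
dc = filter_signal([1.0] * 200, b, a)[-1]   # unit DC gain expected
```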
Fig. 1 Attachment positions of the sensors for the measurement. The positive directions of angular velocity measured by the gyroscopes are shown by arrows.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 689–692, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
In order to evaluate the joint angles calculated from the outputs of the gyroscopes, the joint angles of the hip, knee and ankle were simultaneously measured with goniometers (Penny & Giles, ADU301A). To evaluate the validity of the gait phases detected from the gyroscope output signal, the gait phases during the walking experiments were simultaneously detected with two aluminum electrodes and an aluminum plate. The aluminum foil electrodes were attached to the forefoot and the heel of a shoe. The aluminum plate, placed on the floor, was 8 m long and 1 m wide. Four gait phases (mid stance, heel-off, swing and heel-strike) were detected from the electrical contact condition between the electrodes and the plate.
Three healthy subjects participated in the experiments. Each subject walked 10 times at normal speed on the aluminum plate, performing about 6 steps with the right leg per trial; the total number of steps for each subject was therefore 60. The measured data were analyzed offline using a personal computer.
B. Measurement method of lower limb joint angles The joint angles were calculated by integrating the angular velocity obtained as the difference between the outputs of the two gyroscopes attached on either side of a joint [5-6]. The following gyroscope pairs were used to calculate each joint angle: G1 & G3 (Hip1) or G2 & G3 (Hip2) for the hip joint, G3 & G4 for the knee joint, and G4 & G5 for the ankle joint. The trapezoidal rule was adopted for the numerical integration. A possible source of error in the joint angle calculation was the offset in the outputs of the gyroscopes; its influence was removed by offline data processing. C. Detection method of gait phases The measured data together with the four gait phases are shown in Fig. 2. As seen in Fig. 2, the sensor outputs were small on the trunk (G1, G2) and large on the lower limb (G3, G4, G5). In this paper, the detection of the swing phase and the stance phase was examined. The stance phase consists of mid stance, heel-off and heel-strike. Based on this result, the outputs of G4 or G5 were used for the detection of the gait phase, while all five gyroscopes were used to measure the joint angles. The outputs of G4 and G5 were considered effective for detecting the gait phase because their outputs during gait varied clearly in relation to the phase. Moreover, G5 was expected to acquire an output signal closely related to the state of the plantar region because the sensor was attached to the dorsum of the foot. The detection algorithms for the swing phase and the stance phase were constructed as follows, based on the result of Fig. 2. The swing phase is detected by the gyroscope G5 when the second negative peak is detected after the stance phase has been detected. However, the swing phase of the first step is detected at the first negative peak. The stance phase is detected by the gyroscope G4 or G5 when its output becomes negative after reaching its positive peak in the swing phase.
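The joint angle calculation of section B can be sketched as follows. This is only an illustration of the described procedure, not the authors' implementation; in particular, removing the offset by subtracting the mean output is an assumption about how the offline offset correction was done.

```python
import numpy as np

def joint_angle(gyro_prox, gyro_dist, fs):
    """Joint angle from two gyroscope outputs [deg/s] sampled at fs [Hz].

    The relative angular velocity of the joint is the difference between
    the distal and proximal sensor outputs; the angle is obtained by
    trapezoidal numerical integration.
    """
    omega = np.asarray(gyro_dist, dtype=float) - np.asarray(gyro_prox, dtype=float)
    # Offline offset removal (an assumption here): subtract the mean output,
    # e.g. estimated from a period in which the subject stands still.
    omega = omega - omega.mean()
    dt = 1.0 / fs
    # Trapezoidal rule: cumulative integral of the angular velocity.
    angle = np.concatenate(([0.0], np.cumsum((omega[1:] + omega[:-1]) * dt / 2.0)))
    return angle
```

For the knee, for example, `gyro_prox` would be the G3 (thigh) output and `gyro_dist` the G4 (shank) output.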
III. RESULTS A. Measurement of the lower limb Joint Angles
Fig. 2 Angular velocities measured with the gyroscopes (subject A). The attachment positions of the gyroscopes are shown in Fig. 1. Gait phase: 1) mid stance, 2) heel-off, 3) swing and 4) heel-strike. The gait phases were measured by the aluminum electrode.
The comparison between the joint angles measured with the goniometers and those calculated from the outputs of the gyroscopes is shown in Fig. 3. The waveforms of the angles obtained from the gyroscope outputs resembled those measured with the goniometers. The differences between the lower limb joint angles obtained from the gyroscope outputs and the
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A Study on Sensing System of Lower Limb Condition with Piezoelectric Gyroscopes: Measurements of Joint Angles and Gait Phases 691

Table 1 The RMS and the CC between the lower limb joint angles calculated by the outputs of the gyroscopes and the ones measured with the goniometers

Gyroscopes       Subject A         Subject B         Subject C
(joint)          RMS      CC       RMS      CC       RMS      CC
G1, G3 (Hip1)    2.74°    0.969    2.86°    0.937    2.05°    0.987
G2, G3 (Hip2)    2.78°    0.970    3.34°    0.900    1.82°    0.988
G3, G4 (Knee)    2.80°    0.995    4.00°    0.989    3.64°    0.984
G4, G5 (Ankle)   3.41°    0.941    3.66°    0.948    3.50°    0.941
ones with the goniometers were evaluated by the root mean square difference (RMS) and by the correlation coefficient (CC). The calculated values of the RMS and the CC are shown in Table 1. The results in Table 1 indicate that the hip joint angle could be measured with the gyroscopes more accurately than the other angles. B. Detection of gait phases An example of detecting the swing phase and the stance phase with the above-mentioned method is shown in Fig. 4. All the swing and stance phases were detected without mistake in all subjects' walking experiments.
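The two agreement measures used in Table 1 can be computed as in this short sketch. It illustrates the standard definitions of RMS difference and Pearson's correlation coefficient, not the authors' own code:

```python
import numpy as np

def rms_and_cc(angle_gyro, angle_gonio):
    """Agreement between two joint-angle waveforms.

    Returns the root mean square difference (RMS, in the unit of the
    inputs, e.g. degrees) and Pearson's correlation coefficient (CC).
    """
    g = np.asarray(angle_gyro, dtype=float)
    r = np.asarray(angle_gonio, dtype=float)
    rms = np.sqrt(np.mean((g - r) ** 2))
    cc = np.corrcoef(g, r)[0, 1]
    return rms, cc
```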
Fig. 3 Comparison between the lower limb joint angles measured with the goniometers and the ones calculated by using the outputs of the gyroscopes (subject A).
Fig. 4 The swing and stance phases detected by the output of G5 (subject A). Gait phase: 0) stance phase and 1) swing phase. Solid line: gait phase detected by the aluminum electrode, broken line: gait phase detected by the gyroscope.
Table 2 The delay times of the gait phase detected by using the outputs of the gyroscopes relative to the gait phase measured with the aluminium electrode. A negative value means that the method using the outputs of the gyroscopes detected the gait phase earlier than the method using the electrode.

Detected phase (used gyro.)   Subject A [msec]   Subject B [msec]   Subject C [msec]   Average [msec]
Swing (G5)                    -50.0±13.1         -55.7±10.1         -47.9±14.6         -38.1±14.3
Stance (G5)                   1.4±17.0           -1.9±20.8          -1.9±16.7          -5.0±10.2
Stance (G4)                   1.9±10.7           -4.4±30.2          -3.1±19.7          -6.9±10.8
The method using the outputs of the gyroscopes detected the beginning of the swing phase earlier than the method using the aluminum electrode. The beginning of the stance phase was detected by the gyroscope method almost simultaneously with the electrode method. The delay times of the gait phase detected from the gyroscope outputs relative to the gait phase measured with the electrode are shown in Table 2. The results indicate that the detection algorithm used in this paper detected the stance phase more exactly than the swing phase (p<0.01, t-test). IV. DISCUSSION In the previous work [8], all the swing and stance phases were detected; however, the stance phase was occasionally detected with mistakes. That mistaken detection was removed by setting a period of 0.3 seconds after detecting a change of gait phase, i.e., gait phase detection was inhibited during the 0.3-sec period. In this paper, the mistaken detection was removed by a new detection algorithm that recognizes the increase and decrease of the gyroscope output; therefore, the inhibition period became unnecessary with this method. The beginning of the stance phase was detected by the gyroscope method almost simultaneously with the electrode method. However, the detection timing varied within about 10 to 30 msec, as seen in Table 2. This variation determines the detection timing error. The beginning of the swing phase was detected
Norio Furuse and Takashi Watanabe
by the method using the gyroscope about 40 to 50 msec earlier than by that using the electrode. However, the detection timing error may be decreased to within about the standard deviation of the delay time by adding the time difference shown in Table 2 to the time of the phase change detected by the gyroscope. The time difference has to be determined for each patient before this method is used. The detection accuracy of the stance phase from the output of G4 and that from the output of G5 were of the same degree. It was also indicated in this examination that the gait phase could be detected by using G5 alone. The swing phase detection using G4 was also tried with the same algorithm as that for G5, adjusting the threshold values. However, the method using G4 often failed to detect the swing phase, indicating that swing phase detection using G4 with this method is difficult. The previous work had indicated that the knee and ankle joint angles could be measured with appropriate accuracy by using the outputs of the gyroscopes [5]. The experimental results in this paper indicated that the hip joint angle could be measured with the gyroscopes more accurately than the other joint angles. Therefore, a sensing system of lower limb condition with accuracy appropriate for clinical use could be constructed compactly and at low cost by using the gyroscopes. V. CONCLUSIONS In this paper, we examined a measurement method of the lower limb joint angles and a detection method of the gait phases using gyroscopes. The results of the walking analysis with the normal subjects indicated that the sensor system could measure the joint angles with sufficient accuracy and could practicably detect the swing phase and the stance phase without mistake.
Therefore, a sensing system of lower limb condition with accuracy appropriate for clinical use can be constructed compactly and at low cost by using the gyroscopes.
ACKNOWLEDGMENTS This study was partly supported by the Ministry of Education, Culture, Sports, Science and Technology of Japan
under a Grant-in-Aid for Scientific Research, and the Ministry of Health, Labour and Welfare under the Health and Labour Sciences Research Grants.
REFERENCES
1. Bajd T, Kralj A, Štefančič M and Lavrač N (1999) Use of Functional Electrical Stimulation in the Lower Extremities of Incomplete Spinal Cord Injured Patients. Artificial Organs 23(5):403-409
2. Williamson R and Andrews B J (2000) Gait Event Detection for FES Using Accelerometers and Supervised Machine Learning. IEEE Trans. Rehab. Eng. 8(3):312-319
3. Pappas I P I, Popovic M R, Keller T, Dietz V and Morari M (2001) A reliable gait phase detection system. IEEE Trans. Neural Syst. Rehab. Eng. 9(2):113-125
4. Simcox S, Parker S, Davis G M, Smith R W and Middleton J W (2005) Performance of orientation sensors for use with a functional electrical stimulation mobility system. J. Biomech. 38:1185-1190
5. Furuse N, Watanabe T and Hoshimiya N (2005) Simplified Measurement Method of Lower Limb Joint Angles by using Piezoelectric Gyroscopes. Trans. of the Japanese Society for Medical and Biological Eng. 43(4):538-543 (in Japanese)
6. Furuse N, Watanabe T and Hoshimiya N (2006) Gait Reeducation System for Incomplete Spinal Cord Injured Patients - Measurement of Hip Joint Angle by Piezoelectric Gyroscope. Proc. of the 11th Annual Conference of the International Functional Electrical Stimulation Society, pp.228-230
7. Furuse N and Watanabe T (2006) Measurement of Hip, Knee and Ankle Joint Angles during Walking by using Piezoelectric Gyroscopes. Technical Report of IEICE, MBE2006-78, pp.49-52 (in Japanese)
8. Sasaki Y, Furuse N and Watanabe T (2006) A Basic Study on Detection of Swing and Stance Phases of Gait with Piezoelectric Gyroscopes. Proc. of the 13th Conference of Japan Functional Electrical Stimulation Association, pp.39-43 (in Japanese)

Author: Norio Furuse
Institute: Miyagi National College of Technology
Street: 48 Nodayama, Medeshima
City: Natori, Miyagi
Country: Japan
Email: [email protected]
Data mining time series of human locomotion data based on functional approximation
V. Ergovic1, S. Tonkovic2, V. Medved3 and M. Kasovic3
1 IBM Croatia / Software Group, Zagreb, Croatia
2 Faculty of Electrical Engineering and Computing / ZESOI, Zagreb, Croatia
3 Faculty of Kinesiology / Biomechanics, Zagreb, Croatia
Abstract— In many medical applications, especially those considering time sequence data such as data describing human gait, searching through large, unstructured databases based on sample sequences is often desirable. Such similarity-based retrieval has attracted a great deal of attention in recent years. Although several different approaches have appeared, most are not specialized for the problem of human locomotion. This paper gives an overview of one proposed approach to efficiently process human gait by using Fourier series approximation, genetic algorithms for coefficient correction, and spectral signatures as the data mining method. The paper shows how these methods fit into the general context of signature extraction and disorder recognition in the case of human gait. Keywords— data mining, time series, human gait, approximation methods, disorder classification.
I. INTRODUCTION Currently, movement therapy decisions are based on limited observations and measurements from parts of the complicated movement system. The therapist often does not have a clear understanding of the integrated movement system and, therefore, cannot properly diagnose and treat the real underlying cause of the disability. The trial-and-error methods that are commonly used by therapists are inefficient at best and ineffective or even deleterious at worst. New methods are needed to diagnose objectively and definitively the mechanism responsible for the movement disability and to recommend targeted movement therapies. One area of potential application is the use of evolutionary methods such as genetic algorithms and data mining principles to extract and classify movement patterns and then propose movement therapy matched to the limited capabilities of the patient. For example, evolutionary methods can be used to explore feasible movement patterns that are customized to the specific muscle force profile of the patient and target a specific impairment, or to establish a diagnosis. These trajectories can then be used to prescribe movement therapies to improve existing functions or to design control systems to restore lost movement. To our knowledge, the use of such an approach in this area has not yet been fully explored. This paper demonstrates the potential uses of those methods
in movement recognition, extraction of unknown correlations in existing data, and diagnosis. II. HUMAN LOCOMOTION ANALYSIS APPROACH A. Time sequences for locomotion analysis Time sequences arise in many applications: any application that involves storing sensor inputs or sampling a value that changes over time. This is a common case in laboratories dealing with human locomotion problems such as human gait, and it is the case in the Biomechanics laboratory at the Faculty of Kinesiology. A problem which has received an increasing amount of attention lately is similarity retrieval in databases of time sequences, so-called "query by example." Some possible uses with respect to human locomotion problems are:
• Identifying patients with similar gait patterns.
• Determining disorders with similar symptoms.
• Discovering new causes for a specific gait pattern.
The running times of simple algorithms for comparing time sequences are generally polynomial in the length of both sequences, typically linear or quadratic. To find the correct offset of a query in a large database, a naive sequential scan will require a number of such comparisons that is linear in the length of the database. This means that, given a query of length m and a database of length n, the search will have a time complexity of O(nm), or even O(nm²) or worse. For large databases this is clearly unacceptable. Many methods are known for performing this sort of query in the domain of strings over finite alphabets, but with time sequences there are a few extra issues to deal with:
• The range of values is not generally finite, or even discrete.
• The sampling rate may not be constant.
• The presence of noise in various forms makes it necessary to support very flexible similarity measures.
The idea to help solve those issues is to describe human locomotion, e.g. human gait, by leveraging Fourier series
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 677–680, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
and in that way describe gait as a functional approximation. Genetic algorithms are used for fine-tuning the function's coefficients for magnitude and phase shift of the individual harmonics. B. Spectral Signatures The data mining method presented in this paper is not very recent, but it introduces some of the main concepts used for special domains and possible improvements of the method. During the mid-1990s a method called the F-index was introduced, in which a signature is extracted from the frequency domain of a sequence [1]. Underlying that approach are two key observations:
• Most real-world time sequences can be faithfully represented by their strongest Fourier coefficients.
• Euclidean distance is preserved in the frequency domain (Parseval's Theorem) [2].
Based on this, it was suggested to perform the Discrete Fourier Transform on each sequence and to use a vector consisting of the sequence's k first amplitude coefficients as its signature. Euclidean distance in the signature space will then underestimate the real Euclidean distance between the sequences, as required. Figure 1 shows an approximated time sequence, reconstructed from a signature consisting of the original sequence's ten first Fourier components without genetic algorithm support. This basic method allows only whole-sequence matching. In 1994 the ST-index was introduced, an improvement on the F-index that makes subsequence matching possible. This method relies on the sliding window algorithm. The algorithm works by anchoring the left point of a potential segment at the first data point of a time series, then attempting to approximate the data to the right with increasingly longer segments. At some point i, the error for the potential segment becomes greater than the user-specified threshold, so the subsequence from the anchor to i-1 is transformed into a segment. The anchor is moved to location i, and the process repeats until the entire time series has been transformed into a piecewise linear approximation [3]. The pseudo code for the algorithm is shown in Table 1 and is based on the basic sliding window algorithm.
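The F-index idea described above can be illustrated with a minimal sketch. The signature length k and the use of NumPy's unnormalized FFT convention are assumptions of this illustration:

```python
import numpy as np

def spectral_signature(x, k):
    """First k DFT amplitude coefficients of a sequence (the F-index signature)."""
    return np.abs(np.fft.fft(x))[:k]

def signature_distance(x, y, k):
    """Euclidean distance between two signatures.

    By Parseval's theorem the full-spectrum distance equals the time-domain
    distance up to a constant factor, so truncating to k coefficients can
    only underestimate it; comparing amplitudes underestimates further by
    the reverse triangle inequality. Hence filtering with this distance
    produces no false dismissals.
    """
    return np.linalg.norm(spectral_signature(x, k) - spectral_signature(y, k))
```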
Fig. 1
Table 1 Segmentation based on sliding window
Algorithm Seg_TS = Sliding_Window(T, max_error)
  anchor = 1;
  while not finished segmenting time series
    i = 2;
    while calculate_error(T[anchor: anchor+i]) < max_error
      i = i + 1;
    end;
    Seg_TS = concat(Seg_TS, create_segment(T[anchor: anchor+(i-1)]));
    anchor = anchor + i;
  end;
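A runnable version of the algorithm in Table 1 might look as follows. The error measure (maximum residual of a least-squares straight line) is an assumption, since calculate_error is left unspecified in the pseudo code:

```python
import numpy as np

def calculate_error(segment):
    """Max absolute residual of a least-squares line fitted to the segment."""
    y = np.asarray(segment, dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return np.max(np.abs(y - (slope * t + intercept)))

def sliding_window(T, max_error):
    """Split T into maximal segments whose linear-fit error stays below max_error."""
    segments, anchor = [], 0
    while anchor < len(T):
        i = 2
        # Grow the window until the fit error exceeds the threshold
        # or the end of the series is reached.
        while anchor + i <= len(T) and calculate_error(T[anchor:anchor + i]) < max_error:
            i += 1
        segments.append(T[anchor:anchor + i - 1])
        anchor += i - 1
    return segments
```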
The main steps of the ST-index approach are as follows:
1. For each position in the database, extract a window of length w and create a spectral signature (a point) for it. Each point will be close to the previous one, because the contents of the sliding window change slowly. The points for one sequence will therefore constitute a trail in signature space.
2. Partition the trails into suitable (multidimensional) Minimal Bounding Rectangles (MBRs), according to some heuristic.
3. Store the MBRs in a spatial index structure.
To search for subsequences similar to a query q of length w, simply look up all MBRs that intersect a hypersphere with radius ε around the signature point Q. This is guaranteed not to produce any false dismissals, because if a point is within a radius of ε of Q, it cannot be contained in an MBR that does not intersect the hypersphere [4]. To search for sequences longer than w, split the query into w-length segments, search for each of them, and intersect the result sets. Because a sequence in the result set R cannot be closer to the full query sequence than it is to any one of the window signatures, it has to be close to all of them, that is, contained in all the result sets. There are a few commercial implementations of the ST-index approach in the area of intelligent mining of relational databases, in the form of stored procedures such as IDMMX.FindSeqRules, which is part of the IBM DB2 Intelligent Miner tool. C. The Approach and data transformation Since the ST-index approach is rather slow on large data sets stored in a relational database, the database should be organized to hold as little data as possible per specific trial. By using Fourier series and a genetic algorithm to determine the coefficients on data representing right-side pelvic obliquity expressed in degrees, we managed to express the movement via an approximation function. That approach enables us to use a trigonometric function to describe the movement of an individual body part (in this case pelvic obliquity) and potentially decrease the number of data required in
Fig. 2
database. That way we should have better performance during analysis of movement and when applying data mining principles such as the ST-index approach on sequenced data described as a set of magnitudes and phase shifts. These principles were applied on BTS Elite 2000 data for pelvic obliquity. The time series consisted of 100 samples. Since human gait is periodic by nature, we performed an approximation based on Fourier series.
which represents 0.0 for phase shift) and reduce the dataset 5 times. This approach also enables studying the phase shifts and magnitudes of individual harmonics to discover common signatures and patterns. A graphical representation of the first five harmonics for magnitude and all eleven for phase shift is given in the applet screenshot shown in figure 3. In this way we are able to analyze the data faster and to save storage space. By using magnitude and phase shifts as the data set instead of time samples, we are able to use the same ST-index method. The difference from the original method is that the data is already stored as a spectrum, so no additional computation is required when performing data mining. Tools available today use basic time series data mining on data acquired from the BTS Elite system. This approach slightly modifies the architecture and includes an additional component that performs data approximation by leveraging Fourier series and a genetic algorithm before performing data mining based on spectral signatures to perform
f(t) = a_0 + Σ_{k=1…∞} (a_k cos kt + b_k sin kt)    (1)

However, this approach was modified to be based on cosine elements with the phase shift included:

f(t) = a_0 + a_1 cos(w_0 t + φ_1) + … + a_n cos(n w_0 t + φ_n)    (2)

The result of this approximation is shown in figure 3 and in Table 2. The Java applet shows the original curve and the approximation. The magnitude and phase spectra are also shown graphically, and they are listed in Table 2 for each harmonic. In this way we managed to describe the pelvic movement with 21 elements (22 elements minus one NULL element
Table 2 Magnitude and phase for harmonics

Harmonic   Magnitude   Phase
0          9.6999      0.0
1          9.1803      3.423
2          2.0191      0.765
3          3.7768      3.897
4          2.5447      1.573
5          1.7860      4.383
6          1.1474      4.123
7          0.8644      4.532
8          0.9594      1.184
9          0.7611      4.488
10         0.4794      1.366
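The cosine-with-phase form of Eq. (2) can be recovered from one sampled gait cycle as sketched below. The use of NumPy's rfft and its phase convention (angles in (-π, π], whereas Table 2 appears to use [0, 2π)) are assumptions of this illustration:

```python
import numpy as np

def harmonic_spectrum(x, n_harmonics):
    """Magnitude a_k and phase shift phi_k of the first harmonics.

    Rewrites the Fourier series of one period of x in the cosine form
    of Eq. (2): x(t) ~ a_0 + sum_k a_k * cos(k*w0*t + phi_k).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.rfft(x)
    mag = np.empty(n_harmonics + 1)
    phase = np.empty(n_harmonics + 1)
    mag[0], phase[0] = X[0].real / N, 0.0            # DC term a_0
    mag[1:] = 2.0 * np.abs(X[1:n_harmonics + 1]) / N  # harmonic amplitudes
    phase[1:] = np.angle(X[1:n_harmonics + 1])        # phase shifts
    return mag, phase
```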
Fig. 3
Fig. 4
Fig. 5
query by example or to extract new rules from an existing set of data, as shown in figure 4. Figure 4 shows the case of an independent data mart, i.e., the situation where data is extracted directly from the Elite BTS 2000 system. However, if we had chosen to apply a dependent data mart instead of connecting to BTS Elite directly, we would connect to a data warehouse; since the method stays the same, only the data source changes, as shown in figure 5. By using this data mart it is possible to do non-trivial extraction of implicit, previously unknown, and potentially useful information from the data approximation of different human gaits, with different technical approaches such as:
• clustering,
• data summarization,
• classification rules learning,
• detecting anomalies.
our understanding of human movement and help us devise new treatments. The main motivation for the use of this method is that it enables simple and well-known techniques such as the F-index and ST-index, which are mostly explored and used in other areas, to be applied to a modified data set that represents a functional approximation of human gait. To our knowledge there are currently no examples of spectral signature usage in exploring human gait this way. This method can contribute to the understanding of natural human movement, and it can solve optimization and learning problems that cannot be solved with existing methods in the area of biomechanics. This method of computation is well suited to parallel processing, which can reduce the computation time significantly, especially in the area of genetic algorithms. Despite great potential, this approach, together with some other methods, has yet to be applied extensively to studying or treating human movement disorders. This paper describes only one dimension of human gait (right pelvic obliquity). Further research should be done in the area of parallel processing of different dimensions for a specific movement. This should change as new methods and software tools for evolutionary computation and biomechanical modeling and simulation become readily available. With those tools and methods we will be able to discover new patterns for targeted disorders in human locomotion.
III. CONCLUSION We have learned a great deal about human movement and its treatment through experimental measurements and quantitative analysis of its individual elements. To further our understanding of human movement and to develop new movement therapies and rehabilitation techniques, additional research must be performed to show the interaction of different locomotion components, such as motor behaviors. Evolutionary methods supported by Fourier series and the data mining techniques introduced in this paper represent intelligent systems that can further
REFERENCES
1. Agrawal R et al. (1993) Efficient Similarity Search in Sequence Databases. Proc. 4th Int. Conf. on Foundations of Data Organization and Algorithms (FODO), Chicago, US, pp. 69–84
2. Shatkay H (1995) The Fourier transform: A primer. Technical Report CS-95-37, Brown University
3. Faloutsos C et al. (1994) Fast Subsequence Matching in Time-Series Databases. Proc. of the 1994 ACM SIGMOD Int. Conf. on Management of Data, pp. 419–429
4. Last M et al. (2004) Data Mining in Time Series Databases. World Scientific Publishing Co, London
5. Hand D (2001) Principles of Data Mining. The MIT Press, London

Author: Vladimir Ergovic
Institute: IBM Croatia Ltd
Street: Miramarska 23
City: Zagreb
Country: Croatia
Email: vladimir.ergovic@hr.ibm.com
Kinematic and kinetic patterns of walking in spinal muscular atrophy, type III Z. Matjacic, A. Praznikar, A. Olensek, J. Krajnik, I. Tomsic, M. Gorisek-Humar, A. Klemen, A. Zupan Institute for Rehabilitation, Republic of Slovenia, Linhartova 51, Ljubljana, Slovenia Abstract— Our knowledge of altered neurocontrol of walking due to weakness of various muscle groups of the lower extremities is still limited. The aim of this study was to assess kinematic and kinetic walking patterns in a functionally similar group of seven subjects with spinal muscular atrophy, type III (SMA group) and to compare them with normal data obtained in nine healthy subjects (CONTROL group) in order to identify characteristic compensatory changes. Kinematic and kinetic patterns were assessed during free walking of the SMA and CONTROL groups. The results showed characteristic changes in the ankle plantarflexion moment and the associated control of the centre of pressure during loading response and midstance, which facilitated minimization of the external flexion moment acting on the knee and hip in the SMA group. Additionally, we identified distinct and consistent changes in the control of the hip rotators and abductors, which act to bring the hip rapidly into extension early in the stance phase and to delay the weight shift onto the leg entering the stance phase, respectively. Keywords— instrumented gait analysis, neuromuscular disorders
I. INTRODUCTION Neuromuscular disorders (NMD) are a heterogeneous, mostly progressive group of diseases of the motor unit, with muscular weakness as the usual and predominant clinical sign. Muscle weakness is the major limiting factor that profoundly influences walking ability in patients with NMD [1, 2]. However, clinical data on gait in patients with specific NMD are scarce. Clinical descriptions of walking in people with muscular dystrophy indicate that various adaptations may occur. In terms of spatio-temporal gait characteristics, coping responses include a decrease of walking speed, step length and swing time in order to reduce the mechanical output requirements of weakened muscles. On the level of joint kinematics and kinetics, various adaptations were observed, including forward pelvic tilt with lumbar lordosis, lack of knee flexion at loading response, rapid hip extension following initial contact, equinus foot position and lateral trunk motion resulting in waddling gait. The seminal work of Sutherland et al. [1,2], which investigated walking of a group of Duchenne muscular dystrophy (DMD) patients in various phases of disease progression, laid the foundations of our current understanding of the pathomechanics of gait in DMD. Different NMD exist in which muscle groups may be weakened in a rather similar pattern. Spinal muscular atrophy (SMA), for example, is characterized, similarly to DMD, by progressive symmetric muscular weakness that affects proximal more than distal muscles, predominantly of the lower limbs [3]. Based on this similarity, it is plausible to hypothesize that similar compensatory changes in gait could be expected in both pathological conditions. Recently, however, Armand et al. [4] investigated kinematic and kinetic gait patterns in DMD and SMA, type II by means of instrumented gait analysis, with two patients in each group. Their results indicated that distinctive differences in the pathomechanics of walking in the two groups may exist, which would be in concordance with the more distinct distribution of muscle weakness in SMA [5]; however, due to the small number of subjects and low walking speed, their results were inconclusive. The objective of this study was to examine kinematic and kinetic patterns assessed during walking of a larger group of individuals diagnosed with spinal muscular atrophy, type III. People with SMA, type III can walk independently and, in the early phase of the disease, also at walking speeds comparable to normal walking. Our aim was to determine characteristic compensatory mechanisms in relation to patterns obtained in a group of neurologically intact individuals. II. METHODS A. Subjects The experimental group consisted of seven adult subjects (age 39.7 ± 11.04 years, height 169 ± 10 cm and body mass 68 ± 17 kg), 4 males and 3 females, diagnosed with SMA, type III, selected from the group of 9 eligible subjects residing in Slovenia. The remaining two subjects were excluded from the study because of an evidently too early or too advanced phase of the disease, as judged by observational clinical gait analysis and manual muscle testing of lower limb muscle strength.
In addition, nine healthy individuals with no history of musculoskeletal or neurological disease (age 33.11 ± 2.66 years, height 178 ± 12 cm and body mass 76 ± 14 kg), 6 males and 3 females, were included as a control group. The study was approved by
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 681–684, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Slovenian National Ethics Committee and the subjects provided informed consent. B. Experimental conditions Subjects were asked to walk across a 10 m gait laboratory walkway at their preferred speed. A VICON motion capture and analysis system (VICON 370, Oxford Metrics Ltd., Oxford, UK) was used to capture the motion of the lower limbs, pelvis and trunk. Reflective markers were attached to the subjects' skin over designated landmarks according to the specifications provided by the manufacturer of the system (Vicon Clinical Manager). Motion data were sampled at 50 Hz. Two AMTI force plates (AMTI OR-6-5-1000, Advanced Mechanical Technology Inc., Watertown, MA) positioned in the center of the walkway were used for recording ground reaction forces. Force data were sampled at 1000 Hz. At least three clear steps of each leg were captured for analysis. Gait velocity, stride length and cadence data were extracted. Ankle (dorsiflexion/plantarflexion), knee (flexion/extension) and hip (flexion/extension, adduction/abduction and internal/external rotation) angles of rotation, joint moments and powers were calculated. Joint moments and powers were normalized for body mass and reported in Nm/kg and W/kg, respectively. For each subject the values averaged over three trials for each leg were calculated and used in the subsequent averaging and statistical analysis of the data for each group separately. Gait cycle terminology as introduced by Perry [4] was adopted to define the instants of characteristic peak values of the kinematic and kinetic trajectories in the gait cycle. III. RESULTS The SMA group exhibited shorter stride length (1.07±0.15 m), lower cadence (94.12±8.99 steps/min) and lower gait velocity (0.83±0.10 m/s) compared to the CONTROL group (1.37±0.14 m; 107.45±7.22 steps/min; 1.22±0.17 m/s). Kinematic patterns are shown in Fig. 1. Below we list the differences between the SMA and CONTROL groups. Posterior trunk and anterior pelvic inclination can be observed in the SMA group throughout the whole gait cycle. After the heel strike, a pronounced anterior rotation of the pelvis characterizes loading response and midstance in the SMA group. This internal rotation of the pelvis is associated with rapid hip extension during the first 25% of stance in the SMA group. The hip is more flexed and adducted throughout the whole gait cycle. The
Figure 1. Kinematic gait patterns showing averaged values and one standard deviation for SMA group (thick line – mean values and dotted lines – standard deviations) and CONTROL group (thin line – mean values and shaded region – standard deviations). Horizontal axes show percent of a gait cycle.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Kinematic and kinetic patterns of walking in spinal muscular atrophy, type III
Figure 2. Kinetic gait patterns showing averaged values and one standard deviation for SMA group (thick line – mean values and dotted lines – standard deviations) and CONTROL group (thin line – mean values and shaded region – standard deviations). Horizontal axes show percent of a gait cycle.
Z. Matjacic, A. Praznikar, A. Olensek, J. Krajnik, I. Tomsic, M. Gorisek-Humar, A. Klemen, A. Zupan
knee joint trajectory shows an absence of knee flexion in the stance phase, while the ankle joint trajectory shows a shift toward more plantarflexion; however, the shape is similar in both groups, and the stance phase in both groups begins with heel strike. Kinetic patterns are shown in Fig. 2. The hip flexion/extension moment profile shows a smaller amplitude and shorter duration of the hip extension moment in the first half of the stance phase. A smaller amplitude of the hip flexor moment in the second half of the stance phase is also noted. In the first half of the stance phase an absence of the hip internal rotation moment can be seen, which allows the associated anterior rotation of the pelvis after heel contact. The knee flexion/extension moment profile shows minimal extension moment generation throughout the whole gait cycle, particularly during loading response. The ankle plantarflexion/dorsiflexion moment and power profiles show similar shapes in both groups, with smaller amplitudes in the SMA group. The knee power profile displays almost no power absorption/generation throughout the majority of the stance phase in the SMA group, while the hip power profiles show similar shapes. Ground reaction forces show similar shapes in the horizontal and lateral directions. The vertical component shows a distinctive difference during loading response, where the body weight transfer onto the landing limb is delayed; this is associated with prolonged activity of the contralateral hip abductors.

IV. DISCUSSION

Muscle weakness limits walking ability in people with SMA type III. Coping responses include a decrease of walking speed, step length and swing time in order to reduce the mechanical output requirements of weakened muscles. One of the most debilitating cases is simultaneous weakness of the hip and knee extensors, because the kinematic and kinetic patterns of walking must then be such as to minimize the need for knee and hip extensor output throughout the stance phase.
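As a reader-side sanity check (not part of the original analysis), the spatiotemporal means reported in the Results are mutually consistent: gait velocity equals stride length times cadence converted to strides per second (one stride covers two steps).

```python
def gait_velocity(stride_length_m, cadence_steps_per_min):
    """Gait velocity (m/s) from stride length and cadence.

    One stride covers two steps, so strides/s = cadence / 2 / 60.
    """
    return stride_length_m * cadence_steps_per_min / 120.0

v_sma = gait_velocity(1.07, 94.12)       # ~0.84 m/s (reported 0.83 +/- 0.1)
v_control = gait_velocity(1.37, 107.45)  # ~1.23 m/s (reported 1.22 +/- 0.17)
```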
One compensatory mechanism concerns control of the COP and GRF in such a way that appropriate external moments imposed by the GRF onto the knee and hip joints are achieved. This was described by Sutherland et al. [1,2], and the results of our study are in agreement with it. Additionally, our results identified a second mechanism that acts in concert with the first: anterior rotation of the pelvis after the leg enters the stance phase, which facilitates rapid hip extension. In this way, positioning the GRF in front of the knee and behind the hip joint is further facilitated. This compensatory mechanism
may explain the increased internal rotation observed at the foot, ankle and knee, for which Sutherland et al. [1,2] did not find a plausible explanation. Finally, a third compensatory mechanism, acting in synergy with the first two, relates to a decreased rate of weight acceptance immediately after foot contact, which can be seen in the time course of the vertical component of the ground reaction forces. This is achieved by prolonged activity of the hip abductors on the contralateral side. Overall, our results show that SMA subjects adopt a control strategy that minimizes the external moments produced by the GRF on the knee and hip. Our findings have direct application in rehabilitation and/or preventive exercise programs, which, apart from targeting maintenance of ankle plantarflexor strength, should also focus on maintaining the capacity of the hip rotators and abductors.
ACKNOWLEDGMENT

The authors acknowledge financial support from the Slovenian Research Agency (contract P2-0228).
REFERENCES

1. Sutherland DH, Olshen R, Cooper L, Wyatt M, Leach J, Mubarak S, Schultz P (1981) The pathomechanics of gait in Duchenne muscular dystrophy. Develop Med Child Neurol 23:3–22
2. Sutherland DH (1984) Gait disorders in childhood and adolescence. Williams & Wilkins, Baltimore, MD
3. Zerres K, Davies KE (1999) 59th ENMC International Workshop: Spinal Muscular Atrophies: recent progress and revised diagnostic criteria, 17–19 April 1998, Soestduinen, The Netherlands. Neuromuscul Disord 9(4):272–278
4. Armand S, Mercier M, Watelain E, Patte K, Pelissier J, Rivier F (2005) A comparison of gait in spinal muscular atrophy, type II and Duchenne muscular dystrophy. Gait & Posture 21:369–378
5. Deymeer F, Serdaroglu P, Poda M, Gulsen-Parman Y, Ozcelik T, Ozdemir C (1997) Segmental distribution of muscle weakness in SMA III: implications for deterioration in muscle strength with time. Neuromuscul Disord 7(8):521–528
6. Perry J (1992) Gait analysis: Normal and pathological function. SLACK Incorporated, Thorofare, NJ

Address of the corresponding author:
Author: Dr. Zlatko Matjacic
Institute: Institute for Rehabilitation, Republic of Slovenia
Street: Linhartova 51
City: Ljubljana
Country: Slovenia
Email: [email protected]
The Gait E-Book – Development of Effective Participatory Learning using Simulation and Active Electronic Books

A. Sandholm1,3, P. Fritzson1, V. Arora2, Scott Delp2, G. Petersson3 and J. Rose2

1 Institute of Technology/Department of Computer and Information Science, Pelab, Linköping, Sweden
2 Stanford University/Department of Orthopaedic Surgery, Bioengineering and Computer Science, Stanford, USA
3 eHealth Institute, Kalmar, Sweden
Abstract— In this paper we outline an interactive electronic book that teaches high school students about human locomotion. Today the most common teaching methods are lectures or reading a static textbook, where the students' participation is passive and they do not engage in the learning process. When learning about human gait, students not only learn anatomy and kinesiology but also have the opportunity to grasp theoretical subjects such as mathematics, physics and biomechanics, as well as concepts of modeling and simulation used to carry out experiments. In this paper we outline an interactive electronic book in which the student becomes engaged in the learning process: students can add or remove text, images and video, create models, and perform simulations in one environment. The Gait E-book combines the theoretical lecture with the interactive learning process of modeling and simulation. Two simulation platforms will be supported in the E-book, OpenModelica and OpenSim. Modelica is a powerful modern equation-based simulation language with which students can focus on learning mathematical and physical behavior. The E-book is fully integrated with the OpenModelica environment, which is the major Modelica open-source implementation. To support musculoskeletal biomechanics, the E-book will be integrated with a subset of the functionality of the open-source OpenSim modeling and simulation environment. Students will be able to create models and carry out simulations, both in Modelica and in OpenSim, in the same interactive notebook, and from the results create both interactive 3D visualizations and scientific visualizations that give them the ability to understand the underlying biomechanics of human gait. The high school Gait E-book edition will cover subjects such as the gait cycle, including the tasks of weight acceptance, single limb support and limb advancement, as well as gait kinematics, kinetics and muscle activity. Students will study physical subjects such as mass, velocity, acceleration, gravity, torque, kinetic and potential energy, lever arms, pendulums, conservation of energy, Newton's laws and spring-mass systems.

Keywords: Gait, Interactive Notebook, Modeling, Simulation

I. INTRODUCTION

Gait is the primary means of human locomotion and is accomplished through a cycle of limb and trunk movements. The gait cycle, see Fig. 1, is composed of phases and tasks which have been defined using kinematic, kinetic and EMG data [1]. Kinematics, the joint motion during gait, is obtained using computerized 3D motion capture. Kinetics are the forces acting across the joints during gait and are calculated from the kinematic data and the floor reaction forces recorded from a force plate embedded in the floor. Electromyography (EMG) measures the patterns of muscle activity while walking. In this paper we introduce a research project that brings modern object-oriented equation-based modeling and simulation technology (Modelica) into the learning process, by providing high school students the opportunity to experiment with physical phenomena by using interactive electronic books to learn about human gait. These interactive electronic book courses will allow experimentation and dynamic simulation, such as rigid body (musculoskeletal) simulations, in the same document that contains text, links, pictures, video, and virtual and scientific visualizations. The electronic book courses will provide problem-based learning for high school students focusing on human gait that integrates applied sciences in physics, mathematics and human biology.

Figure 1. (a) The gait cycle and (b) dynamic EMG graph showing average muscle activation times during the gait cycle
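The kinetics computation described above (joint moments from motion-capture and force-plate data) is an inverse-dynamics calculation. A minimal sagittal-plane sketch of its core step, the moment of the ground reaction force about a joint, using purely illustrative numbers:

```python
def grf_moment_2d(joint_pos, cop_pos, grf):
    """Sagittal-plane moment (N*m) of the ground reaction force about a joint:
    the z-component of r x F, where r runs from the joint to the centre of
    pressure. Positive values follow the right-hand rule."""
    rx = cop_pos[0] - joint_pos[0]
    ry = cop_pos[1] - joint_pos[1]
    return rx * grf[1] - ry * grf[0]

# Illustrative stance numbers: ankle 12 cm behind and 10 cm above the centre
# of pressure; GRF of 700 N vertical and 50 N horizontal (braking).
m_ankle = grf_moment_2d(joint_pos=(-0.12, 0.10), cop_pos=(0.0, 0.0),
                        grf=(-50.0, 700.0))   # 79.0 N*m
```

A full inverse-dynamics pipeline additionally propagates segment masses, inertias and accelerations from the foot upward; this sketch keeps only the GRF lever-arm term.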
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 685–688, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. TECHNOLOGY

A. Modelica

By 1996 several first-generation object-oriented mathematical modeling languages and simulation systems (ObjectMath [2], Dymola [3,4], Omola [5], NMF [6], gPROMS [7]) had been developed. However, the situation, with a number of different incompatible object-oriented modeling and simulation languages, was not satisfactory. Therefore, in the fall of 1996, a group of researchers from universities and industry started work towards standardization, making this object-oriented modeling technology widely available. The resulting language is called Modelica [8,9] and is designed primarily for modeling the dynamic behavior of engineering systems; it is intended to become a de facto standard. Compared to the most widespread simulation languages available today, this language offers three important advantages:
• Acausal modeling based on differential and algebraic equations – more general than traditional causal block-oriented modeling.
• Multi-domain modeling capability, i.e., it is possible to combine electrical, mechanical, thermodynamic, hydraulic, etc., model components within the same application model.
• A general type system that unifies object-orientation, multiple inheritance, components/connectors, and templates/generics within a single class construct.
Today several commercial implementations of Modelica (i.e., of subsets thereof) are available, for example Dymola [4] and MathModelica [10]. In the Gait E-book project the OpenModelica [11] software is used, which is the major Modelica open-source tool effort. The OpenModelica environment consists of several interconnected subsystems, as depicted in Fig. 2; arrows denote data and control flow. Several subsystems provide different forms of browsing and textual editing of Modelica code. The debugger currently provides debugging of an extended algorithmic subset of Modelica. The graphical model editor is not strictly part of OpenModelica but is integrated into the system and available from MathCore [10] without cost for academic usage. In the Gait E-book three parts of the OpenModelica subsystem are used:
• A Modelica compiler subsystem, translating Modelica to C code, with a symbol table containing definitions of classes, functions, and variables. Such definitions can be predefined, user-defined, or obtained from libraries.
• An execution and run-time module. This module currently executes compiled binary code from translated expressions and functions, as well as simulation code from equation-based models, linked with numerical solvers.
• A notebook model editor. This subsystem provides a notebook editor in which interactive hierarchical text documents with chapters and sections can be represented and edited. OMNotebook supports Modelica model simulations, text, images and interactive linking between documents.
B. OpenSim

OpenSim [12] is a new open-source software project that aims to provide a high-quality, easy-to-use bio-simulation tool for modeling and simulating motions and forces in a neuromusculoskeletal system. Currently the OpenSim software is aimed at biomechanics scientists, clinical users and software developers who are used to working with systems containing rigid bodies controlled using constraints and forces. In the Gait E-book a subset of the OpenSim functionality has been selected to create a suitable musculoskeletal modeling and simulation environment for high school education. The OpenSim interface has also been simplified to include only the basic and necessary functions, making the Gait E-book easier for students to use. When using the Gait E-book OpenSim plug-in, students will be able to create and control musculoskeletal simulations and save simulation results, animations and visualizations in the OMNotebook format, without handling different interfaces or file formats. Fig. 3a shows an image of an OpenSim beta version.
Fig. 2. OpenModelica architecture.
Fig. 3. (a) OpenSim software; (b) skeleton and muscular simulation

C. OMNotebook

OMNotebook [11] is one of the first open-source software systems that makes it possible to create interactive WYSIWYG books for teaching and learning programming. It has been used for course material (DrModelica [11]) in teaching the Modelica language (see Fig. 4), but can easily be adapted to electronic books teaching other programming languages, or even other subjects such as physics, chemistry, etc., where phenomena can be illustrated by dynamic simulations within the book. This could
substantially improve teaching in a number of areas, including biomechanics. Traditional teaching methods are often too passive and do not engage the student; typical examples are traditional lecturing and reading a textbook on a subject. While reading conveys the material, it does little to support active learning, and learning modeling and simulation in particular requires interaction and programming exercises in order to grasp the concepts. A third option is to make the book active: to run programs and exercises within the book, and to mix lecturing with exercises and reading in the interactive book. Traditional documents, e.g. books and reports, essentially always have a hierarchical structure. They are divided into sections, subsections, paragraphs, etc. Both the document itself and its sections usually have headings as labels for easier navigation. This kind of structure is also reflected in electronic notebooks. Every notebook corresponds to one document (one file) and contains a tree structure of cells. A cell can have different kinds of content, and can even contain other cells. The notebook hierarchy of cells thus reflects the hierarchy of sections and subsections in a traditional document such as a book. In the Gait E-book project an interactive notebook has been developed to see whether different technologies can be combined into a suitable educational environment, in which students do not have to focus on the technology but instead on the information. As described earlier, musculoskeletal simulation will be supported through the OpenSim system, which is integrated into OMNotebook using a client-server socket system (see Fig. 5). When learning about human gait, students have to be able to view filmed material, both to learn "normal" gait and to detect gait disorders. In the Gait E-book, movie support has been added using the ffmpeg [13] library. The ffmpeg project is an open-source project that enables the E-book to support multiple file formats, so students are not limited to a single locked file format. The OpenSim OMNotebook plug-in for viewing 3D skeleton/muscular models (Fig. 3b [14]) and simulation results will be supported through the OpenInventor library [15] and The Visualization Toolkit [16].

III. THE GAIT E-BOOK CURRICULUM

The Gait E-book focuses mainly on teaching junior and senior high school students who have previously taken basic physics and mathematics courses. The curriculum is constructed so that each chapter starts with a brief gait introduction followed by an introduction to the applicable physical laws which explain the motion.
Fig. 4 Bouncing ball example in OMNotebook
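The energy bookkeeping behind the bouncing-ball exercise can be sketched outside OMNotebook as well; a minimal Python version (mass, drop height and coefficient of restitution are illustrative assumptions, not values from the E-book):

```python
def bounce_energies(m=0.5, h0=2.0, e=0.8, n_bounces=4, g=9.81):
    """Total mechanical energy (J) at the apex of each flight of a bouncing
    ball. An impact with coefficient of restitution e scales the rebound
    speed by e, so the apex height (and energy) drops by e**2 per bounce."""
    energies = [m * g * h0]      # at the apex all energy is potential
    h = h0
    for _ in range(n_bounces):
        h *= e ** 2              # rebound height after the next impact
        energies.append(m * g * h)
    return energies

E = bounce_energies()            # strictly decreasing: the ball loses energy
```

Students can change `m`, `h0` or `e` and observe how the energy sequence changes, mirroring the parameter experiments described in the curriculum.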
At this stage a basic Modelica or OpenSim model is introduced that the student can study, simulate and change to gain an understanding of the physics concept. Currently seven chapters are planned for the complete Gait E-book. The first is an introduction to gait, including the gait cycle, weight acceptance, single limb support, gait kinematics, and muscle activity; this introductory chapter is constructed so that students understand how human locomotion is performed. This is followed by an introduction to physics and four chapters that focus on different aspects of gait and the application of basic physics concepts such as mass, velocity, acceleration, gravity, lever arms, kinetic and potential energy, pendulums and conservation of energy. Concepts such as Newton's first, second and third laws are covered, and the concept of a mass-spring simulation system is introduced and used to create simulation models. The final chapter engages the student to integrate the information in a puzzle format. One example is the chapter on gait kinetics, which is described using both potential and kinetic energy, by first introducing the center of mass and then using a simple physics example like the bouncing ball (Fig. 4). Students can easily grasp the concept of the two different types of energy and change parameters like mass and velocity to see how the energy changes. This gives the students a simple example that they can easily visualize as they use the Modelica model. When students have grasped these concepts they are introduced to the idea of energy conservation and have the opportunity to calculate the total energy of the bouncing ball and draw the conclusion that in each bounce the ball loses energy, in the same way we expend energy every time we take a step.

IV. CONCLUSIONS

In this paper we outline the basic ideas of the Gait E-book and how it is planned to be used in the future.
The interactive E-book system has been used successfully in both undergraduate/graduate and workshop courses on the Modelica language. The Gait E-book takes this further, integrating two complex simulation environments with innovative teaching concepts. An early prototype is being developed for evaluation in high school environments, both in the USA and in Sweden.
ACKNOWLEDGMENT

This project has been funded by a Planning Grant from the Wallenberg Global Learning Network.
REFERENCES

1. Rose J, Gamble J (2006) Human Walking, third edition. Lippincott Williams & Wilkins
2. Fritzson P, Viklund L, Fritzson D, Herber J (1995) High-Level Mathematical Modelling and Programming. IEEE Software 12(4):77–87
3. Elmqvist H (1978) A Structured Model Language for Large Continuous Systems. Ph.D. thesis, TFRT-1015, Dept. of Automatic Control, Lund Institute of Technology, Lund, Sweden
4. Elmqvist H, Bruck D, Otter M (1996) Dymola—User's Manual. Dynasim AB, Lund, Sweden
5. Mattsson S, Andersson M (1992) The Ideas Behind Omola. In: Proceedings of the 1992 IEEE Symposium on Computer-Aided Control System Design (CADCS '92), Napa, California, USA
6. Sahlin P (1996) Modelling and Simulation Methods for Modular Continuous Systems in Buildings. Ph.D. thesis, Dept. of Building Science, Royal Inst. of Technology, Stockholm, Sweden
7. Barton P, Pantelides C (1994) The Modelling of Combined Discrete/Continuous Processes. AIChE Journal 40:966–979
8. Fritzson P (2004) Principles of Object-Oriented Modeling and Simulation with Modelica 2.1. Wiley-IEEE Press, 940 pp
9. Modelica Association (2005) The Modelica Language Specification Version 2.2. http://www.modelica.org
10. MathCore Engineering AB (2006) MathModelica User's Guide. www.mathcore.com
11. Fritzson P et al (2006) OpenModelica Users Guide and OpenModelica System Documentation. www.ida.liu.se/projects/OpenModelica
12. OpenSim at https://simtk.org/home/opensim
13. FFMPEG, http://ffmpeg.mplayerhq.hu/
14. SIMM, http://www.musculographics.com/
15. Coin3D (OpenInventor), http://www.coin3d.org/
16. The Visualization Toolkit, http://public.kitware.com/VTK

Corresponding author:
Author: Anders Sandholm
Institute: Department of Computer and Information Science
Street: Linköpings universitet
City: Linköping
Country: Sweden
Email: [email protected]
Two-level control of bipedal walking model A. Olensek and Z. Matjacic Institute for Rehabilitation, Republic of Slovenia, Ljubljana, Slovenia
Abstract— This paper presents a two-level control strategy for a bipedal walking model that accounts for implicit control of push-off and power absorption on the between-step control level, and for tracking of holonomic constraints imposed on kinematic variables via feedback control on the within-step control level. The proposed control strategy was tested in a biologically inspired model with a minimal set of segments that allows the evolution of human-like push-off and power absorption. We evaluated the performance of the biped walking model in terms of how variations in torso position and gait velocity relate to push-off and power absorption. The results show that the proposed control strategy can accommodate various trunk inclinations and gait velocities in a similar way as seen in humans.

Keywords— push-off, power absorption, within-step control, between-step control
I. INTRODUCTION

Even though the biomechanics of human locomotion is well understood [1,2] and human walking appears plain, we do not yet have a full understanding of the control principles that underlie body support and forward propulsion in legged locomotion. This has motivated rapid progress in the design of numerous biped walking simulation models [3-6] that allow us to investigate how various control principles affect biped walking and how they relate to human walking. Such models aim to capture some general aspects of human locomotion, such as body support and upright torso position, as well as human-like walking. Inspired by human locomotion, by far the most energy-efficient biped walking models are the passive dynamic [5] and ballistic [6] walking models. They can generate natural dynamics and require little control effort, but simultaneously lack robustness and insensitivity to disturbances. On the other hand, more advanced biped walking models have their control system organized around a high-level trajectory generator, with low-level servoing implemented via PD control or feedback control [3]. The trajectories may be determined via analogy with human walking or calculated through optimization of certain cost criteria [4]. Such an approach has proven to produce valuable results in models with an anthropomorphic structure similar to the human one and a priori knowledge of the specific movement at hand.
However, trajectory tracking has proven weak in closed-loop systems, as it makes the system time-varying, and when disturbed the system has to regain synchrony with the reference trajectory. Instead, it has been shown that tracking an orbit parameterized with respect to a scalar-valued function of the states of the robot considerably improves stability [3]. Feedback control therefore involves defining a set of walking premises as a set of kinematic constraints that lead to exponentially stable walking when imposed on the robot via feedback control. The above control principle has been implemented and tested in simulation models with a few standard simplifications. Commonly, the contact point of the stance leg with the ground is modeled as a pivot point. To alleviate the control of biped walking models, their locomotion is confined to the sagittal plane only. Such models also lack a double support phase, assuming an instantaneous transition from the support phase to the swing phase. Consequently, they cannot adequately account for push-off and power absorption as seen in human locomotion, where lower extremity joint moments and powers contribute significantly to forward propulsion during push-off at the end of the single support phase and to power absorption during double stance, as well as to the overall stability of human locomotion [7]. Incorporating such human-like behavior into the control of a biped walking model represents a considerable challenge. The main aim of this paper is to develop a human-like control strategy for a biped walking model that incorporates implicit control of push-off at the end of single support and power absorption during double support. The performance of the biped walking model in different walking modes is compared to human walking in terms of ground reaction forces.

II. BIPED SIMULATION MODEL

The modeling approach presented in this paper is closely related to the work of Grizzle et al. [3].
The robot is considered bipedal and planar, with five degrees of freedom. It is assumed to have two telescopic legs that are connected at the hip by ideal revolute joints and carry the torso segment. There is a mass at the center of each leg, a mass at the hips, and a mass at the end of the torso segment. Finally, a force actuator acts along each leg and two torques act between the torso and each leg, but no actuation is applied at the contact point of
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 673–676, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1. Schematic representation of the biped model (leg angles $q_1$, $q_2$, torso angle $q_T$, leg lengths $L_1$, $L_2$, torso length $L_T$, hip position $x_H$, $z_H$, leg tip positions $x_{st}$, $z_{st}$ and $x_{sw}$, $z_{sw}$; masses $m_L$, $m_H$, $m_T$ and inertias $I_L$, $I_T$).

the leg with the ground, as it is considered a pivot point. A representative model structure is shown in Fig. 1. A gait cycle is divided into phases of single support and double support. The transition from the single support to the double support phase is referred to as the contact phase, while the transition from double support to single support is referred to as the take-off phase.

A. Single support phase model

Let $q = (q_1, q_2, q_T, L_1, L_2, x_H, z_H)^T$ be the set of coordinates describing the configuration of the robot with respect to a world reference frame and $u = (T_1, T_2, F_1, F_2)^T$ the torques between the torso and each leg and the forces in each leg, respectively. The stance leg contacting the ground throughout the single support phase adds two supplementary constraints of the form $\Psi_{st}(q) = 0$. They are introduced into the dynamic equations via Lagrange multipliers:

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = Bu + \Gamma_{ss}^T \lambda_{ss}$    (1)

where $M(q)$ is the inertia matrix, $C(q,\dot{q})$ is the matrix of centripetal and Coriolis terms, $G(q)$ is the gravity vector, $\Gamma_{ss} = \partial\Psi_{ss}/\partial q$, and $\lambda_{ss}$ is a vector of Lagrange multipliers equal to the negative ground reaction forces during single support. The model is written in state space form as

$\dot{x} = f_{ss}(x) + g_{ss}(x)u$    (2)

where $x = [q\ \dot{q}]^T$.

B. Contact phase model

We assume a rigid contact [8]. Hence, the angular momentum is conserved, leading to a relation connecting the velocity vectors just before ($\dot{q}^-$) and just after ($\dot{q}^+$) the impact:

$\begin{bmatrix} M & -\Gamma_c^T \\ \Gamma_c & 0 \end{bmatrix} \begin{bmatrix} \dot{q}^+ \\ F_c \end{bmatrix} = \begin{bmatrix} M\dot{q}^- \\ 0 \end{bmatrix}$    (3)

where $\Gamma_c = \partial\Psi_c/\partial q$ and $F_c$ represents the tangent and normal forces at the tips of both legs. While the configuration of the walker remains constant during contact, velocity discontinuities may arise.

C. Double support phase model

Both legs in contact with the ground throughout the double support phase introduce four constraints. They are expressed in matrix form as $\Psi_{ds}(q) = 0$ and introduced into the dynamic equations via Lagrange multipliers:

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = Bu + \Gamma_{ds}^T \lambda_{ds}$    (4)

where $\Gamma_{ds}\dot{q} = (\partial\Psi_{ds}/\partial q)\dot{q} = 0$ and $\lambda_{ds}$ is a vector of Lagrange multipliers equal to the negative ground reaction forces during double support. The model is written in state space form as

$\dot{x} = f_{ds}(x) + g_{ds}(x)u$    (5)

D. Take-off phase model

As in the contact phase, the configuration of the walker remains constant during the take-off phase, and the following relation may be solved for the velocities just after take-off:

$\begin{bmatrix} M & -\Gamma_{top}^T \\ \Gamma_{top} & 0 \end{bmatrix} \begin{bmatrix} \dot{q}^+ \\ F_{top} \end{bmatrix} = \begin{bmatrix} M\dot{q}^- \\ 0 \end{bmatrix}$    (6)

where $\Gamma_{top} = \partial\Psi_{top}/\partial q$ and $F_{top}$ represents the tangent and normal forces at the tip of the leg that remains in contact with the ground in the succeeding single support phase.

III. CONTROL STRATEGY

This section develops the two-level control strategy for a biped walking machine. On the lower level we adopt a similar
control principle as presented by Grizzle et al. [3] i.e. to impose kinematic constraints via feedback control, henceforth within-step control. Our proposal is to adaptively modify these constraints after each gait cycle in betweenstep control in such a manner to adjust forward propulsion to achieve desired gait velocity and step length control. A. Within-step control in single support phase In human walking one observes that the torso is maintained at nearly vertical position, the swing leg behaves roughly as mirror image of the stance leg, vertical hip movement is minimized and the end of the swing leg traces an approximately parabolic trajectory. These observations have been used to build a set of control objectives in the form of the following output functions: ⎤ ⎡qT − r1 ⎢q + q − r ⎥ 2⎥ sw := h(q) y = ⎢ st ⎥ ⎢ z sw − r3 ⎥ ⎢ ⎦ ⎣ L st − r4
(7)
Tracing reference trajectories r1 and r2 ensures constant angle of the torso with respect to the vertical, say qT ,d , and forward advancement of the hips while the swing leg moves from behind the stance leg to in front of it as mirror image of the stance leg. Later definition implies that q st is monotonically increasing function during single support phase. The legs are assumed telescopic and their length move around nominal leg length Lleg ,nominal . Telescopic movement of the swing leg is determined to assure sufficient swing leg clearance until the next contact. Upon defining a desired step length the contact time can be calculated at q st = q st ,d . Lengthening and shortening as governed by reference trajectory r4 determines a telescopic movement of the stance leg. It is defined as a fifth order polynomial r4 = Lst ,d (qst ) such that it assures continuity in position and velocity at the start of single support phase, the leg reaches a nominal leg length at the middle and the end of q st range and the leg lengthening velocity at q st = q st ,d equals the desired L st ,d , which is determined on the higher between step control level. The control objective is to drive the outputs y = h(q) to zero. Following the standard Lie derivative notation [9], the overall feedback applied is given by
u = -(L_{g_{ss}} L_{f_{ss}} h)^{-1} (L^2_{f_{ss}} h + K_{D,ss} L_{f_{ss}} h + K_{P,ss} h)    (8)
where L_g L_f h(q) is the decoupling matrix, which is assumed invertible, and K_{D,ss} and K_{P,ss} are positive definite gain matrices.
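Once the Lie-derivative terms have been evaluated at the current state, the feedback law (8) is a single linear solve. The sketch below (Python/NumPy, with hypothetical function and variable names not taken from the paper) shows the computation.

```python
import numpy as np

def within_step_torques(dec, Lf2h, Lfh, h, KD, KP):
    """Input-output linearizing feedback of Eq. (8):
    u = -(Lg Lf h)^(-1) (Lf^2 h + KD Lf h + KP h).
    dec  : decoupling matrix Lg Lf h, assumed invertible
    Lf2h : second Lie derivative of the outputs along f
    Lfh  : output velocities Lf h
    h    : output errors h(q)
    KD, KP : positive definite gain matrices"""
    return -np.linalg.solve(dec, Lf2h + KD @ Lfh + KP @ h)
```

With K_D and K_P positive definite and the decoupling matrix invertible, the closed-loop output dynamics reduce to ÿ + K_D ẏ + K_P y = 0, which drives the outputs y to zero.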
B. Within-step control in double support phase

In the double support phase we continue controlling the torso angle but suspend the mirror-like behavior of the stance and swing legs. To assure invertibility of the decoupling matrix, the torso angle control is encoded as

y = h(q) = \begin{bmatrix} q_T - r_1 \\ q_T + \eta q_{sw} - r_2 \end{bmatrix}    (9)
and choosing the constant \eta \ll 1 avoids singularity of the decoupling matrix. Following the steps of the previous section, the torques applied to the hip motors are then expressed as
\begin{bmatrix} T_{st} \\ T_{sw} \end{bmatrix} = -(L_{g_{ds}} L_{f_{ds}} h)^{-1} (L^2_{f_{ds}} h + K_{D,ds} L_{f_{ds}} h + K_{P,ds} h)    (10)
The forward dynamics assumes an exponentially increasing F_{sw} and a parabolically decreasing F_{st}. The double support phase is considered terminated when F_{st} reaches zero.
C. Between-step control

Between-step control adaptively varies the desired stance leg lengthening velocity at the end of the single support phase, \dot{L}_{st,d}, in the sense that a greater \dot{L}_{st,d} produces a more pronounced push-off. Such a control strategy allows us to influence forward propulsion so as to assure constant gait velocity. Between-step control can be expressed as

\dot{L}^k_{st,d} = \dot{L}^{k-1}_{st,d} + k_p (v^{k-1}_{gait} - v_{gait,d}) + k_d (v^{k-1}_{gait} - v^{k-2}_{gait})    (11)

where v^k_{gait} is the average gait velocity in the k-th step.
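Numerically, the update (11) is a discrete PD-style law evaluated once per gait cycle. A minimal sketch (hypothetical names and gain values, not the authors' code):

```python
def between_step_update(Ldot_prev, v_prev, v_prev2, v_des, kp, kd):
    """One between-step update of Eq. (11): adapt the desired stance-leg
    lengthening velocity at push-off from the previous cycle's gait-velocity
    error (proportional term) and the cycle-to-cycle velocity change
    (derivative-like term)."""
    return Ldot_prev + kp * (v_prev - v_des) + kd * (v_prev - v_prev2)
```

The update runs after each step k, with v_prev the average gait velocity of step k-1; the signs and magnitudes of k_p and k_d must be tuned so that a velocity error is corrected through the push-off.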
IV. RESULTS

The performance of between-step control is shown in Figure 2. There is a considerable discrepancy between v^k_{gait} and v_{gait,d} in the first few steps, until the adaptive control of \dot{L}_{st,d} takes effect, leading to a gradual convergence to stable walking at the desired gait velocity, with a somewhat shorter step length than desired.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A. Olensek and Z. Matjacic
Figure 3a shows that a more pronounced push-off as well as more power absorption is necessary if walking is to be faster. On the other hand, when the torso is inclined anteriorly, more power absorption and a less pronounced push-off are needed to maintain constant gait velocity (Figure 3b).

[Figure 2 — two panels versus walking cycle (0-60): top, \dot{L}^k_{st,d} (0-0.8); bottom, v^k_{gait} (0.6-1.4).]

Fig. 2. Between-step control performance in the first 60 walking cycles, v_{gait,d} = 1 m/s.

[Figure 3 — panels of GRFx and GRFz (N) versus % of gait cycle for v_{gait,d} = 0.9, 1.0 and 1.1 m/s (a) and for q_{T,d} = 0, 0.1 and 0.2 rad (b), together with human data.]

Fig. 3. Relation between gait velocity and power absorption and push-off (a) and relation between torso position and power absorption and push-off (b). For comparison, average human ground reaction forces are shown (adapted from Winter [7]).

V. CONCLUSION

The main contribution of this paper is the introduction of between-step control in a way that enables adaptive control of gait velocity. In contrast to similar bipedal models, where kinematically based trajectory tracking has predominantly been used, we placed the proposed two-level control strategy within a kinetic framework. The proposed control strategy has proven capable of generating human-like behavior in the push-off and power absorption pattern to account for desired gait velocity and torso position variations.

ACKNOWLEDGEMENTS

This research was supported by the Slovenian Research Agency (grant P2-0028).

REFERENCES

1. Zajac F E, Neptune R R, Kautz S A (2002) Biomechanics and muscle coordination of human walking Part I: Introduction to concepts, power transfer, dynamics and simulations. Gait Posture 16:215-232
2. Zajac F E, Neptune R R, Kautz S A (2003) Biomechanics and muscle coordination of human walking Part II: Lessons from dynamical simulations and clinical implications. Gait Posture 17:1-17
3. Grizzle J W, Abba G, Plestan F (2001) Asymptotically stable walking for biped robots: analysis via systems with impulse effects. IEEE Trans Automatic Control 46:51-64
4. Gilchrist L A, Winter D A (1997) A multisegment computer simulation of normal human gait. IEEE Trans Rehab Eng 5:290-299
5. McGeer T (1990) Passive dynamic walking. Int J Robotics Research 8:68-83
6. Mochon S, McMahon T A (1980) Ballistic walking. J Biomech 13:49-57
7. Winter D A (1983) Biomechanical motor patterns in normal walking. J Motor Behav 15:302-330
8. Hurmuzlu Y, Marghitu D B (1994) Rigid body collisions of planar kinematic chains with multiple contact points. Int J Robotics Res 13:82-92
9. Isidori A (1989) Nonlinear control systems: an introduction. 2nd edn. Springer-Verlag, Berlin
Author: Andrej Olenšek
Institute: Institute for Rehabilitation, Republic of Slovenia
Street: Linhartova 51
City: Ljubljana
Country: Slovenia
Email: [email protected]
Vertical unloading produced by electrically evoked withdrawal reflexes during gait: preliminary results

J. Emborg, E. Spaich and O.K. Andersen

Center for Sensory-Motor Interaction (SMI), Aalborg University, Denmark

Abstract— The objective of this study was to investigate the capability of Force Sensitive Resistors (FSRs) to characterize perturbations of gait produced by electrically evoked withdrawal reflexes. Reflexes were evoked by cutaneous electrical stimulation at 4 locations on the sole of the left foot, delivered at 3 different phases in early swing. Force changes were measured by 4 FSRs attached to each foot. The vertical ground reaction force was estimated by summing the force signals for each foot separately. Response measures were mean peak force and peak slope evaluated in 5 gait sub-phases immediately after the stimulus: 1st double support, 1st right single support, 2nd double support, 1st left single support and 3rd double support. The results showed that perturbations of gait could be detected by the selected features and that stimulation led to a weight shift from the ipsilateral to the contralateral leg, starting with a more rapid unloading of the ipsilateral leg supported by a rapid loading of the contralateral leg in the 1st double support. This was followed by a 48 N lower peak force for the ipsilateral leg and a 34 N higher peak force for the contralateral leg in the 1st right single support and 2nd double support phases. Further, the peak force in the 1st right single support showed a significant dependency on the stimulation phase.

Keywords— Nociceptive withdrawal reflex, Ground reaction force, Human locomotion, Reflex modulation.
I. INTRODUCTION

Recent studies of the lower limb nociceptive withdrawal reflex (NWR) elicited by painful electrical stimulation of the sole of the foot have indicated a modular organization of the NWR [2,3,7]. Reflex responses have been characterized by evaluating EMG from lower limb muscles and angular displacements of the hip, knee and ankle joints. In [1] force measurements were assessed, but only during symmetrical stance. To examine whether reflex force changes can be observed during gait perturbed by electrical stimulation, this study examined the applicability of force sensitive resistors (FSRs), placed at anatomically discrete points [4,5,8], to characterize changes in the vertical ground reaction force (GRF) in healthy subjects. The study also evaluated the FSRs' capability to provide feedback signals in a closed-loop system for control of the stimulation parameters in a NWR rehabilitation system.

II. METHODS & MATERIALS

A. Experimental setup

Subjects: Eight healthy male volunteers without known neurological disorders participated in the study (mean age 26, range 21-46 years; mean weight 75.5 kg, range 71-81 kg). Informed consent was obtained from all subjects and the Helsinki Declaration was respected.

[Figure 1 — A: stimulation sites S1-S4 on the left sole and FSR positions F1-F4 (left foot) and F5-F8 (right foot); B: total vertical force during treadmill gait at 3 km/h with stimulus onsets P1-P3 and support windows W1-W5 (1st double support, 1st right single support, 2nd double support, 1st left single support, 3rd double support) for the right and left legs over time.]

Fig. 1. A: Positioning of stimulation electrodes and FSRs (S1: 3rd MP, S2: arch of the foot, S3: calcaneus, S4: posterior calcaneus; F1 & F5: toe, F2 & F6: 1st MP, F3 & F7: 4th MP, F4 & F8: calcaneus). B: Sketch of onset phases and support types during normal unperturbed gait (P1: 10% heel off to toe off, P2: 50% heel off to toe off, P3: toe off on the ipsilateral leg).

Electrical stimulation: The nociceptive withdrawal reflex was elicited by cutaneous electrical stimulation delivered in random order to four sites on the sole of the left foot (Fig. 1A) [6,7]: the 3rd metatarsophalangeal (MP) joint, the medial arch of the foot, the plantar side of the calcaneus, and the posterior side of the calcaneus. The stimulation was delivered through self-adhesive electrodes (2.63 cm² surface area, Ag-AgCl, AMBU, Denmark), with a common reference electrode (7 x 10 cm, Pals, Axelgaard Ltd., USA) placed on the dorsum of the foot. Each stimulus
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 669–672, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
consisted of a constant current pulse train (Noxitest, Aalborg, Denmark) of five individual 1 ms pulses delivered at 200 Hz. This burst was repeated four times at a frequency of 15 Hz [7]. The stimulation intensity at the individual electrode sites was normalized to the pain threshold for one burst. The pain threshold was determined using a staircase method consisting of a series of increasing and decreasing stimuli [8]; the threshold was determined with the volunteers in a sitting position.

Stimulation phase: Stimulations were delivered in three phases of the gait cycle; phase 1: 10% of the heel off/toe off period, phase 2: 50% of the heel off/toe off period, and phase 3: toe off on the ipsilateral leg. These phases were calculated based on the average timing of 15 steps of normal, unperturbed gait measured prior to the experiment. Stimulation was delivered in a random sequence, repeating each combination of stimulation site and phase seven times, resulting in a total of 84 stimulations. The inter-stimulus interval was randomized between 8 and 12 steps. To keep a constant cadence, gait was performed on a treadmill at a velocity of 3 km/h and the subjects were instructed to maintain a constant step length.

Force measurements: Force changes between the sole of the foot and the floor were recorded from four anatomically discrete points under each foot by means of FSRs. The FSRs (LuSense PS3 Standard 174; area 2.48 cm², thickness 0.2 mm, measurement range 2.5-500 N) were attached firmly with adhesive tape to the great toe, the distal ends of the 1st and 4th MP joints, and the medial process of the calcaneus (Fig. 1A). The FSR signals were sampled at 2 kHz, displayed on a monitor and stored for later analysis. Data was acquired starting 4 seconds prior to the stimulation onset and ending 4 seconds after the onset.

B. Data analysis

The FSR voltage signals containing an unperturbed control step and the 1st and 2nd post-stimulation steps were low-pass filtered (Butterworth, 25 Hz, sixth order, no phase lag) and converted to force by a first-order linear polynomial, with coefficients determined prior to each experiment by measuring the voltages corresponding to static forces of 100 N and 450 N. The sums of forces for all unperturbed steps of the right and left legs of each subject separately (estimates of the vertical component of the GRF) were averaged, and the peak value was used to normalize the recordings to 100% bodyweight in order to decrease the inter-limb and inter-subject variability. The force response was quantified by two features. Peak Force: the peak force change between the unperturbed control step and the post-stimulation steps. Peak Slope: the peak force change calculated in a 10 ms sliding window.
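The two-point voltage-to-force calibration and the sliding-window Peak Slope feature described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the authors' implementation (function names and the example values are assumptions, and the 25 Hz zero-phase Butterworth filtering step is omitted):

```python
import numpy as np

def fsr_calibration(v_at_100N, v_at_450N):
    """First-order (linear) voltage-to-force map fitted to the two
    static calibration loads of 100 N and 450 N."""
    gain = (450.0 - 100.0) / (v_at_450N - v_at_100N)
    offset = 100.0 - gain * v_at_100N
    return lambda v: gain * np.asarray(v, dtype=float) + offset

def peak_slope(force, fs=2000, window_ms=10):
    """Largest signed force change over a sliding window of window_ms
    milliseconds, for a force signal sampled at fs Hz (2 kHz here)."""
    n = int(round(fs * window_ms / 1000.0))   # samples per window
    diffs = force[n:] - force[:-n]            # force change across each window
    return diffs[np.argmax(np.abs(diffs))]
```

Peak Force is then simply the extreme value of the difference between a post-stimulation step and the control step.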
The analysis was performed in five time windows determined by the type of support present during the gait cycle (Fig. 1B): W1: 1st double support, W2: 1st right single support, W3: 2nd double support, W4: 1st left single support, and W5: 3rd double support. The support phases were determined by applying thresholds to the signals recorded from the force sensors F1, F4, F5 and F8.

Statistical analysis: The capability of each feature to detect changes produced by the perturbation (electrical stimulation) was examined by comparing each feature from perturbed vs. unperturbed steps across subjects, phases and sites in each analysis window. The comparison was made by paired t-tests (PTT). When the PTT showed significance, two-way repeated measures analysis of variance (RM ANOVA) was used to analyze the effect of stimulation site and phase on the force response. The Student-Newman-Keuls method (SNK) was used for post-hoc pairwise comparisons. Results are presented as mean ± standard error of the mean (SEM). P<0.05 was considered statistically significant.

III. RESULTS

Peak Force was capable of detecting perturbations in both legs in W2 and W3. In W2, a 48±7 N unloading of the ipsilateral leg (PTT, p<0.001) combined with an additional loading of the contralateral leg of 34±6 N was observed (PTT, p<0.001). In W3, the unloading of the ipsilateral leg was 28±6 N (PTT, p<0.01), associated with a contralateral additional loading of 41±5 N (PTT, p<0.01).

Peak Slope shows differences in the loading and unloading velocity and was calculated only in the first double support (W1), as only responses evoked by stimulations delivered in phase 1 can realistically give a response within this window. This feature could detect perturbations in W1 produced by stimulation in phase 1, manifested by a more rapid unloading of the ipsilateral leg, with a peak slope difference between the unperturbed and the perturbed leg of 8±2 N/10 ms (PTT, p<0.001). Loading of the contralateral leg was achieved with an additional slope increment of 4±1 N/10 ms (PTT, p<0.001).

A. Site and phase modulation of the response

No reflex modulation was observed in Peak Force for the ipsilateral leg in W2 and W3. For the contralateral leg, Peak Force showed that the response depended on the phase during the 1st single support of this leg (i.e. W2; RM ANOVA, p<0.01), where the mean peak force for phase 1 stimulation, across stimulation sites, was 44±8 N higher than for phase 3 stimulation (SNK, P<0.01) and
27±8 N larger than for phase 2 stimulation (SNK, P<0.05). In W3 no site or phase modulation was observed.

Peak Slope was analyzed for site modulation in W1, for phase 1 only, and showed a near-significantly (RM ANOVA, p=0.052) faster unloading velocity for the ipsilateral leg when stimulation was delivered at site 4 compared with site 3 (SNK, p=0.061). The peak force changes in the contralateral leg did not depend on stimulation site.

[Figure 2 — average vertical load shift: total vertical load (kg) in each analysis window W1-W5, for the left and right legs under control and stimulation conditions, with the mean subject weight indicated.]

Fig. 2. Average (across subjects) Peak Force values extracted in each of the five windows (W1 to W5). Error bars indicate the SEM of the calculated values. The lines connecting data points are interpolated. Values are normalized to the average subject weight (75.5 kg). Stimulation was delivered on the sole of the left foot.

Table 1. Detected perturbations

Leg            Feature     W   Mean   SEM   P-value
Ipsilateral    Peak slope  1   -8     2     <0.0001
Ipsilateral    Peak force  2   -48    7     <0.0001
Ipsilateral    Peak force  3   -28    6     <0.0016
Contralateral  Peak slope  1   +4     1     <0.0001
Contralateral  Peak force  2   +34    6     <0.0001
Contralateral  Peak force  3   +41    5     <0.005

Significant differences between the unperturbed control step and the post-stimulation step, analyzed in different windows (W). P-values are for paired t-tests.

IV. CONCLUSIONS

This study provides evidence that Force Sensitive Resistors mounted on the sole of the foot can be used to characterize perturbations produced by electrically evoked withdrawal reflexes during gait. Stimulations during gait led to a weight shift from the ipsilateral to the contralateral leg in the 1st double support and 1st right single support phases, indicated by a faster unloading slope and a lower peak force in the ipsilateral leg. This was associated with a faster loading slope and a higher peak force in the contralateral leg. The study showed a clear phase dependency, since a strong significance for phase modulation was seen in the 1st right single support phase. The study added further evidence for a modular organization of the nociceptive withdrawal reflexes, since a near-significant tendency for stimulation site modulation was observed in the 1st double support phase of the gait cycle. Even though significant changes were observed in this study, it still remains to be analyzed whether the variability within subjects is low enough to allow FSRs to be used in a closed-loop system controlling stimulation of the NWR.

REFERENCES

1. Andersen O K, Spaich E G, Madeleine P, Arendt-Nielsen L (2005) Gradual enlargement of human withdrawal reflex receptive fields following repetitive painful stimulation. Brain Research 1042(2):194-204
2. Andersen O K, Sonnenborg F, Matjačić Z, Arendt-Nielsen L (2003) Foot-sole reflex receptive fields for human withdrawal reflexes in symmetrical standing position. Experimental Brain Research 152(4):434-443
3. Andersen O K, Sonnenborg F A, Arendt-Nielsen L (1999) Modular organization of human leg withdrawal reflexes elicited by electrical stimulation of the foot sole. Muscle and Nerve 22(11):1520-1530
4. Kiriyama K, Warabi T, Kato M, Yoshida T, Kobayashi N (2004) Progression of human body sway during successive walking studied by recording sole-floor reaction forces. Neuroscience Letters 359(1-2):130-132
5. Kobayashi N, Warabi T, Kato M, Kiriyama K, Yoshida T, Chiba S (2006) Posterior-anterior body weight shift during stance period studied by measuring sole-floor reaction forces during healthy and hemiplegic human walking. Neuroscience Letters 399(1-2):141-146
6. Spaich E G, Hinge H H, Arendt-Nielsen L, Andersen O K (2006) Modulation of the withdrawal reflex during hemiplegic gait: Effect of stimulation site and gait phase. Clinical Neurophysiology 117(11):2482-2495
7. Spaich E G, Collet T, Arendt-Nielsen L, Andersen O K (2005) Repetitive painful stimulation evokes site, phase, and frequency modulated responses during the swing phase of the gait cycle: preliminary results. In: Gantchev N (ed.) From Basic Motor Control to Functional Recovery IV. Marin Drinov Academic Publishing House, Sofia, pp. 136-140. ISBN 954-322-095-6. Presented at the Motor Control Conference 2005, MCC 2005, 21-25 September 2005, Sofia, Bulgaria
8. Spaich E G, Andersen O K, Arendt-Nielsen L (2004) Tibialis Anterior and Soleus Withdrawal Reflexes Elicited by Electrical Stimulation of the Sole of the Foot during Gait. Neuromodulation 7(2):126-132
9. Warabi T, Kato M, Kiriyama K, Yoshida T, Kobayashi N (2004) Analysis of human locomotion by recording sole-floor reaction forces from anatomically discrete points. Neuroscience Research 50(4):419-426
Author: Jonas Emborg
Institute: Center for Sensory-Motor Interaction (SMI)
Street: Fredrik Bajers Vej 7 D3
City: DK-9220 Aalborg
Country: Denmark
Email: [email protected]
A standard tool to interconnect clinical, genomic and proteomic data for personalization of cardiac disease treatment

M. Giacomini¹, F. Lorandi¹ and C. Ruggiero¹

¹ Department of Communication, Computer and System Sciences, University of Genoa, Italy
Abstract— In the post-genomic era a new type of medicine was born: "personalized medicine". It makes it possible to study how to synthesize new drugs for the treatment of different kinds of diseases at the individual level. This requires handling a large amount of data coming from genomic and proteomic analyses. For this purpose we propose a way to make data available in a standardized format, using the international standard HL7. Another important topic is the link between clinical and genomic/proteomic data, to provide a general view that facilitates their analysis.

Keywords— Pharmacogenomics, personalized medicine, HL7, Web services.
I. INTRODUCTION

February 2001 was an important date in the history of medicine: the publication of the paper in Nature which reported the initial sequencing of the Human Genome. In practical terms this date represents the dawn of the "New Medicine", i.e. molecular-based medicine. A new branch of pharmaceutics, called pharmacogenomics, has emerged. Pharmacogenomics deals with the influence of genetic variation on drug response in patients, by correlating gene expression and/or single-nucleotide polymorphisms with a drug's efficacy or toxicity. By doing so, pharmacogenomics aims to develop rational means to optimise drug therapy with respect to the patient's genotype, to ensure maximum efficacy with minimal adverse effects. Such approaches promise the advent of "personalized medicine", in which drugs and drug combinations are optimised for each individual's unique genetic makeup [1]. This idea has triggered many works all over the world, with strong competition amongst groups of scientists. In this context the European Union decided to finance a project (CardioWorkBench) which aims to improve the target selection/validation process and optimise drug design for cardiovascular diseases. It has two main goals: the first is the identification of new molecular therapeutic targets and new bioactive compounds with drug potential. The second goal is more theoretical and involves the possibility of performing an 'in silico' investigation of drug effects by using an integrative systems biology approach.
The efficacy of "personalised medicine" depends on:

• the relation between the disease's features and the individual's features;
• the possibility of a wide diffusion of this data, in order to involve a great number of groups with different skills and so make optimal use of the data itself possible.

For these reasons it is necessary to have:

• an "intelligent" link between clinical data and genomics/proteomics information;
• a standardized tool for the communication of the available information.
The objective of this document is to propose a solution, within the CardioWorkBench project, to both aspects of the problem, using a standard widely adopted in health care: HL7 (Health Level 7).

II. MATERIALS AND METHODS

The standard used is HL7 version 3, an object-oriented standard based on a specific information model called the Reference Information Model (RIM). The RIM encompasses the HL7 domain of interest as a whole. It is a coherent, shared information model that is the source of the data content of all HL7 messages. It is intentionally abstract, allowing it to represent the richness of the information topics that must be shared throughout the health system [2]. The HL7 Technical Committee defined a set of specific domains derived from the RIM, each representing a particular area of interest in health care. The domain of interest in this situation is the Clinical Genomics Domain. It addresses requirements for the interrelation of clinical and genomic data at the individual level. Much of the genomic data is still generic; for example, the human genome is in fact the DNA sequence believed to be common to every human being. The vision of "personalized medicine" is based on correlations that make use of personal genomic data, such as the SNPs (Single Nucleotide Polymorphisms) that differentiate any two individuals and occur about every thousand bases. Besides normal differences, health conditions such as drug sensitivities, allergies and others could be attributed to individual SNPs or to differences in gene expression and proteomics [3]. Using this domain it is possible to solve the problem of the "intelligent" link between clinical data and genomics/proteomics information in a standardized way.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 693–695, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

In order to make this data accessible to all the partners of the project, it is necessary to have a web-based solution: a web service. A web service is a software system designed to support interoperable machine-to-machine interaction over a network. Web services are frequently just application programming interfaces (APIs) that can be accessed over a network, such as the Internet, and executed on a remote system hosting the requested services [4].

III. RESULTS

All consortium partners agreed to share local data related to the project in a common centralized database. This database is located in the protected area of the MEDINFO laboratory at DIST and is implemented in SQL Server 2005. All data is provided to MEDINFO in a plain Excel file, whose structure has been agreed with each submitting partner. In order to insert this data into the project database and to make explicit all semantic links present in the data submitted from the different sites, we have designed specific data parsers. In this consortium many groups are in charge of performing data mining sessions on the global corpus of the data in order to define possible molecular targets, which should be specific for the disease but, even more importantly, specific for one patient (or at least a group of patients with similar genetic and proteomic profiles). Since it is foreseeable that in the future this data mining activity will also be performed by other groups (of course after the end of the project), it is mandatory to export this data in a standardized format. For this purpose we have decided to use HL7 because of its generality.
Fig. 1 Data Management Schema
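As a flavour of what such an export could look like, the fragment below builds a toy XML observation with Python's standard library. The element and attribute names (and the example SNP identifier) are simplified placeholders chosen for illustration; they do not reproduce the normative HL7 v3 Clinical Genomics schema.

```python
import xml.etree.ElementTree as ET

def snp_observation(patient_id, snp_id, genotype):
    """Toy XML fragment in the spirit of an HL7 v3 genomic observation;
    element names are illustrative placeholders, not the real schema."""
    obs = ET.Element("observationEvent")
    ET.SubElement(obs, "id", extension=patient_id)          # anonymous patient key
    ET.SubElement(obs, "code", code=snp_id, codeSystem="dbSNP")
    ET.SubElement(obs, "value").text = genotype
    return ET.tostring(obs, encoding="unicode")
```

A real message would be validated against the published XSDs and wrapped in a SOAP envelope by the web service.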
Specifically, we propose to use:

• general HL7 schemas;
• the Clinical Genomics Domain;
• the Order and Observations Domain;

to export patients' data coming from different sources (clinical wards, proteomics laboratories, genetic laboratories, ...). HL7 solves the problem of standardization, but also makes it possible to link data to each specific provider, which will be responsible for any further control. It is also useful to keep data in an anonymous format and, at the same time, to have access to further personal data (if needed) via the responsible data provider. The output structure (web service and SOAP) makes it possible for us to control access to the data: only the project's partners are allowed to view the information. Moreover, the SOAP protocol, with specific XML-formatted data files, will make available both the real data and their semantic schemas (XSDs) to the colleagues currently in the consortium (and to possible future colleagues) in charge of further data mining studies.

IV. CONCLUSIONS

The work presented here is within a project financed by the European Union and is aimed at exporting data in a formatted and standardized way. The project is in its first year, so only a little experimental data is present in the database. Moreover, because of the incomplete and non-uniform data insertion, not all foreseeable links within the contents of the fields are yet evident. With the data present at the moment, a first prototype of an HL7 message will be available soon. Public news about the availability of the first HL7 messages will be published
in the papers section of the CardioWorkBench site (http://www.cardioworkbench.eu).
ACKNOWLEDGMENT

The presented work has been financed by the European Union within the CardioWorkBench project, defined as follows:

• Proposal/Contract no.: PL 018671;
• Sixth Framework Programme, Priority 1;
• Life Sciences, Genomics and Biotechnology for Health.
REFERENCES

1. O'Shaughnessy K M (2006) HapMap, pharmacogenomics, and the goal of personalized prescribing. British Journal of Clinical Pharmacology 61(6):783-786
2. Beeler G, Case J, Curry J, Hueber A, Shakir A M (2006) HL7 Reference Information Model. HL7 version 3.0, September 2006 ballot site
3. Shabo A, Elkin P, Kaufman J, Whyte S (2006) Domain: Clinical Genomics. HL7 version 3.0, September 2006 ballot site
4. Meier J D, Vasireddy S, Babbar A, Mackman A (2004) Improving .NET application performance and scalability. Microsoft MSDN library

Author: Mauro Giacomini
Institute: Dept. of Communication, Computer and System Sciences, University of Genova
Street: Via All'Opera Pia 13
City: Genova
Country: Italy
Email: [email protected]
Adaptive Altered Auditory Feedback (AAF) device based on a multimodal intelligent monitor to treat permanent developmental stuttering (PDS): A critical proposal

Manuel Prado, Laura M. Roa

Biomedical Engineering Group, Network Center of Biomedical Research in Bioengineering, Biomaterials and Nanomedicine (CIBER-TEC) and University of Seville, Spain

Abstract— Although the first references to stuttering appear in the ancient Egyptian and Chinese civilizations, the mechanisms underlying Permanent Developmental Stuttering remain unknown. This paper concisely reviews the causal hypotheses and treatments of stuttering, and presents the specifications that a stuttering therapeutic device should satisfy in the light of the current scenario of this disorder. As a result of the analysis, an adaptive altered auditory feedback device based on a multimodal intelligent monitor, within the framework of a knowledge-based telehealthcare system, is shown. A critical analysis, based partly on the successful outcomes of a similar intelligent monitor, supports the feasibility of this novel proposal, as well as its capacity to assist in the translation of knowledge between research and clinic.

Keywords— Adaptive stuttering therapy, Multimodal monitoring, Wearable intelligent device, Multitier architecture, Telehealthcare.
I. INTRODUCTION Stuttering is a disorder that affects approximately 1 % of world adult population, with men prevalence higher than women (1/4) [1]. This paper is about permanent developmental stuttering (PDS), which starts during the childhood and develops during the maturation and evolution of the subject that suffers it. This is the most usual type of stuttering. An integrative conception and multicausal origin of stuttering that joins external (environmental) with internal (innate or biological) factors, supports current stuttering therapies, which accordingly combine direct treatments (logopedic) with indirect treatments (psychological). A review of the current scenario around the world can be seen in [2]. A new research line of studying the stuttering appeared at the second halt of 1990s, with the well-known work of Fox et al. [3]. By contrasting stuttering with fluent speech using positron emission tomography (PET), combined with chorus reading for inducing fluency, the authors found a lack of left-lateralized activation of the auditory system, characterized by an underactivation of a frontal-temporal system
implicated in speech production (left side), and an overactivation of the premotor cortex (operculum and insula) with right cerebral dominance. However, the origin of stuttering remains unknown, and authors continue to debate the nature of the observed brain images, i.e., whether they reflect a cause or an effect (compensatory or otherwise) [4]. This obscurity explains the large number of stuttering therapies, which vary with the individual speech-language professional, and the high percentage (close to 100 %) of relapse in adults with PDS. This lack of success has promoted, on the one hand, the development of self-help groups, in person or virtual, thanks to the new information and communication technologies (ICTs). The importance and awareness of this disorder is growing, as demonstrated by the triennial world congresses promoted since 1986 by the International Stuttering Association (ISA) and the European League of Stuttering Associations (ELSA), as well as by the creation of local associations such as the Fundación Española de la Tartamudez (www.ttmespana.com), instituted in February 2002. This paper proposes and discusses a new technology founded on knowledge-based telehealthcare, which seeks to deliver a multimodal, personalized, and adaptive therapy for stuttering, and to help discover knowledge that sheds light on its etiology. As a consequence of the complexity and extension of this research line, the present paper must be understood as an introductory and brief proposal.
II. CAUSAL HYPOTHESIS
The goal of this Section is to justify the causal hypothesis of stuttering on which our therapeutic solution is based. Although the proposed device is not restricted to that model, because it also seeks to support the discovery of the etiology of the disorder, stating an initial biomedical framework simplifies the criteria on which the technical requirements will be based.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 712–715, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
In a simplified manner, the possible reasons for the differences in neuronal patterns between fluent and PDS subjects can be classified into three cases. The first refers directly to a neuronal flaw in those areas. A model concerning this possibility suggests that the right-hemisphere overactivation is a compensatory mechanism associated with a disturbed timing of activation in speech-relevant areas [5-7]. As a second possibility, the pattern differences could be a secondary effect related to anticipation, avoidance or similar behaviors of PDS subjects. This line of research suggests that the subject should be able to re-learn to control his speech by means of speech therapy (direct) and/or psychotherapy. However, considering that this approach has been tried since the beginning of the 20th century without success, this hypothesis seems flawed. A third possibility is that the observed pattern differences are induced by a neuronal distortion or maladjustment in another brain area. Such a maladjustment need not be innate, nor does it really involve a flaw. The relevance of the brain amygdala in classical conditioning and in rapid responses to conditioned stimuli, and its involvement in fear behavior, with diffuse projections to a variety of autonomic and skeletomotor control centers, including the periaqueductal grey, which mediates the freezing response, suggests that this circuit could be responsible for PDS by means of a reactive inhibition. To the best of our knowledge this kind of hypothesis has only been proposed by Dodge [8]. Moreover, in our opinion this hypothesis could explain other types of dysfluency that are less known because they affect not oral but other communicative tasks, such as writing and even playing music. In summary, in agreement with Dodge [8], we consider such a reactive inhibition model the most plausible hypothesis to explain stuttering. III.
METHODS
Any new therapeutic proposal should be based on a causal hypothesis consistent with the well-known characteristics of stuttering. It should also allow knowledge to be discovered in order to improve, validate, or reject the underlying model, in such a way that it fills the gap between clinical application and scientific research on PDS. This is the first requirement that results from the previous scenario. Assuming the reactive inhibition model of stuttering, associated with the limbic system, the therapy needs to be delivered in the usual environment of the PDS subject. These two requirements call for a discreet and wearable technological solution with enough processing capacity (intelligence) to extract knowledge concerning a particular subject and context.
The effect that altered auditory feedback (AAF) has on the reduction of severe dysfluencies and on the normalization of PDS neural patterns [9, 10], together with the feasibility of developing wearable, non-invasive, and discreet AAF devices [11], points to this technique as a good therapeutic proposal. We propose a methodology based on the paradigm of knowledge-based telehealthcare, whose foundation was presented in earlier works in the chronic renal area [12, 13]. The main characteristic of this healthcare paradigm is the capability to generate real-time and adaptive knowledge. It is achieved using a multitier processing architecture, where the first (composite) layer is defined by a set of multimodal intelligent monitors that send the measured and processed data to the second layer, based in turn on systemic dynamic mathematical models [14] encapsulated into computational components called PPIs. A recent study has shown the advantages of this knowledge-based paradigm [15]. Our therapeutic proposal will use an AAF device based on a multimodal intelligent monitor, adapted in real time to each PDS subject, with the aim of minimizing dysfluencies. The multimodal capacity must allow the monitoring of other biomedical signals besides vocal sound, such as the EEG and human body accelerations, at the request of speech-language professionals. The multimodal capacity seeks to optimize the adaptive function of the AAF device, as well as to generate a more complete image of the evolution of the PDS subject, in agreement with the first requirement.
IV. RESULTS
This Section describes several relevant issues concerning the technology of the AAF device based on a multimodal intelligent monitor. The latter is an extension of the patented accelerometer-based human movement monitor previously cited [15, 16]. The proposed AAF device differs from the human movement monitor only in the wireless communication technologies among internal devices and in the use of an intelligent headset (IHS).
The latter is an extended headset that implements an adaptive AAF algorithm. The IHS therefore comprises an intelligent sensor (IS) together with an effector element (a headphone); accordingly, the term monitor refers to both except where otherwise indicated. The monitor is composed of a personal server element (PSE) and a set of ISs associated with different biosignals. The ISs communicate with the PSE over a wireless personal area network (WPAN) with a star topology (the PSE at its center). The monitor architecture involves two layers, formed by the ISs and the PSE, respectively. The ISs measure and perform a first analysis of the signals, computing the associated features that are sent to the PSE. This approach reduces the dimensionality of the data to be processed by the PSE, making it easier to adopt a real-time processing approach. This distributed processing also optimizes the resource allocation and reduces the volume of data transmitted between layers, which in turn helps to reduce the power consumption and the device size [16]. This strategy makes it easier to fulfill the wearability and discretion requirements. Other details of the intelligent monitor related to the healthcare framework can be found elsewhere [12, 13, 16]. Any IS device includes sensors, a microcontroller for management and signal processing, a wireless transceiver with its associated antenna, and auxiliary circuitry that depends on the IS functions. The ability of an IS to adapt to the evolution of the subject and context is related to the processing capacity of the microcontroller. The advantages of this capability were successfully shown with the fall detection function of the human movement monitor [15]. This feature allows a real-time adaptation of the AAF parameters to the evolution of the PDS subject. Although the design reduces the IS – PSE (and IHS – PSE) data volume, the maximum channel capacity must allow transferring the complete signal waveforms under special circumstances: tests, research tasks, or around IS-detected events. In the case of the human movement monitor the events alert the PSE of possible impacts, whereas in the AAF device based on a multimodal intelligent monitor the events can be set to detect speech blocking patterns from a multimodal viewpoint, that is, the event can be triggered by an IS other than the IHS. Most of the biosignals needed in telehealthcare, including the EEG, require a sampling rate below approximately 10² S/s. This allows the use of very low cost microcontrollers, such as the one used in the intelligent acceleration unit presented in [16].
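The division of labor between the IS and the PSE - compact features instead of raw waveforms - can be illustrated with a toy sketch. Python is used purely for illustration; the particular features (RMS, zero-crossing rate, peak) and the window size are our own assumptions, not specifications from the paper:

```python
import math

def extract_features(window):
    """Compute a compact feature vector from one window of raw samples,
    as an IS might do before transmitting to the PSE (illustrative only)."""
    n = len(window)
    mean = sum(window) / n
    centered = [x - mean for x in window]
    # Root-mean-square amplitude around the mean.
    rms = math.sqrt(sum(x * x for x in centered) / n)
    # Zero-crossing rate, a cheap spectral indicator.
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    zcr = crossings / (n - 1)
    # Peak absolute deviation, e.g. for event (impact/block) detection.
    peak = max(abs(x) for x in centered)
    return {"mean": mean, "rms": rms, "zcr": zcr, "peak": peak}

# A 100-sample window collapses to 4 numbers: a 25x reduction in the
# data volume sent over the WPAN, in the spirit of the text above.
window = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
features = extract_features(window)
```

The point of the sketch is structural: the microcontroller-side computation is cheap, while the PSE receives only what it needs for real-time processing.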
In addition, these signals can be transmitted by low data rate communication protocols like the modern ZigBee [17], with a maximum data rate of 250 kbps (at 2.4 GHz). The maximum signal sampling rate depends on the required accuracy and overhead, but signals with sampling rates up to approximately 10 kS/s can be transmitted. This range includes accelerometer signals associated with high-energy human physical activities. However, the IHS needs a channel with the capacity to transmit audio signals, and a higher processing capacity than normal ISs. With that objective, the communication technologies used between the PSE and the ISs within the multimodal intelligent monitor have to be extended, taking advantage of the fact that the earlier PSE [15] used Bluetooth (BT) to link with the healthcare center via a remote access unit. Accordingly, the PSE of the AAF device based on a multimodal intelligent monitor implements a new single Wibree/BT solution. This takes advantage of the lower power consumption of Wibree compared with BT 2.0, despite the lower data rate of Wibree (1 Mbps vs. 3 Mbps) (www.wibree.com). High speed IS devices like the IHS will use BT or Wibree, depending on availability. The proposed IHS is based on the BlueCore™ 5-Multimedia single-chip solution (http://www.csr.com/products/bc5range.htm). This integrated circuit (IC) is a programmable single-chip Bluetooth 2.0 (and v2.1 ready) solution with an on-chip specialized Digital Signal Processor (DSP), stereo CODEC, and Flash memory. The chip also includes a lithium battery charger and a switch-mode DC-DC converter. It is a low-cost circuit designed for headsets, whose DSP will execute the AAF algorithm, in addition to calculating the signal features to be sent to the PSE. The adaptive capacity of the AAF algorithm is supported by the distributed processing of the audio signal by the IHS and the PSE. The IHS can be worn partly or fully inside the ear canal, with or without pinna support, depending on the desired level of discretion. A more detailed technological description of the remaining elements exceeds the scope of the present work.
V. DISCUSSION AND CONCLUSIONS
The feasibility of the proposed device is associated with the ability to meet the requirements of functionality, cost, size, wearability, autonomy and discretion, which in turn depend on the state of the art in electronic technology (e.g. sensors, microprocessors and communications) and in signal processing techniques for speech, EEG and accelerometer signals. Previous works have shown the feasibility of an accelerometer-based human movement monitor compliant with the same requirements as the AAF device based on a multimodal intelligent monitor [15, 16].
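As an illustration of the kind of algorithm the IHS DSP would run, the sketch below implements the simplest AAF variant, a fixed delayed auditory feedback (DAF) line. The delay value, sampling rate and pure-Python form are our assumptions for clarity; a real implementation would run on the headset DSP and adapt its parameters from the features reported to the PSE:

```python
def daf_delay_line(samples, fs, delay_ms):
    """Delayed auditory feedback as a circular delay line: each output
    sample is the input sample from delay_ms earlier (illustrative only).
    An adaptive device would vary delay_ms, and add frequency shifting,
    based on the detected fluency state of the speaker."""
    delay = int(fs * delay_ms / 1000)
    buf = [0.0] * delay          # silence until the delay line fills
    out = []
    for i, x in enumerate(samples):
        out.append(buf[i % delay])  # emit the sample stored delay steps ago
        buf[i % delay] = x          # overwrite with the current input
    return out

fs = 8000                        # assumed 8 kHz speech-band sampling
signal = [1.0] + [0.0] * 799     # unit impulse as a test input
echoed = daf_delay_line(signal, fs, delay_ms=50)
# the impulse reappears 400 samples (50 ms) later
```

Running the delay line over a unit impulse makes the behavior easy to check: the output is silent for 50 ms and then reproduces the input.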
Current advances in microprocessors, Micro-Electro-Mechanical Systems (MEMS) sensors, communications, signal processing and embedded devices should make it easier to meet those specifications despite the higher data rates and processing capacities required for the IHS. The processing capacity and functionality of current embedded headset solutions such as the BlueCore™ 5-Multimedia IC seem to provide a proper hardware basis for the IHS, overcoming the limitations in cost, real-time adaptability and knowledge extraction of the AAF device published in [11]. Other issues of the IHS concern the speech processing technique. The ability to recognize speech and voice impairments has been widely demonstrated [18-20]. Moreover, recent works have achieved success even in pervasive
speech recognition [21]. Accordingly, the architecture of our proposed AAF device seems able to support real-time speech processing algorithms, dividing the task between the DSPs of the IHS and the PSE, and focusing the DSP of the IHS mainly on altering the auditory feedback in a monaural way. Regarding the feasibility of the EEG-IS, the acquisition of EEG signals needs scalp electrodes, which may require an electrolyte gel for electrical conductivity and as little hair as possible in the target zone. This is a limiting factor in our proposal. Ideally, the electrode elements would be inconspicuous and as easy to wear as clothing. There is, however, a strong interest in developing electrode sensors that are friendly to use and inexpensive, which is pushing this research line forward with successful results [22]. Regarding the possibility of monitoring the EEG of speech-related brain areas during stuttered speech, a recent work has demonstrated that the EEG artifacts associated with the speech musculature and ocular activity can be removed [23]. It is relevant to remark that the AAF device proposed in this work is considered a therapeutic system and not a prosthetic device, as is usual for current AAF devices. This is based on the assumption that stuttering is caused by a reactive inhibition supported by the brain amygdala, and on the many cues that point to the possibility of remodeling the weights of the diffuse projections associated with fear conditioning and similar emotional states [5, 24, 25].
REFERENCES
1. Craig, A., et al., Epidemiology of Stuttering in the Community Across the Entire Life Span. J Speech Lang Hear Res, 2002. 45(6): p. 1097-1105.
2. Limongi, F.P., et al., Stuttering Research and Treatment Around the World. The ASHA Leader, 2005: p. 6-41.
3. Fox, P.T., et al., A PET study of the neural systems of stuttering. Nature, 1996. 382(6587): p. 158-162.
4. Fox, P.T., Brain imaging in stuttering: where next? Journal of Fluency Disorders, 2003. 28(4): p. 265-272.
5. Neumann, K., et al., Cortical plasticity associated with stuttering therapy. Journal of Fluency Disorders, 2005. 30(1): p. 23-39.
6. Sommer, M., et al., Disconnection of speech-relevant brain areas in persistent developmental stuttering. Lancet, 2002. 360(9330): p. 380-383.
7. Preibisch, C., et al., Evidence for compensation for stuttering by the right frontal operculum. Neuroimage, 2003. 20(2): p. 1356-1364.
8. Dodge, D.M., A Reactive Inhibition Model of Stuttering Development & Behavior: A Neuropsychological Theory Based on Recent Research. 2006, http://telosnet.com/dmdodge/reactinh.html (Last accessed Feb 21, 2006).
9. Saltuklaroglu, T., et al., A temporal window for the central inhibition of stuttering via exogenous speech signals in adults. Neuroscience Letters, 2003. 349(2): p. 120-124.
10. Kalinowski, J. and T. Saltuklaroglu, Choral speech: the amelioration of stuttering via imitation and the mirror neuronal system. Neuroscience and Biobehavioral Reviews, 2003. 27(4): p. 339-347.
11. Stuart, A., et al., Self-Contained In-the-Ear Device to Deliver Altered Auditory Feedback: Applications for Stuttering. Annals of Biomedical Engineering, 2003. 31(2): p. 233.
12. Prado, M., et al., Virtual Center for Renal Support: Technological Approach to Patient Physiological Image. IEEE Transactions on Biomedical Engineering, 2002. 49(12): p. 1420-1430.
13. Prado, M., et al., Renal telehealthcare system based on a patient physiological image: a novel hybrid approach in telemedicine. Telemedicine Journal and e-Health, 2003. 9(2): p. 149-165.
14. Roa, L. and M. Prado, Simulation Languages, in Wiley Encyclopedia of Biomedical Engineering, M. Akay, Editor. 2006, John Wiley and Sons, Inc. p. 4152.
15. Prado, M., L.M. Roa, and J. Reina-Tosina, Viability study of a personalized and adaptive knowledge-generation telehealthcare system for nephrology (NEFROTEL). International Journal of Medical Informatics, 2006. 75(9): p. 646-657.
16. Prado, M., J. Reina-Tosina, and L. Roa, Distributed intelligent architecture for falling detection and physical activity analysis in the elderly, in 24th Annual International Conference of the IEEE EMBS and Annual Fall Meeting of the BMES. 2002. Houston, TX, USA.
17. ZigBee Specification. 2006, ZigBee Alliance.
18. Umapathy, K. and S. Krishnan, Feature analysis of pathological speech signals using local discriminant bases technique. Medical and Biological Engineering and Computing, 2005. 43(4): p. 457-464.
19. Piccioni, M., S. Scarlatti, and A. Trouve, A Variational Problem Arising from Speech Recognition. SIAM Journal on Applied Mathematics, 1998. 58(3): p. 753-771.
20. Godino-Llorente, J.I. and P. Gomez-Vilda, Automatic Detection of Voice Impairments by Means of Short-Term Cepstral Parameters and Neural Network Based Detectors. IEEE Transactions on Biomedical Engineering, 2004. 51(2): p. 380-384.
21. Alewine, N., H. Ruback, and S. Deligne, Pervasive speech recognition. Pervasive Computing, IEEE, 2004. 3(4): p. 78-81.
22. Stanford, V., Biosignals offer potential for direct interfaces and health monitoring. Pervasive Computing, IEEE, 2004. 3(1): p. 99-103.
23. Tran, Y., et al., Using independent component analysis to remove artifact from electroencephalographic measurements during stuttered speech. Medical and Biological Engineering and Computing, 2004. 42(5): p. 627-633.
24. Herry, C., et al., Extinction of auditory fear conditioning requires MAPK/ERK activation in the basolateral amygdala. European Journal of Neuroscience, 2006. 24(1): p. 261-269.
25. Gruart, A., M.D. Munoz, and J.M. Delgado-Garcia, Involvement of the CA3-CA1 Synapse in the Acquisition of Associative Learning in Behaving Mice. The Journal of Neuroscience, 2006. 26(4): p. 1077-1087.
Author: Manuel Prado
Institute: Biomedical Engineering Group, Escuela Superior de Ingenieros
Street: Camino de los descubrimientos s/n
City: 41092, Sevilla
Country: Spain
Email: [email protected]
Data Presentation Methods for Monitoring a Public Health-Care System
Aleksander Pur 1, Marko Bohanec 2, Nada Lavrač 2,3, Bojan Cestnik 4,2
1 Ministry of the Interior, Ljubljana, Slovenia
2 Jožef Stefan Institute, Ljubljana, Slovenia
3 University of Nova Gorica, Nova Gorica, Slovenia
4 Temida, d.o.o., Ljubljana, Slovenia
Abstract— This paper presents data presentation methods that enable performance and activity monitoring of a health-care system. The methods enable the visual discovery of typical and atypical patterns, anomalies and outliers in the data. They were successfully implemented in a system developed for monitoring the primary health-care system of Slovenia, to be used by the national Ministry of Health.
Keywords— Data presentation methods, data visualization techniques, information graphics, Health Care System.
I. INTRODUCTION
According to the World Health Report [1], a health-care system (HCS) is a system composed of the organizations, institutions and resources that are devoted to producing a health action. A HCS contributes to good health, responsiveness to the expectations of the population, and fairness of financial contributions to health care [1]. The assessment of these contributions requires careful monitoring of the performance and activities of the HCS. This paper focuses on the HCS of Slovenia, which is divided into the primary, secondary and tertiary health-care levels. The primary health care (PHC) system is the patients' first entry point into the HCS. It is composed of four sub-systems: general practice, gynecology, pediatrics and dentistry. The paper illustrates the developed data presentation methods as applied to performance and activity monitoring of the Slovenian HCS. These methods are included in the HCS monitoring model used at the primary health-care level, which takes into account the physical accessibility of health-care providers for patients, the availability of health-care resources for patients, and the rate of unregistered patients (patients who have not chosen a personal general practitioner) living in a certain area (community/region of Slovenia). This application was commissioned by the Ministry of Health of the Republic of Slovenia, which needs a holistic overview of the primary health-care network in order to make management decisions, apply appropriate management actions, and evaluate PHC target achievement. The term "data presentation" is used in this paper in its broad sense: it includes - mostly visual -
presentations of both single data elements and of more complex patterns, and it does not distinguish between "data", "information", "pattern" and "knowledge".
II. METHOD
Our approach to HCS monitoring is based on a model composed of hierarchically connected modules. Each module is aimed at monitoring a particular aspect of the HCS which is of interest to decision-makers and managers of the system. Typical aspects of a HCS are, for example, the accessibility of providers for patients, the qualification of physicians, their workload and their geographical distribution. Each module involves a number of monitoring processes, which are grouped according to a given monitoring goal. Each monitoring process includes one or more methods for data presentation; the same output data can be presented by different methods. Besides the output data presentation methods, each monitoring process is characterized by the monitoring objectives, input data, data collection methods, constraints on the data, data dimensions, data analysis methods, output data, target criteria or target values of outputs, security requirements and the users of the monitoring system. Among these components, the data analysis methods transform the input data into output data, which is presented by some data presentation methods according to the given monitoring objectives. This approach is not limited to any particular data presentation or analysis method. In principle, any data presentation method can be used, such as pivot tables, charts, network graphs and maps. The same holds for data analysis methods, which can include Structured Query Language (SQL) procedures, On-Line Analytical Processing (OLAP) techniques for interactive knowledge discovery, as well as knowledge discovery in databases (KDD) and data mining methods [2] for discovering important but previously unknown knowledge.
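The module/process structure described above can be sketched in code. The class and attribute names below are ours, purely illustrative, and the example aggregation functions are toy stand-ins for the model's real analysis methods:

```python
class MonitoringModule:
    """One module of a hierarchical monitoring model: it applies an
    analysis method to its inputs, and its inputs are the outputs of
    its child modules (names and metrics are illustrative only)."""

    def __init__(self, name, analysis=None):
        self.name = name
        self.analysis = analysis or (lambda inputs: inputs)
        self.children = []

    def add_child(self, module):
        self.children.append(module)
        return module

    def evaluate(self, raw_data):
        # Lower-level outputs become this module's inputs, so a
        # higher-level result can be explained by drilling down.
        child_out = {c.name: c.evaluate(raw_data) for c in self.children}
        inputs = child_out if self.children else raw_data
        return self.analysis(inputs)

# Toy hierarchy: a top-level indicator explained by two child modules.
root = MonitoringModule("accessibility",
                        analysis=lambda d: sum(d.values()) / len(d))
root.add_child(MonitoringModule(
    "workload", analysis=lambda d: d["patients"] / d["physicians"]))
root.add_child(MonitoringModule(
    "distribution",
    analysis=lambda d: d["communities_covered"] / d["communities"]))
score = root.evaluate({"patients": 52000, "physicians": 26,
                       "communities": 10, "communities_covered": 8})
```

The design choice the sketch captures is the data-channel idea: each connection carries a child's output upward, and the user can navigate back down the same channels to explain an anomalous top-level value.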
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 708–711, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The approach used in the HCS monitoring model is appropriate for hierarchically organized data presentations, as - in order to improve the comprehensibility of the model - the HCS modules are hierarchically structured. The modules at the top level represent the main monitoring processes/activities. The modules at a lower level are connected to a particular module at a higher level. Each connection represents a data channel that connects the outputs of the lower-level modules with the inputs of a higher-level module. In principle, the hierarchy is constructed so that the output data of lower-level processes can help to explain the output data of the monitoring processes at a higher level. This functionality can be provided by the systems of menus, buttons and icons included in the data presentation screens. The system helps the user to move from the data presentation of a higher-level monitoring process to the appropriate data presentation of a lower-level process, or vice versa.
III. SHORT REVIEW OF THE HCS DATA
The central component of the HCS monitoring system is a data warehouse, which is composed of unique database entries from the following existing sources:
• the Slovenian Social Security databases: the data about health-care providers together with the patients assigned to each individual general practitioner, the patients with social security, and the data about health-care centers,
• the database of Slovenian physicians and dentists (provided by the Slovenian Medical Chamber),
• the database of the National Institute of Public Health, containing data about Slovenian health centers, and
• the database of the Slovenian Statistics Bureau, concerning the demographic and geographic distribution of citizens and communities in Slovenia.
This data warehouse contains real HCS data for the year 2006.
IV. DATA PRESENTATION METHODS INCLUDED IN THE HCS MONITORING MODEL
Each monitoring process of our HCS monitoring model includes one or more data presentation methods that present quantitative, ordinal or nominal data. In accordance with the object-oriented taxonomy of medical data presentations [3], these methods are divided into five major classes - list, table, graph, icon, and generated text - which can be further divided into eight subclasses. A list presents text arranged in a one-dimensional sequence; a table presents items arranged in an n-dimensional grid; a graph is a spatial arrangement of points and lines conveying information with respect to axes; an icon is a small stylized pictorial symbol; and generated text refers to the computerized creation of text from coded data.
The data presentation methods in our HCS monitoring model are divided, according to data organization, into three categories: text only, tables and information graphics. Information graphics - or "infographics" - are visual representations of information, data or knowledge, such as graphs, charts, flowcharts, diagrams, maps and signage systems. The presentation methods are also divided, according to the techniques used, into static and dynamic. In this paper the term static refers to data presentations that could be printed on paper without loss of information, whereas dynamic presentations provide their full functionality only on a computer screen. Thus a typical static graph is presented as a picture without interactive functions, and a typical dynamic graph is presented by OLAP techniques with drill-down, slice and dice functions. Some of the data presentations used in the HCS monitoring model are described in this section. The model includes many different data presentation methods, such as multidimensional maps, histograms and diagrams, but because of size limitations this paper covers only the four methods described below.
A. Simple graph-based static presentation of physicians' qualifications
The aim of this data presentation is to enable monitoring of physicians' and dentists' qualification for the job they actually perform. The main performance indicator is the physician's specialization degree, granted by the Slovenian Medical Chamber, which must be verified every 7 years. The specialization degree is a prerequisite for getting a license for employment in a certain area of medicine. To monitor the suitability of physicians for the job they perform we have used a social network visualization technique available in the social network analysis program Pajek ("Spider" [4]).
The monitoring of physicians' suitability is achieved by monitoring three variables: SPEC (specialization), LIC (license), and OPR (the type of patients that the physician is in charge of, categorized by patient type). The motivation for this analysis is based on the observation that physicians with a certain specialization may get different licenses, and while a certain license assumes that the physician will only deal with patients of a certain category, in reality she may be in charge of different types of patients (e.g., a pediatrician may provide health services to grown-up patients, although she has a specialization and a license in pediatrics). The Pajek diagram (Fig. 1) clearly shows the typical (thick lines - a high number of physicians) and atypical (thin lines - a low number of physicians) cases, which enables anomaly detection and further analysis of the individual discovered anomalies.
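The edge weights behind such a diagram are simply counts of physicians sharing a SPEC-LIC or LIC-OPR combination; thin edges are low counts. The sketch below illustrates this with entirely hypothetical records and a hypothetical threshold (the real analysis reads the data warehouse and draws the network in Pajek):

```python
from collections import Counter

# Hypothetical physician records: (specialization, license, patient type).
records = ([("pediatrics", "pediatrics", "children")] * 40
           + [("pediatrics", "pediatrics", "adults")] * 2   # atypical
           + [("general", "general", "adults")] * 55
           + [("general", "pediatrics", "children")] * 3)   # atypical

# Edge weights of the SPEC-LIC and LIC-OPR links, as drawn in the network.
spec_lic = Counter((s, l) for s, l, _ in records)
lic_opr = Counter((l, o) for _, l, o in records)

# Thin edges (low counts, threshold chosen arbitrarily here) flag
# atypical cases worth a closer look by the analyst.
atypical = [edge for counter in (spec_lic, lic_opr)
            for edge, n in counter.items() if n < 5]
```

Here the two rare combinations surface immediately, mirroring how thin lines in the Pajek diagram direct attention to individual anomalies.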
Fig. 1. The qualifications of physicians for the job they are performing
Fig. 2. The holistic aspect of physicians presented by OLAP techniques
B. Dynamic presentation based on OLAP techniques
Some data presentations implemented in the HCS monitoring model are based on OLAP techniques, as shown in Fig. 2. The scatter plot at the right-hand side of Fig. 2 shows the average age, average workload, and average dispersion of physicians in communities for different specializations. The x-axis shows the average age of physicians, while the average workload is shown along the y-axis. The communities are distinguished by the shapes and colors of the data points, as explained in the legend, and the size of each point is proportional to the average dispersion of physicians in the community. The specialization and gender of physicians can be selected using the combo boxes in the top left corner. At the left-hand side of Fig. 2, the same data is represented by a pivot table. This multidimensional visualization clearly shows outliers and anomalies in the HCS. Thanks to the hierarchical design of the HCS monitoring model, detailed information about outliers and anomalies can be obtained from the lower-level data presentations.
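The pivot-table side of this view boils down to grouping records by the (community, specialization) cell and averaging the measures. A minimal sketch, with invented records standing in for the data warehouse:

```python
from collections import defaultdict

# Hypothetical records: (community, specialization, age, workload).
records = [
    ("Kranj", "GP", 38, 1600),
    ("Kranj", "GP", 45, 1900),
    ("Maribor", "GP", 61, 1200),
    ("Maribor", "pediatrics", 52, 1500),
]

# Group by the (community, specialization) cell of the pivot table.
cells = defaultdict(list)
for community, spec, age, workload in records:
    cells[(community, spec)].append((age, workload))

# Aggregate each cell: average age and average workload, as in Fig. 2.
pivot = {
    key: {"avg_age": sum(a for a, _ in rows) / len(rows),
          "avg_workload": sum(w for _, w in rows) / len(rows)}
    for key, rows in cells.items()
}
```

An OLAP tool adds the interactive part on top of this aggregation - drill-down to individual physicians, slicing by gender or specialization - but the underlying cube cells are computed exactly like these dictionary entries.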
Our opinion is that these methods can be used with OLAP-based visualizations because some interesting relations omitted by OLAP techniques can be found without slice/dice, drill down and up activities. Table 1. Association rules showing relations between communities and dentists. Rule [workload: large] ==>Rogaška Slatina [age:60+]+[workload: small]+ [gender:M] ==>Maribor [age:60+]+[workload: small]+ [gender:M] ==>Murska Sobota [age:to40]+[workload: small]+ [gender:M] ==>Kranj
Supp. 0.21%
8%
Conf.
Lift 19.3
0.62%
20%
2.4
0.31%
5%
2.5
0.41%
5.97%
2
C. Tabular and textual presentation of association rules

The HCS monitoring model also includes monitoring processes based on association rule discovery techniques, aimed at discovering interesting relations between items in the health-care data [5]. These rules can be presented by tables, as shown in Table 1, which includes selected rules focused on the relations between communities and dentists. For example, the community of Kranj is characterized by male dentists younger than 40 years who are underloaded (see rule 4). These rules can also be presented as computer-generated text, such as (rule 4): 5.97% of male dentists, age up to 40 years, workload small, are working in the Kranj community, which is 2 times more than expected.

D. Dynamic presentation of association rules

Usually, association rules are presented by texts and/or tables and arranged according to parameters such as support, confidence and lift. When we try, for example, to find associations between the number of visits to HCS providers and age-gender grouped patients from communities, or associations between the illness-caused job absences of age-gender grouped individuals working in certain industrial branches and their illnesses, we deal with a limited number of association rules with a predefined structure. A clear visualization of all these rules can be achieved by a matrix-of-rectangles presentation, where each association between two sets is presented by a rectangle (Fig. 3). The size and color of the rectangles depend on the association rule parameters support and lift. Thus, the size of a rectangle depends on the support parameter (the number of items included in the column and row), and the color of the rectangle depends on the lift parameter (the ratio of confidence to the expected confidence) [5].

Fig. 3. Association rules presented by a matrix of rectangles

Detailed information about each association can be shown by moving the mouse, and thus the screen pointer, onto the selected rectangle (Fig. 4). For example, the matrix in Fig. 4 shows the relations between the visits to a HCS provider of age-gender characterized patient groups and the communities in which they live. The red rectangles present strong relations (with lift higher than 4), and the size of the rectangles depends on the number of visits to certain HCS providers (the selected column) by gender-age grouped patients from these communities (the selected row). A detailed explanation of each relation can be listed in the message box (Fig. 4). This kind of visualization enables us to show a large number of relations on a single screen; therefore, some interesting relations can be found at a glance.

Fig. 4. Detailed information about a selected association

V. CONCLUSION

This paper describes four of the developed methods of data presentation that enable performance and activity monitoring in a HCS. Some of them can be used for discovering typical and atypical cases, such as the visualization of physicians' qualifications by the Pajek diagram (Fig. 1), discovering anomalies by a matrix of rectangles (Fig. 3), and discovering outliers by multidimensional graphs (Fig. 2). These methods are included in a HCS monitoring model made for the primary health-care level of Slovenia, while conceptually the model is not limited to any particular data presentation and analysis method. Our experiments have also proven the utility of the hierarchically designed HCS monitoring model, as it enables the user to track down interesting (unusual or unexpected) processes and activities in a HCS.

ACKNOWLEDGMENT

We gratefully acknowledge the financial support of the Slovenian Ministry of Education, Science and Sport, and the Slovenian Ministry of Health, which is the end-user of the results of this project.

REFERENCES

1. World Health Organization (2000) World Health Report 2000: Health Systems: Improving Performance. http://www.who.int/whr/2000/en/whr00_en.pdf, accessed January 26, 2007
2. Han J, Kamber M (2001) Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers
3. Starren J, Johnson S (2000) An Object-oriented Taxonomy of Medical Data Presentations. Journal of the American Medical Informatics Association 7(1), Jan/Feb 2000
4. Batagelj V, Mrvar A (2006) Pajek: Program for Analysis and Visualization of Large Networks. Reference Manual, University of Ljubljana, Ljubljana
5. Srikant R, Agrawal R (1996) Mining Quantitative Association Rules in Large Relational Tables. IBM Almaden Research Center, San Jose

Author: Aleksander Pur
Institute: Ministry of the Interior
Street: Stefanova, 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
GATEWAY: Assistive Technology for Education and Employment

D. Kervina1, M. Jenko1, M. Pustisek1 and J. Bester1
1 Laboratory for Telecommunications, Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
Abstract— The two-year Gateway project, carried out by partners from four European countries, aimed to provide a comprehensive and unbiased source of information on Assistive Technology (AT) for young people seeking education and employment. Major target audiences were also educators/guidance practitioners and employers. After research, content was first collected and filtered, and then published in English, French, Slovak and Slovene on the Gateway website at http://www.gateway2at.org. The website features exhaustive practical information on available AT solutions, disabilities in general, case studies of successful AT users, and advice on AT use, funding and supply. In the development phase, much effort was made to ensure proper accessibility of the website itself. Principles of user-friendly design were respected and accompanied by advanced options like special page views for the disabled. Gateway web content is expected to be of great help to all target groups independent of their capabilities. Keywords— Assistive Technology, Accessible Web Design, Disability, Education, Employment
I. INTRODUCTION In autumn 2006, the two-year Gateway project was successfully completed (GATEWAY: Guidance for Assistive Technologies in Education and the Workplace Advancing Young People with Disabilities) [1]. It was designed to address the problem of low inclusion of people with disabilities in 3rd level education and the workplace by promoting the use of Assistive Technology (AT). The project was funded through the Leonardo da Vinci Community Vocational Training Programme in response to the lack of national and EU-level information and guidance on how to remove barriers that prevent people with disabilities from participating optimally in education and employment. The work on the project was coordinated by the Dun Laoghaire Institute of Art, Design & Technology, Dublin, Ireland, and carried out by partners with expertise in the area of AT and disability from four different European countries: Belgium, Ireland, Slovakia and Slovenia. The latter was represented by the Laboratory for Telecommunications at University of Ljubljana’s Faculty of Electrical Engineering. The official launch of Gateway took place at the Higher Options Conference in the Royal Dublin Society in September 2006.
There is a wide spectrum of AT solutions available on the market today. They vary from the simplest gadgets like enlarged switches to complex systems, for example induction loops for the hard of hearing or eye-tracking systems for people with motor impairments. In spite of this rich supply, there is, however, little impartial information and advice to be found by present or potential AT users and their educators, guidance practitioners and employers. The main goal of the Gateway project was to fill this gap and design a website for a professional and motivating presentation of what can be achieved by people with special needs who require AT to reach their own potential. The objective of this paper is to provide an introduction to the project's aims and results, to describe the process of AT information collection and practice assessment, and to discuss the technology used for an accessible web content presentation. II. TARGET GROUPS Main target audiences of the Gateway Project were the following: young people with disabilities, educators/guidance practitioners, and employers. As found in the report on "The Employment Situation of People with Disabilities in the European Union" [2], 8–14% of the European population is directly affected by some form of disability, and 36% of this group is under the age of 45. Furthermore, the report also states that "The general pattern observed across the EU is that disabled people have a relatively low education level compared with non-disabled people." In the case of employment, "… only 42% of people with disabilities are employed compared to almost 65% of non-disabled persons." Other statistics also reveal a low level of e-inclusion of people with disabilities: internet use rates of disabled persons rarely exceed 50% of the overall penetration [3].
For all these reasons, young people with disabilities can be identified as being at a disadvantage to non-disabled groups due to barriers preventing access to both higher education and employment. Educators and guidance practitioners at 3rd level education institutions were also identified as lacking the support and know-how required for successful assistance in, first,
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 737–740, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
identifying individuals' needs in terms of AT and, second, exploiting all available ways of funding. Employers were targeted as a broad group. Being a crucial link in the process of fighting high unemployment rates of the disabled, employers were presented with information covering all types of disabilities and technologies suitable and applicable to a wide variety of industries and job types.

III. GATEWAY WEBSITE

The most important direct result of the Gateway project is an accessibly designed website [4], which is intended to help all the discussed target groups with raising the level of AT awareness and use. The website offers the following information:

• General information on various types of disabilities.
• A thorough list of available AT solutions, varying from simple/low-cost to complex/high-tech. A product's description is generic and includes general information, technological requirements, an estimate of cost, and pictures.
• Two search engines: an AT search engine for identification of appropriate disability-specific AT solutions and a general website search.
• Case studies of people successfully using AT in their work and study.
• Advice on how to make one's workplace more accessible in a cost-effective way.
• A country-specific list of sources of funding for AT and advice on how to apply.
• A country-specific list of sources of advice.
• A country-specific list of AT supplier details.
• Links to other AT information resources and Gateway project partners.
According to Gateway's target audiences, the website was designed with 3 "virtual doors". This means that all information, including the browsing experience as such, is tailored differently for each target group. Upon opening the start page of the portal (Fig. 1), a visitor simply clicks on one of the given options and then receives only the AT solutions, case studies, advice etc. relevant to his or her background, i.e. AT user, educator/guidance practitioner or employer. Various types of information like groups of AT solutions, specific disability information, case studies etc. are interlinked with numerous cross-references. This enables user-friendly navigation in cases when a visitor tries to inform himself/herself on AT from different perspectives and thus constantly jumps from one section to another.
Fig. 1 Start page of Gateway website

IV. CONTENT COLLECTION

Information on AT solutions and practices was collected using various methods of research. The work was split among partners. Each partner studied a specific group of AT solutions, conducted case studies in their environment and collected country-specific information related to AT funding and advice. The Laboratory for Telecommunications was assigned the task of analyzing AT solutions for the physically disabled. We prepared documentation containing general information, technical requirements and price range for the following AT devices and software:

• alternative keyboards,
• computer speech output,
• eye-tracking systems,
• mouse emulation software,
• touch screens,
• voice recognition software,
• word prediction.
The main resources of information were the web, interviews and internal documentation. While conducting case studies and collecting information on AT funding and advice in Slovenia, we received much appreciated help from numerous individuals and organizations, among them the following:

• Union of Associations of the Blind and the Weak-Sighted of Slovenia,
• Union of Associations of the Deaf and Hard of Hearing of Slovenia,
• University of Maribor, Faculty of Electrical Engineering and Computer Science,
• ZUIM – Education of Handicapped Youth,
• The Slovene Association of Disabled Students,
• National Assembly of Slovenian Organizations of the Disabled.
Interviews with a total of 9 disabled persons were carried out in Slovenia. Their AT experience, educational and professional environment and background, and personal opinions on AT and the position of the disabled in society were then summarized in case studies featured on the Gateway website, along with the individuals' basic personal information and photographs. Upon request, interviewed persons remained anonymous and were presented on the website without photographs and with their names changed. Additional sources of information for our part of the work on the project were, again, the web, various state and non-governmental organizations who help finance AT and other equipment for the disabled in Slovenia, and Slovenian AT suppliers.

V. WEBSITE DEVELOPMENT AND ACCESSIBILITY ISSUES

All information and content collected and filtered in the course of the project was made available to the public on the Gateway website at http://www.gateway2at.org. The project team's goal was to present highly useful information in a clear and user-friendly manner. Special care was taken to ensure proper accessibility of web content to the disabled public, itself the key target audience of the whole project. By reaching optimal compliance with the W3C Web Content Accessibility Guidelines (WCAG) [5, 6], the project team tried to make sure that the website itself becomes an enabler, not a barrier, to accessing information and resources online. Pages of the Gateway website were produced in the PHP programming language (PHP: Hypertext Preprocessor). PHP scripts, accessed by users' web browsers, display static content while using a dedicated database as a source of dynamic data. The database, which runs in the background, is the MySQL® database management system (SQL: Structured Query Language). It is managed with phpMyAdmin, a tool written in PHP designed to handle the administration of MySQL over the Web.
The Gateway website also features a PHP/SQL-based search tool for easy identification of potentially useful AT solutions based on the disability or disabilities entered. A general site search engine is included as well; it is powered by Sphider, an application written in PHP, using MySQL as its back-end database.
The Gateway online content is available in all project partners' languages: English, French, Slovak and Slovene. The website first appeared in English and was afterwards translated into the other 3 languages. This process included not only a translation of database content but also reprogramming PHP scripts, designing additional language-specific graphic elements, and a translation of Sphider, the GPL search engine (GPL: General Public License). The Laboratory for Telecommunications also included 6 additional case studies, which appear only in the Slovene version of Gateway in addition to the 10 featured in all languages. In terms of accessibility, the Gateway website is designed according to good-practice principles. Some elements of accessible web design taken into account during the design and translation phases are the following:

Accessibility in terms of design:
• Text and other content provide adequate contrast in both default and inverted color views.
• Proper contrast is ensured also in the case of a user's color blindness.
• The font is set to Verdana, which is easy to read and features no potentially disturbing decoration or serifs.
• Tables are used only for the display of tabular data and never serve design purposes; the latter causes misinterpretation of information if Braille displays or screen readers are used.

Functional accessibility features:
• Basic view and navigation options/links are available at the very top of each page, which also makes browsing easier for blind users.
• Informative text descriptions ("alt" tags) are added to all non-text content.
• The style/view of the whole website can be set to either default or contrast (Fig. 2). In the latter case, successive sections of content are displayed vertically so that each is positioned beneath the previous one. Such elimination of the horizontal dimension makes navigation easier for blind people using AT.
• Upon changing the style/view, the whole page reloads using another Cascading Style Sheet (CSS). CSS is, unlike tables, an appropriate means of webpage design and is separated from the content itself.
• The content of every page is divided into comprehensible subsections.
• Navigation through the website is clear and consistent.
The Gateway website is hosted on a server located at the Dun Laoghaire Institute of Art, Design & Technology, Dublin, Ireland.
Fig. 2 Start page of Gateway website in contrast view

VI. CONCLUSIONS

Through participation in the Gateway project, we learned that much of the disabled population is not sufficiently aware of the potential benefits of the use of AT. Individuals presented in our case studies successfully use AT in everyday life, but still complain about the lack of effective AT training and public funds. In many cases, the adoption of AT and, consequently, success in education and professional life depend largely on one's own initiative and financial circumstances. This said, AT educators and guidance practitioners are of great help. They, however, need more useful information related to AT practice and welcome Gateway as a valuable source of competent advice. Employers are often legally obliged to accept a disabled workforce and do not always see it as an opportunity. Therefore, Gateway made an effort to contribute to changing this opinion by advising on the use of AT in the workplace and justifying it both financially and by showing the achievements of disabled persons who managed to reach their potential.
Even though the project is formally completed, it is our intention to carry out some additional promotional activities in Slovenia. Upon releasing the Slovene version of the Gateway website, we invited target groups to test it and received extremely positive responses. This gives us additional motivation for future promotion and confidence that the project results will be acknowledged and appreciated by a broad audience. One important reason for this optimism is also the compliance of the Gateway website with the accessibility criteria already discussed in this article. Accessible web design not only enables those often-ignored users who might, ironically, profit most from new technologies; it is also an indicator of a web content provider's level of awareness and social culture. As such, inclusive design is becoming a legally enforced standard for public websites in many countries. As we learned from web content development on this and other projects, basic accessibility can be reached with no more than some awareness and attention in the design phase and implies little or no additional cost. Advanced accessibility options like the separate page views featured on the Gateway website are, of course, welcome, but in no way indispensable in making the Web at least potentially accessible for all.
REFERENCES

1. Long S, McLoughlin A, Hughes J (2005) Guidance for Assistive Technologies in Education and the Workplace Advancing Young People with Disabilities: Opening Doors For Young People With Disabilities. AAATE Conference Proceedings, Lille, France, 2005
2. The Employment Situation of People with Disabilities in the European Union. EC, DG Employment & Social Affairs, August 2001
3. Internet Use Rates of Disabled Persons. SIBIS GPS 2002, SIBIS GPS-NAS 2003
4. Gateway (2007) at http://www.gateway2at.org
5. World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI) (2007) Web Content Accessibility Guidelines 1.0 at http://www.w3.org/TR/WAI-WEBCONTENT
6. Bobby/WebXACT (2007) at http://webxact.watchfire.com

Author: Damir Kervina
Institute: Faculty of Electrical Engineering, University of Ljubljana
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email:
[email protected]
How do physicians make a decision?

Kaiser Niknam1, Mahdi Ghorbani Samini1, Hedyeh Mahmudi1, Sahar Niknam2
1 Biomedical Research Institute, Shahrud, Iran
2 Shahrud University of Technology, Shahrud, Iran
Abstract— This paper presents a mathematical model describing how physicians actually make a diagnostic decision. Next to a formalization of diagnostic decision making as done by physicians, this paper shows how we can design and develop a new approach to medical diagnosis. The model was challenged to diagnose a series of actual patients. Real clinical data was entered into the model and its decisions were compared with the internist's diagnoses. The results indicated good performance from the physician's point of view, although the model had low specificity. The proposed method is effective and can be applied to simulate medical diagnosis as done by physicians. Keywords— Medical Diagnosis, Medical Decision Making, Fuzzy Logic, Decision Aid, Decision Analysis.
I. INTRODUCTION

The most important clinical actions are not procedures or prescriptions but the judgments from which all other aspects of clinical medicine flow [1]. For this reason, medical informaticians have devoted much attention to formalizing the medical diagnosis process using mathematical models, according to the various methods of decision making and reasoning under uncertainty. But medical diagnosis is a complex human process that is difficult to represent in an algorithmic model. The history of algorithmic medical diagnosis is a history of intensive collaboration between physicians and mathematicians. One of the first approaches proposed to deal with decision making in the medical field was the Bayes formula [2]-[3]. The basic Bayes formulation assumes that there is a single cause of the patient's problems, and that it must be one of a set of known hypotheses. Furthermore, it is assumed that the findings or symptoms associated with diseases are conditionally independent. The restrictive assumptions of the Bayesian approach make effective computation possible, but also make the models unrealistic for many real-world medical problems. Therefore, several research groups searched for alternative Bayesian approaches. An especially appropriate formulation of the probabilistic inference problem was worked out by Pearl et al. in the early 1980s [4]. The result of this research is now known as the theory of Bayesian networks. The computation of large Bayesian networks is hard to handle, and so far, only some large medical diagnostic applications have been successfully implemented [5]-[6]. As a result, some systems introduced a non-probabilistic, less formalized reasoning model based on evidence theory. One of the most famous systems based on evidence theory is MYCIN [7]-[8]. In MYCIN, belief and disbelief measures have been chosen as the confirmation and disconfirmation measures, and the certainty factor has been proposed for combining degrees of belief and disbelief into a single number. Zadeh's theory of fuzzy sets was another attempt to define vague medical entities as fuzzy sets and provide the means for approximate reasoning. In this method, the relationships between symptoms and diseases are described by fuzzy relations of either statistical or judgmental origin [9]-[11]. Another attempt to formalize medical diagnosis was the Case-Based Reasoning (CBR) method [12]-[13]. The underlying idea is the assumption that similar problems have similar solutions; if a suitable measure of similarity exists, a new case can be related to one or more similar past cases in an appropriately indexed database. Though this assumption is not always true, it holds for many practical domains. In addition to the above mentioned models, several innovative techniques have been introduced to develop more formal models that add Artificial Intelligence abilities to the successful but more arbitrary heuristic explorations of the previous ones, such as neural networks, Markov models and so on [14]-[16]. Finally, it is necessary to say that far more research has been done on how doctors should make decisions than on how they actually do [1]. Thus, much of what we know about clinical reasoning comes from empirical studies of non-medical problem-solving behavior. Hence, in this paper, we will try to introduce a model describing how physicians actually make a diagnostic decision.

II. FORMULATION PROCESS

This section presents ideas that aim at the design and development of a mathematical model to formalize diagnostic decision making as done by physicians. Therefore, it is necessary to know how a physician, consciously or unconsciously, evaluates distinct diagnoses conditional on the basis of a given set of the patient's symptoms. According to the proposals from many experts, the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 696–699, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
physicians normally have an imperfect knowledge of how they solve diagnostic problems [17]-[18], but they usually follow some general principles when they are confronted with an actual case [19]-[20]. In fact, when a physician is confronted with an actual case, he takes into consideration the strength of confirmation and the strength of disconfirmation (exclusion) of the patient's symptoms with respect to distinct diagnoses. Strength of confirmation is the degree of acceptance of a diagnosis on the basis of a set of symptoms exhibited by the patient, in comparison with other diagnoses. Correspondingly, strength of disconfirmation is the degree of exclusion of a diagnosis on the basis of the (expected) symptoms that are not exhibited by the patient. Every physician has his individual criteria to assess the strength of confirmation (and also disconfirmation) of the patient's symptoms, but the following criteria for the strength of confirmation are nearly accepted by all of the physicians:

• Knowledge Extension Principle: The more (less) diagnoses a physician associates with a certain symptom, the less (more) is the strength of confirmation of the symptom for a certain diagnosis.
• Frequency Principle I: The more (less) often a certain symptom occurs in a certain diagnosis, the more (less) is the strength of confirmation of the symptom for that diagnosis.
• Sufficiency Principle: If a symptom is observed in only one diagnosis, the existence of that symptom absolutely confirms the respective diagnosis.
• Monotonicity Principle I: If two symptoms confirm the existence of a diagnosis independently, the existence of both of them together confirms the respective diagnosis more than the case where only one of them confirms it.

Also, the following criteria for the strength of disconfirmation are nearly accepted by all of the physicians:

• Domain Extension Principle: The more (less) symptoms occur with a certain diagnosis, the less (more) is the strength of exclusion of a not exhibited symptom for the respective diagnosis.
• Frequency Principle II: The more (less) often a certain symptom occurs in a certain diagnosis, the more (less) is the strength of exclusion for non-existence of the respective symptom.
• Necessity Principle: If a symptom always occurs with a diagnosis, and the symptom is not found in the patient, the respective diagnosis should be excluded.
• Monotonicity Principle II: If the non-existence of two symptoms disconfirms a diagnosis independently, the non-existence of both of them disconfirms the respective diagnosis more than either one alone.

We are now able to design and develop a mathematical model to satisfy these conditions. Our model can be stated as the following mathematical formulation: Let Σ = {σ1, σ2, …, σm} and Δ = {δ1, δ2, …, δn} be non-fuzzy sets of all symptoms and diagnoses, respectively. Let also R : Δ × Σ → [0,1] be a fuzzy relation denoting the frequency of occurrence of a certain symptom with a certain diagnosis. It is clear from the etiology of some (but not all) diseases that we can encode the clinicians' knowledge of diagnosis-symptom relationships into frequency-of-occurrence relations with a high degree of accuracy [21]. Now let K ⊆ Δ × Σ be the set of all fuzzy relations which relate the diagnoses to the symptoms. K can then be considered as our medical knowledge base. Associate with each symptom σi ∈ Σ a fuzzy set of diagnoses

e(σi) = { (δj, μK(δj, σi)) | (δj, σi) ∈ K }    (1)

that cause the symptom σi, and associate with each diagnosis δj ∈ Δ a fuzzy set of symptoms

c(δj) = { (σi, μK(δj, σi)) | (δj, σi) ∈ K }    (2)

that it causes, called the profile for that diagnosis [22]. Let also

e(S) = ∪_{σi ∈ S} e(σi)    (3)

and

c(D) = ∪_{δj ∈ D} c(δj)    (4)

for sets S ⊆ Σ and D ⊆ Δ. Therefore, we define that a diagnosis δ ∈ Δ explains a set of symptoms S ⊆ Σ to the degree that it satisfies the above mentioned principles. In accordance with our definition, we introduce several symbols as follows:

I. R(δ | S) = Ranking Measure, or the degree of confirmation of a diagnosis δ ∈ Δ on the basis of a given set of symptoms S ⊆ Σ compared to other diagnoses;
II. E(δ | S) = Excluding Measure, or the degree of disconfirmation (exclusion) of a diagnosis δ ∈ Δ on the basis of a given set of symptoms S ⊆ Σ that were expected to be exhibited by the patient in disease δ but are not exhibited;
III. C(δ | S) = Covering Measure, or the degree of covering of a given set of symptoms S ⊆ Σ by a diagnosis δ ∈ Δ.
Ranking measure is motivated by the problem of how acceptable a diagnosis is, and covering measure (and similarly, excluding measure) is motivated by the problem of how to interpret the expected symptoms that are not exhibited by the patient. Hence, the task of diagnosis in our model reduces to calculating these two measures and then collecting them into an acceptability measure for each diagnosis based upon a given set of symptoms. There are many ways of defining and interpreting the ranking, excluding, and covering measures. We define them in the unit interval (respectively) as follows:

∀σ ∈ c(δ): R(δ | σ) = μK(δ, σ) / Σ_{∀δj ∈ e(σ)} μK(δj, σ), and R(δ | S) = ∨_{∀σ ∈ S ⊆ c(δ)} R(δ | σ)    (5)

∀σ ∈ c(δ): E(δ | σ) = 1 − (1 − μK(δ, σ)) / Σ_{∀σi ∈ c(δ)} (1 − μK(δ, σi)), and E(δ | S) = ∨_{∀σ ∈ S ⊆ c(δ)} E(δ | σ)    (6)

∀S ⊆ c(δ): C(δ | S) = 1 − E(δ | ¬S), where ¬S = {σ | σ ∈ supp(c(δ)) and σ ∉ S}    (7)

Here, ∨ is a fuzzy t-conorm where x ∨ 1 = 1 (absorbing element). Obviously, all of the ranking, excluding, and covering measures belong to the unit interval. According to the above sentences, we construct the Possibility Function P(δ, S): {R(δ, S), C(δ, S)} → [0,1], which describes the degree of acceptance (explanation) of a diagnosis δ ∈ Δ conditional on a given set of symptoms S ⊆ Σ, as follows:

P(x, y): [0,1] × [0,1] → [0,1], where
  y1 ≤ y2 ⇒ P(x, y1) ≤ P(x, y2)
  x1 ≤ x2 ⇒ P(x1, y) ≤ P(x2, y)
  P(x, 0) = 0
  P(1, y) = 1    (8)
Any definition of the possibility function and the corresponding t-conorm allows the user to evaluate the diagnoses according to the needs and decisions about the interpretation of the knowledge base; moreover, any such choice will be compatible with our principles and can hence demonstrate how physicians make diagnostic decisions.

III. MODEL EVALUATION

As stated earlier, there are many ways to define the possibility function and the t-conorm operator. We propose the following definitions for them:
P(x,y) =
  0.96x + 0.04y,  for x ∈ [0,1), y ∈ (0,1]
  0,              for x ∈ [0,1), y = 0
  1,              for x = 1,    y ∈ (0,1]
  undefined,      for x = 1,    y = 0   (9)

x ∨ y = min{ 1, (x^ω + y^ω)^(1/ω) },  ω = 0.40   (10)
where x and y are the fuzzy membership degrees. The proposed possibility function and the corresponding t-conorm clearly satisfy our conditions. In order to evaluate our model, we built Avicenna®, a software program based on the model proposed in this paper, and challenged it to diagnose a series of actual patients, each of whom had been referred to an internist and in each of whom a diagnosis had been established. All of the entered cases were real clinical cases and included all of the “noise” of the actual diagnostic evaluation. We omitted only the cases whose established diagnoses were not present in the Avicenna® knowledge base. The cases were translated into the language provided by the program. Because of the limitations of the program’s language, some data could only be approximated, or could not be entered at all. After this validation stage, 250 selected cases were entered into the software, and the system produced a ranked list of possible diagnoses for each case. We then calculated the following scores to characterize the program performance [23]-[24]:
Sensitivity: For each case, the presence or absence of the correct diagnosis among the most possible diagnoses on the list generated by the program is scored 1 or 0.
Length of Diagnoses: For each case, the program produced a list of potential diagnoses. The length of diagnoses score is the number of most possible diagnoses included in the diagnosis list.
Length of Symptoms: For each case, the length of symptoms score is the number of symptoms that were entered into the software.
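The measures of Eqs. (5)-(10) can be sketched in code as follows. This is an illustrative toy, not the Avicenna® implementation: the knowledge base μ_K, the diagnoses and the symptom values are invented for demonstration, and the excluding measure assumes at least one expected symptom has μ_K < 1.

```python
# Toy knowledge base (hypothetical values): mu_K[(diagnosis, symptom)]
# is the fuzzy strength of association between them.
mu_K = {
    ("flu", "fever"): 0.9, ("flu", "cough"): 0.7,
    ("cold", "fever"): 0.3, ("cold", "cough"): 0.8,
}

def c(delta):
    """Symptoms expected in diagnosis delta (support of its fuzzy set)."""
    return {s for (d, s) in mu_K if d == delta}

def e(sigma):
    """Diagnoses in which symptom sigma may occur."""
    return {d for (d, s) in mu_K if s == sigma}

def conorm(x, y, w=0.40):                 # Eq. (10): Yager-type t-conorm
    return min(1.0, (x**w + y**w) ** (1.0 / w))

def R(delta, S):                          # Eq. (5): ranking measure
    out = 0.0
    for sigma in S & c(delta):
        r = mu_K[(delta, sigma)] / sum(mu_K[(d, sigma)] for d in e(sigma))
        out = conorm(out, r)
    return out

def E(delta, S):                          # Eq. (6): excluding measure
    out = 0.0
    denom = sum(1.0 - mu_K[(delta, s)] for s in c(delta))  # assumed > 0
    for sigma in S & c(delta):
        out = conorm(out, 1.0 - (1.0 - mu_K[(delta, sigma)]) / denom)
    return out

def C(delta, S):                          # Eq. (7): covering via exclusion
    not_S = c(delta) - S                  # expected but not exhibited
    return 1.0 - E(delta, not_S)

def P(x, y):                              # Eq. (9): possibility function
    if x == 1.0 and y == 0.0:
        raise ValueError("P is undefined at (1, 0)")
    if y == 0.0:
        return 0.0
    if x == 1.0:
        return 1.0
    return 0.96 * x + 0.04 * y

print("R(flu|fever)  =", round(R("flu", {"fever"}), 3))
print("R(cold|fever) =", round(R("cold", {"fever"}), 3))
print("P(flu|fever)  =", round(P(R("flu", {"fever"}), C("flu", {"fever"})), 3))
```

Note how the ranking measure normalizes each symptom's association strength over all diagnoses that can explain it, so "fever" supports "flu" three times more strongly than "cold" in this toy base.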
Table 1 shows the mean values for these three scores. The diagnoses delivered by the internist obtained the maximum possibility (highest ranking) in 215 of 250 cases (sensitivity = 0.86), although approximately six most possible diagnoses per case appeared on the list generated by the program (low specificity). The mean length of symptoms was about five.
How do physicians make a decision?
Table 1 Mean (SD) Scores for the Model

Score                 Mean   SD
Sensitivity           0.86   0.35
Length of Diagnoses   6.10   19.86
Length of Symptoms    5.49   3.34
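The three scores above can be computed per case and averaged as sketched below. The case data and diagnosis names are invented for illustration; the tie-at-the-top rule for "most possible diagnoses" is an assumption about how the ranked list is read.

```python
def case_scores(ranked, correct_dx, n_symptoms):
    """ranked: list of (diagnosis, possibility), highest first.
    'Most possible' diagnoses are taken to be those sharing the
    top possibility value (an assumed reading of the score)."""
    top_p = ranked[0][1]
    top = [d for d, p in ranked if p == top_p]
    return (1 if correct_dx in top else 0,   # sensitivity (1/0)
            len(top),                        # length of diagnoses
            n_symptoms)                      # length of symptoms

# Invented example cases: (ranked list, established diagnosis, #symptoms)
cases = [
    ([("flu", 1.0), ("cold", 1.0), ("angina", 0.4)], "flu", 5),
    ([("gastritis", 0.9), ("ulcer", 0.7)], "ulcer", 4),
]

cols = list(zip(*(case_scores(*c) for c in cases)))
means = [sum(col) / len(col) for col in cols]
print("sensitivity=%.2f, diagnoses=%.2f, symptoms=%.2f" % tuple(means))
# sensitivity=0.50, diagnoses=1.50, symptoms=4.50
```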
IV. CONCLUSION

In this paper, we have presented a mathematical model that formalizes diagnostic decision making as done by physicians. We demonstrated how a physician, consciously or unconsciously, evaluates distinct diagnoses conditional on a given set of the patient's symptoms, and showed how these principles can be applied in a mathematical formulation of medical diagnosis. The experimental results indicated good performance of the model when compared with the internist's diagnoses. The main contribution of this paper is a new decision support approach for medical evaluation that is at the same time an actual formalization of decision making as done by physicians. Many papers have been published on how physicians should make decisions; in this paper we demonstrated how they actually do. We described and formalized the physicians' decision making process and showed that this formalization satisfies our conditions. We also showed that this model makes correct diagnostic decisions in most cases with respect to the internist's point of view, although the program usually produces more than six diagnoses per case. We introduced several scores to evaluate the model performance; they showed the model's feasibility and were mostly higher than the corresponding scores for DXplain, QMR, Iliad, and Meditel, according to the results of a study published by Berner et al. [23]-[24].
REFERENCES
1. Braunwald E, Fauci AS, Kasper DL, eds. Harrison's Principles of Internal Medicine, 15th Edition. McGraw-Hill; 2001.
2. Warner HR, Haug P, Bouhaddou O, et al. ILIAD as an expert consultant to teach differential diagnosis. In Proceedings of the 12th Annual Symposium on Computer Applications in Medicine. 1987; 371-376.
3. Hazen GB, Huang M. Large-sample Bayesian posterior distributions for probabilistic sensitivity analysis. Med Dec Mak. 2006; 26:512-534.
4. Pearl J. Fusion, propagation, and structuring in Bayesian networks. Artificial Intelligence. 1986; 29:241-288.
5. Onisko A, Druzdzel MJ, Wasyluk H. A Bayesian network model for diagnosis of liver disorders. In Proceedings of the 11th Conference on Biocybernetics and Biomedical Engineering. 1999; 842-846.
6. Nikovski D. Constructing Bayesian networks for medical diagnosis from incomplete and partially correct statistics. IEEE Transactions on Knowledge and Data Engineering. 2000; 12(4):509-516.
7. Shortliffe EH. MYCIN: A rule-based computer program for advising physicians regarding antimicrobial therapy selection. In Proceedings of the ACM National Congress, SIGBIO Session. 1975.
8. Shortliffe EH, Davis RW, Axline SG, Buchanan BG, Green CC, Cohen SN. Computer-based consultations in clinical therapeutics: explanation and rule-acquisition capabilities of the MYCIN system. Computers and Biomedical Research. 1975; 8(8):303-320.
9. Adlassnig KP. Fuzzy set theory in medical diagnosis. IEEE Transactions on Systems, Man, and Cybernetics. 1986; 16:260-265.
10. Innocent PR, John RI. Computer aided fuzzy medical diagnosis. Information Sciences. 2004; 162(2):81-104.
11. John RI, Innocent PR. Modeling uncertainty in clinical diagnosis using fuzzy logic. IEEE Transactions on Systems, Man, and Cybernetics. 2005; 35(6):1340-1350.
12. Yearwood J, Pham B. Case-based support in a cooperative medical diagnosis environment. Telemedicine Journal. 2000; 6(2):243-250.
13. Holt A, Bichindaritz I, Schmidt R, Perner P. Medical applications in case-based reasoning. Knowledge Engineering Review. 2005; 20:289-292.
14. Reggia JA, Nau DS, Wang PY. Diagnostic expert systems based on a set covering model. International Journal of Man-Machine Studies. 1983; 19:437-460.
15. Kordylewski H, Graupe D, Liu K. A novel large-memory neural network as an aid in medical diagnosis applications. IEEE Transactions on Information Technology in Biomedicine. 2001; 5(3):202-209.
16. Sonnenberg FA, Beck JR. Markov models in medical decision making. Medical Decision Making. 1993; 13(4):322-338.
17. Ledley RS, Lusted LB. Reasoning foundations of medical diagnosis. Science. 1959; 130:9-21.
18. Elstein AS, Schwarz A. Clinical problem solving and diagnostic decision making: selective review of the cognitive literature. BMJ. 2002; 324:729-732.
19. Grabner G. Einige Gedanken zur computerunterstützten Diagnostik. Technical Report MES-1996. Department of Medical Computer Sciences, University of Vienna Medical School. 1996.
20. Bögl K. Design and Implementation of a Web-Based Knowledge Acquisition Toolkit for Medical Expert Consultation Systems. Ph.D. thesis, Technical University of Vienna. 1997.
21. Sadegh-Zadeh K. Fundamentals of clinical methodology: 2. Etiology. Artificial Intelligence in Medicine. 1998; 12:227-270.
22. Vinterbo S, Ohno-Machado L. A genetic algorithm approach to multi-disorder diagnosis. Artificial Intelligence in Medicine. 2000; 18:117-132.
23. Berner ES, Webster GD, Shugerman AA, et al. Performance of four computer-based diagnostic systems. NEJM. 1994; 330(25):1792-1796.
24. Berner ES, Jackson JR, Algina J. Relationships among performance scores of four diagnostic decision support systems. Journal of the American Medical Informatics Association. 1996; 3(3):208-215.

Address of the corresponding author:
Author: Sahar Niknam
Institute: Shahrud University of Technology
Street: Farhangian, 5th alley, No. 16
City: Shahrud
Country: Iran
Email: [email protected]
Informational Internet-systems in Ukrainian healthcare – problems and perspectives

A.A. Lendyak1

1 Lviv National Medical University named after D. Halytskyy, Lviv, Ukraine
Abstract— Medicine and pharmacy are developing rapidly today. Because of the wide range of diseases, treatment methods and drugs available on the market, the systems of medical and pharmaceutical information of the post-USSR countries need modernization. Under modern circumstances, with limited labor and financial resources and the broad adoption of evidence-based medicine, the existing systems often cannot effectively satisfy the needs of the population and of health professionals for medical and pharmaceutical information. This work was based on the study of the Ukrainian project Doctor.UA. The introduction of Doctor.UA is scheduled in several stages. First, in January 2003 a four-year test project was opened as part of the corporate site of the company (http://apteka-doctor.com). It was named Pre-Doctor.UA and modeled Doctor.UA in simplified form. During the first decade of January 2007, the results of the activity of Pre-Doctor.UA were summarized. To address the given tasks, the following information was analyzed: statistics systems, usability testing, and visitor interviews. In 2006, 97% of the interviewed visitors (n=300) responded positively to the idea of launching Doctor.UA. As a result of the analysis of Pre-Doctor.UA activity, the necessity of large-scale implementation of Doctor.UA was established; this is planned for 2007-2009 (in 3 stages). Internet systems of medical and pharmaceutical information have proved their practicability in the post-USSR countries. In this region such systems effectively solve their main tasks, satisfy visitors' requirements and enjoy significant popularity.
Keywords— medical and pharmaceutical information, Internet-systems, Doctor.UA, Ukraine.
I. INTRODUCTION

Medicine and pharmacy are developing rapidly today. Because of the wide range of diseases, treatment methods and drugs available on the market, the systems of medical and pharmaceutical information of Ukraine need modernization. Under modern circumstances, with limited labor and financial resources and the broad adoption of evidence-based medicine, the existing systems often cannot effectively satisfy the needs of the population and of health professionals for medical and pharmaceutical information [1]. The objective of this work was to determine the efficiency, performance requirements and financial practicability of using systems of medical and pharmaceutical information that are new for Ukraine and are based on data communication via the Internet.

II. MATERIAL AND METHODS

The work was based on the study of the Ukrainian project Doctor.UA (a non-commercial project launched by JSC "Apteka Doctor" that provides health professionals and citizens with quality medical and pharmaceutical information through the Internet). To address the given tasks, the following methods were applied: statistical analysis, usability testing, and visitor interviews.

III. RESULTS

The introduction of Doctor.UA is scheduled in several stages. First, in January 2003 a four-year test project was opened as part of the corporate site of the company (http://apteka-doctor.com). It was named Pre-Doctor.UA and modeled Doctor.UA in simplified form. Pre-Doctor.UA realized the following functional units (some units only partially):
− reference information for visitors in different fields of medicine;
− information about drugs available on the market (incl. instructions for use);
− teleconsultations;
− information about healthy lifestyle and the role of preventive care;
− news of medicine and pharmacy.
During the first decade of January 2007, the results of the activity of Pre-Doctor.UA were summarized. Statistical data on the attendance of the whole project and of its separate services were explored. The results from three different statistics servers were studied and the average attendance of the project was determined: it amounts to 406 visitors daily, with substantial growth from an average of 27 visitors daily in 2003, to 244 in 2004, 650 in 2005, and 704 in 2006. During that time the system processed 33,584 inquiries for the instructions for use of a given drug.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 700–703, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Qualified specialists have given over 1000 teleconsultations in 12 fields of medicine and pharmacy. Technically, the service has been functioning using the custom software "Pre-Doctor.UA – Consult". This software solution organizes asynchronous text consultations in the mode "patient's question – doctor's answer" [2]. When asking a question, the user receives a unique password for it, which allows the question to be updated if necessary. This provides the possibility of a dialogue between a doctor and a patient while observing two important conditions: user authorization (the doctor is confident that he communicates with the patient who originally asked the question, provided the user keeps the password personal) and non-participation in the dialogue of any non-professionals in the sphere of public health services other than the inquirer. For doctors' answers there is a system of step licensing of medical consultants. Each question is assigned to a certain theme (which can subsequently be changed by the administration or a doctor), and only doctors who have the right (license) to answer in the given subject see this question in their consultant administration interface and can answer it. Besides, each doctor is given a certain qualification (medical consultant; field doctor; highly qualified doctor). Accordingly, an answer can be updated by the doctor who gave it or by a doctor of higher qualification. If the answer is updated by a doctor of the same or lower qualification, the answer passes a stage of expert judgement: if the expert finds the answer correct, it is added.
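The step-licensing update rule described above can be modeled as follows. This is a toy sketch, not the Pre-Doctor.UA code: the level numbers and return strings are invented, and the rule itself is paraphrased from the text (the original author or a strictly higher qualification publishes directly; anyone else triggers expert judgement).

```python
# Hypothetical qualification ladder (ordering assumed from the paper).
LEVELS = {"medical consultant": 1, "field doctor": 2, "highly qualified doctor": 3}

def update_route(author_level, editor_level, same_doctor=False):
    """Decide how an edit to a published answer is handled:
    the original author or a doctor of strictly higher qualification
    publishes directly; otherwise the edit goes to expert judgement."""
    if same_doctor or LEVELS[editor_level] > LEVELS[author_level]:
        return "publish directly"
    return "expert judgement"

print(update_route("field doctor", "highly qualified doctor"))
print(update_route("field doctor", "medical consultant"))
print(update_route("field doctor", "field doctor", same_doctor=True))
```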
Besides, the software's capabilities allow carrying out the following operations:
− use of emergency modes of communication with a medical consultant (triggering of an automatic question-notification function on a doctor's mobile phone);
− gathering of answer statistics (including average answering time per year/month, average time a doctor needs to answer, and absolute answer counts);
− drawing up of doctors' ratings on the basis of statistics;
− filing of consultations, with ample search opportunities in the archives (by question number, keyword, temporary kill-files);
− highly probable prevention of spam in questions from users, including automatic verification of all forms to be completed, automatic checking for obscene expressions in questions, and restriction of the number of questions that can be asked from one IP per unit of time.
For the project Doctor.UA, a new software version, "Doctor.UA – Consult", which will expand the existing capabilities of "Pre-Doctor.UA – Consult" (first of all in the direction of statistics), is already being prepared.
Besides, new functionality will be implemented in the new system, namely:
− transmission of photo materials;
− the opportunity for internal discussion of a published answer by the doctors (consultation) at the stage of publication preparation;
− a consultation system between doctors concerning practice cases (internal consultations).
The "thematic portfolio" of the given consultations is shown in Table 1.
On the basis of Pre-Doctor.UA activity and an analysis of the Ukrainian part of the Internet, it is possible to formulate the basic problems of medical-pharmaceutical Internet systems in post-Soviet countries in comparison with alternative information systems, as well as the ways to solve these problems (typical for all projects, or by the example of Pre-Doctor.UA and Doctor.UA):
1. Number of Internet users. The global network does not cover the majority of the Ukrainian population. In Ukraine the number of Internet users in 2003 was 3.8 million people [3]. Regrettably, no other official information about the number of Internet users has been published since 2003, but the prevalence of the Internet in post-Soviet countries shows powerful growth, and already today the Internet audience is a significant part of the population of these countries. Moreover, it is important that the majority of the Ukrainian audiences characterized by an active living position, solvency and a striving to extract information are already Internet users [4].

Table 1. The analysis of given teleconsultations by a thematic principle

Theme                    Number of consultations   Share, %
All about medicines      136                       13.1
Gynecology               314                       30.3
Dermatology              81                        7.8
Cosmetology              21                        2.0
Andrology                68                        6.6
Narcology                38                        3.7
Neurology                25                        2.4
Pediatrics               48                        4.6
Traumatology-orthopedy   17                        1.6
Endocrinology            18                        1.7
Others                   272                       26.2
In total                 1038                      100%
2. Confidence in the medical-pharmaceutical Internet. Unfortunately, no significant studies of this problem have been arranged in post-Soviet countries, but it is worth presenting the analysis data of the company Datamonitor. In 2002 this organization carried out a large-scale study of trust in medical information sources [5]. The company's experts surveyed over 4.5 thousand people in France, Germany, Italy, Spain, Great Britain and the USA and found that 57% of those who had searched for medical information in the last 12 months had used the Internet. For comparison, 76% of the respondents trusted their personal doctor, 73% trusted books, magazines and TV, and only 53% trusted friends and family. Undoubtedly, a similar Ukrainian study should be carried out, but the preliminary conclusion is that the level of trust is sufficiently high.
3. Financing of projects. At the stage of large-scale implementation, such projects require rather massive financing. For the projects Pre-Doctor.UA and Doctor.UA this problem was solved by the fact that at the given stage the projects are financed exclusively from funds provided by the owner company. The project Doctor.UA nevertheless has good independent financial prospects, and in the future it is planned to involve grants, advertisers and possibly investments of other companies (with preservation of a controlling interest), so as to achieve self-repayment as soon as possible and, in the long term, to reach profitability. In 2006, 97% of the surveyed visitors (n=300) appreciated the idea of the Doctor.UA launch, whose functioning concept helps preserve observance of the following basic principles:
− adequacy and relevance of the published data (with references to source materials when reprinting);
− definite division of access to information for the population and for professionals in the sphere of public health services;
− observance of all normative-ethical rules as well as international standard principles of providing medical-pharmaceutical information via the Internet (for example, the Health On the Net foundation, HONcode) [6].
It is important to mention here that, despite the fact that at present the project belongs 100% to one commercial company, the independence principle was clearly declared and implemented from the very launch of Pre-Doctor.UA (all promotional materials were precisely separated from the main part of the site).
The information structure of Doctor.UA has been designed in view of the analysis of Pre-Doctor.UA activity as well as the analysis of market needs in the sphere of medical and
pharmaceutical information. The project is characterized by the following structural functional units:
1. Databases of legislation and legal regulation of public health services, including those containing standards of medical aid delivery;
2. Information for patients concerning fields of medicine and concrete nosologic units;
3. The global list of medical products;
4. The list of domestic manufacturing firms and representatives of foreign manufacturers;
5. Information about medical products available on the market (Medicine Register, patient information leaflets);
6. Information about interactions of various medical products;
7. Monographs, grants and reference books;
8. Internet versions of publications in single-field scientific and popular-scientific medical and pharmaceutical periodicals;
9. Integrated working results of the Central Authorities for Medicines Quality Control;
10. The results of medicine series analysis obtained by the State Organization for Medicines Quality Control;
11. Integrated clinical data on the revealing of undocumented side effects of medical products;
12. Teleconsultations for visitors and thematic discussion groups;
13. Overall information about the healthy way of life and the role of preventive measures;
14. Help information concerning the medical-pharmaceutical establishments of Ukraine;
15. Statistics;
16. Medical and pharmaceutical news, both from Ukraine and world-wide.

IV. CONCLUSIONS

1. As a result of the analysis of Pre-Doctor.UA activity, the necessity of large-scale implementation of Doctor.UA has been established; this is planned for the years 2007-2009 (in three stages).
2. Medical-pharmaceutical Internet systems have proved their usability in Ukraine. Such systems effectively solve their primary problems, satisfy visitors' needs and enjoy significant popularity.
3. Today the realization of such projects requires investment; nevertheless, they have good independent financial prospects for the future.
ACKNOWLEDGMENT

The author thanks Professor B. L. Parnovskiy and Professor A. V. Yagensky for their advice.
The author also thanks the team of the Ukrainian national Internet project Doctor.UA.
REFERENCES
1. Lendyak A. A. Internet-pharmacy. Lutsk: Volyn Regional Publishing; 2006. 150 p.
2. Vladzymyrskyy A. V. Clinical Teleconsulting: A Reference for Doctors. 2nd ed. Donetsk: Nord LTD; 2005. 107 p.
3. State Committee of Communication and Informatization of Ukraine. 2004. http://www.stc.gov.ua/
4. The level of Internet use in Ukraine. Sociological report of the Kyiv International Institute of Sociology. 2006. http://www.regnum.ru/news/744809.html
5. Health websites gaining popularity. 2002. http://news.bbc.co.uk/2/hi/health/2249606.stm
6. About Health On the Net (HON): Background. 2007. http://www.hon.ch/Global/

Author: Artur Lendyak
Institute: Lviv National Medical University
Street: B. Hmelnitskogo str. 1a
City: Lutsk
Country: Ukraine
Email: [email protected]
O3-DPACS: a Java-based, IHE compliant open-source data and image manager and archiver

M. Beltrame1, P. Bosazzi2, A. Poli1, P. Inchingolo1

1 Open Three Consortium, Higher Education in Clinical Engineering, DEEI, University of Trieste, Trieste, Italy
2 Open Three Consortium, Clinical Unit of Radiology, University of Trieste, Italy
Abstract— Within the Open Three Consortium (O3), an open-source Image-Data Manager/Archiver called O3-DPACS has been studied, developed and tested in the routine of European and US hospitals. The O3 Consortium is an international open-source project constituted in 2005 by Higher Education in Clinical Engineering (HECE) of the University of Trieste; it deals with the multi-centric integration of hospitals, RHIOs and citizens (care at home and on the move, and ambient assisted living). O3-DPACS is the evolution of the DPACS (Data & Picture Archiving and Communication System) project, started in 1995 at the University of Trieste with the goal of developing an open, scalable, cheap and universal system with accompanying tools, to store, exchange and retrieve all health information of each citizen at hospital, metropolitan, regional, national and European levels, thus offering an integrated virtual health card of the European citizens, in a citizen-centric vision. O3-DPACS offers many additional features with respect to those programmed in the original DPACS project, to account for the new needs of our inclusive and ICT-based society and to manage health data in an integrated environment of hospitals, RHIOs and citizens. In particular, O3-DPACS is open source, Java-based, fully adherent to the "Integrating the Healthcare Enterprise" (IHE) international interoperability project, internationalized in many languages, and works with any operating system and any database system.
Keywords— open-source, distributed health care, IHE, PACS, internationalization.
I. INTRODUCTION

The research work on PACS at the University of Trieste, carried out by the Group of Bioengineering and ICT (BICT) and the Higher Education in Clinical Engineering (HECE), started in 1991 with the project Open-PACS, after a CommView AT&T Philips multi-site PACS system was installed at Trieste's Cattinara and Maggiore Hospitals in 1988. Our work aimed to overcome the limitations of that PACS system and to open up the proprietary installation by developing versatile open-source tools (essentially gateways and client workstations) for external communication with the PACS [1]. In this way, it became possible to distribute images in Trieste throughout the hospital departments and surgery rooms of the three hospitals and to the bioengineering and medical physics research centers. Once the intrinsic limits due to the CommView PACS architecture were reached, a project was started in 1995 on a totally new system named DPACS (Data and Picture Archiving and Communication System) [2]. The goal of DPACS was "the development of an open, scalable, cheap and universal system with accompanying tools, to store, exchange and retrieve all health information of each citizen at hospital, metropolitan, regional, national and European levels, thus offering an integrated virtual health card of the European Citizens" in a citizen-centric vision. From 1998 the DPACS system ran routinely at the Cattinara Hospital of Trieste, managing all radiological images (CT, MRI, DR, US, etc.) as well as the connection with stereotactic neurosurgery. Some mono-dimensional signals such as ECGs were also integrated into the system. Over the years, the DPACS server and workstations were enriched with the sections of anatomical pathology, anesthesia and reanimation, the clinical chemistry laboratory and others. At the beginning of the 2000s, the DPACS client and server applications were progressively directed toward the newly emerging necessities of health care, health management and assistance for the world citizen, based on e-health-driven home care, personal care and ambient assisted living (AAL). These new goals of the DPACS project evidenced new needs in its specification. In this paper, the steps we took to develop the new servers, in particular the data & image manager and archiver O3-DPACS, are described.

II.
MATERIALS AND METHODS

The new needs in the specification of the DPACS project, evidenced by its new goals, were the following:
1) to have a multi-lingual approach to both client and server management interfaces and to the presentation of medical contents;
2) to have a simple data & image display client interface, automatically updatable and highly portable from a PC, Mac or Linux workstation to a palm or cellular-based communicator;
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 732–736, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
3) to be able to connect with a wide variety of communication means, both fixed and mobile;
4) to offer a highly modular data & image manager/archiver, independent of the platform (Unix/Linux, Windows, Mac) and of the selected database;
5) to improve the interoperability of both server and client system components, among themselves and with all the other information systems' components in the hospital and in the health enterprise;
6) to have an efficient and effective tool to "create" the integrated virtual clinical record in the hospital as well as at home or while a citizen is traveling.
The solutions we adopted for both clients and servers to meet all these new specifications can be summarized as follows: 1) multi-language support; 2) high scalability and modularity; 3) use of Java and Web technologies at every level; 4) support of any platform; 5) a high level of security and safety management; 6) support of various types of databases and application contexts; 7) treatment of any type of medical information, i.e., images, data and signals; 8) interoperability through full compliance with the "Integrating the Healthcare Enterprise" (IHE) world project; 9) open source.
Starting from 2000, all nine characteristics were implemented on the DPACS servers, the last one by offering a GNU GPL (General Public License) license. At the "EuroPACS-MIR 2004 in the enlarged Europe" meeting [3], organized in Trieste by HECE in September 2004 in response to a special commitment of the EuroPACS society, the DPACS 2004 server [4] was presented together with other new tools and used to create a virtual hospital at the conference. One of the results of "EuroPACS-MIR 2004 in the enlarged Europe" was the creation of the "Open Three (O3) Consortium" Project (see www.o3consortium.eu), operated one year later by HECE together with BICT's laboratories HTL and OSL at DEEI and by the group of the Radiology Department of Padova. The mission of O3 was (and still is today) to carry out research, development and deployment of open-source products for the three domains of tomorrow's e-health, in the frame of the European e-health programs: hospital, territory, and home care / mobile care / ambient assisted living (AAL), in a citizen-centric vision [5]. The O3 Consortium is today an important international reality, where O3-XXX products are studied, developed and implemented at the international level.
733
The initial success of O3 was obtained by improving DPACS 2004 and MARiS, the RIS of Padova [6], creating, respectively, O3-DPACS and O3-MARIS.

III. RESULTS

A. O3-DPACS insights and IHE profiles implementation

O3-DPACS [7] has many features in common with the other O3 projects: high scalability, modularity, use of Java at every level, support of any platform, interoperability through full compliance with the "Integrating the Healthcare Enterprise" (IHE) world project, and a high level of multi-language internationalization. O3-DPACS is a Java J2EE (Java 2 Enterprise Edition) application. It has been realized as a modular collection of services, as summarized in Fig. 1. As communication protocols, DICOM is used mainly for clinical data, signals and images, and HL7 for administrative data. The main role of the system is that of a Data & Image Manager/Archiver. As shown in the left column of Fig. 1, DICOM and HL7 communication modules connect the external world with the modules of the following services:
Storage: to store DICOM objects as 1) images, 2) data, such as reports in the form of DICOM Structured Reports and Presentation States, and 3) waveforms, such as ECG, EMG, EEG, etc., as DICOM waveform data;
Query/Retrieve: to execute query and retrieval of the stored objects via the DICOM protocol;
Modality Performed Procedure Step: to receive messages about the completion status of exams and link them with the stored data;
Fig. 1 The modular structure of O3-DPACS
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
DICOM Storage Commitment: to verify that data has been properly stored and to confirm this to the modalities; HL7 message interpretation: to manage administrative data, such as exchanges of identifying information, or checks for the re-alignment of inconsistent patient information, e.g. due to a first-aid procedure. As the research proceeded, many new actors were implemented in O3-DPACS, to allow the organization of the flows and actions of medical reporting and the exchange of documents and images across healthcare enterprises. Thus, according to the IHE Actors’ definition, O3-DPACS currently offers the following actors: Image Manager/Archiver, to archive and manage radiological images and added objects, allowing queries and several kinds of access to information; Data Manager/Archiver, to archive and manage clinical data/waveforms and added objects; Performed Procedure Step Manager, to manage messages from modalities indicating the advancement state of exams; Audit Record Repository, to archive and manage the messages recording the “events” of all actors; Secure Node, to provide system authentication and access control of users and of any connected node; Time Client, to synchronize with a trusted remote clock; Imaging Document Source, to exchange documents and images across healthcare enterprises.
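The HL7 message interpretation service above works on pipe-delimited HL7 v2 messages. As a rough, hypothetical illustration (not the actual O3-DPACS code), extracting a patient's identity from a PID segment can be sketched in Java as:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of HL7 v2 segment parsing: fields are separated by '|',
// components by '^'. Helper names are invented for illustration only.
public class Hl7SegmentSketch {

    // Splits one HL7 v2 segment into its fields; fields[0] is the segment name.
    static String[] fields(String segment) {
        return segment.split("\\|", -1); // -1 keeps trailing empty fields
    }

    // Extracts patient ID (PID-3, first component) and name (PID-5) from a PID segment.
    static Map<String, String> parsePid(String pidSegment) {
        String[] f = fields(pidSegment);
        Map<String, String> out = new HashMap<>();
        out.put("patientId", f[3].split("\\^", -1)[0]); // PID-3: patient identifier list
        out.put("patientName", f[5]);                   // PID-5: patient name (family^given)
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> p = parsePid("PID|1||12345^^^HOSP||DOE^JOHN");
        System.out.println(p.get("patientId") + " / " + p.get("patientName"));
    }
}
```

Real HL7 traffic also carries MSH headers, escape sequences and field repetitions, which a production parser must of course handle.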
All the main IHE integration profiles necessary to work with these actors have been developed and implemented: Access to Radiology Information, to specify the modality of access to DICOM data, so that they can be found and retrieved in a coherent way; Consistent Presentation of Images, to manage the Presentation States, i.e. the objects that state how radiological images are to be viewed; Consistent Time, to synchronize time across all the enterprise-wide systems; Patient Information Reconciliation, to solve some common problems of patient registration in the hospital enterprise; Basic Security, to manage secure communication via TLS (Transport Layer Security) and to perform extensive logging of the related operations; Scheduled Workflow, to assure cooperation in a real radiology workflow; Evidence Documents, for the optimal management, within the radiological workflow, of all added objects created in referral steps (reports, notes and graphic objects); Key Image Notes, to create notes on the details of some images to make them more recognizable, as well as to apply and visualize them in a simple and intuitive way;
M. Beltrame, P. Bosazzi, A. Poli, P. Inchingolo
Workflow Reporting, to allow any step in the complex referral procedure to be monitored and organized so as to guarantee maximum simplicity and transparency; Cross-Enterprise Document Sharing for Imaging, to retrieve clinical images from other health enterprises; Audit Trail and Node Authentication, to manage security in all the communications between the heterogeneous systems.

B. O3-DPACS-WEB

The right side of Fig. 1 shows that O3-DPACS offers three different Web services: 1) for remote assistance; 2) for system configuration; and 3) for web-based user access. In this paper we present and discuss the third one. Web access for the user is fundamental for the goals of O3, since it allows easy information flow outside the hospital(s) towards territorial healthcare providers or general practitioners. The main requirement for letting images go outside the hospital is that each medical doctor can connect at the available speed and retrieve images at the required quality level: some of them need diagnostic quality for verifying the report, others consulting quality for simple reference or to give the report a visual support. Working towards these goals, a first web module for O3-DPACS has been developed, using Java technologies such as JSP (JavaServer Pages), Servlets and Applets, and implementing the access to DICOM information with full adherence to the WADO (Web Access to DICOM Objects) standard (DICOM part 18) (Fig. 2). To ensure the user is not in direct contact with the data-storage level, O3-DPACS exposes methods in the middle layer to allow clients to retrieve data. This approach separates the presentation and the management of DICOM data so that, provided the interfaces are not changed, every improvement in the PACS DICOM engine is automatically transferred to each client.
Fig. 2 The four-layer connection via Web between users and servers.
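The web client's retrieval of single DICOM objects goes through WADO, which is plain HTTP with a fixed set of query parameters defined in DICOM part 18. A minimal sketch of building such a request URL (the base address below is invented):

```java
// Sketch of building a WADO (DICOM part 18) request URL, as used by a web
// client to fetch a single DICOM object; the base URL is hypothetical.
public class WadoUrlSketch {

    static String wadoUrl(String base, String studyUid, String seriesUid,
                          String objectUid, String contentType) {
        // requestType, studyUID, seriesUID and objectUID are the mandatory WADO
        // parameters; contentType selects e.g. image/jpeg vs application/dicom.
        return base + "?requestType=WADO"
                + "&studyUID=" + studyUid
                + "&seriesUID=" + seriesUid
                + "&objectUID=" + objectUid
                + "&contentType=" + contentType;
    }

    public static void main(String[] args) {
        System.out.println(wadoUrl("http://pacs.example.org/wado",
                "1.2.840.113619.2", "1.2.840.113619.2.1",
                "1.2.840.113619.2.1.1", "image/jpeg"));
    }
}
```

Requesting `contentType=image/jpeg` yields a consulting-quality rendering, while `application/dicom` returns the original object, which matches the quality choice discussed above.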
O3-DPACS: a Java-based, IHE compliant open-source data and image manager and archiver
Fig. 3 Traffic between O3-DPACS and O3-WEB

The O3-WEB presentation layer is made of a collection of JSPs and JavaBeans that run on the server side. These components can call the O3-DPACS business logic using a JNDI (Java Naming and Directory Interface) lookup to find the O3-DPACS services, and then execute business methods that ensure performance and coherence of the database access. The modulation of the quality of transferred images is implemented at the user interface, allowing users to choose the quality of the images they are going to view (Fig. 3). The next step will be to link the choice of quality to user authentication, in order to implement a user-defined visualization profile. After the selection, the reviewing physician can access a JPEG browser, or an applet that is a re-elaboration, in applet form, of our open-source workstation O3-RWS. The applet can obtain the DICOM images through the WADO retrieve protocol and give control of the image to the user, who can change the visualization parameters as in a radiological workstation.

IV. DISCUSSION AND CONCLUSIONS

The O3-DPACS system is Java-based, fully adherent to IHE, internationalized in many languages, working with any operating system and any database system, open to the outside of the hospitals thanks to the new web-based services, and, above all, it is fully open source, one of the most appreciated characteristics of recent leading projects [8]. Web PACS systems have very recently been depicted as the solution for the multicenter sharing of imaging [9] and, according to Koutelakis [10], their advantages are multiple and found in different areas of functionality.
First, there is no need for a specialized DICOM workstation, since only a browser is needed; second, the use of a Java web-based architecture allows the use of smaller code and provides great stability and scalability, allowing portions of the code to be executed on the client side, thus avoiding congestion and possible server crashes.
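The JNDI lookup through which the presentation layer reaches the O3-DPACS business services, described earlier, follows the standard J2EE pattern; a sketch with a hypothetical service name (not the real O3-DPACS registration):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Sketch of how a JSP/JavaBean presentation layer can locate business services
// via JNDI; the service name "O3DPACS/QueryService" is invented for illustration.
public class JndiLookupSketch {

    // Builds the JNDI name under which a (hypothetical) service bean is registered.
    static String jndiName(String application, String service) {
        return application + "/" + service;
    }

    // Standard JNDI lookup pattern: obtain the initial context and resolve the name.
    // In a real J2EE container the returned object would be the service's interface.
    static Object lookup(String name) throws NamingException {
        InitialContext ctx = new InitialContext();
        try {
            return ctx.lookup(name);
        } finally {
            ctx.close();
        }
    }

    public static void main(String[] args) {
        System.out.println(jndiName("O3DPACS", "QueryService"));
    }
}
```

Because the client holds only the interface obtained from the lookup, the business implementation behind the name can change without touching the presentation layer, which is the decoupling the paper describes.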
Regarding the improvement of the implementation of our Web services to access O3-DPACS, several points are under discussion. First, we are investigating how the same goals can be achieved by using AJAX (Asynchronous JavaScript and XML) technologies. AJAX is a well-known technique for sending asynchronous requests to the web server; for example, it allows updating a graph or an image while avoiding a full page refresh. This is promising for providing image-manipulation tools also in the JPEG visualization tools without an applet, although it is probably still not capable of substituting the DICOM applet viewer, and would also cause too much server load. Second, the successful development of this tool verified the efficiency of the multi-tier development process. O3-DPACS exposes, in the business layer, the interfaces for manipulating meaningful DICOM objects, such as finding and retrieving a study or a series. The web developer can then obtain the information with little cost and overhead, and easily focus on the web development process and the presentation of data. With the sole exception of exposing the interfaces, this approach requires no changes to the O3-DPACS engine, which can always be queried using the open HL7 and DICOM standards. Third, regarding internal compression of the data being used: since DICOM data in O3-DPACS will be delivered as lossless compressed data in the case of DICOM retrieval, and JPEG images will be compressed with a client-triggered quality, we feel there is no need for further compression to achieve efficient client-server communication. In conclusion, the progress we have made with O3-DPACS seems to be a real and efficient first-step answer to the new needs of our inclusive and ICT-based society, in order to manage health data in an integrated environment of hospitals, RHIOs and citizens.
A lot of research and technological work, within an international cooperation frame, obviously still has to be done in the next years to reach the ambitious goal of citizen-centric healthcare.
REFERENCES
1. Diminich M., Inchingolo P., Magliacca F., Martinolli N. (1993). Versatile and open tools for LAN, MAN and WAN communications with PACS. In: Comput. Biomed., Held, Brebbia, Ciskowski, Power (Eds), Comp. Mech. Pub., Southampton, pp. 309-16.
2. Fioravanti F., Inchingolo P., Valenzin G., Dalla Palma L. (1997). The DPACS Project at the University of Trieste. Med. Inform., 22:301-14.
3. Inchingolo P., Pozzi Mucelli R. (Eds) (2004). EuroPACS-MIR 2004 in the Enlarged Europe. EUT, Trieste, ISBN: 88-8303-150-4.
4. Inchingolo P. et al. (2004). DPACS-2004 becomes a java-based open-source modular system. Idem, pp. 271-6.
5. Inchingolo P. (2006). The Open Three (O3) Consortium Project. In: Open Source Strategy for Multi-Center Image Management, https://www.mcim.georgetown.edu/MCIM 2006.
6. Saccavini C. (2004). The MARIS project: open-source approach to IHE radiological workflow software. Idem, pp. 285-7.
7. Inchingolo P. et al. (2006). O3-DPACS Open-Source Image-Data Manager/Archiver and HDW2 Image-Data Display: an IHE-compliant project pushing the e-health integration in the world. Comput. Med. Imag. Graph., 30:391-406.
8. Bui A.A. et al. (2007). OpenSourcePACS: an extensible infrastructure for medical image management. IEEE Trans. Inf. Technol. Biomed., 11(1):94-109.
9. Hernandez J.A. et al. (2007). Web-PACS for Multicenter Clinical Trials. IEEE Trans. Inf. Technol. Biomed., 11:87-93.
10. Koutelakis G., Lymperopoulos D. (2006). PACS through Web Compatible with DICOM Standard and WADO Service: Advantages and Implementation. In: EMBS '06, pp. 2601-5.

Author: Marco Beltrame
Institute: SSIC-HECE, DEEI, University of Trieste
Street: Via Valerio, 10
City: Trieste
Country: Italy
Email: [email protected]
O3-RWS: a Java-based, IHE-compliant open-source radiology workstation
G. Faustini, P. Inchingolo
Open Three Consortium, Higher Education in Clinical Engineering, DEEI, University of Trieste, Trieste, Italy

Abstract— Within the Open Three Consortium (O3), an open-source radiological reporting workstation, called O3-RWS, has been studied, developed and tested in the routine of European and US hospitals. The O3 Consortium is an international open-source project constituted in 2005 by the Higher Education in Clinical Engineering (HECE) of the University of Trieste; it deals with the multi-centric integration of hospitals, RHIOs and citizens (care at home and on the move, and ambient assisted living). O3-RWS has been studied and developed with the goal of answering the needs of the physician, who wants an easy-to-use, light and complete solution for radiology reporting and report creation. O3-RWS, a very versatile, platform-independent radiology workstation providing user authentication and ease of use also for private users, is able to retrieve, visualize and manage medical images; in a universal version, it is going to be able to deal with vital signs such as ECG, hemodynamical and pneumological data. O3-RWS has features in common with the other O3 projects: high scalability, modularity, use of Java at every level, support of any platform, interoperability through full compliance with the “Integrating the Healthcare Enterprise” world project, and a high level of multi-language internationalization. O3-RWS is currently available in Croatian, English, German, Italian, Slovenian and Russian, and will be available in French and Spanish in the near future.

Keywords— open-source, medical reporting workstation, citizen-centric health-care, interoperability, internationalization.
I. INTRODUCTION

After having developed, in 1991-1994, open-source Unix-based workstations [1] to make the radiological images stored in a first-generation Commview AT&T PACS available in the metropolitan area of Trieste, in 1995 the Group of Bioengineering and ICT (BICT) and the Higher Education in Clinical Engineering (HECE) of the University of Trieste started the DPACS (Data and Picture Archiving and Communication System) project. The goal of DPACS was “the development of an open, scalable, cheap and universal system with accompanying tools, to store, exchange and retrieve all health information of each citizen at hospital, metropolitan, regional, national and European levels, thus offering an integrated virtual
health card of the European Citizens” in a citizen-centric vision [2]. From 1998, the DPACS system ran routinely at the Cattinara Hospital of Trieste, managing all radiological images (CT, MRI, DR, US, etc.) as well as the connection with stereo-tactic neurosurgery. Some one-dimensional signals, such as ECGs, were also integrated into the system. Multi-monitor (1, 2 and 4 monitors) DPACS workstations were developed together with the DPACS server, with the technology available at that time, running on a Windows-based platform. Over the years, the DPACS server and workstations were enriched with the sections of anatomo-pathology, anesthesia and reanimation, clinical chemistry laboratory and others. At the beginning of the 2000s, the DPACS client and server applications were progressively oriented towards the new emerging necessities of the future health care, health management and assistance of the world citizen, based on e-health (telemedicine)-driven home-care, personal-care and ambient assisted living (AAL). These new goals of the DPACS project evidenced new needs in its specification. In this paper, the steps we took to develop the new client workstations, in particular the most recently developed open-source O3-RWS, are described.

II. MATERIALS AND METHODS

The new needs in the specification of the DPACS project, evidenced by its new goals, were the following: 1) to have a multi-lingual approach to both client and server managing interfaces and to the presentation of medical contents; 2) to have a simple data & image display client interface, automatically updatable, highly portable from a PC, Mac or Linux workstation to a palm or cellular-based communicator; 3) to be able to connect with a wide variety of communication means, both fixed and mobile; 4) to offer a highly modular data & image manager/archiver, independent of the platform (Unix/Linux, Windows, Mac) and of the selected database;
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 727–731, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
5) to improve the interoperability of both server and client system components, among themselves and with all the other information-system components in the hospital and in the health enterprise; 6) to have an efficient and effective tool to “create” the integrated virtual clinical record in the hospital as well as at home or while a citizen is travelling. The solutions we adopted for both clients and servers to meet all these new specifications can be summarized as follows: 1) multi-language support; 2) high scalability and modularity; 3) use of Java and Web technologies at every level; 4) support of any platform; 5) a high level of security and safety management; 6) support of various types of databases and application contexts; 7) treatment of any type of medical information, i.e. images, data and signals; 8) interoperability through full compliance with the “Integrating the Healthcare Enterprise” (IHE) world project; 9) open source. Starting from 2000, the first eight characteristics were implemented on the DPACS clients and servers. At the “EuroPACS-MIR 2004 in the enlarged Europe” meeting, organized by HECE in Trieste in September 2004 [3] in response to a special commitment of the EuroPACS society, the DPACS 2004 server [4] and the HDW2 client [5] were presented and used to create a virtual hospital at the conference, together with other devices of DPACS and of other companies. At that time, only the DPACS 2004 server also implemented the ninth characteristic, i.e. open source with a GNU GPL (General Public License) license, while the HDW2 client (a universal workstation) was created to be distributed under a commercial license. One of the results of “EuroPACS-MIR 2004 in the enlarged Europe” has been the creation of the “Open Three (O3) Consortium” Project (see www.o3consortium.eu), launched one year later by HECE together with BICT’s laboratories HTL and OSL at DEEI and by the group of the Radiology Department of Padova.
The mission of O3 was (and still is today) to carry out research, development and deployment of open-source products for the three domains of tomorrow’s e-health, in the frame of the European e-health programs: hospital, territory, and home-care / mobile-care / ambient assisted living (AAL), in a citizen-centric vision [6].
G. Faustini, P. Inchingolo
The O3 Consortium is today an important international reality, where O3-XXX products are studied, developed and implemented at the international level. While O3's initial success was obtained by joining the servers O3-DPACS (an improvement of DPACS 2004) and O3-MARIS (an improvement of Padova's RIS, called MARiS [7]) with the HDW2 workstation client [8], at the beginning of 2006 the development of a new workstation, O3-RWS, characterized by also being open source, was started in the BICT laboratories in Trieste. The goal of the O3-RWS project was to give physicians what they need and what they want to have, but which commercial products do not offer them: an easy-to-use, light and complete solution for the radiology reporting process and for report creation. To obtain the correct user specifications, the O3-RWS development was preceded by an accurate study and interpretation of the real physician's needs.

III. RESULTS

A. O3-RWS technological introduction

The first version of O3-RWS was developed from 2005 to 2006. The first step was the collection of physicians' requirements: an accurate and deep analysis was made of radiologists' feedback from different European hospitals; the best feedback came from the radiologists' group of prof. Davide Caramella at the S. Chiara Hospital of Pisa, Italy. The most important feature requested by radiologists is a very clear and easy-to-use interface, with no other objects than the ones they need to perform the reporting process: image-processing features, measurement facilities and the possibility to synchronize different studies.
Subsequently, the O3-RWS design and implementation were started according to these characteristics; at the same time, in order to assure the main general specifications reported in Materials and Methods, O3-RWS has been developed as a very versatile, platform-independent radiology workstation, providing user authentication and ease of use also for private users, with the ability to retrieve, visualize and manage medical images. In the new version being developed this year, O3-RWS is being extended to O3-UWS (Universal Workstation), with the ability to deal with vital signs such as ECG, hemodynamical and pneumological data. O3-RWS has features in common with the other O3 projects: high scalability, modularity, use of Java at every level, support of any platform, interoperability through full compliance with the “Integrating the Healthcare Enterprise” world project, and a high level of multi-language internationalization.

B. O3-RWS technological features

O3-RWS has been completely internationalized in several languages. It is currently available in Croatian, English, German, Italian, Slovenian and Russian, and will be available in French and Spanish in the near future.
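As a simplified illustration of this kind of multi-language support (the keys and strings below are invented, not taken from O3-RWS), UI text can be resolved per locale with an English fallback:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Simplified sketch of multi-language internationalization: UI strings are
// looked up per locale, falling back to English when a translation is missing.
public class I18nSketch {

    private static final Map<Locale, Map<String, String>> BUNDLES = new HashMap<>();
    static {
        Map<String, String> en = new HashMap<>();
        en.put("study.open", "Open study");
        Map<String, String> it = new HashMap<>();
        it.put("study.open", "Apri studio");
        BUNDLES.put(Locale.ENGLISH, en);
        BUNDLES.put(Locale.ITALIAN, it);
    }

    // Returns the translation for the given key, or the English text as fallback.
    static String tr(Locale locale, String key) {
        Map<String, String> bundle =
                BUNDLES.getOrDefault(locale, BUNDLES.get(Locale.ENGLISH));
        return bundle.getOrDefault(key, BUNDLES.get(Locale.ENGLISH).get(key));
    }

    public static void main(String[] args) {
        System.out.println(tr(Locale.ITALIAN, "study.open")); // Italian translation
        System.out.println(tr(Locale.FRENCH, "study.open"));  // falls back to English
    }
}
```

A production Java application would normally keep such translations in `ResourceBundle` property files per locale rather than in code; the fallback behavior is the same.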
Fig. 3 O3-RWS Workstation managing personalized multi-monitor viewing according to the available physical devices
Fig. 1 O3-RWS Workstation Image Management Design

O3-RWS has been designed to be extremely modular: Fig. 1 shows its architecture, which highlights its multi-layer structure. Due to its nature, the system supports the plug-in option: it is possible to develop stand-alone plug-ins (Fig. 2) which can be automatically recognized and used by the workstation.
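The plug-in mechanism described above can be sketched as follows; the interface and the example plug-in class are hypothetical, not the real O3-RWS API:

```java
// Sketch of a plug-in mechanism: stand-alone plug-in classes implementing a
// common interface are discovered by name and instantiated via reflection.
public class PluginLoaderSketch {

    // Contract every workstation plug-in would implement (hypothetical).
    interface Plugin {
        String name();
    }

    // Example stand-alone plug-in that the workstation could pick up automatically.
    public static class InvertFilterPlugin implements Plugin {
        public String name() { return "Invert filter"; }
    }

    // Loads a plug-in given its fully qualified class name (e.g. read from a
    // plug-in directory or a configuration file).
    static Plugin load(String className) {
        try {
            return Class.forName(className)
                    .asSubclass(Plugin.class)
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot load plug-in " + className, e);
        }
    }

    public static void main(String[] args) {
        Plugin p = load("PluginLoaderSketch$InvertFilterPlugin");
        System.out.println(p.name());
    }
}
```

Scanning a plug-in directory and loading each class found there gives the "automatically recognized" behavior: the workstation core only depends on the interface, never on concrete plug-in classes.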
Fig. 2 O3-RWS Workstation Modules’ Design

O3-RWS fully supports integration with O3-3D, an open-source module for three-dimensional reconstruction designed by O3 at the beginning of 2006: it provides basic tools for multi-planar reconstruction (MPR), maximum intensity projection (MIP) and volume rendering (VR). O3-3D benefits from a long tradition of our group in 3D reconstruction and rendering of brain morphology and electrical activity in pathological conditions [9]. O3-RWS provides two interfaces to Medical Imaging Resource Centre (MIRC) servers: it is possible to query directly from the workstation and display teaching files, and it is also possible to create a teaching file within the same frame. The radiologist can mark the key-interest images and then create the teaching file by filling in simple forms: the case will be created automatically, taking care to save the images with the correct parameters (window level) and without patient personal data. A tool that helps the physician remove built-in patient data from secondary-capture images is in a development phase. One of the most important features of O3-RWS is that it is “multi-monitor based” rather than simply “multi-monitor supporting”: it is therefore possible to use it on a single notebook or in a three-device mode, but the key point is that it is made to support the everyday clinical workflow. The user can personalize the image-viewer bounds and the viewer layout in order to manage different series/studies synchronization and hanging protocols (Fig. 3).

C. O3-RWS IHE Compliance

According to the IHE Actors’ definition, O3-RWS, in the version of May 2006 (Barcelona European Connectathon), has been developed and implemented as: 1) an Image Display, the actor displaying radiological images and added objects, allowing queries and several kinds of access to information; 2) a Secure Node; 3) a Time Client. Five main IHE integration profiles necessary to work with the Image Manager/Archiver have been developed and implemented: ARI: Access to Radiology Information; KIN: Key Image Notes; CPI: Consistent Presentation of Images; SWF: Scheduled Workflow; ATNA: Basic Security.
Fig. 4 Key Image Notes (KIN) IHE profile operating in the O3-RWS Workstation
Analogously to O3-DPACS and to the previous workstation HDW2 [8], O3-RWS has also been opened during the last year to the new frontiers of a) the organization of reporting information flows and b) the exchange (retrieval) of documents and images across healthcare enterprises. To achieve the first goal, the following modules, already implemented in HDW2, are being implemented in O3-RWS: 4) Evidence Creator, to create added objects on the image, such as notes or graphic references; 5) Report Creator, to produce electronic structured reports; 6) Report Reader, to read electronic structured reports. The following integration profiles have been associated with these new actors: KIN: Key Image Notes, applied to the Image Display, the Evidence Creator and the Portable Media Creator actors (Fig. 4); RWF: Reporting Workflow (see above for its function), applied to the Report Creator and the Report Reader actors. To achieve the second goal, i.e. the exchange of documents and images across healthcare enterprises, two new IHE actors, already implemented in HDW2, are being implemented in O3-RWS: 7) Document Consumer; 8) Imaging Document Consumer. Analogously, the following integration profiles, already implemented in HDW2, are being associated with them: XDS: Cross-Enterprise Clinical Document Sharing, applied to the Document Consumer actor, allowing the exchange of health information, with adequate access control, making it retrievable by all health-professional figures external to the health enterprise who co-operate in guaranteeing the citizen's health; XDS-I: Cross-Enterprise Document Sharing for Imaging, applied to both actors, i.e. the Document Consumer and the Imaging Document Consumer.

IV. DISCUSSION AND CONCLUSIONS

O3-RWS is the first open-source, Java-based workstation in the world; moreover, it offers 3D reconstruction tools, multi-monitor management and MIRC support at the same time. It is simple to use, tailored to the real needs of the physicians, in particular of the radiologists, who like using it very much. The extension of O3-RWS to O3-UWS opens many doors related to the citizen-centric integration of hospitals, RHIOs and personal environments (at home and on the move). The full adherence of O3-RWS to IHE allows simple and successful interoperability, and therefore integration in any environment. Its modularity, scalability and OS-independent features assure large portability and an easy, smooth growth of installations. Some words must be spent on the use of Java graphics technology: which one is the best for medical image displays? One focal point is the choice of Java as the programming language for displaying radiological images on medical devices, which have very large resolutions: the first issue we dealt with was the lack of performance when displaying heavy images (>20 MB) on big-scale monitors (3-5 Mpixels). The first step to solve this problem was to survey the special technologies available today: the JAI libraries and various ways of interfacing Java with OpenGL have been tested. In the end, Java2D management gave the best performance: JAI and OpenGL were weak in window-leveling and in zooming operations with big-scale volumes, while they exhibited performances similar to Java2D for small images (CT, MR). After having obtained this result, a six-month performance-tuning step was started, with the aim of optimizing to an extreme degree the management of classes, fields and methods: special engines and workflows have been created to speed up the display step and to optimize and lower memory usage. The results obtained fully enable the use of Java also for high-performance medical displays.
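The window-leveling operation that dominated the Java2D performance work can be illustrated with a simple 8-bit lookup table; this is a simplified linear mapping, not the exact O3-RWS code:

```java
// Sketch of window-leveling via a 256-entry lookup table: raw 8-bit pixel
// values are mapped to display values for a given window center/width.
// A LUT like this can be applied to a BufferedImage with java.awt.image.LookupOp.
public class WindowLevelSketch {

    // Builds a LUT: values below (center - width/2) map to 0, values above
    // (center + width/2) map to 255, with a linear ramp in between.
    static byte[] buildLut(double center, double width) {
        byte[] lut = new byte[256];
        double low = center - width / 2.0;
        for (int v = 0; v < 256; v++) {
            double out = (v - low) * 255.0 / width;
            lut[v] = (byte) Math.max(0, Math.min(255, Math.round(out)));
        }
        return lut;
    }

    public static void main(String[] args) {
        byte[] lut = buildLut(100, 50); // narrow window centered at 100
        System.out.println((lut[50] & 0xFF) + " "
                + (lut[100] & 0xFF) + " "
                + (lut[200] & 0xFF)); // prints "0 128 255"
    }
}
```

Precomputing the table means each displayed pixel costs one array access, which is why a tuned LUT pipeline can keep window/level interaction smooth even on 3-5 Mpixel monitors; real DICOM data would use a larger table for 12- or 16-bit pixels.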
REFERENCES
1. Diminich M., Inchingolo P., Magliacca F., Martinolli N. (1993). Versatile and open tools for LAN, MAN and WAN communications with PACS. In: Comput. Biomed., Held, Brebbia, Ciskowski, Power (Eds), Comp. Mech. Pub., Southampton, pp. 309-16.
2. Fioravanti F., Inchingolo P., Valenzin G., Dalla Palma L. (1997). The DPACS Project at the University of Trieste. Med. Inform., 22(4):301-14.
3. Inchingolo P., Pozzi Mucelli R. (Eds) (2004). EuroPACS-MIR 2004 in the Enlarged Europe. EUT, Trieste, ISBN: 88-8303-150-4.
4. Inchingolo P. et al. (2004). DPACS-2004 becomes a java-based open-source modular system. Idem, pp. 271-6.
5. Miniussi E. et al. (2004). HDW2: a powerful and customization-friendly java-based Dicom Workstation. Idem, pp. 215-8.
6. Inchingolo P. (2006). The Open Three (O3) Consortium Project. In: Open Source Strategy for Multi-Center Image Management, https://www.mcim.georgetown.edu/MCIM 2006.
7. Saccavini C. (2004). The MARIS project: open-source approach to IHE radiological workflow software. Idem, pp. 285-7.
8. Inchingolo P. et al. (2006). O3-DPACS Open-Source Image-Data Manager/Archiver and HDW2 Image-Data Display: an IHE-compliant project pushing the e-health integration in the world. Comput. Med. Imag. Graph., 30:391-406.
9. Vatta F., Bruno P., Inchingolo P. (2005). Multi-region bicentric-spheres models of the head for the simulation of bioelectric phenomena. IEEE Trans. Biomed. Eng., 52(3):384-9.

Author: Giorgio Faustini
Institute: SSIC-HECE, DEEI, University of Trieste
Street: Via Valerio, 10
City: Trieste
Country: Italy
Email: [email protected]
Open Source in Health Care: a milestone toward the creation of an ICT-based pan-European health facility
D. Dinevski1, P. Inchingolo2, I. Krajnc1, P. Kokol3
1 Open Three Consortium, Faculty of Medicine, University of Maribor, Slovenia
2 Open Three Consortium, Higher Education in Clinical Engineering, DEEI, University of Trieste, Trieste, Italy
3 Open Three Consortium, Faculty of Electrical Engineering and Computer Science, Slovenia
Abstract— The Open Source model has gained high credibility in the last few years, and its benefits have recently been proved also in complex applications. Open Source in health care is growing, but its breakthrough is still to come, although several success stories have already been recorded. There are some specifics of Open Source in health care, which are investigated in this paper. On the basis of these specifics, some recommendations are proposed to reach interoperability and integration. Very recently, political, administrative and industrial decisions, taken in particular in the United States and in the European Union, have pushed Open Source in health care and in the creation of stable solutions for independent living and active aging. These policies have allowed the growth of some international initiatives, such as the Open Three (O3) Consortium, which are consistent with the recommendations proposed here and represent a milestone toward the creation of an ICT-based pan-European health facility, one of the main goals of the Information-Society-based policy of the European Union.

Keywords— Open Source; Health Information Systems; Open Standards; Interoperability; Health Care Integration.
I. INTRODUCTION

The Open Source (OS) model, as defined by the Open Source Initiative (OSI - http://www.opensource.org/), has a lot to offer. It is a way to build open standards as actual software, rather than paper documents. It is a way for many companies and individuals to collaborate on a product that none of them could achieve alone. It has been shown (the references are listed on the mentioned OSI web page) that OS generally means higher security and higher reliability. The real-world evidence shows that OS also brings robustness, clear flexibility and higher quality compared to closed software in general. In the “Bazaar-mode” development described in the highly cited and excellent source on OS philosophy, "The Cathedral and the Bazaar" (http://www.catb.org/~esr/writings/cathedral-bazaar/), one can expect higher development speed and lower overhead. When advocating OS in health care applications, most readers will look for the benefits to the “customers” rather than those to the developers. What is the main advantage that
the OS applications bring to hospitals or healthcare institutions? They do not become prisoners by implementing the information systems into their daily routine. Because they can get access to the source, they can survive the collapse of their vendor. They are no longer at the mercy of unfixed bugs. And if the vendor's support fees become inflated, they can buy support elsewhere. Open-source software has had success in horizontal applications, i.e. applications that are useful in many different industries, such as enterprise resource planning and customer relationship management. But open source has had less impact vertically, in applications specific to one single industry, such as health care. The potential market for vertical applications is smaller than that for horizontal applications. Furthermore, the health care industry historically has not made IT a top priority, so it lags behind other, more IT-intensive industries, such as financial services and Internet businesses, in adopting OS.

II. EUROPEAN DIMENSION OF OPEN SOURCE

Is there any specific European position in the OS world? A new study on the economic impact of Open Source Software on the European information and communication technologies field [1] has found that it would certainly increase Europe's competitiveness. "Given Europe's historically lower ability to create new software businesses compared to the US, due to restricted venture capital and risk tolerance, the high share of European OS developers provides a unique opportunity to create new software businesses and reach towards the Lisbon goals of making Europe the most competitive knowledge economy by 2010," states the report, which was requested by the European Commission's Enterprise DG. Further, it says that a growth and innovation simulation model shows that increasing the OS share of software investment from 20% to 40% would lead to a 0.1% increase in annual EU GDP growth, excluding benefits within the ICT industry itself.
The report suggests that Europe is in a good position to increase its €22 billion investment in OS (compared to €36 billion in the US), considering that 63% of all OS developers are resident in the European Union, while only 20% are in the USA and Canada.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 719–722, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
D. Dinevski, P. Inchingolo, I. Krajnc, P. Kokol
III. OPEN SOURCE IN HEALTH CARE Clinicians’ use of information systems and the ability of these systems to share patient data are two critical steps in the transformation of health care. But these accomplishments also pose difficult IT challenges — among them, how to share patient information beyond the walls of individual institutions and clinics, how to bring health care providers into regional networks that can easily and securely exchange that information, and how to expand the use of electronic medical records. Around the world, health care software is moving from hospital-centered departmental systems to patient-centered medical records that are distributed across networks. These changes mirror and support organizational changes in the health care industry. Whether the change is called "managed care", "regional health", or "community health networks", health care providers and their software needs are in transition. Open source software has intriguing potential to solve some of the obstacles now being encountered in this transition: - Open source reference implementations of electronic medical record standards could speed their adoption and increase interoperability in practice. - Open source software could reduce the issue of "Who pays?" in community health networks by eliminating per-user and per-site license costs. A very good analysis of open source in health care is given in reference [2], which also suggests actual solutions with some concrete software descriptions. A. Interoperability and integration in relation to the OS health care history The IT systems in health care are relatively complex and interdisciplinary – in a typical healthcare institution they deal with information exchange and processing, knowledge management, process integration, research, collaboration, teaching, delivery, evaluation… It is not difficult to guess that the magic words here are “interoperability” and “integration”. 
With closed-source information systems, interoperability is achieved rarely and slowly, while in the collaborative environment generated by a critical mass of OS developers, interoperability (especially between OS solutions) is relatively easy to achieve. The guiding principle of OS development in health is collaboration in an open, problem-based, evidence-guided manner. There are several success stories of OS in health care. Early ones (until 2003) are well documented under the SPIRIT project funded by the European Commission (http://www.euspirit.org/). Others are WorldVista (integrated hospital EHR, multimedia patient record), PrimaCare, OSCAR, JEngine, OpenEMed, HAPI and others. In the same paragraph it has to be mentioned that several early (from 1998 to 2001) initiatives failed, mostly due to the premature state of technology and, at that time, the low trustworthiness of OS for accomplishing critical missions. B. Costs associated with the use of OS in health care One of the most popular arguments for OS is “lower cost” and, while several studies and cases that prove this argument are publicly presented, there are few documented cases for the health care environment and almost none for hospitals. As a well-documented example from health care, we recommend the study from Beaumont Hospital published by Fitzgerald and Kenny in 2003 [3]. The conclusion of the study is that OS brings substantial savings to the hospital in several fields of IT solutions over a 5-year period: desktop applications by a factor of 8, content management by a factor of 4, digital imaging by a factor of 30 and application servers by a factor of 9. C. Territorial impacts of OS in health care Currently, patient care is provided by an unconnected collection of often competing facilities, including hospitals, physician offices, home health agencies, clinical laboratories, and rehabilitation centers. These providers need to be woven into regional networks that can easily share and exchange patient information in order to provide the best possible, and most cost-effective, care. Such networks will depend on health care IT. By facilitating the adoption of standardized electronic medical records, open source software may contribute to the creation of regional health information networks, which exchange data and patient records. Access to source code will allow each region to adapt the software to its specific requirements without having to develop an entire software suite from scratch. 
In turn, regional innovation will filter back to the larger health care community, advancing the technology while minimizing costs. D. Specifics of OS development for health care Health care organizations that develop their own applications using their own programming staff find open source software attractive for four main reasons: - low cost and ease of acquiring the software, - a growing selection of OS projects, which results in independence from suppliers, - wide support for open source standards, - flexibility: the ability to view and modify the source code.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Open Source in Health Care: a milestone toward the creation of an ICT-based pan-European health facility
On the other hand, these organizations often must arrange their own project support. They face the risk that the community developing the open source software may become inactive and cease enhancing it. Small health care organizations, such as group practices, clinics, and most hospitals, do not develop any software themselves. They may engage consultants to customize standard, off-the-shelf applications. For them, the benefit of open source is the lower acquisition cost of the many pieces of software they need to build a complete IT environment. Also, relying on open source solutions gives them greater flexibility and more options in the future. IV. NEW OPEN SOURCE IN HEALTH CARE POLICIES Over the last year, a strong common trend has been noticed in the industrialized and political world to use open source massively in health care, in ambient assisted living and in the management of independent life in the aging society. The first important event was the Open Source Strategy for Multi-Centre Image Management Workshop, held in March 2006 in Las Vegas (USA). The workshop was organized by the US federal government (Dept. of Defense and Dept. of Health & Human Services) together with one key industry and three universities. Invited speakers came also from NIST, FDA, other industries and from the O3 Consortium. The workshop evidenced the strong position assumed by the Department of Health & Human Services and the Department of Defense of the United States to use open source at least in all the health care core activities, to assure stability of the products, easy recovery from the disappearance of a producer and also lower costs. A second important event was the OSDL Joint Initiatives Face to Face Meeting Review – Health Care Information Exchange, held in May 2006 at Sophia-Antipolis (France). OSDL, the world’s biggest industrial open source association, decided to move to open source products for health care in the next 4-5 years. 
Finally, the European Union made big decisions about open source. During the Intergovernmental Meeting of the European Commission “ICT for an Inclusive Society”, held in June 2006 in Riga (Latvia), open source was considered, together with open standards, a key solution for an inclusive society, to be used for health care, for ICT-based ambient assisted living and for the management of independent life in the aging society. This roadmap has been formalized with the Inter-ministerial Riga Declaration signed during the meeting.
After Riga, many European Union events followed in preparation for and at the start-up of the Seventh Framework Programme, the largest European research plan, having a duration of 7 years (2007-2013) and being dominated by the ICT and Health sectors. The IST event in Helsinki in November 2006, the Information Days on ICT for Independent Living and Inclusion in January 2007 in the European Parliament in Brussels and the IST event in Cologne in February 2007 underlined the strong roadmap through open source in health and for inclusion and independent life adopted by Europe. The content of Challenges 5 (Health) and 7 (ICT for Independent Living and Inclusion) of the ICT sector of the Seventh Framework confirms, with the first 2 calls, this important decision of the European Union. V. A CASE STUDY: THE OPEN THREE (O3) CONSORTIUM These new European and world-wide policies of using open source in health have allowed the growth of some international initiatives, such as the Open Three (O3) Consortium. This initiative is consistent with the recommendations proposed by the European Union and the US Federal Government and is a candidate to represent a milestone toward the creation of an ICT-based pan-European health facility, which is one of the main goals of the Information Society based policy of the European Union. The Open Three (O3) Consortium International Project was constituted in 2005 by the Higher Education in Clinical Engineering (HECE) of the University of Trieste, at the Dipartimento di Elettrotecnica, Elettronica ed Informatica of the University of Trieste [5]. 
The O3 Consortium is an innovative open-source project dealing with the multi-centric integration of hospitals, RHIOs and citizens (care at home and on the move, and ambient assisted living), based on about 60 HECE bilateral cooperation agreements with hospitals, medical research centers, healthcare enterprises, industrial enterprises and governmental agencies, and on the international networks ABIC-BME (Adriatic Balcanic Ionian Cooperation on Biomedical Engineering) and ALADIN (Alpe Adria Initiative Universities’ Network). The Users’ and Developers’ O3 Consortium Communities are based mainly on the HECE agreements. The Developers’ Community started, under the responsibility and administration of HECE, with the main contributions of the Universities of Trieste and Padova in Italy and of the University of Maribor in Slovenia, and it grew with many other European and US contributions, from universities, research centers, industries and e-health service companies. It provides the active members of the Users’ Community with all the necessary project design, site analysis,
implementation, logging, authoring, bug-solving, and high-level 7/7 - 24/24 full-risk service. Additionally, training is given high priority by HECE, starting with preparing clinical engineering professionals at three different levels, offering both traditional and e-learning courses with particular skills in clinical informatics, health telematics, e-health integration standards and IHE-based interoperability, and providing also specific courses and training on site. Furthermore, selected radiologists of the Active Users’ Community – where O3 is running (in Italy, in Trieste, Padova, Pisa and Siena, and in Slovenia in Maribor) – constitute a Medical Advisor Committee, which gives very valuable feedback to the O3 Developers’ Community. Some months ago, the collaboration with multiple open-source solutions was extended, starting an international cooperation with the open-source based company Sequence Managers Software, Raleigh, NC, United States [6]. The O3 Consortium proposes e-inclusive citizen-centric solutions to cover the above reported three main aspects of the future of e-health in Europe with open-source strategies joined to full-service maintenance and management models. The main characteristics of the O3 open-source products are multi-language support, high scalability and modularity, use of Java and Web technologies at any level, support of any platform, a high level of security and safety management, support of various types of databases and application contexts, treatment of any type of medical information, i.e., images, data and signals, and interoperability through full compliance with IHE, obtained by building up O3 as a collection of “bricks” representing the IHE “Actors”, connected to each other through the implementation of a wide set of IHE Integration Profiles [7]. VI. 
DISCUSSION AND CONCLUSIONS The other benefits of open source software — low cost, flexibility, opportunities to innovate — are important, but independence from vendors is the most relevant for health care. A great deal of software development will be required over the next decade to build the technical infrastructure and applications necessary for electronic medical records that can be easily and securely shared by regional health information networks. To this end, having a vendor-neutral, open-source software platform to invest in is probably the best way to channel foundation and public-sector funding into software development for the purpose of providing higher quality, less expensive medical care. The O3 Consortium initiative seems to represent a significant example of a contribution in this direction.
Open source in health care is considered today a milestone toward the creation of an ICT-based pan-European health facility in Europe and a facilitator of the multi-centric integration of health services in the United States. It is also considered a pillar for the inclusive implementation of ICT for independent life and active aging in Europe. Once again, the European and US-based international policy and activity of the O3 Consortium seem to represent one qualifying demonstration that the open source in health care and inclusion roadmaps of the European Union and of the United States can have short-term, although at the moment only partial, solutions.
REFERENCES
1. Ghosh R. A. (2006) Economic impact of open source software on innovation and the competitiveness of the Information and Communication Technologies (ICT) sector in the EU, MERIT 2006, http://ec.europa.eu/enterprise/ict/policy/doc/2006-1120-flossimpact.pdf, 1.1.2007
2. Goulde M., Brown E. (2006) Open Source Software: A Primer for Healthcare Leaders, California Healthcare Foundation, available online at: http://www.chcf.org/documents/ihealth/OpenSourcePrimer.pdf, 1.1.2007
3. Fitzgerald B., Kenny T. (2003) Open Source Software can Improve the Health of the Bank Balance - The Beaumont Hospital Experience, http://opensource.mit.edu/papers/fitzgeraldkenny.pdf, 1.1.2007
4. Ministerial Riga Declaration, 11 June 2006, Riga, Latvia, http://ec.europa.eu/information_society/events/ict_riga_2006/doc/declaration_riga.pdf
5. Inchingolo P. (2006) The Open Three (O3) Consortium Project. In: Open Source Strategy for Multi-Center Image Management, https://www.mcim.georgetown.edu/MCIM2006
6. Inchingolo P., Lord B. (2007) International medical data collaboration with multiple open-source solutions. In: Open Source Strategy for Multi-Center Image Management, St. Louis, Missouri, USA, http://www.mcim.georgetown.edu/MCIM2007
7. Inchingolo P. et al. (2006) O3-DPACS Open-Source Image-Data Manager/Archiver and HDW2 Image-Data Display: an IHE-compliant project pushing the e-health integration in the world, Comput. Med. Imag. Graph., Elsevier Science 30: 391-406

Author: Dejan Dinevski
Institute: Faculty of Medicine, University of Maribor
Street: Janeziceva 5
City: Maribor
Country: Slovenia
Email: [email protected]
Reducing time in emergency medical service by improving information exchange among information systems
A. Jelovsek¹, M. Stern¹
¹ Computel d.o.o., Ljubljana, Slovenia
Abstract— Many organized units are involved in performing an emergency rescue mission: the dispatch center, mobile rescue units and emergency departments (ED) in hospitals. Communication among them is often not fully automated, so personnel must cope with unnecessary work. That, of course, takes time in cases of urgent interventions, while time is one of the most important factors for patient survival. There are several processes in which better performance could be established. Improvement can be made by reducing communication obstacles between actors in processes and among the three different information systems involved: the hospital information system (HIS) in the emergency department, computer aided dispatch (CAD) and the records management system (RMS) used by mobile units. The unreliability of verbal information exchange, paper-sharing problems and retyping of data from system to system can be removed in many processes: hospital staff e-ordering from the HIS, call taker to dispatcher in the dispatch center, dispatcher to mobile unit, and mobile unit to the emergency department in the hospital. With the establishment of paths among these three information systems (HIS, CAD and RMS), precious saved minutes can be used in the battle for the patient’s life. Improvements can also be achieved in cost-effectiveness. Data exchanged among the involved information systems and gathered into a central database can be very useful for the needs of accountancy, EMS operation improvement management and EMS quality assurance management. Keywords— information system, information exchange, emergency medical system, XML, HL7
I. INTRODUCTION This paper presents the current situation in the EMS sector by analyzing the actors in that sector and the communication flows among them. An EMS system usually consists of three organized units that use different information systems. Every unit has to get proper information at the appointed time to take prompt action. To serve the needs of the units, every information flow has to meet as little obstruction as possible on its way to the user of that information. This article introduces the different processes that take place in EMS activity. Each of these processes has its problems when conducted in the old-fashioned way. For each of the problems we will find solutions by using knowledge from medical informatics. These problems are:
• Paper information cannot be shared simultaneously
• Availability of a call taker
• Lack of efficient control over mobile units
• Inefficiency of paper on terrain
• Poor and delayed information in the ED for further hospital treatment
• Lack of cooperation among mobile units from different rescue stations
• Manual retyping when elaborating accounting and statistic analysis
At the end we will sketch an integrated modular solution to dismiss the problems which occur in the processes under investigation, and some conclusions will be presented. II. PROBLEMS AND SOLUTIONS FOR DEFINED PROBLEMS A. Paper information cannot be shared simultaneously Usually, when a dispatch center is in full operation, there are at least two dispatchers handling telephone calls for urgent and non-urgent interventions. The call taker takes care of proper information acquisition and professional guidance of the emergency-situation patient/eyewitness. The dispatcher manages the optimal mobile unit spatial distribution and assigns optimal mobile crews to incidents according to the data collected by the call taker. Because paper communication media cannot be shared among many persons at the same time, time is spent inefficiently; a computer program could resolve the delay between acceptance of information and allocation of the mobile unit to an incident. That kind of disturbance can be abolished by establishing a user-friendly dispatch application with a review of online data. That solution is appropriate even for dispatch centers with separated rooms. We estimate that time can be reduced by 10-30 seconds on average, and in some cases by up to several minutes, if the call taker has a lot of calls to handle and does not have enough time to pass filled paper forms to a dispatcher. B. Availability of a call taker The call taker receives urgent and non-urgent calls in the order in which the clients call. Non-urgent calls are continually made from different hospital wards for transpor-
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 704–707, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
tations of patients between different hospitals for several reasons. The call taker can be busy accepting that kind of call at the very time someone else is making an urgent call. Sometimes it takes a lot of waiting on the line before the call taker has a chance to answer an emergency call. In the meantime, a seriously injured patient in the field can bleed out if the caller cannot get helpful information on how to help the injured or if a mobile unit is not fast enough. Sometimes seconds count. By developing communications between the HIS (as a non-urgent transport orderer) and CAD, all standard non-urgent transportations can be ordered automatically by simple transmission of data from a hospital to a dispatch center via XML/HL7 protocol levels. In that case the call taker has a free line for a call that urgently needs to be answered. During the day (when the majority of hospital staff is working) a lot of non-urgent transports are ordered. In larger towns and cities, for every fifth non-urgent call one urgent call is made, which could potentially not be answered in time because of an unavailable call taker. C. Lack of efficient control over mobile units In the earlier cases we focused mostly on the call taker, but for shortening the reaction time of an intervention the efficiency of dispatchers is as important as the efficiency of the call taker. Assigning an intervention to a vehicle must be made by consideration of different factors like:
• Location of the incident
• Classification of the incident
• Availability of mobile units
• Location of mobile units
• Equipment of mobile units
• Knowledge of mobile unit staff
• Etc.
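The factor-based assignment above can be sketched in code. The following is a minimal, purely illustrative example (the field names, coordinates and distance-based ranking are our assumptions, not part of any actual CAD product): filter the units that are available and adequately equipped, then pick the candidate closest to the incident by great-circle distance.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS positions, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_unit(incident, units):
    """Return the closest available unit carrying the required equipment."""
    candidates = [
        u for u in units
        if u["available"] and incident["needs"].issubset(u["equipment"])
    ]
    if not candidates:
        return None
    return min(
        candidates,
        key=lambda u: haversine_km(u["lat"], u["lon"],
                                   incident["lat"], incident["lon"]),
    )

# Hypothetical fleet snapshot as the CAD might hold it.
units = [
    {"id": "A1", "available": True,  "equipment": {"defibrillator"},
     "lat": 46.05, "lon": 14.51},
    {"id": "B2", "available": True,  "equipment": {"defibrillator", "ventilator"},
     "lat": 46.07, "lon": 14.52},
    {"id": "C3", "available": False, "equipment": {"ventilator"},
     "lat": 46.06, "lon": 14.50},
]
incident = {"lat": 46.08, "lon": 14.53, "needs": {"ventilator"}}
print(pick_unit(incident, units)["id"])  # B2: nearest available unit with a ventilator
```

A real dispatch system would also weigh crew skills and incident classification, as the list above suggests; distance is only one of the factors.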
Usually the dispatcher uses a radio station to collect that information or keeps all available vehicles in a centralized garage. Gathering that information takes a lot of time before locating the nearest capable mobile unit, and a centralized garage is often not the best solution, especially if the coverage area is relatively large. Sophisticated software in CAD can gather data automatically, quickly and more efficiently than the methods listed earlier. Of course there is a need for some special hardware, like a GPS positioning system and communication media sending data through GSM/GPRS, UHF radio, TETRA or others. Information exchange between RMS and CAD has to be as quick and as reliable as possible in order to get the right data at the right time. Locating the nearest mobile units via GPS, automatic checking of their availability, abilities and equipment, and automatic delivery of data from the dispatch center to the mobile unit
and vice versa could improve productivity and response times. A rough estimation shows that at least 10% (and up to 50%) of the time spent assigning an intervention to an adequate mobile unit can be saved using improved and compatible software in the dispatch centre and mobile units. D. Inefficiency of paper on terrain Many mobile unit crews use paper to write reports of the interventions they make. That means rewriting data from paper to a central database (for analysis purposes) later, when they come to the central garage. It can also mean rewriting some data, for example from an EKG monitor device. Paper is an easily destructible medium to write on (drops of blood, tearing, spilled liquid…) and not always the most appropriate to use in some circumstances. Mobile units should use a sophisticated hardware device which stores the data from an intervention the first time they enter it. Because of better performance in all kinds of conditions, rugged Tablet PCs with special RMS software are likely more appropriate than ordinary laptops (too vulnerable). A very important time-saving factor is the synchronization between data in the RMS and the central database (DB), because data from the RMS do not need to be re-typed into the central database. Other connections can be automated too, for example a direct automatic connection to EKG and vital signs monitoring. With the usage of protocols for intervention procedures, the mobile unit software can be used as a paramedic guide through the procedures of pre-hospital patient treatment. E. Poor and delayed information in the ED for further hospital treatment An intervention does not usually end on the terrain but in the hospital ED, where the patient receives further treatment by doctors and other medical staff. Urgent patients are treated by special hospital emergency departments. It is very important that preparation of surgery rooms and equipment is made before the patient arrives at the hospital. Life-saving activity can be more effective that way.
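To illustrate what such an RMS-to-ED pre-arrival message could look like, here is a hedged sketch using a plain XML document built with Python's standard library. The element names and fields are hypothetical; a real deployment would follow an agreed XML schema or HL7 message definitions, as discussed for the HIS-CAD link above.

```python
import xml.etree.ElementTree as ET

def build_prearrival_notice(patient_id, eta_min, condition, vitals):
    """Serialize a pre-arrival notice sent from the RMS to the hospital ED.

    All element names here are illustrative assumptions, not a standard.
    """
    root = ET.Element("PreArrivalNotice")
    ET.SubElement(root, "PatientId").text = patient_id
    ET.SubElement(root, "EtaMinutes").text = str(eta_min)
    ET.SubElement(root, "Condition").text = condition
    v = ET.SubElement(root, "Vitals")
    for name, value in vitals.items():
        m = ET.SubElement(v, "Measurement", name=name)
        m.text = str(value)
    return ET.tostring(root, encoding="unicode")

msg = build_prearrival_notice("12345", 8, "suspected myocardial infarction",
                              {"pulse": 110, "bp_systolic": 90})
# The receiving HIS can parse the same document and route it to the ED:
doc = ET.fromstring(msg)
print(doc.findtext("EtaMinutes"))  # 8
```

With such a message arriving minutes ahead of the ambulance, the ED staff can study the data in calmer circumstances and prepare rooms and equipment in advance.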
Fig. 1 Example of possible hardware to use in a mobile unit
Delays and miscommunication occur in transferring data regarding the patient’s condition from paramedics to the doctor in the ED. ED doctors get too little data on the patient soon enough to adequately prepare for how to help the patient. If they had all these data before the patient arrived, they could make all the preparations mentioned earlier. Data which arrive at the hospital before the patient can be studied in much calmer circumstances, and a better medical decision can be made than in the rush when the patient arrives. That shows the necessary information exchange between the RMS and the HIS in an ED. F. Lack of cooperation among mobile units from different rescue stations A lot of transports between hospitals in a country are made every day for patients who need examinations at another location in the country. Usually a mobile unit from one part of the country delivers the patient to a specialist in a hospital in another part and then waits for him/her until the end of the examination. That reduces the total availability of the unit, especially if the examination of a patient takes a longer time or there are many such transports. When rescue stations send their mobile units to distant places, their capacities are reduced and they cannot use those mobile units. Computer engineering could help fix that kind of dilemma with an information system that supports mobile unit control exchange among dispatch centers’ ISs. Dispatchers could use mobile units from another rescue station when those units come into their area of coverage and are waiting for the patient to finish the examination with the specialist doctor. Capacities of mobile units can be stabilized that way and there should be no fear of a lack of available vehicles. Therefore the response time to non-urgent transport requests can be shortened by about 10-20% according to some estimations, and ambulance vehicle utilization can be improved by the same amount. G. 
Manual retyping when elaborating accounting and statistic analysis An EMS system is quite a complex and diverse cooperation among several organizations. There are lots of differently educated employees (from paramedics, dispatchers and mechanics to doctors, accountants and managers), a vehicle park with a large number of expensively equipped paramedic vehicles, and infrastructure. But all these assets that enable the organizations to produce public goods need financing. People need salaries; vehicles and buildings need upkeep. So it is very important to use the collected data on past business processes for proper accountancy and statistic analysis.
When the accounting department uses the data directly obtained from the hospital e-order, the data are complete and hold all the insurance and accounting details, and there is no need to complete the data by retyping from the paper forms that accompany the patient into the billing system, which is usually a software module in the HIS. This reduces the number of mistakes in accounting procedures and decreases the number of complaints made by insurance companies. Statistics are usually elaborated by the management of the EMS unit pursuing economic efficiency and quality assurance. Elaborating statistical analysis automatically is quite a different thing from elaborating statistics manually. There is also reporting to the Ministry of Health that is required on a monthly basis. Usage of a central database when elaborating statistics can shorten time, which is important from an economic point of view, and can reduce the probability of making mistakes that may have heavy impacts. That means more cost efficiency in the EMS unit, better management of changes according to pitfalls seen in the past and stronger insight when conducting quality assurance procedures. III. INTEGRATED MODULAR SOLUTION The analysis of several problems in EMS activity shown above brings us to conclusions on how to make the entire process more efficient. All of the involved information systems (CAD, RMS and HIS) must exchange information in a way that reduces verbal communication and paper or any other ineffective way of data exchange. Carefully engineered applications and structured databases can serve as a good basis for automatic information exchange among different information systems. We show the involved information systems, actors and communication flows in a simple picture to give an impression of how we think they should be connected. 
Figure 2 shows the processes and data exchange among the different information systems and their users, from placing an order or calling the dispatch (call) center all the way to final hospital treatment by a doctor in a hospital. We can see that automation of processes and improved information exchange among the different information systems involved can spare an EMS system a lot of precious time.
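The flow just described can be summarized as a chain of modules, each enriching a shared intervention record as it passes through. The sketch below is purely illustrative; the function names, the chosen unit and the record fields are our assumptions, not taken from the actual systems.

```python
# Each module of the integrated solution handles one stage of the
# intervention and adds its part to a shared event record.

def his_order(event):
    # Hospital staff place an e-order in the HIS.
    event["order"] = {"patient": "12345", "kind": "urgent"}
    return event

def dispatch(event):
    # The dispatch IS (CAD) selects and assigns a mobile unit.
    event["unit"] = "B2"
    return event

def mobile_treatment(event):
    # The mobile IS (RMS) records pre-hospital treatment data.
    event["prehospital"] = {"pulse": 110}
    return event

def ed_handover(event):
    # The HIS in the ED receives the pre-arrival data.
    event["ed_notified"] = True
    return event

pipeline = [his_order, dispatch, mobile_treatment, ed_handover]
event = {}
for step in pipeline:
    event = step(event)
print(sorted(event))  # ['ed_notified', 'order', 'prehospital', 'unit']
```

The point of the modular structure is exactly this hand-off: each system contributes its data once, and no stage requires verbal relaying or retyping from paper.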
Fig. 2 Full concept of modular structure (call center: phone call – caller; hospital IS: e-ordering – hospital staff; dispatch IS: dispatching – dispatcher; mobile IS: pre-hospital treatment – doctor/paramedic; hospital IS: hospital treatment – doctor)
IV. CONCLUSIONS From a life-saving point of view, significant progress in response time can be achieved. According to some estimations, complete information exchange among the ISs, supported by some organizational changes and work methodologies [1], can: 1. Achieve the standardized 60 seconds of urgent vehicle activation time [2], improving by up to 5 minutes the interval in the worst-case scenario, where the national emergency number 112 is called, then transferred to the EMS rescue station, and again transferred to a doctor to decide whether he/she will participate in the terrain intervention. 2. Improve the maximum vehicle driving time to an urgent incident from 20 min to 10 min by carefully allocating vehicles across the terrain. Reducing time to treatment from 20 min to 10 min in cardiac arrest conditions means a 100% improvement in the probability of survival. 3. Shorten the waiting queue for patients in non-urgent transports by 10%. Better utilization of vehicles by 10% means approximately 10% better economic results; that means 4 million EUR per year at the national level in Slovenia [3]. 4. Cause additional considerable improvements in the field of more productive accounting, management and quality assurance. Updating (modernizing) the information systems and the data exchange among them can solve some economic problems too, from accurate accounting to future planning of the capacities and organization of EMS unit(s). In the end we can say that informatization of this kind of institution has so many positive effects that the investment to build the proper informatics infrastructure is practically negligible.
ACKNOWLEDGMENT The authors would like to thank the emergency medical staff of the University Clinical Centre in Ljubljana, especially Mr. Andrej Fink and Mr. Janez Persak, for sharing their knowledge. We also thank Dr. Mitja Mohor for his suggestions for mobile terminals, and Mr. Branko Kozar and Mr. Darko Cander for sharing their insight into the cooperation between the 112 service and EMS.
REFERENCES
1. Watson E R, McNeil M, Biancalana C (2003) Emergency medical services dispatch program guidelines. State of California - Health and Human Services Agency, Sacramento, CA, USA
2. Fink A, Jelovsek A (2004) Computer aided dispatch/dispatch software. 11th International Symposium on Emergency Medicine, Portoroz, Slovenia, 2004, pp 441-443
3. Jelovsek A (2005) Holistic approach to EMS information system. 12th International Symposium on Emergency Medicine, Portoroz, Slovenia, 2005

Author: Matic Stern
Institute: Computel d.o.o.
Street: Teslova ulica 30
City: Ljubljana
Country: Slovenia
Email:
[email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Reshaping Clinical Trial Data Collection Process to Use the Advantages of the Web-Based Electronic Data Collection I. Pavlovic1 and I. Lazarevic2 1
Institute for Project Management and Information Technology IPMIT d.o.o., Ljubljana, Slovenia 2 OMNICOM d.o.o., Belgrade, Serbia
Abstract— In this paper some results of modeling a web-based clinical trial electronic data collection process are presented. Clinical trial data collection is usually a paper-based process. Here we propose a new process of electronic data collection adjusted to the utilization of web-based technology. The two models that we present here are the basis for the development of electronic data collection software to support clinical trials. The first defines the use cases of the actors using the system. The second is a model of the structure of the electronic Case Report Form (eCRF) document. Both models are described at a high level of abstraction using the standard UML (Unified Modeling Language) version 2.0. Keywords— electronic data collection, EDC, clinical trial, medical informatics, clinical data
I. INTRODUCTION As defined in EU legislation, a clinical trial is “any investigation in human subjects intended to discover or verify the clinical, pharmacological and/or other pharmacodynamic effects of one or more medicinal product(s)” (for the full definition see [1]). A clinical trial can be carried out at either one site or multiple sites. One of the core documents in a clinical trial is the Case Report Form (CRF). The CRF is a form in which the investigator enters all of a patient's clinical and non-clinical data related to the trial; it is, in effect, the patient's clinical trial dossier. Data collected in the CRF include both data on the patient's health parameters and data on the medical procedure performed in the trial. After clinical trial data are collected on paper forms, they have to be entered into an electronic database in order to perform computer data analysis. For this purpose investigators usually send copies of the paper CRF to the data center, where data managers enter the data into the database. This paper-based routine has many disadvantages, which result in erroneous data in the database and longer duration of the clinical trial (especially for large multi-centre clinical trials). An alternative to paper data collection is electronic data collection (EDC), where the investigators enter data into the electronic database themselves. This way, the errors that arise from copying data from paper forms to the electronic database by
the person who did not collect the data are avoided. Another advantage is that the data managers have continuous insight into the data and the data collection process during the trial and can thus manage the process better. Despite the fact that EDC tools have been available for more than two decades, clinical trials are still mainly conducted using paper data collection as the primary tool (according to [2], over 75%). The reason for this can be partially ascribed to the fact that present technological applications often do not have adequate functionality to improve the data collection process as a whole. Based on this assumption we decided to develop an EDC tool that will enhance EDC with better management of the existing human resources through the exploitation of web (Internet) technology. In this paper we present a proposed model of the clinical trial EDC process adjusted for web-based data collection. Based on this model we developed EDC software to support clinical trials. II. CLINICAL TRIAL EDC AS A PROCESS A clinical trial (CT) can be understood as a business process. This process is complex and includes different processes and activities such as protocol development, protocol use and implementation in the CT experimentation, data collection, and the evaluation of the CT results. Each activity has different objectives and is enacted in a different environment, carried out by its own agents and resources, and governed by specific rules [3]. The data collection process is one segment of the data management process that is highly affected by the change of the supporting technology from paper-based to web-based. In an attempt to simplify the understanding of the EDC process, we take a multi-center clinical trial as the general case. We consider the single-center trial a special case of the multi-center clinical trial which includes just one trial center, thus making some elements of the EDC process (roles and activities) redundant. This simplification lets us make one model that is applicable to both types of clinical trials, multi-center and single-center.
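The "single-center as special case" idea can be sketched in a few lines of code; all class and attribute names here are illustrative assumptions, not taken from the software described in the paper:

```python
from dataclasses import dataclass, field


@dataclass
class Center:
    """A participating center with its local data-entry staff."""
    name: str
    operators: list = field(default_factory=list)


@dataclass
class ClinicalTrial:
    """One model covers both multi-center and single-center trials."""
    centers: list

    @property
    def is_multi_center(self) -> bool:
        return len(self.centers) > 1

    @property
    def needs_coordinator(self) -> bool:
        # per the model above, the coordinator role can be
        # omitted in a single-center trial
        return self.is_multi_center


single = ClinicalTrial(centers=[Center("A")])
multi = ClinicalTrial(centers=[Center("A"), Center("B")])
```

With this representation the single-center trial needs no special handling: `single.needs_coordinator` is `False`, while `multi.needs_coordinator` is `True`.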
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 741–744, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
In the case of a multi-center clinical trial, the data collection process involves activities both in the participating centers and in the coordinating center. Participating centers are responsible for patient enrolment, delivery of the medical procedure according to the CT protocol, and entering CT data into the electronic Case Report Forms (eCRF). The coordinating center carries out activities related to the set-up, coordination and monitoring of the participating centers, and the final evaluation of the CT results. The coordinating center analyzes both the data collected in the eCRF and data on the activities of the participating centers, such as the number of involved patients, the number of completed forms, or the number of discrepancies discovered and corrected by the participating center's staff. Based on the data on the activities of the participating centers, the coordinating center can take appropriate steps to improve the CT process, resulting in shorter CT duration and better quality of the collected data. Therefore, it is important to assure a continuous interchange of data and information between the participating centers and the coordinating center. III. THE EDC USER'S FUNCTIONAL VIEW The first step in describing the EDC process is the definition of appropriate roles for the actors involved in the process. The model that we present in this paper is described using UML (Unified Modeling Language), which is the standard language for system modeling [4]. It is a visual modeling language which enables system developers to specify, visualize, document, and exchange models in a manner that supports scalability, security, and robust execution. UML provides a set of diagrams and features which have all proven valuable in real-world modeling. Functional views specify the logic of a system. Its main functions can be identified from the perspective of either a user, who interacts with the system (user's or black box view), or a system's designer, i.e.
the stakeholder, who decides how the system is built (designer's or white box view). The use case diagram is a functional view used to model how actors (people and other systems) interact with the system. A use case denotes a set of scenarios where the user interacts with the system. A UML use case diagram shows the relationships among actors and use cases within a system. One or more actors can participate in a use case, but only one is the use case trigger, who makes the use case start. Use cases provide the functional requirements of the system [4]. Figure 1 shows the main scenario of the EDC process from the perspectives of the actors involved in the process. The proposed roles in the EDC process are: operator, controller,
Fig. 1. Use case diagram

supervisor, and coordinator. The roles of the operator and the controller appear in the participating center. The supervisor can belong to a single participating center or can be set over several participating centers. This can happen in the case of an international multi-center clinical trial, when in each participating country there are one or more participating centers supervised by the same person. The coordinator belongs to the coordinating center and can be omitted in the case of a single-center clinical trial. In the following sub-chapters we describe each of these roles in detail. A. Operator The operator is the person who primarily enters data into the eCRF. There have to be at least two operators in each participating center (this will be explained later). The operator can copy data from an existing paper CRF, extract data from existing medical records, or enter data directly into the eCRF during the CT procedure. This person is burdened with the responsible and demanding work of entering data into the electronic forms. The amount of data to be entered is usually large, so errors while entering data can hardly be avoided. Therefore, at least one check of the entered data is needed before submission. As the data are organized into sections that correspond to different CT events (such as Pre-Study Visit, Session Visit, Follow-Up Visit, etc.), we propose data verification for each eCRF section separately. After completing each eCRF section, the operator has to set the status of that section to “completed”. After setting the section to “completed”, the operator cannot change the data any more. The completed section is sent to data verification. If during the data verification it appears that there are some erroneous data in the eCRF section, the section can be returned to the operator to correct the errors. The operator has to correct the erroneous data and set the status of the section to “completed” again. This process can be repeated until the eCRF section passes the data verification. The operator receives messages with information on the eCRF sections and data to be corrected, and he generates messages to the controller or the supervisor (when there is no controller assigned to the operator) whenever he completes a section. B. Controller The controller performs verification of the data entered by the operator before data submission. It is very important that at least two persons go through the entire data set, as this procedure reduces the number of errors in the submitted data. In our model, a person acting as an operator can be a controller for any other operator from the same center. When the operator completes an eCRF section and sets the status of the section to “completed”, one of the other operators in the same center gets the role of “controller” for that particular eCRF section. This assignment of the “controller” role can be done automatically or manually. In the case when the operator who completed the eCRF section has a single controller assigned to him for the entire CT, the assignment of the “controller” role is done automatically for each eCRF section completed by this operator.
Otherwise the supervisor has to assign the role of “controller” to one of the operators from the same center (excluding the operator who entered the data). The task of the controller is to verify the data before submission. For this purpose the controller has to go through the entire eCRF section and set the status of each entered data item to “valid” in order to be able to submit the eCRF section. If the controller finds erroneous data, he can correct the data himself or return the eCRF section to the operator to correct the errors. This process can be repeated several times until the controller has verified the entire eCRF section. Finally, the controller approves the data by digitally signing the eCRF section. A digitally signed eCRF section is locked for further changes. The controller receives messages with information on completed eCRF sections and on the eCRF sections rejected
by the supervisor. On the other hand, a message is sent to the operator when data do not pass the verification, and to the supervisor when the controller digitally signs the eCRF section. C. Supervisor The supervisor is a person responsible for the quality of the CT data from one or more participating centers. The supervisor finally approves each eCRF for the data analysis. It is assumed that after the approval of data quality by the supervisor, the coordinator can treat these data as final and thus ready for data analysis. It is the responsibility of the supervisor to assure that the quality of the data from his center(s) is high enough. If the supervisor notices some error or inconsistency in the data, he has to return the entire eCRF section for correction. For this purpose he can remove the signature from the eCRF section. The controller who signed the section has to repeat the process of data verification according to the supervisor's instructions. When the supervisor is satisfied with the entire eCRF content, he digitally seals the eCRF. After sealing, no more changes to the eCRF data can be introduced. A digitally sealed eCRF is available for the data analysis in the coordinating center. Another task of the supervisor is to manage the process of data collection in his center(s). In order to follow the activities in his center(s), the supervisor has insight into the statuses of all the eCRF sections and pages belonging to his center(s). From these data he can see if the data collection process for some patients or even centers is stuck, and try to improve the process. According to the distribution of work between the operators, he can assign “controller” roles to the operators who are less loaded. The supervisor exchanges messages with the controller and the coordinator. From the controller he gets a message when an eCRF section is signed, and he sends a message to the controller when returning an eCRF section for correction. The coordinator receives a message whenever the supervisor digitally seals an eCRF.
D. Coordinator The coordinator is supposed to coordinate the activities of all the participating centers in the study. In the case of a single-center study, the role of the coordinator can be omitted. Otherwise, the coordinator gets a notification on each digital sealing of an eCRF and has read-only access to all the data in each participating center. Based on the summary of activities in each of the centers, the coordinator can take actions to manage and balance these activities and make reports.
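The section statuses and role-restricted transitions described in sub-chapters A-D form a small state machine. The sketch below is an illustration only: the status and action names are assumptions, and for simplicity it applies the supervisor's sealing at section granularity, whereas the paper seals the whole eCRF:

```python
class ECRFSection:
    """Illustrative lifecycle of one eCRF section (names are assumptions)."""

    # (actor, action) -> (required current status, resulting status)
    TRANSITIONS = {
        ("operator", "complete"): ("in_progress", "completed"),  # locks the section
        ("controller", "return"): ("completed", "in_progress"),  # back for correction
        ("controller", "sign"):   ("completed", "signed"),       # digital signature
        ("supervisor", "unsign"): ("signed", "completed"),       # remove signature
        ("supervisor", "seal"):   ("signed", "sealed"),          # final, read-only
    }

    def __init__(self):
        self.status = "in_progress"

    def apply(self, actor, action):
        key = (actor, action)
        if key not in self.TRANSITIONS:
            raise ValueError(f"unknown action {action!r} for {actor!r}")
        src, dst = self.TRANSITIONS[key]
        if self.status != src:
            raise ValueError(f"cannot {action!r} from status {self.status!r}")
        self.status = dst
        return self.status
```

For example, a section returned once for correction would pass through `operator complete`, `controller return`, `operator complete`, `controller sign`, `supervisor seal`; once sealed, no further action can change it.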
IV. DISCUSSION AND CONCLUSIONS The model presented in this paper is a basis for the development of web-based Electronic Data Collection software to support clinical trials. This software should enhance the data collection process in terms of the duration of the process, the resources used (persons and equipment), and the quality of the collected data. However, this enhancement cannot be reached without adjusting the data collection process to exploit the advantages of web (Internet) technology. Therefore we set up a new model for the roles of the actors involved in the data collection process. The use case diagram defines the roles of the users of the system. We propose a new distribution of the tasks of the collection process between these roles that profits most from web technology. We let the supervisor manage the distribution of tasks among the operators. For each completed eCRF section he can assign the controller role to any operator in the same participating center based on the operators' current workload, or set the controller for each operator before the trial starts. This approach requires at least two operators in each participating center, but it assures double data control before submission. We believe that the utilization of EDC software based on the proposed model has several advantages over paper-based data collection as well as over other present EDC solutions. The paper-based process definitely lacks accurate control of the process flow when the process is distributed among distant centers. We reduce erroneous data by letting investigators enter the data directly into the eCRF. By introducing the controller role we insist on a double data check. On the other hand, we enable dynamic distribution of the controller roles among the actors. Finally, through the exchange of messages between the actors in the process, we keep them informed of the tasks that are assigned to them and thus improve the process flow. By taking into consideration both the electronic data collection process with its use cases and the eCRF document structure (which is not presented in this paper), we intend to propose a solution that will improve the clinical trial process and its results. However, we have to prove this concept through a pilot utilization of the software based on the proposed model.
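The message exchange that keeps the actors informed can be summarized as a simple event-to-recipient routing table; the event names below are illustrative assumptions, not identifiers from the paper's software:

```python
# Illustrative routing of workflow notifications between the four EDC roles.
NOTIFY = {
    "section_completed": ["controller"],   # operator finished entering a section
    "section_returned":  ["operator"],     # controller found erroneous data
    "section_signed":    ["supervisor"],   # controller verified and signed
    "section_unsigned":  ["controller"],   # supervisor removed the signature
    "ecrf_sealed":       ["coordinator"],  # supervisor sealed the whole eCRF
}


def recipients(event: str) -> list:
    """Return the roles to notify for a given workflow event."""
    return NOTIFY.get(event, [])
```

Each event flows "upward" one step (operator to controller to supervisor to coordinator) or back "downward" when data are rejected, which is exactly the pattern the role descriptions above prescribe.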
REFERENCES
1. Directive 2001/20/EC of the European Parliament, Apr. 2001
2. Alschuler L, Bain L, Kush R D (2004) Improving data collection for patient care and clinical trials. Health Level 7 and Clinical Data Interchange Standards Consortium [Online], Mar. 2004. Available: http://nextwave.sciencemag.org/cgi/content/full/2004/03/24/8
3. Collada Ali L, Fazi P, Luzi D, Ricci F L, Serbanati L D, Vignetti M (2004) Toward a Model of Clinical Trials. Proceedings of the 5th International Symposium ISBMDA 2004, Barcelona, Spain, 2004 Nov. 18-19, pp 299-312
4. Pender T (2003) UML Bible. John Wiley
5. UML Superstructure version 2.0. at http://www.omg.org/docs/ptc/0308-02.pdf

Author: Ivan Pavlovic
Institute: Inst. for Project Management and Information Technology IPMIT d.o.o.
Street: Kotnikova 30
City: Ljubljana
Country: Slovenia
Email:
[email protected]
Simulation in Medicine and Nursing – First Experiences in Simulation centre at Faculty of Health Sciences University of Maribor D. Micetic-Turk1,2, M. Krizmaric1, H. Blazun1, N. Krcevski-Skvarc1,2, A. Kozelj1, P. Kokol1, Š. Grmec3, Z. Turk1,2 1
University of Maribor/Faculty of Health Sciences, Maribor, Slovenia 2 General Hospital Maribor, Maribor, Slovenia 3 Centre for Emergency Medicine Maribor, Maribor, Slovenia
Abstract— Medical and nursing simulations at the Faculty of Health Sciences, University of Maribor, provide nursing and medical students with skills that will improve clinical practice. Simulation is thus becoming an essential tool for improving the quality of patient care and safety. Simulations enable teaching, learning, evaluation and clinical research. Keywords— Simulation, education, human patient simulator.
I. INTRODUCTION Human patient simulation technology offers a tremendous opportunity for continuing education for all health care providers. Emergency medical technicians, firefighters, nurses, respiratory care specialists and other medical care providers are interested in utilizing patient simulation technology in their training. Computer-model-driven, full-sized mannequins create hands-on experiences in true-to-life scenarios. Computer-controlled simulated patients can improve the design of clinical trials before any real patients are even enrolled. There are now broad opportunities to exploit the capabilities of the computer-controlled full-body patient simulator and to use the technology for competency-based instruction and critical incident nursing management in undergraduate and graduate nursing curricula. II. SIMULATION Hands-on simulation Learning any new skill means making mistakes. Mistakes are an important part of the learning process. Learning medical procedures has traditionally meant making mistakes on real patients. Hands-on simulation and experiential learning are indispensable for healthcare professionals during their training. Simulation has opened a rapidly expanding field of highly realistic simulations of patients and entire medical environments, providing much greater fidelity and broader use than previously possible [1]. In medical simulation,
computer-controlled systems and devices advance medical education while protecting patient safety, by enabling nursing and medical students to learn treatment protocols and procedural skills before using them on actual patients. Simulation technology offers remarkable visual and physical realism. Learning procedures using advanced medical simulators is a step forward, but medical errors often result from ineffective processes and poor communication. Healthcare organizations employ many technologies to reduce medical errors and improve patient safety, and simulation technology aims to reduce medical errors as well. The use of human patient simulators in the nursing and medical curriculum provides cutting-edge technology for the comprehensive, objective measurement of the student's knowledge, technical skill level, and critical thinking abilities. Simulation education Simulation education is currently flourishing all around the world. Simulation technology has improved, and its costs have dropped. When faced with demands for more accountability for quality education and increased enrolment, disciplines and specialties are embracing the idea of simulation as a valuable tool. Frequently, institutions develop simulation programs based on a narrow understanding of the technology and teaching potential of this tool. The purchase of simulation equipment often precedes the development of a sound program “vision” and plan. Only after understanding the tools and equipment can a meaningful plan be developed. Simulation has been used in the health care domain for more than 15 years. In the past 2 to 3 years, there has been an explosion in its popularity. The increased use of simulation in nursing care can be attributed to:
− The nursing shortage and the need to increase enrolment into nursing programs;
− A need to supplement limited numbers of clinical sites;
− Lower cost of simulation equipment;
− Emphasis on evidence-based practice and competencies;
− Acceptance of simulation as a useful tool;
− Increasing awareness of the need to address patient safety;
− The ability of simulation to enhance clinical practice.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 716–718, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The challenges facing nursing include the rapid introduction of mixed-fidelity simulation products and the range of products available to a profession with limited information on simulation and related products [2]. Medical devices training Biomedical technology can trace its history back as far as a hundred years, when the first x-ray machines and electrocardiographs dramatically illustrated how technology could be applied towards the diagnosis of disease. From the moment we enter a hospital, we are confronted with equipment designed to improve the diagnosis and treatment of disease. Most of the initial clinical evaluation is aided by seemingly simple devices. However, the fact that these devices have become commonplace does not detract from the elegance of their technological development. The widespread use of basic clinical technologies has greatly benefited patient care; body temperature, heart activity, blood pressure, and brain activity can be measured much faster and more accurately than ever before. Moreover, certain patterns in the data gathered by these relatively simple devices can often be used to provide an accurate diagnosis, as in the case of cardiac arrhythmia. The management of an operating room suite or intensive care unit is far more complex and conflicted than most outsiders appreciate. The expensive resources that are involved in their daily activities are in chronically short supply. The inherent uncertainty in the course of illness and of surgery makes the control of these settings difficult. Medical devices play an essential role in the diagnosis and treatment of diseases and in the delivery of high quality health care. The Medical Simulation centre offers multidisciplinary opportunities related to the training, research, and development of medical devices. Medical devices range from surgical sutures and blood glucose monitors to ventilators and anesthesia machines.
As technology advances, more devices are becoming a part of the internal environment (i.e., our bodies) as well. Some people have an implanted mechanical device, such as a cardiac pacemaker, prosthetic heart valves or a cardioverter defibrillator. Biomedical technology integrates expertise in electronics, computers and material technologies with the biological and medical sciences, and it is uniquely positioned to support multi-disciplinary biomedical research. Its goal is the advancement of our understanding of human disease and the development of diagnostic and treatment tools, i.e. medical devices. Analyses of accidents in medicine have led to a much broader understanding of accident causation, with less focus on the individual who makes an error and more on pre-existing organisational factors that provide the conditions in which errors occur.
Simulation is an excellent tool for learning the techniques of using various types of equipment to perform procedures. Engineers assume that complex medical devices will be used by trained, competent users. However, licensing authorities have not adopted this logic for medical device users, and competence is not required. From an engineering perspective, requiring complex devices to be idiot-proof is absurd. Sometimes patients die because complex devices are used by unskilled personnel. Education, simulation and training of users, and the continued assessment of medical devices, are very important for safety work. III. OUR EXPERIENCES WITH THE SIMULATION CENTER The Medical Simulation Centre Maribor combines high-fidelity human patient simulation and life-like adult mannequins with extensive computer hardware and software to allow realistic interactions and interventions to occur in programmed scenarios. The patient simulator presents realistic vital signs and responds to clinical procedures. Physiological parameters can be pre-programmed, or changed at any moment during the simulation by the operator. The simulation room can be configured as a fully functional and realistic operating room, intensive-care unit or emergency department. The operating room is equipped with pipeline supplies of oxygen (O2), nitrous oxide (N2O), compressed air and vacuum. A crash cart is on hand and includes resuscitation drugs and supplies. During simulations, instructors can operate the simulator and video equipment from the control room, which is adjacent to the simulator room and includes an observation window. An important element of medical simulation training is that it affords opportunities to engage in detailed debriefing after educational sessions. The simulator facility is used in graduate and continuing medical education, especially in emergency medicine.
The simulated operating room provides an environment that allows students to learn and practice positioning, draping techniques, prep procedures, and other essential aspects of surgical nursing. Anesthesia staff can perfect their skills in difficult intubation using the SimMan simulator. The Laerdal SimMan simulator provides students and practitioners the opportunity to practice a wide variety of scenario-based training needs. Training includes emergency medicine, nursing care, ACLS, and difficult airway management. SimMan has preprogrammed verbal responses, his chest rises and falls, and he demonstrates a variety of physiologic conditions including trismus, tongue edema and decreased cervical range of motion. SimMan can be programmed to demonstrate fluctuating physiologic parameters that can be viewed on the SimMan patient monitor. The SimMan patient monitor allows extensive viewing of HR, ABP, SpO2,
CO2, RR, Core Temp, NIBP, Train of Four, CO, Peri Temp, FIO2, FIN2O, FI anesthetic agent, EtCO2, EtN2O and Et anesthetic agent. The Simulation Centre Maribor, which opened in April 2005, offers advanced clinical education using high-tech simulators to improve patient care and safety. The setup includes an operating table and other medical supplies and devices, such as anesthesia and intensive care equipment. We are cooperating with the University of Würzburg simulation center (Poliklinik für Anästhesiologie der Universität Würzburg). Hands-on simulations provide participants with experience with biomedical technology and medical procedures. Progress in technology has introduced possibilities for students to work with multimedia simulations. Students can practice and experiment with different approaches without harming patients. By allowing students to take risks and make decisions independently, they gain critical thinking skills that will help them later in their profession. Students who can practice procedures on the simulator will likely be more confident when they face an actual patient.
At the Faculty of Health Sciences we are currently performing the following activities within the Simulation center:
− organizing lifelong learning courses for nurses and physicians in practice;
− short intensive courses in emergency medicine and nursing care;
− short intensive courses in intensive medicine and nursing care.
The first experiences of both students and health care professionals are very positive. Of course, we had some problems: different levels of knowledge among the different health professions, working in small groups requires more effort from professors and instructors, not enough qualified personnel, and some minor organizational problems. Because of the very expensive equipment and operational costs, the trainees and their employers can seldom afford such courses and education, so we are trying to gain some financial support from the government.
IV. CONCLUSIONS The use of human patient simulators in the nursing curriculum provides cutting-edge technology for the comprehensive, objective measurement of the student's knowledge, technical skill level, and critical thinking abilities. Patient safety depends on the skills of trained individuals working as members of a clinical team. Simulation technology has already demonstrated its value in medical and nursing education. Based on these very positive experiences, we intend to extend the use of the Simulation center not only for educational purposes but also to research areas such as the stress of trainees, the optimization of emergency procedures and protocols, and the development of software for the automatic generation of scenarios.
REFERENCES
1. Smith B, Gaba D (2000) Simulators. In: Lake C, Blitt C, Hines R, eds. Clinical Monitoring: Practical Application. New York, NY: WB Saunders Company
2. Seropian MA, Brown K, Samuelson Gavilanes J, Driggers B (2004) Simulation: Not just a manikin. Journal of Nursing Education, vol. 43, pp 164-169
3. Bronzino J (2000) The Biomedical Engineering Handbook, 2nd Edition. Boca Raton: CRC Press
4. Doyle DJ (2002) Simulation in medical education: Focus on Anesthesiology. Med Educ Online 7:16

Author: Dusanka Micetic-Turk
Institute: Faculty of Health Sciences, University of Maribor
Street: Zitna ulica 15
City: Maribor
Country: Slovenia
Email:
[email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Telepathology: Success or Failure?

D. Giansanti¹, L. Castrichella² and M. R. Giovagnoli²

¹ Dipartimento di Tecnologie e Salute, Istituto Superiore di Sanità, Roma
² Seconda Facoltà di Medicina e Chirurgia, Università “La Sapienza”, Roma
Abstract— This paper reviews the scientific development around tele-pathology (T-P), exploring the successes of this telemedicine application and identifying the weak points which hamper its introduction into National Health Care Systems (NHCS) as a routine methodology.

Keywords— telepathology, e-health, telemedicine, digital pathology
I. INTRODUCTION

The diffusion of telemedicine health services has led to the development of remote diagnosis based on pathology systems, namely tele-pathology. In the past, however, the massive use of remote diagnosis in tele-pathology was hampered by the large amount of data involved, which imposed high compression rates on the data files and thus degraded image quality. This, unfortunately, affected diagnostic accuracy and hence the diffusion of the technology. Today’s technology, such as low-cost wideband communication channels and compression techniques operating at high bit rates without significant image degradation, could allow the implementation of low-cost and clinically accurate T-P systems. The first aim of this paper is to review the status of the scientific development around T-P, in order to explore the prospects for its successful introduction into the NHCSs as a routine methodology. The authors therefore focused on articles documenting and highlighting the technical feasibility and clinical effectiveness of T-P. A secondary objective is to critically analyze the reasons for the failures of this technique and to identify possible solutions.

II. MATERIALS AND METHODS

The studies identified were published from 1998 to 2006. The selection of papers aimed to trace the validity of T-P, outlining the first uses of digital telepathology and the evolution of communication channels, and evaluating its present viability in terms of technical implementation and clinical effectiveness. Cost-effectiveness and social impact were also considered.
A. Sources and method of review

The inspection of articles was carried out using the electronic database MEDLINE (http://www.ncbi.nlm.nih.gov/), biomedical library databases, conference proceedings, and Internet websites. Document selection used the following keywords: “tele-pathology”, “telemedicine and pathology”, “digital pathology”, “remote diagnosis”, “teleconsultation”, “image diagnostic accuracy”, “image transmission”. Reference lists from the selected published papers were also examined in order to identify those most relevant for this review. We identified (Fig. 1) a few issues basic to the development of tele-pathology and its reliability after the crucial passage to digital pathology (1), namely: (2) the evolution of the telecommunication network; (3) the methods and parameters for evaluating the image quality of compressed images and the assessment of diagnostic accuracy. In addition, we looked at works dealing with a crucial aspect today, i.e. (4) the economic, legal and social impact of the technology. Finally, we summarized today’s main clinical fields of application of tele-pathology, i.e. (5) the new trends in tele-pathology.

B. Evolution of the telecommunication channel and tele-pathology

The evolution of the telecommunication network has made wideband communication channels available today at lower costs, allowing the transmission of more complex examinations, real-time diagnosis, and a more rapid interpretation of results when compared to those obtained by means of store-and-forward techniques. The studies investigated showed that the principal connections were satellite and digital channel connections (principally multiple ISDN channels) [5-10].
Cellular-based wireless technologies (used, for example, in tele-echocardiography [3]) have to date never been used in telepathology.
Weak points: For critical medical cases a double connection should be introduced, using the method known as the cold and warm machine. If, for example, a cable connection is available as the regular method of communication, a satellite connection should be provided as the exceptional (warm) connection, available without delay when the regular connection fails (becomes cold). This approach can also be useful for identifying and minimizing the failure risk. If the failure rate of the cable connection is, for example, expressed by (1), and the failure rate of the satellite connection by (2), the total failure rate is expressed by (3):

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 745–748, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
FRca = 10⁻⁷ % of the time in a year    (1)
FRsa = 10⁻⁸ % of the time in a year    (2)
FRtot = FRsa × FRca = 10⁻¹⁵ % of the time in a year    (3)

C. Methods and parameters for evaluating the quality of the compressed images and the related diagnostic accuracy

Different telepathology solutions are available, based on different equipment, such as: ZEISS (http://www.zeiss.it), NIKON (http://www.nikon.it), OLYMPUS (http://www.olympus.it), HAMAMATSU (http://www.sales.hamamatsu.com) and APERIO (http://www.aperio.com). These systems furnish images in different formats, according to common standards such as TIFF and JPEG2000, or to proprietary formats. Studies on diagnostic accuracy are available for both standard and non-standard image formats, showing that high accuracy in transmission is possible today [11,12-15]. Particularly promising for global standardization is the study by Brox and Huston on the use of the MPEG-4 standard in telepathology.
Weak points: The analysis highlighted the need for, and the lack of, wide-ranging protocols based on quantitative, purely subjective and partially subjective evaluations. Diagnostic accuracy in tele-pathology is, in fact, a function not only of quantitative parameters but also of the subjective decision of the operator, which depends on his or her a priori knowledge based on complex internal models.

D. Economic, legal and social impact of tele-pathology

Recent studies have shown economic advantages arising from: 1) the reduction of expert travel in the territory; 2) the reduction of costs thanks to the sharing of expensive equipment among several hospitals connected by a WAN.
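The redundancy arithmetic in (1)–(3) can be sketched as follows. The function and variable names are ours, and the key assumption (implicit in multiplying the two rates, as the paper does) is that the two links fail independently:

```python
# Sketch of the redundant-link failure model of (1)-(3): two independent
# connections (cable as the regular "cold" link, satellite as the "warm"
# backup) are simultaneously unavailable only when both fail, so the
# combined unavailability is the product of the individual rates.

def combined_failure_rate(fr_primary: float, fr_backup: float) -> float:
    """Fraction of the time per year that both links are down,
    assuming independent failures."""
    return fr_primary * fr_backup

fr_cable = 1e-7      # FRca: fraction of the year the cable link is down
fr_satellite = 1e-8  # FRsa: fraction of the year the satellite link is down

fr_total = combined_failure_rate(fr_cable, fr_satellite)
print(f"{fr_total:.0e}")  # 1e-15, matching (3)
```

The design point is simply that redundancy multiplies small unavailabilities, so even a modest backup link cuts the joint failure rate by orders of magnitude.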
Fig. 1 Flow of the rationale followed in the review

Weak points: One of the weak aspects found is the lack of studies addressing the legal and reimbursement aspects of T-P. These are core aspects for the introduction of T-P into the NHCSs as a routine methodology.

E. New trends in tele-pathology

One of the most interesting and attractive new trends in this field of telemedicine is the set-up of web-based systems furnishing multiple, heterogeneous remote digital-pathology services over the Internet. An example is the set of web-based services furnished by the University of Leeds (UK) (http://www.virtualpathology.leeds.ac.uk/index.php):
1) digitization, and visualization of the digitized images, of glass slides sent by ordinary mail from hospitals in every part of the globe;
2) tele-consulting on virtual slides for the most common cancers;
3) e-learning services based on a wide database of clinical cases;
4) connection to the Leeds Tissue Bank, based on a database of tissues continuously updated and enriched by scientists and physicians;
5) visualization of the clinical cases of the Leeds Hospital;
6) connection to the National Cancer Research Institute, with the possibility of cross-referencing pathological and radiological exams.
Another example is the web solution born from the cooperation of the Albanova University Center at the Royal Institute of Technology in Stockholm (Sweden) and the Rudbeck Laboratory of Uppsala University (Sweden) (http://www.proteinatlas.org). This web solution is principally directed at investigating the expression and localization of proteins in human cancer.

III. DISCUSSION

The analysis showed the increasing diffusion of T-P, thanks to the remarkable development of information technology. In particular, major reasons for the success of this technique were the introduction of compression algorithms and of wide-band digital transmission media. The diffusion of T-P was nearly always accompanied by investigation of diagnostic accuracy and by studies on the economic and social impact. The analysis also showed that the higher the funding of a NHCS, the wider the diffusion of this telemedicine application.

IV. HOW TO MAKE TELEPATHOLOGY A ROUTINE METHODOLOGY
The analysis conducted also showed (Table 1) some weak aspects of the T-P investigation which could hamper its successful introduction into the NHCSs as a routine methodology: 1. deepening of the investigation of service failure rates for critical clinical cases; 2. legal aspects; 3. reimbursement aspects; 4. definition of wide-ranging protocols, based on both quantitative and subjective evaluations, for investigating the diagnostic accuracy in T-P; 5. technology assessment by means of quality-control procedures considering the overall T-P system, and not only image quality.
Points 1–4 are specific to each of the investigated issues. The first concerns reliable connectivity in critical medical cases: in these mission-critical T-P applications, the authors suggest introducing a double connection, using the method known as the cold and warm machine. The second point, legal aspects, is basic for defining the responsibilities connected to medical decisions which could affect patients’ safety. The third point comprises the complex procedures which should be set up to guarantee adequate reimbursement for a tele-diagnosis [3]. To understand the fourth aspect
we should consider that diagnostic accuracy in telepathology is a function not only of quantitative parameters but also of the subjective decision of the clinician, which depends on his or her a priori knowledge based on complex internal models; medical decisions are in fact taken not only on the basis of the image but also on complex neural models resulting from experience. The fifth aspect emerged as the first final global consideration and is a basic issue for the introduction of T-P into the NHCSs: today every system, before being adopted by a public administration, must be qualified [1-2,4]. A T-P system, like other telemedicine systems, is complex and heterogeneous, comprising components from bioengineering, medical physics and information technology (software, hardware, networking). It is thus important for a T-P application to address not only image quality but the quality control of all the parts of the T-P system. The second final global consideration is that the successful introduction of T-P as a routine methodology requires specific training during the course of study at the university. This is a basic aspect: personnel mainly apply as routine, without effort and with a preferred instinctive autonomy, what they have previously learned during their training path and fixed in their internal action-and-decision models. To date, this basic aspect has been poorly addressed in T-P.

Table 1 Issues and weakness aspects

ISSUE: Evolution of the telecommunication network and tele-pathology
WEAKNESS ASPECT: Deepening of the investigation of service failure rates for critical clinical cases: in mission-critical T-P applications, the authors suggest introducing a double connection, using the method known as the cold and warm machine

ISSUE: Methods and parameters for evaluating the quality of the compressed images and the related diagnostic accuracy
WEAKNESS ASPECT: Definition of wide-ranging protocols, based on both quantitative and subjective evaluations, for investigating the diagnostic accuracy in T-P

ISSUE: Economic, legal and social impact of tele-pathology
WEAKNESS ASPECT: Investigation of legal aspects; investigation of reimbursement aspects

GLOBAL WEAKNESS ASPECTS:
1. Technology assessment by means of quality-control procedures considering the overall T-P system, and not only image quality.
2. T-P should be included in the programs for personnel training.
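As a purely illustrative sketch of the quantitative side of such diagnostic-accuracy protocols (the paper itself prescribes no specific metric), one widely used objective image-quality parameter for compressed images is the peak signal-to-noise ratio (PSNR). The function and the toy pixel values below are ours:

```python
# Hypothetical sketch: PSNR (in dB) between an original and a compressed
# 8-bit grey-level image, given here as flat lists of pixel values.
# Higher PSNR means the compressed image is closer to the original.
import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

orig = [52, 55, 61, 59, 79, 61, 76, 61]  # toy "original" pixels
comp = [52, 54, 61, 60, 79, 62, 76, 61]  # toy "compressed" pixels
print(round(psnr(orig, comp), 1))
```

A quantitative score such as this would complement, not replace, the subjective evaluations the paper calls for, since diagnostic accuracy also depends on the observer.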
REFERENCES

1. Giansanti D, Morelli S, Macellari V. Toward technology assessment in telemedicine. Part I: set-up and validation of a quality control system.
2. Giansanti D, Morelli S, Macellari V. Toward technology assessment in telemedicine. Part II: tools for a quality control system. Telemedicine and e-Health Journal, in press.
3. Giansanti D, Morelli S, Macellari V. A protocol for the assessment of the diagnostic accuracy in tele-echocardiography imaging. Telemedicine and e-Health Journal, in press, summer 2007.
4. Bedini R, Belardinelli A, Giansanti D, Guerriero L, Macellari V, Morelli S. Quality assessment and cataloguing of telemedicine applications. J Telemed Telecare. 2006;12(4):189-93.
5. Danda J, Juszkiewicz K, Leszczuk M, Loziak K, Papir Z, Sikora M, Watza R. Medical video server construction. Pol J Pathol. 2003;54(3):197-204.
6. Gulube SM, Wynchank S. Telemedicine in South Africa: success or failure? J Telemed Telecare. 2001;7 Suppl 2:47-9.
7. Picot J. Meeting the need for educational standards in the practice of telemedicine and telehealth. J Telemed Telecare. 2000;6 Suppl 2:S59-62.
8. Yogesan K, Constable IJ, Eikelboom RH, van Saarloos PP. Teleophthalmic screening using digital imaging devices. Aust N Z J Ophthalmol. 1998 May;26 Suppl 1:S9-11.
9. Roca OF, Pitti S, Cardama AD, Markidou S, Maeso C, Ramos A, Coen H. Factors influencing distant tele-evaluation in cytology, pathology, conventional radiology and mammography. Anal Cell Pathol. 1996 Jan;10(1):13-23.
10. Miaoulis G, Protopapa E, Skourlas C, Delides G. Supporting telemicroscopy and laboratory medicine activities. The Greek "TELE.INFO.MED.LAB" project. Arch Anat Cytol Pathol. 1995;43(4):275-81.
11. Seidenari S, Pellacani G, Righi E, Di Nardo A. Is JPEG compression of videomicroscopic images compatible with telediagnosis? Comparison between diagnostic performance and pattern recognition on uncompressed TIFF images and JPEG compressed ones. Telemed J E Health. 2004 Fall;10(3):294-303.
12. Brox GA, Huston JL. The application of the MPEG-4 standard to telepathology images for electronic patient records. J Telemed Telecare. 2003;9 Suppl 1:S19-21.
13. Marcelo A, Fontelo P, Farolan M, Cualing H. Effect of image compression on telepathology. A randomized clinical trial. Arch Pathol Lab Med. 2000 Nov;124(11):1653-6.
14. Lee ES, Kim IS, Choi JS, Yeom BW, Kim HK, Ahn GH, Leong AS. Practical telepathology using a digital camera and the internet. Telemed J E Health. 2002 Summer;8(2):159-65.
15. Belnap CP, Freeman JH, Hudson DA, Person DA. A versatile and economical method of image capture for telepathology. J Telemed Telecare. 2002;8(2):117-20.
16. Singh N, Akbar N, Sowter C, Lea KG, Wells CA. Telepathology in a routine clinical environment: implementation and accuracy of diagnosis by robotic microscopy in a one-stop breast clinic. J Pathol. 2002 Mar;196(3):351-5.
17. Leong FJ. Practical applications of Internet resources for cost-effective telepathology practice. Pathology. 2001 Nov;33(4):498-503.
18. Dierks C. Legal aspects of telepathology. Anal Cell Pathol. 2000;21(3-4):97-9.
19. Schwarzmann P, Binder B, Klose R. Technical aspects of telepathology with emphasis on future development. Anal Cell Pathol. 2000;21(3-4):107-26. Review.
20. Mizushima H, Uchiyama E, Nagata H, Matsuno Y, Sekiguchi R, Ohmatsu H, Hojo F, Shimoda T, Wakao F, Shinkai T, Yamaguchi N, Moriyama N, Kakizoe T, Abe K, Terada M. Japanese experience of telemedicine in oncology. Int J Med Inform. 2001 May;61(2-3):207-15.
21. Dunn BE, Choi H, Almagro UA, Recla DL, Davis CW. Telepathology networking in VISN-12 of the Veterans Health Administration. Telemed J E Health. 2000 Fall;6(3):349-54.
22. Goncalves L, Cunha C. Telemedicine project in the Azores Islands. Arch Anat Cytol Pathol. 1995;43(4):285-7.

Author: Daniele Giansanti
Institute: Istituto Superiore di Sanità
Street: via Regina Elena 299
City: 00161 Roma
Country: Italy
Email: [email protected]
The Open Three Consortium: an open-source, full-service-based world-wide e-health initiative

P. Inchingolo¹, M. Beltrame¹, P. Bosazzi², D. Dinevski³, G. Faustini¹, S. Mininel¹, A. Poli¹, F. Vatta¹

¹ Open Three Consortium, Higher Education in Clinical Engineering, DEEI, University of Trieste, Trieste, Italy
² Open Three Consortium, Clinical Unit of Radiology, University of Trieste, Italy
³ Open Three Consortium, Faculty of Medicine, University of Maribor, Slovenia
Abstract— The Higher Education in Clinical Engineering (HECE) of the University of Trieste constituted in 2005 the Open Three Consortium (O3), an innovative open-source project dealing with the multi-centric integration of hospitals, RHIOs and citizens (care at home and on the move, and ambient assisted living). It is based on the about 60 HECE bilateral cooperation agreements with hospitals, medical research centers, healthcare enterprises, industrial enterprises and governmental agencies, and on the international networks ABIC-BME (Adriatic Balcanic Ionian Cooperation on Biomedical Engineering) and ALADIN (Alpe Adria Initiative Universities’ Network). Some months ago the collaboration with multiple open-source solutions was extended, starting an international cooperation with the open-source-based company Sequence Managers Software, Raleigh, NC, United States. The O3 Consortium proposes e-inclusive, citizen-centric solutions to cover the three main aspects of the future of e-health in Europe reported above, with open-source strategies joined to full-service maintenance and management models. The O3 Consortium Users’ and Developers’ Communities are based mainly on the HECE agreements.
Keywords— open-source, distributed health care, citizen-centric health care, ambient assisted living, international cooperation communities.

I. INTRODUCTION

After an early experience (Fig. 1) with the project Open-PACS (1991-95), which aimed to distribute PACS services and to pioneer a surgical PACS by opening up the AT&T Commview PACS installed in 1988 in Trieste [1], the Group of Bioengineering and ICT and the Higher Education in Clinical Engineering (HECE) of the University of Trieste started in 1995 the project DPACS (Data and Picture Archiving and Communication System).

Fig. 1 The project Open-PACS (1991-1995).

The goal of DPACS (Fig. 2) was “the development of an open, scalable, cheap and universal system with accompanying tools, to store, exchange and retrieve all health information of each citizen at hospital, metropolitan, regional, national and European levels, thus offering an integrated virtual health card of the European Citizens” in a citizen-centric vision [2]. Within a decade the idea of DPACS became widely diffused, and its basic concept can be found today in the European Union research programs, in particular in FP7.

A first version of DPACS was tried out in 1996-1997 at the Cattinara Hospital of Trieste. In 1998 the DPACS system was running routinely, managing all radiological images (CT, MRI, DR, US, etc.) as well as the connection with stereo-tactic neurosurgery. Some mono-dimensional signals, such as ECGs, were also integrated into the system.
Fig. 2 The project DPACS (1995-2004), aiming to offer a virtually integrated health record of the European Citizen.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 723–726, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Over the years, DPACS was enriched with sections for anatomo-pathology, anesthesia and reanimation, the clinical chemistry laboratory and others; furthermore, at the beginning of the 2000s its applications were progressively directed toward the newly emerging needs of future health care, health management and assistance to the world citizen, based on e-health (telemedicine) driven home care, personal care and ambient assisted living.

II. MATERIALS AND METHODS

According to the above considerations, some new needs were identified and used to plan the new developments of the project: 1) a multilingual approach to both the client and server management interfaces and to the presentation of medical contents; 2) a simple data & image display client interface, automatically updatable and highly portable from a PC, MAC or LINUX workstation to a palmtop or cellular-based communicator; 3) the ability to connect through a wide variety of communication means, both fixed and mobile; 4) a highly modular data & image manager/archiver, independent of the platform (UNIX/LINUX, WINDOWS, MAC) and of the selected database; 5) improved interoperability of both server and client system components, among themselves and with all the other information-system components in the hospital and in the health enterprise; 6) an efficient and effective tool to “create” the integrated virtual clinical record in the hospital as well as at home or while a citizen travels.

The recognized importance of these DPACS strategies for the future of Europe, presented as the concluding lecture of the EuroPACS meeting in Oulu in 2002 [3], led the EuroPACS Society to entrust HECE with the organization of the 2004 EuroPACS meeting in Trieste, focusing on these themes.
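Requirement 4 above (a manager/archiver independent of the selected database) is essentially an interface-abstraction exercise. A minimal sketch, with names entirely ours and no relation to the actual DPACS code, might look like:

```python
# Hypothetical sketch of database independence: the archiver talks to
# storage only through a small abstract interface, so any backend
# (SQL, object store, in-memory) can be plugged in behind it.
from abc import ABC, abstractmethod

class StudyStore(ABC):
    """Minimal storage contract the archiver depends on."""
    @abstractmethod
    def put(self, study_id: str, payload: bytes) -> None: ...
    @abstractmethod
    def get(self, study_id: str) -> bytes: ...

class InMemoryStore(StudyStore):
    """Toy backend; a real deployment would substitute a database-backed
    implementation of the same interface."""
    def __init__(self):
        self._data = {}
    def put(self, study_id, payload):
        self._data[study_id] = payload
    def get(self, study_id):
        return self._data[study_id]

def archive_study(store: StudyStore, study_id: str, payload: bytes) -> bytes:
    # The manager never sees which backend is in use.
    store.put(study_id, payload)
    return store.get(study_id)

print(archive_study(InMemoryStore(), "CT-001", b"pixel-data"))
```

The same inversion-of-dependency idea extends to platform and language independence: the application depends on contracts, not on concrete technologies.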
The successful “EuroPACS-MIR 2004 in the Enlarged Europe” meeting, held in Trieste in September 2004 with more than 400 participants from 47 countries, witnessed deep discussion of the organizational, standards-related and interoperability issues in all contexts, from the single-department case up to transnational integration [4]. Discussions in all the conference sessions, and especially those on interoperability in the one-day workshop on the world-wide IHE (Integrating the Healthcare Enterprise) project, produced strong results and guidelines for the future work. First, the round table “Is there a need for a transnational IHE committee in Central and Eastern Europe?”, which concluded the IHE workshop, closed with HECE being entrusted with the creation of a transnational IHE committee for Central and Eastern Europe, dealing with
technical, harmonization and law-orienting activities in 22 Central and Eastern European countries. Second, the same round table and most of the IHE workshop sessions underlined that the adoption of open standards and open-source solutions is becoming an obligatory path to facilitate the fast integration of health systems in Europe and worldwide, fostering this process in the transitional and developing countries.
III. RESULTS

A. Building up the Open Three Consortium

HECE, together with BICT’s laboratories HTL and OSL (Open Source Laboratory) at DEEI, started along both these lines in 2005. In particular, regarding the second line, the group of Trieste, which presented the new open-source version of its DPACS-2004 project at the Trieste EuroPACS meeting [5], and the group of the Radiology Department of Padova, which presented the new open-source version of its Raynux/MARiS project [6], decided to merge and integrate their projects and efforts. Hence the “Open Three (O3) Consortium” project was formally constituted by HECE (see www.o3consortium.eu). O3 deals [7] with open-source products for the three domains of tomorrow’s e-health, within the frame of the European e-health programs: hospital, territory, and home care / mobile care / ambient assisted living (AAL), in a citizen-centric vision (Fig. 3). The main characteristics of the O3 open-source products are multi-language support; high scalability and modularity; use of Java and Web technologies at every level; support of any platform; a high level of security and safety management; support of various types of databases and application contexts; treatment of any type of medical information, i.e. images, data and signals; and interoperability through full compliance with the “Integrating the Healthcare Enterprise”
Fig. 3 The three domains of the Open Three (O3) Consortium.
Fig. 4 The first set of O3 products.
(IHE) world project, obtained by building O3 as a collection of “bricks” representing the IHE “Actors”, connected to each other through the implementation of a wide set of IHE integration profiles [8].

B. First set of products of the Open Three Consortium

The first set of O3 products covers all the needs of image management in radiology and nuclear medicine at intra- and inter-enterprise levels (Fig. 4). The most important are: O3-DPACS, the new version of DPACS [9], enriched with many new features such as the XDS (Cross-Enterprise Clinical Document Sharing) and XDS-I (Cross-Enterprise Document Sharing for Imaging) profiles, which allow images and data to be exchanged very easily within any territorial environment; O3-RWS [10], a revolutionary radiological workstation, including management of and access to MIRC (Medical Images Resource Center) data and structured reports; O3-MARIS, a “super” RIS offering many new integration features and MIRC support; O3-XDS, one of the first XDS document repositories and registries; O3-PDA, a first step toward opening up to the home-care and mobile-care world; and O3-TEBAM, allowing true 3D reconstruction of the brain’s electrical activity in the presence of pathologies. The O3 products were tested successfully at the IHE 2005 Connectathon in Amsterdam and at the IHE 2006 Connectathon in Barcelona, gaining compliance with 19 IHE actors and 15 IHE profiles and passing more than 300 tests with most of the European market brands.

C. Organization of the Open Three Consortium

From the organizational point of view, the O3 Community is made up of all the institutions having an agreement
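To make the XDS concept more concrete, here is a deliberately simplified sketch of the kind of registry/repository metadata indexing that the XDS profiles standardize. The field names echo XDS DocumentEntry attributes, but the classes and methods are hypothetical illustrations, not the O3 or IHE implementation:

```python
# Simplified illustration of an XDS-style document registry: the registry
# stores metadata about documents held in repositories, and consumers
# query it by patient to discover what is available across enterprises.
from dataclasses import dataclass

@dataclass
class DocumentEntry:
    patient_id: str            # affinity-domain patient identifier
    unique_id: str             # globally unique document identifier (OID)
    repository_unique_id: str  # repository where the document is stored
    mime_type: str = "application/dicom"

class Registry:
    """Toy registry indexing DocumentEntry metadata by patient."""
    def __init__(self):
        self._by_patient = {}

    def register(self, entry: DocumentEntry) -> None:
        self._by_patient.setdefault(entry.patient_id, []).append(entry)

    def query(self, patient_id: str):
        """Loose analogue of an XDS 'FindDocuments' query."""
        return self._by_patient.get(patient_id, [])

reg = Registry()
reg.register(DocumentEntry("PID-001", "1.2.826.0.1.3680043.2.1", "REPO-1"))
print(len(reg.query("PID-001")))  # 1
```

The separation of a central metadata registry from distributed document repositories is what lets images and data "be exchanged very easily within any territorial environment", as the text puts it.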
with HECE: in particular, those belonging to the international networks ABIC-BME (Adriatic Balcanic Ionian Cooperation in Biomedical Engineering) and ALADIN (Alpe Adria Initiative Universities Network), and the institutions (about 60 healthcare and industrial enterprises and governmental agencies) having an active bilateral agreement with HECE. Within the O3 Community, an O3 Users’ Community and an O3 Developers’ Community are identified; every member of the O3 Community can in principle ask to participate in both. The Developers’ Community started, under the responsibility and administration of HECE, with the main contributions of the Universities of Trieste and Padova, later joined by Maribor in Slovenia, and grew with many other European and US contributions from universities, research centers and industry. It provides the active members of the Users’ Community with all the necessary project design, site analysis, implementation, logging, authoring, bug fixing, and high-level 7/7-24/24 full-risk service. Additionally, training is a central concern of HECE, which prepares clinical engineering professionals at three different levels, offering both traditional and e-learning courses with particular skills in clinical informatics, health telematics, e-health integration standards and IHE-based interoperability, and also providing specific courses and training on site. Furthermore, selected radiologists of the active Users’ Community, at the sites where O3 is running (in Italy: Trieste, Padova, Pisa and Siena; in Slovenia: Maribor), constitute a Medical Advisory Committee, which gives very valuable feedback to the O3 Developers’ Community. The growing cooperation of O3 with large industries belonging to the O3 Community is another very interesting aspect, focused especially on the integration with the territory and home care. O3 is working in many western countries (Italy, Slovenia, Cyprus, Switzerland, United States, etc.)
and is now being adopted also in third-world countries, thanks to the non-profit O3 initiative called O3-AID. Some months ago the collaboration with multiple open-source solutions was extended, starting an international cooperation with the open-source-based company Sequence Managers Software (SMS), Raleigh, NC, United States, which is one of the core companies of WorldVistA. Its main products are a very powerful Electronic Medical Record (EMR) joined with a Hospital Information System (HIS), counting nearly 10,000 installations in military and civilian US hospitals. Our O3 products are now being introduced into these hospitals and integrated with the SMS EMR and HIS [11].
IV. DISCUSSION

Thanks to the practical experimentation with the solutions described above, our 16-year experience of studying the integration of health systems using ICT technologies, from the hospital department to the single citizen in the e-health context of the future information-based society, has shown that some key methodological and organizational elements are extremely relevant to the success of the e-health integration process. From the point of view of the organization of our cooperative work with other user and developer centers, the initiative of the Open Three Consortium has proved its real efficiency and efficacy. All the O3 sub-systems can be adjusted to any scale, including the national and the international. Since O3 is completely developed as open source with Java and Web technologies, is independent of database, OS, hardware and language, and is 100% compliant with the IHE world-wide interoperability initiative, its re-use and portability are facilitated, fostering a wide distribution throughout the world. The choice of open source as the leading solution of O3 for the future of e-health anticipates a common trend in the industrialized and political world, evidenced last year by the position assumed by the Department of Health & Human Services and the Department of Defense of the United States at the Open Source Strategy for Multi-Centre Image Management Workshop, held in March 2006 in Las Vegas (USA); by the decision announced by the world’s biggest industries at the OSDL Joint Initiatives Face-to-Face Meeting Review – Health Care Information Exchange, held in May 2006 at Sophia-Antipolis (France); and finally by the European Union with the Riga Declaration, signed during the Intergovernmental Meeting of the European Commission “ICT for an Inclusive Society”, held in June 2006 in Riga (Latvia). Interestingly, O3 was invited to all three of these events.
The adoption of the O3 concept in Europe, in Asia and Africa and, in particular, in the United States, through the international cooperation with SMS – WorldVista, opens new scenarios of world-wide cooperation fostering open-source, multi-centric and citizen-centric solutions.

V. CONCLUSIONS

In conclusion, the O3 Consortium represents a significant contribution that will support the growth of e-health integration, not only in the local region, but also across Europe and the world. O3 links vital processes in the movement and integration of information thanks to an e-integration approach started five years ago with our ALADIN network (Alpe Adria Initiative Universities' Network - www.aladin-net.eu), one of the first citizen-centric initiatives in Europe. Within the Alpe-Adria Region (central and eastern Europe), O3 is carrying out relevant actions in cross-border eRegion development that improve the way people work, live and grow together, without frontiers. The strong cooperation recently started with the Faculty of Medicine of the University of Maribor is an important testimony to this process. From this region, O3 is fostering the widest international cooperation and integration, reinforcing the synergy with European industry and the capacity of Europe to approach non-European markets.
Author: Paolo Inchingolo
Institute: SSIC-HECE, DEEI, University of Trieste
Street: Via Valerio, 10
City: Trieste
Country: Italy
Email: [email protected]
A hospital structural and technological performance indicators set
E. Iadanza¹, F. Dori¹, G. Biffi Gentili¹, G. Calani¹, E. Marini¹, E. Sladoievich¹, A. Surace¹
¹ Department of Electronics and Telecommunications, Università di Firenze, Firenze, Italy
Abstract— Management is one of the most complex subjects in the field of health care systems. Indeed, the performance of a hospital is affected by many factors, related to technology, organization and estate. Health centres have often monitored their operations by analyzing the financial and operational reports provided, but organizational and technological aspects are sometimes pushed to the background. This paper describes a decision support system designed for hospital administrators to increase their analysis and management effectiveness. Taking recent research as a starting point, this study proposes a set of performance indicators, balancing organizational, structural and technological aspects, to be used together with widely adopted clinical indicators.

Keywords— Performance Indicators, KPI, Health System, Management, Performance Measures.
I. INTRODUCTION

A public service that supplies a utility always faces changes and reforms that are not easy to put into practice within complex organizations. Medical structures in particular are very hard to manage, especially because their target is not profit but healthcare. The lack of the units of measurement that are typical of the business world leads us to apply a new scientific approach in order to evaluate performance trends. The medical corps is an organization that obtains information and gives back services, so we can consider a healthcare service as an open and finalistic system made up of technological and human components. Moreover, we need to consider the importance of the environment: it affects our system in organizational and cultural ways. To address these aspects, we set out to find a set of indicators that describes as far as possible the several facets of our system. This set of indicators is made up of an elementary number of variables representing a multidimensional system, and is meant to be inserted into a "business dashboard" used for management by health care authorities. In the past the attention was focused on trying to create central databases, but we think it is necessary to process the data in order to deeply analyse the medical system's activities. We have therefore elaborated a system that obtains simple information from the sanitary management and gives back a set of information about the hospital's performance. We deliberately neglected both economic and clinical data because we did not want to wander off the technological field.

II. MATERIAL AND METHODS

To develop this work we considered papers published by Italian and international institutions and researchers pertaining to the quality and effectiveness of health care structures. A critical revision of these documents highlighted the dominant presence of a wide range of proposals concerning clinical and business aspects; yet there are other dimensions that cannot be neglected to obtain an overall evaluation of a healthcare organization. The purpose of this work is to find an explicit model, based on key performance indicators (KPI), that allows one to "read" and "value" the critical points connected to three kinds of features that deal with regional accreditation guidelines:

• Organizational: describe the hospital's activity elements (human factors, number and kind of admissions, etc.)
• Structural: information about the condition of the estate and space utilization (logistic analysis, functional areas, distances from principal pavilions, etc.)
• Technological: analyze the distribution of medical equipment in the wards, classified by functional features.
At first we collected the main indicators from the literature concerning the three previously mentioned aspects, referring especially to the Italian legislative documents. It was not actually possible to find enough indicators for each element of the health care process to achieve an exhaustive description, so we enhanced the set with newly designed indicators. Then, in order to simplify the burden of work, we enumerated in a "Data Base" the numerical data essential to calculate the indicators set. Elements in the "Data Base" are marked by numerical labels and collected in two levels: the first level contains objects obtained as sums of second-level objects (for example:
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 752–755, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
"number of monthly accesses" is a first-level element, the sum of the "number of monthly surgery accesses" and the "number of monthly medical accesses").

III. RESULTS

The collection comprises forty-six indicators, of which fourteen are organizational, twenty-three structural and nine technological. They are related to a health care system partitioned into departments, whose cost centres are "Operational Units" (O.U.), wards or groups of wards. In this way it is possible to have a more detailed and significant description of activities: the critical points of a single process can be identified, and the total structure management can locate the most critical organizational process.
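The two-level "Data Base" described above can be sketched as a simple aggregation: second-level entries hold the raw counts and first-level entries are computed as their sums. The entry names and figures below are illustrative, not taken from the actual data base.

```python
# Hypothetical sketch of the two-level "Data Base": second-level items hold
# raw counts reported by the hospital; first-level items aggregate them.
second_level = {
    "monthly_surgery_accesses": 120,
    "monthly_medical_accesses": 310,
}

# First level: each entry lists the second-level labels it sums over.
first_level_spec = {
    "monthly_accesses": ["monthly_surgery_accesses", "monthly_medical_accesses"],
}

first_level = {
    name: sum(second_level[label] for label in labels)
    for name, labels in first_level_spec.items()
}

print(first_level["monthly_accesses"])  # 430
```

Keeping the aggregation rule as data (rather than hard-coding sums) matches the paper's use of numerical labels: the dashboard can recompute every first-level element whenever a second-level count changes.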
A. Organizational Indicators

Organizational indicators describe the hospital from the staff to the number of beds and of major operations. Particular attention must be paid to the staff value: the hospital database gives just the number of staff members who receive a paycheck, but we must also consider the number of senior-year and graduate students working in the hospital, in case the structure is a University Hospital. Therefore, unlike other data, the number of staff is not equal to the sum of the values in the second level. These indicators differ slightly from the others because they concern human resources.

B. Structural Indicators

Structural indicators describe the O.U. operation, taking into consideration the relationship between available equipment, space and performed activity. For instance, they examine the "bed occupation percentage", to allow studies on the fairness of bed distribution in the hospital. Also, taking the national and regional accreditation guidelines as a starting point, they inform about the bed distribution in the rooms; indeed, it is important that the space reserved for every individual bed be higher than a minimum value. The structural analysis also shows the composition of the O.U. rooms, classified according to the health care activities list shown in Table I. This classification is the one used by the S.A.C.S. system (Italian acronym for "System for Analysis of Structural Consistencies"), a software developed within the University of Florence, able to collect, join and deliver the information necessary to plan the location of the O.U. The S.A.C.S. exploits CAD cartographic maps containing inherent information such as structural quantity (expressed in square meters), number of beds and number of units. This system provides a wide amount of data essential to calculate structural indicators such as: square meters per bed, consistencies for each environment typology in relation to total space, and mean size of units classified on the basis of homogeneous functional areas.

Another aspect, often underestimated, is the distance between wards and other key structures that strongly interact with daily activities (like diagnostics, surgical blocks and intensive care rooms). We consider this parameter important because it is able to point out logistic critical states that could escape performance analyses based on other approaches. To find out real connection matters in an objective way we decided to consider three main parameters:

• Distances among pavilions: to be considered as open-air paths, because at present there is no internal connection between pavilions.
• Number of floors to pass through.
• Number of lifts serving the floor.
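For illustration, the three parameters above could be combined into a single connection-difficulty score per pair of wards. The combination rule and the weights below are purely hypothetical assumptions of ours, not part of the indicator set.

```python
def connection_difficulty(distance_m, floors, lifts,
                          w_dist=0.01, w_floor=1.0):
    """Hypothetical score: longer open-air paths and more floors to pass
    through raise the difficulty; more lifts serving the floor lower it."""
    lift_penalty = 1.0 / (1 + lifts)  # fewer lifts -> harder vertical moves
    return w_dist * distance_m + w_floor * floors * lift_penalty

# Invented example: ward to surgical block, 250 m outdoors, 2 floors, 1 lift.
score = connection_difficulty(250, 2, 1)
print(score)  # 3.5
```

Scores like this can be ranked across ward pairs to surface the logistic critical states the text mentions, independently of the clinical activity performed in each ward.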
C. Technological Indicators

Technological indicators describe the medical equipment in use in every department and operative structure, in order to highlight prospective critical points. Our tool is definitely not meant to replace the analytic control systems used by the Clinical Engineering Department; it is instead supposed to give a concise picture to risk managers and Head Offices. Hence, most of the medical equipment classifications found in the literature did not entirely satisfy our needs. We therefore suggest a new classification, along the lines of the CIVAB code, that groups medical equipment into seven clusters:

• Equipment for radiation therapy and radiation diagnosis: devices that use ionizing radiation for therapy and diagnosis;
• Equipment for medical imaging: equipment for creating biological images, except for radiodiagnostic equipment;
• Equipment for diagnosis and functional analysis: devices to assess pathologies using invasive and non-invasive methods, except for radiodiagnostic and medical imaging equipment;
• Equipment for therapy: devices that allow medical or surgical treatment;
• Equipment for functional monitoring: devices that allow continuous control of vital functions;
• Equipment for diagnostic laboratory: equipment that permits diagnosis via physical and chemical treatment of biological samples.

In addition, we have determined three technological levels for medical devices: high, medium and low (Table II). Equipment that the CIVAB code classifies as "fitting" is now classified as the appliance it is designed for. We have also divided devices into "fixed", when they are attached to a construction support and need a particular installation, and "mobile", when they do not.

Table II: Technological level subdivision

Technological level | Equipment
High   | TAC, RMN, Angiography, Automatic analyzer
Medium | Ecography, LASER, Monitor, Electrical scalpel, Pulmonary ventilator
Low    | ECG, Microscope, Defibrillator

IV. DISCUSSION

The values collected in the "Data Base" are gathered from different sources of information. As outlined above, the structural data are extracted by the S.A.C.S. system, which gives information about the dimensions of individual rooms and about their assignment to the different hospital structures. Other data come from the Clinical Engineering Structure database and from the informative flows that, according to Italian law, every hospital must send to its Region. The data origin related to the monitoring system is summarized in figure 1.

Fig. 1: Block diagram of dashboard and data sources.

In this way we think we are able to collect information in a semi-automatic way, without loading hospital staff with the umpteenth new, empty database to fill.
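The Table II subdivision lends itself to a simple lookup for profiling a ward's equipment park by technological level; the example inventory below is invented for illustration.

```python
# Technological levels from Table II, keyed by equipment type.
LEVEL = {
    "TAC": "high", "RMN": "high", "Angiography": "high",
    "Automatic analyzer": "high",
    "Ecography": "medium", "LASER": "medium", "Monitor": "medium",
    "Electrical scalpel": "medium", "Pulmonary ventilator": "medium",
    "ECG": "low", "Microscope": "low", "Defibrillator": "low",
}

def ward_profile(inventory):
    """Count a ward's devices per technological level."""
    counts = {"high": 0, "medium": 0, "low": 0}
    for device in inventory:
        counts[LEVEL[device]] += 1
    return counts

# Invented inventory for one ward.
profile = ward_profile(["Monitor", "Monitor", "ECG", "TAC"])
print(profile)  # {'high': 1, 'medium': 2, 'low': 1}
```

Such per-ward counts are the kind of raw material the technological indicators summarize for risk managers, without descending to the device-by-device detail kept by the Clinical Engineering Department.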
V. CONCLUSIONS

The proposed indicators set is an experimental method to analyze a generic department without a specific function. We will test the system's effectiveness on two particular cases: surgery and the "cord blood bank". We chose these cases because they are quite different and critical compared to the other hospital departments. Indeed, surgery follows particular rules and is of course more complicated and difficult to manage than generic wards: in Italy there are guidelines from ISPESL that explain the specific characteristics of a surgical block. We are working on a specific set of indicators for surgery, and we think this instrument could help the sanitary direction determine the effective status of the surgery and decide which priority interventions are necessary. On the other side, the cord blood bank also represents a really particular department to manage. Indeed, it must satisfy a specific set of requirements in order to fit law criteria. It also has to manage many relationships with other departments and has to plan out information and sample flows. We will apply our indicators set along with new information taken from the FACHT guidelines, and we would like to check whether all this information is able to wholly characterize the department's activity. Finally, we are working on the hospital's construction: we think that our method is able to give many inputs to the planning of building characteristics.
Author: Ernesto Iadanza
Institute: Department of Electronics and Telecommunications
Street: Via di S. Marta, 3
City: Firenze
Country: Italy
Email: [email protected]
A Multi Scale Methodology for Technology Assessment. A case study on Spine Surgery
L. Pecchia¹, F. Acampora¹, S. Acampora², M. Bracale¹
¹ Department of Electronic and Telecommunication Engineering, Biomedical Unit, Univ. Federico II, Naples, Italy
² Istituto Neurologico Mediterraneo – I.R.C.C.S. Neuromed, Pozzilli, Isernia, Italy
[email protected],
[email protected]
Abstract— This article describes a multi-scale methodology for Health Technology Assessment. Multi-scale means that the method focuses attention on the needs to be satisfied taken as a whole, by categories, and one by one. For this reason a graphic representation has been used to identify the strengths of every technology to be compared. Then an objective function has been established in order to identify the technology that best fills the needs as a whole and by category. In order to establish a scale of satisfaction, it was necessary to define an algorithm that distinguishes parameters into objective-quantitative and subjective-qualitative ones. The latter have been quantified by auditing an unbiased sample of experts through questionnaires. Finally, the chosen method was applied to neurosurgery, and especially to spine surgery. Two different surgical techniques have been evaluated for different pathologies, each applying different biocompatible elements: the traditional treatment, which employs rigid fixers (Cage and Titanium bars), and a new procedure, which employs elastic fixers with shape memory (Somafix and Nitinol bars).

Keywords— HTA, SWOT, decision support, target function.
I. INTRODUCTION

Health Technology Assessment (HTA) is a multi-disciplinary and multidimensional process to evaluate different, alternative and competing technologies. HTA must consider various needs, from health and economics to social and ethical aspects. Such needs are classified here in three categories: clinical needs, patients' needs and management needs. The goal is to support decision makers in health policies with technical-scientific evaluations. As a consequence, HTA develops a straight link between research and health policy. From this point of view HTA may represent an effective instrument for Health Organizations to face their daily challenges [1-4]. This work suggests a method which can detect the technology that fills needs on a different basis, meaning that it considers individual needs both as a whole and one by one. An objective function was established in order to detect the best technology overall. Its results are represented in a graph which analyzes and compares the various technologies, as per the three categories stated above.
To conclude, the case study for the application of the chosen methodology is the evaluation of two different types of neurosurgery, and particularly of rachis surgery. More specifically, this study analyses two lumbar and two cervical surgeries, which implant two different types of fixers: elastic and rigid ones.

II. METHODOLOGY

The algorithm formulated for the Technology Assessment is composed of three different and consecutive elements:

1. evaluation of needs: focusing on the specific pathology, we first identified needs and then classified them by their importance with the aid of specifically formulated questionnaires;
2. clinical application analysis: we considered the different methods of treatment available for the specific pathology, and then analyzed the possible factors which would make some or all of the treatments unsuitable to the specific case;
3. compared evaluation of technologies: this phase gives a rate of evaluation of the different technologies in comparison with the specific needs, and then represents the results in a graph.

This last phase is the most important for the application of the proposed algorithm, as it provides a way to represent both the specific and individual needs. We must consider that only some needs are expressed as objective and quantitative parameters; others are subjective and qualitative. Therefore, a first step towards a scientific technology evaluation consists in defining a clear methodology for converting subjective evaluations into objective, normalized criteria. This is a crucial point in technology assessment, where the experience and especially the knowledge of the complete process under evaluation are both very important. The result of the assessment is not the "truth", but it can strongly support decisions. It is also a valid tool for Decision Makers for reducing the limitations of an empirical or subjective approach, where the "feeling" and the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 762–765, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1. Algorithm of the Assessment.

Fig. 2. Sample of representation of a strong point and a weak point in a SWOT analysis.
"sensibility" of the individual are negative and limiting factors. Once the specific needs have been identified, it is possible to classify them by their importance, thereby obtaining an n-tuple of values b_i. We can then measure how each technology satisfies all of the identified needs. As stated above, some parameters of evaluation are objective and quantitative, whilst others are subjective and qualitative. Whilst we can easily measure the level of performance for the former, we must look further into the subjective and qualitative parameters in order to achieve a quantitative, even if still subjective, scale. The algorithm illustrated in Figure 1 produces an n-tuple s_i for each technology; each element of this n-tuple indicates the level of performance with respect to each individual need, translated into a quantity according to a standardised numerical scale. By adopting such an algorithm we obtain the following matrix:

\[
\begin{pmatrix}
b_1 & b_2 & \cdots & b_n \\
s_1^1 & s_2^1 & \cdots & s_n^1 \\
s_1^2 & s_2^2 & \cdots & s_n^2 \\
\vdots & \vdots & \ddots & \vdots \\
s_1^m & s_2^m & \cdots & s_n^m
\end{pmatrix}
\tag{1}
\]

where:
n — number of detected needs
m — number of compared technologies
b_n — importance of the n-th need
s_n^m — level at which the m-th technology meets the n-th need
From the above matrix we can obtain both a diagram showing the ratio between each need and the level to which it has been met, and an Index of Technology Evaluation (ITE), either partial or total, by adopting the defined target function. A partial ITE refers to one specific category of needs, whilst the total ITE results from the average of the partial ITEs, weighted with values defined a priori by the structure managers. Let us examine these procedures in detail. A Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis is adopted to obtain a concise and self-evident parallel evaluation of the technologies being compared. The SWOT analysis entails placing the data referring to the importance of the individual need on the abscissa, and the data denoting the satisfaction rate of a specific technology on the ordinate. It therefore becomes possible to determine the strongest points of a technology: points that maximise the product between the importance of a need and its satisfaction rate. Those elements that do not produce a good enough satisfaction rate in relation to a need considered a priority become the weakest points of a technology. The graph visualizes at once both the strong and the weak points of different technologies, placing the former in the third quadrant and the latter in the fourth quadrant (fig. 2). The SWOT analysis thus allows an evaluation, and therefore an analysis, of both the strong and the weak points of different technologies based on the two elements under consideration: importance of needs and satisfaction rate. This analysis is valuable from two points of view. First, the producers of the technologies under comparison are able to detect what satisfies patients better and what requires improvement, thereby identifying where to focus research and development. Second, it helps users evaluate the technology most appropriate to their needs. Despite the advantages of a SWOT analysis, it is often necessary to produce a brief and concise evaluation in order to name the technology which best suits the identified criteria.
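The strong-point/weak-point classification described above can be sketched minimally: each need is a point (importance b, satisfaction s), and important needs are split by their satisfaction rate. The thresholds and the numeric b, s values below are illustrative assumptions of ours; only the need names come from the case study.

```python
def swot_points(needs, b_threshold, s_threshold):
    """Split (name, importance b, satisfaction s) triples into strengths
    and weaknesses.

    Strengths: important needs that the technology satisfies well.
    Weaknesses: important needs with a poor satisfaction rate."""
    strengths, weaknesses = [], []
    for name, b, s in needs:
        if b >= b_threshold:
            (strengths if s >= s_threshold else weaknesses).append(name)
    return strengths, weaknesses

# Illustrative values on a 0-10 scale (not measured data).
needs = [("Rachis alignment", 9, 8), ("Loss of pain", 10, 4), ("Comfort", 3, 9)]
strong, weak = swot_points(needs, b_threshold=8, s_threshold=6)
# strong == ["Rachis alignment"], weak == ["Loss of pain"]
```

Needs below the importance threshold are left unclassified, mirroring the SWOT plot, where only high-importance points qualify as strong or weak points of a technology.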
In such cases, the evaluation and comparison of technologies can be achieved by linking the importance of a need and its satisfaction rate as given by the different technologies. We can therefore outline an objective function to calculate an index called the Technology Evaluation Index (TEI). For each of the three categories of needs, the satisfaction rate of each individual need is multiplied by the need's own importance; the standardised sum of these products gives the partial TEI. The algebraic expression used for the objective function is the following:

\[
TEI_c = f\left(b_1^c, b_2^c, \ldots, b_{n_c}^c;\; s_1^c, s_2^c, \ldots, s_{n_c}^c\right)
      = \frac{\sum_{i=1}^{n_c} b_i^c \cdot s_{i,m}^c}{s_{\max} \cdot \sum_{i=1}^{n_c} b_i^c}
\tag{2}
\]

where:
TEI_c — partial Technology Evaluation Index referred to the needs listed in category "c"
b_i^c — importance of need "i" belonging to category "c"
s_{i,m}^c — satisfaction rate for need "i" of category "c" as given by technology "m"
m — technology under consideration
n_c — number of needs detected in category "c"
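The partial and total TEI computations can be sketched directly. The normalisation below assumes the reconstructed objective function Σ(b·s)/(s_max·Σb), so each partial TEI lies in [0, 1]; all numeric values in the example are illustrative, not data from the case study.

```python
def partial_tei(b, s, s_max=10.0):
    """Partial TEI for one category: importance-weighted satisfaction,
    normalised so a technology meeting every need at s_max scores 1."""
    if len(b) != len(s) or not b:
        raise ValueError("need matching, non-empty importance/satisfaction vectors")
    return sum(bi * si for bi, si in zip(b, s)) / (s_max * sum(b))

def total_tei(partials, weights):
    """Total TEI: partial TEIs combined with the decision makers' weights
    (e.g. 5/3/2 points for clinical, patients' and organisational needs)."""
    return sum(w * t for w, t in zip(weights, partials))

# Illustrative: one technology, three categories of needs (scores 0-10).
tei_clinical = partial_tei(b=[9, 7, 8], s=[8, 6, 9])
tei_patients = partial_tei(b=[8, 6], s=[7, 9])
tei_org      = partial_tei(b=[5, 5], s=[4, 6])
total = total_tei([tei_clinical, tei_patients, tei_org], weights=[5, 3, 2])
```

With weights distributing 10 points across the categories, the total TEI falls on a 0-10 scale, consistent with the magnitudes reported for the case study.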
This is a way to easily compare different technologies in relation to each category of needs. By plotting the results using bar diagrams, it is possible to promptly and concisely identify the technology which best suits a specific category of needs. In order to obtain the total TEI for the technologies compared in a study, it is necessary to estimate the weighted average of the partial TEIs. The weights attributed to the different categories of needs are indicated by the decision makers and are quantified by attributing a specific number of points. The decision makers use such weights to indicate the strategies they wish to implement with regard to the particular health organisation they are responsible for, in accordance with regional and national health policies. On this basis, the technology that obtains the highest total TEI is identified as the best option.

III. DISCUSSION AND RESULTS

The proposed Health Technology Assessment method has been applied to the analysis of two different surgical techniques, both used in rachis surgery to treat several different pathologies. In particular, the study took into account two surgical methods using different biocompatible elements: the traditional one employs rigid fixers, while the innovative method uses elastic fixers with shape memory. Specific devices are employed for each operation and for the different spine segments, as listed in Table 1. The evaluation examined three main categories of needs: clinical, patients' and organizational. The identification of needs, especially the clinical ones, was developed considering different scientific studies focused on the physiopathology of the rachis [4-5] and on the specific surgical techniques [1].
Table 1. Devices under evaluation.

Rachis   | Device for elastic fixation | Device for rigid fixation
Cervical | Somafix                     | Cage in Peek
Lumbar   | Nitinol bars                | Titanium bars
These needs were classified by members of surgical teams who deal with both techniques, by patients suffering from rachis pathologies, and finally by managers of structures dealing with vertebral surgery. Even with the limitations of this study, which involved a rather small number of users, it was possible to reach the aim of evaluating the effectiveness of the proposed method. Afterwards, the satisfaction rate of the needs was computed by examining the four technologies under comparison two by two. All the data were processed following the method shown in Figure 1, obtaining the matrix of needs importance and satisfaction rates.
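The importance classification gathered from the three groups of respondents could be aggregated as a simple per-need average of questionnaire scores. The group labels and scores below are invented for illustration; only the need names come from the case study.

```python
# Questionnaire scores per need, one list per respondent group
# (surgeons, patients, managers) — invented values on a 0-10 scale.
responses = {
    "Recovery of functions": {"surgeons": [9, 8], "patients": [10, 9], "managers": [7]},
    "Loss of pain":          {"surgeons": [8, 9], "patients": [10, 10], "managers": [6]},
}

def importance(need):
    """Importance b_i of a need: mean score over all respondents."""
    scores = [v for group in responses[need].values() for v in group]
    return sum(scores) / len(scores)

b = {need: importance(need) for need in responses}
print(b["Recovery of functions"])  # 8.6
```

A weighted variant (e.g. giving the clinical panel more influence for clinical needs) would fit the same structure; the flat mean is simply the most conservative assumption.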
Fig. 3. SWOT analysis of clinical requirements in cervical rachis surgery. The labelled points compare the two devices:

Clinical requirement               | Somafix | Cage
Rachis alignment                   | A       | A'
Intervertebral spaces preservation | B       | B'
Restoration of cervical canal size | C       | C'
Recovery of functions              | D       | D'
Loss of pain                       | E       | E'

As previously stated, the SWOT analysis has made it possible to evaluate and comment one by one on all points concerning the importance of needs and their satisfaction rates, in order to detect the strongest and the weakest points of the different technologies. Figure 3 shows the result of the SWOT analysis for some clinical needs and gives evidence that different technologies satisfy the same need with different scores. The graphical representation allows a compared analysis of different technologies, evidencing the respective strong and weak points for any single need. Using the definition of the partial TEI it is possible to represent the results for the three categories of need: clinical, patients' and management. The results are plotted in figure 4.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A Multi Scale Methodology for Technology Assessment. A case study on Spine Surgery
Fig. 4. Comparison of partial TEIs: elastic fixers are shown in blue, rigid fixers in orange.

From this representation it is clear that both elastic fixers overcome the rigid ones in the satisfaction of clinical and patients' needs, whereas the rigid fixers overcome the elastic ones for organizational needs. Weighing these results against the strategic lines of the health structure can lead to the choice of the better technology. Finally, Table 2 shows the total TEI resulting from the weighted average of the partial TEIs, with weights provided by the management of the Istituto Neurologico Mediterraneo – I.R.C.C.S. Neuromed, Pozzilli (IS, Italy). The offices responsible for this organization distributed 10 points as follows: 5 points to clinical requirements, 3 points to patients' needs and 2 points to organizational requirements. This assignment of scores reflects the policy of the structure, which appears to be addressed more to the quality of care than to management needs.

Table 2. Total TEI for each specific technology.
Rachis      Device          TEI     Acceptability Index
Cervical    Somafix         7.23    1.13 (= 7.23/6.39)
Cervical    Cage in Peek    6.39
Lumbar      Nitinol         7.38    1.01 (= 7.38/7.30)
Lumbar      Titanium        7.30

IV. CONCLUSIONS The method under examination, and particularly the SWOT analysis, outlined and compared the strengths of the different technologies for every identified need. The graphical representation is easily understood and can be a good support to decision makers. Moreover, the partial TEIs allow the satisfaction rate reached within each category of needs to be evaluated for every technology under comparison. The final decision, as already mentioned in the previous paragraph, is obtained considering the weights assigned by the decision makers to each category of needs.

REFERENCES
[1] Acampora S., De Marinis P., Del Gaizo C., Amoroso E.: Application of shape memory titanium-nikelide in spinal surgery. Journal of Surgical Oncology (Wiley-Liss), Supplement 4:72, 1999.
[2] Crepea A.T.: A systems engineering approach to technology assessment. Journal of Clinical Engineering, 7-8, 1995.
[3] Houben G., Lenie K., Vanhoof K.: A knowledge-based SWOT-analysis system as an instrument for strategic planning in small and medium sized enterprises. Decision Support Systems, Vol. 26, 1999, pp. 125-135.
[4] Panjabi M.M., Oxland T., Takata K., Goel V., Duranceau J., Krag M.: Articular facets of the human spine: quantitative three-dimensional anatomy. Spine, Vol. 18, 1993.
[5] Panjabi M.M., White A.: Basic biomechanics of the spine. Neurosurgery, 7(1):76, 1980.
[6] PSN 2006-2008, approved by D.P.R. 07-04-2006, published in G.U. n. 139 of 17-6-2006, Ordinary Suppl. n. 149.
[7] Sin G., Van Hulle S.W., De Pauw D.J., van Griensven A., Vanrolleghem P.A.: A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis. Water Res. 2005 Jul;39(12):2459-74.
[8] Vitiello L., Bracale U., Bracale M., Renda A.: An operative and structural methodology for the technology assessment O.S.ME.T.A. Medicon 2004, 31 July - 5 August 2004, Ischia, Naples, Italy.
Author: Marcello Bracale
Institute: University Federico II, Department of Electronic Engineering and Telecommunication, Biomedical Engineering Unit
Street: Via Claudio, 21 80025
City: Naples
Country: Italy
Email: [email protected]

Author: Leandro Pecchia
Institute: University Federico II, Department of Electronic Engineering and Telecommunication, Biomedical Engineering Unit
Street: Via Claudio, 21 80025
City: Naples
Country: Italy
Email: [email protected]
Continuous EEG monitoring in the Intensive Care Unit: Beta Scientific and Management Scientific aspects
P.M.H. Sanders 1,2, M.J.A.M. van Putten 2
1 Department of Medical Biology, University of Groningen, Groningen, Netherlands ([email protected])
2 Department of Clinical Neurophysiology, Medisch Spectrum Twente and Institute of Technical Medicine, University of Twente, Enschede, Netherlands ([email protected])
Abstract— Due to various technological advances, it is now possible to continuously monitor critically ill patients using EEG, including the extraction of various quantitative features. In this study, several beta scientific and management scientific aspects of the implementation and use of cEEG in the ICU are discussed. Keywords— Continuous EEG, quantitative EEG, patient groups, cost, labour intensity
I. INTRODUCTION Continuous EEG (cEEG) monitoring provides a noninvasive and rather inexpensive method to continuously assess important aspects of the neurologic status of a patient. Because this technique can monitor brain function for long periods of time, even when patients are comatose or sedated, it can be of great use in the intensive care unit (ICU). For instance, Jordan [1] monitored 124 patients in a neuro ICU, and in 51% of these patients cEEG made an essential contribution to the decisions taken by the physician. Another positive aspect of cEEG is that it can result in a reduced length of stay in the ICU, as shown in a study by Vespa et al. [2]. It can also result in a reduced need for CT scans. However, various practical and logistical problems have to be overcome before cEEG can be implemented in the ICU. For instance, analysis of the raw EEG signal has to be performed by a specialist, who is not always present in the ICU. This problem can be overcome by network access, so that the physician can view the EEG from his office or from home, although this would be rather labour intensive. Quantitative EEG (qEEG) analysis methods and automated signalling can simplify interpretation for the nursing staff of the ICU and reduce labour intensity as well [3,4]. In this study, we analyze the aspects involved in the implementation and use of cEEG in the ICU. The beta scientific aspects that are analyzed include the relevant patient groups that can be monitored and the suitability of the different qEEG features, including automated signalling. The management scientific aspects that are determined are the costs and labour intensity. Based on our own experience
and the literature on this subject, it is likely that a combination of qEEG features is needed to optimally monitor different types of injury. Analysis of the aspects involved in the implementation and use of cEEG in the ICU will reveal the points of interest and will contribute to a successful implementation in the ICU. II. MATERIALS AND METHODS For this study, the experiences described in the existing literature are reviewed. By means of the information obtained from these studies, relevant patient groups will be established. All eligible patients will be monitored for 24 hours or more. EEGs are recorded according to the International 10-20 system with Ag/AgCl electrodes. Recording is performed using a NeuroCenter EEG system (Clinical Science Systems, Netherlands), with the sampling frequency set to 256 Hz. Various quantitative features will be evaluated as well, including the Brain Symmetry Index [5], mean spectral power coherence, and Nearest Neighbor Phase Synchronization [3]. First, several of these features are evaluated off-line, using EEG recordings from our digital EEG database. Promising features will be implemented in NeuroCenter EEG for real-time analysis. Before the actual long-term EEG recordings start, several teaching courses will be given to the intensive care physicians and nursing staff. III. RESULTS Several presentations were given to the ICU staff, and long-term EEG recordings will soon be started. At the time of the congress, we expect to have included 10 to 15 patients. In Figure 1 we show an example of a potential qEEG feature, based on the nearest neighbor coherence, that may serve to detect seizure activity in ICU patients.
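As an illustration of the kind of quantitative feature involved, the sketch below computes a simplified left-right spectral symmetry measure between two channels. It is only loosely inspired by the Brain Symmetry Index of [5], not its actual definition; all names, parameters and the synthetic signals are assumptions of ours.

```python
import numpy as np

def symmetry_index(left, right, nperseg=1024):
    """Simplified hemispheric symmetry measure: mean absolute relative
    difference of spectral power between a left and a right channel.
    0 = perfectly symmetric; values near 1 = strongly asymmetric.
    (Loosely inspired by, but not identical to, the Brain Symmetry Index [5].)"""
    def power(x):
        # Average periodogram over non-overlapping segments.
        segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, nperseg)]
        return np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    pl, pr = power(np.asarray(left)), power(np.asarray(right))
    return float(np.mean(np.abs(pl - pr) / (pl + pr + 1e-12)))

# Synthetic example: 8 s of signal sampled at 256 Hz, as in the recordings above.
t = np.arange(0, 8, 1 / 256)
rng = np.random.default_rng(0)
chan = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
other = 3.0 * np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(t.size)

print(symmetry_index(chan, chan))                 # identical channels -> 0.0
print(symmetry_index(chan, other) > 0.1)          # dissimilar channels -> True
```

A real-time implementation would additionally restrict the comparison to a clinically relevant frequency band and smooth the index over time.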
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 756–757, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
We expect to have included 10-15 patients at the time of the congress. These data will allow us to draw our first conclusions about the beta and management scientific aspects of cEEG monitoring in the ICU.

Fig. 1. Illustration of a qEEG feature (over a 20-minute window) that can assist in the detection of seizure activity. Arrows point to bursts of epileptiform discharges.
IV. DISCUSSION AND CONCLUSIONS The literature, and our own experience, strongly suggest that cEEG monitoring should become more commonplace. Various neurological derangements are very difficult, if not impossible, to detect in ICU patients without the use of cEEG, in particular in sedated patients. This includes the presence of a non-convulsive status epilepticus or the occurrence of vasospasms. cEEG may assist in the detection of derangements in brain function in a still-reversible state, allowing a therapeutic window. Various conditions need to be satisfied in order to successfully implement cEEG recording in the ICU. This includes teaching the ICU staff about the various logistic and technical challenges.
REFERENCES
1. Jordan KG (1995). Neurophysiologic monitoring in the neuroscience intensive care unit. Neurol Clin 13:579-626.
2. Vespa PM, Nenov V, Nuwer MR (1999). Continuous EEG monitoring in the intensive care unit: early findings and clinical efficacy. J Clin Neurophysiol 16(1):1-13.
3. van Putten MJAM (2003). Nearest neighbor phase synchronization as a measure to detect seizure activity from scalp EEG recordings. J Clin Neurophysiol 20(5):320-325.
4. van Putten MJAM, Kind T, Visser F, Lagerburg V (2005). Detecting temporal lobe seizures from scalp EEG recordings: a comparison of various features. Clin Neurophysiol 116:2480-2489.
5. van Putten MJAM et al. (2004). A Brain Symmetry Index (BSI) for online EEG monitoring in carotid endarterectomy. Clin Neurophysiol 115:1189-1194.

Author: P. Sanders
Institute: Department of Clinical Neurophysiology
Street: PO Box 50.000
City: 7500 KA Enschede
Country: Netherlands
Email: [email protected]
E-learning for Laurea in Biomedical Laboratory Technicians: a feasibility study
D. Giansanti 1, L. Castrichella 2 and M.R. Giovagnoli 2
1 Dipartimento di Tecnologie e Salute, Istituto Superiore di Sanità, Roma
2 Seconda Facoltà di Medicina e Chirurgia, Università "La Sapienza", Roma
Abstract— With the development of e-learning and its ability to deliver rich animated content rapidly to a wide audience, new methods for teaching have evolved. E-learning tools allow the building of learner-focused structured courses. In this paper the authors present a feasibility study of a Web-based e-learning course for the academic degree of Laurea for Biomedical Laboratory Technicians (LBLT). Topics, basic aspects and essential requirements have been identified, covering (1) the simplicity of the methodology needed to exchange didactic material and (2) the need for a simple and fast architecture for digital tele-pathology to exchange pathological and/or cytological data. The latter is a core aspect because of the amount of data to be exchanged and the complexity of the applications, which are even higher than in other tele-imaging methodologies such as tele-radiology and tele-echocardiography. For these reasons a course for LBLT could represent an interesting test bench for e-learning. Keywords— telepathology, e-health, telemedicine, digital pathology, tele-cytology, e-learning
I. INTRODUCTION A. E-learning and University There are mainly three issues driving the development of e-learning. The first is pedagogical effectiveness (better learning); the second is economic efficiency (more students per teacher); and the third is the management of learning [1]. The first two aspects are strongly linked and are considered the strong points of on-line e-learning. Although these have their importance for the financial status of a University and for the delivery of knowledge at a distance, up to today developments are still very limited. B. E-learning and medical images Among all e-learning applications, the ones focused on medical imaging are highly critical for the following reasons: 1) The need to exchange large files, such as, for example, DICOM files. 2) The need for real-time cooperative management of the images with videoconferencing, such as, for example, in tele-echocardiology [2].
3) Medical decisions need a high accuracy of the transmitted images; some image quality evaluation tests should therefore be performed. Recently, e-learning applications have shown great development. Hutten et al., for example, designed and constructed an Internet-based system for e-learning in medical engineering which met with great approval from the involved subjects [3]. Stoeva and Cvetkov [4] designed and constructed an e-learning system suitable for medical radiation physics education. Pallikarakis showed in [5] that open and distance learning systems are very effective means for continuous education and training purposes. Furthermore, the author, following the principles of collaborative learning, designed and constructed a full course on medical image processing. Focusing on e-learning applications more directly connected to our field of investigation, notable is the e-learning system designed at the University of Leeds, based on the international cooperation of biologists, physicians and bioengineers [6]. C. Problem definition and aim of the paper The aim of this paper is to investigate the feasibility of realizing an e-learning system at the University for the course of Laurea in Biomedical Laboratory Technicians (Italian: Tecnici di Laboratorio Biomedico). Both points listed above should be considered in its design. In particular, the e-learning system should: 1) Be cheap, due to decreasing public finances. 2) Be user-friendly and easily upgradeable by individual initiative. 3) Allow the management of very large files of digitized pathological and cytological information. II. FEASIBILITY INVESTIGATION A. Successful model examples A pioneering complete system (though oversized for our purpose) is the solution designed at the University of Leeds (UK) [6], which furnishes four services directly connected to e-learning applications: (1) a service of tele-consulting of
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 749–751, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
virtual slides for the most common cancers; (2) e-learning services based on a wide database of clinical cases; (3) a service of connection to the Leeds Tissue Bank, based on a database of tissues continuously updated and enriched by scientists and physicians; (4) visualization of the clinical cases of the Leeds Hospital. Useful platforms that could be integrated to constitute an e-learning system are the commercial telepathology platforms. Different telepathology solutions are available, based on different equipment, such as: ZEISS (http://www.zeiss.it), NIKON (http://www.nikon.it), OLYMPUS (http://www.olympus.it), HAMAMATSU (http://www.sales.hamamatsu.com), APERIO (http://www.aperio.com).
These equipments furnish images in different formats, according to common standards such as TIFF and JPEG2000 or to proprietary standards.

B. Merging and adaptation of the model In consideration of the three requirements, we identified the following essential characteristics for the e-learning system. Architecture: we identified an architecture as simple as FTP but more secure and Web-based, namely the WebDAV protocol (Web-based Distributed Authoring and Versioning). In the extranet connection WebDAV uses 128-bit SSL cryptography both for authentication and for data transfer. It is an open protocol compatible with the Windows, Linux, Unix and Macintosh operating systems; furthermore, it is an extension of HTTP/1.1 that adds the capability of managing files on remote resources. Fig. 1 shows the architecture, which also comprises a tele-pathology platform with a digital microscope (in consideration of the training objectives). The minimal transmission rate required by these systems is 384 kb/s.

Fig. 1. Architecture for the e-learning system: a WebDAV server hosting the tele-pathology platform connects teachers and students through the university intranet and a WAN.

Teacher and student platform: the platform at the teacher and student side should allow video-conferencing; exchange and management of large image files (especially for courses of biology); tele-pathological connections; exchange of didactic material; and exchange of reports. Fig. 2 details the required characteristics. For the tele-pathological connections, many of the above listed systems furnish "Lite" applications for the clients, represented in this case by students or teachers. Adobe PDF Writer/Reader allows the generation and managing of small files starting from large Word files.

Fig. 2. The essential teacher/student platform: Adobe PDF writer/reader, image reader/writer, lite tele-pathology software, tele-conferencing application software/hardware, Microsoft XP or higher, Microsoft Office, and an Internet connection faster than 384 kb/s.

An image reader and writer is also essential for managing and exchanging biological images. The reports may be managed and/or generated using either PDF files or Office files (Excel, Word, PowerPoint, etc.).

III. DISCUSSION A study on the feasibility of an e-learning system for the academic degree of Laurea for Biomedical Laboratory Technicians (LBLT) has been proposed. Topics, basic aspects and essential requirements have been identified, covering (1) the simplicity of the methodology needed to exchange didactic material and (2) the need for a simple and fast architecture for digital tele-pathology to exchange pathological and/or cytological data. The latter is a core aspect, first of all for the amount of data to be exchanged, which is even higher than in other tele-imaging methodologies such as tele-radiology and tele-echocardiography. It is also an essential aspect in consideration of the future working scenario of these students. In fact, the students
will have to use tele-pathology applications as a routine methodology. It is thus essential to plan specific training during the course of study at the University. In fact, personnel principally apply as routine, without effort and with a preferred instinctive autonomy, what they have previously learned during their training path and fixed in their internal neural action-and-decision models. This is a basic challenge that makes e-learning for LBLT a pioneering application in both academic and medical respects.
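To put the minimal transmission rate of 384 kb/s mentioned above in perspective, the rough estimate below computes idealized transfer times for two file sizes; the sizes are illustrative assumptions, not measurements, and protocol overhead is ignored.

```python
# Rough transfer-time estimate at the minimal rate of 384 kb/s stated in the
# text. The file sizes are hypothetical: digital-slide files in tele-pathology
# are typically far larger than ordinary didactic material.

RATE_KBIT_S = 384  # minimal transmission rate from the text

def transfer_minutes(size_mb, rate_kbit_s=RATE_KBIT_S):
    """Idealized transfer time in minutes for a file of size_mb megabytes."""
    bits = size_mb * 8 * 1000 * 1000          # decimal megabytes -> bits
    return bits / (rate_kbit_s * 1000) / 60   # bits / (bits per second) -> minutes

print(round(transfer_minutes(1), 1))    # 1 MB PDF handout      -> 0.3 min
print(round(transfer_minutes(200), 1))  # 200 MB digital slide  -> 69.4 min
```

The contrast between the two figures illustrates why the exchange of cytological image data, rather than of ordinary didactic material, drives the bandwidth requirement.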
REFERENCES
1. Allen R. e-learning in medical engineering and physics. Med Eng Phys. 2005;27:543-547.
2. Giansanti D, Morelli S, Macellari V. A protocol for the assessment of the diagnostic accuracy in tele-echocardiography imaging. Telemedicine and e-Health Journal, in press, summer 2007.
3. Hutten H, Stiegmaier W, Rauchegger G. KISS - a new approach to self-controlled e-learning of selected chapters in medical engineering and other fields at bachelor and master course level. Med Eng Phys. 2005 Sep;27(7):605-9.
4. Stoeva M, Cvetkov A. e-Learning system ERM for medical radiation physics education. Med Eng Phys. 2005 Sep;27(7):605-9.
5. Pallikarakis N. Development and evaluation of an ODL course on medical image processing. Med Eng Phys. 2005 Sep;27(7):549-54.
6. http://www.virtualpathology.leeds.ac.uk/index.php
7. Bedini R, Belardinelli A, Giansanti D, Guerriero L, Macellari V, Morelli S. Quality assessment and cataloguing of telemedicine applications. J Telemed Telecare. 2006;12(4):189-93.
8. Danda J, Juszkiewicz K, Leszczuk M, Loziak K, Papir Z, Sikora M, Watza R. Medical video server construction. Pol J Pathol. 2003;54(3):197-204.
9. Gulube SM, Wynchank S. Telemedicine in South Africa: success or failure? J Telemed Telecare. 2001;7 Suppl 2:47-9.
10. Picot J. Meeting the need for educational standards in the practice of telemedicine and telehealth. J Telemed Telecare. 2000;6 Suppl 2:S59-62.
11. Yogesan K, Constable IJ, Eikelboom RH, van Saarloos PP. Tele-ophthalmic screening using digital imaging devices. Aust N Z J Ophthalmol. 1998 May;26 Suppl 1:S9-11.
12. Roca OF, Pitti S, Cardama AD, Markidou S, Maeso C, Ramos A, Coen H. Factors influencing distant tele-evaluation in cytology, pathology, conventional radiology and mammography. Anal Cell Pathol. 1996 Jan;10(1):13-23.
13. Miaoulis G, Protopapa E, Skourlas C, Delides G. Supporting telemicroscopy and laboratory medicine activities. The Greek "TELE.INFO.MED.LAB" project. Arch Anat Cytol Pathol. 1995;43(4):275-81.
14. Seidenari S, Pellacani G, Righi E, Di Nardo A. Is JPEG compression of videomicroscopic images compatible with telediagnosis? Comparison between diagnostic performance and pattern recognition on uncompressed TIFF images and JPEG compressed ones. Telemed J E Health. 2004 Fall;10(3):294-303.

Author: Daniele Giansanti
Institute: Istituto Superiore di Sanità
Street: via Regina Elena 299
City: 00161 Roma
Country: Italy
Email: [email protected]
Technology Assessment for evaluating integration of Ambulatory Follow-up and Home Monitoring
L. Pecchia 1, L. Bisaccia 1, P. Melillo 1, L. Argenziano 2, M. Bracale 1
1 University Federico II, Department of Electronic Engineering and Telecommunication, Biomedical Engineering Unit, Naples, Italy
2 Cardiologic Unit, Clinica Villalba, Naples, Italy
Abstract— The introduction of new services into a complex system such as the Health System is not an easy challenge. Health Technology Assessment can support decision makers and drive decision making regarding the introduction of new health services. This paper describes an analysis to evaluate the clinical and patients' needs that could be satisfied by the integration of Ambulatory Follow-up and Home Monitoring. A cost analysis of the ambulatory follow-up and a preliminary cost estimation of the Web Service are then performed in order to assess the economic impact of these services on the follow-up. Keywords— Health Technology Assessment, Telemedicine, Web Service, Chronic Heart Failure.
I. INTRODUCTION In recent years the focus of Health Systems has changed. It has moved from the single act of primary care, such as diagnosis, treatment and therapy, to the management of the whole process of care, following the modern concept of Continuity of Care. Furthermore, the Hospital mission is changing and becoming much more focused on acute care. Moreover, the need for new models of organization of care has been growing since the adoption in many states of the Diagnosis Related Group (DRG) as a methodology to finance or to evaluate private and public hospitals, which pushes to discharge the patient in the shortest possible time [1]. Consequently, new strategies for the management and organization of care are required, and telemedicine seems to be a good solution to improve the quality of care [2]. In particular, at present in Italy new forms of assistance are strongly recommended to institutions, and Home Care is considered [3] a valid new model to organize care, with two main goals: reduction of hospitalization, and reinforcement of the integration of social and health services on the territory. Moreover, several studies report that home monitoring of physiological parameters that are good markers of such criteria can reduce rehospitalizations and mortality [9]. The introduction of a new service into a complex system such as the Health System is not an easy challenge. Health Technology Assessment can support decision makers and drive decision making regarding the introduction of new health services [3].
This paper describes an analysis to evaluate the clinical and patients' needs that could be satisfied by home monitoring. A cost analysis of the ambulatory follow-up and a preliminary cost estimation of the Web Service are then performed in order to assess the economic impact of these services on the follow-up. The main aim of the proposed Home Monitoring service is to integrate the ambulatory protocol of follow-up for patients suffering from Chronic Heart Failure (CHF) through Web Services. The proposed service, hereinafter called Integrated Protocol (I.Pr.), is briefly described in the following paragraph. A. Follow-up Integrated Protocol Several studies showed a significant reduction in re-hospitalizations in groups of home-monitored patients compared with patients treated with the usual follow-up [4]. For this reason, several protocols have been proposed in order to continuously monitor chronic patients. The described protocol has two main goals: to reduce discontinuity in the follow-up, and to bring specialist know-how to the local ambulatory. In previous studies [5-7] a Web Service for the Continuity of Care of patients suffering from CHF was presented, aimed at collecting all the data acquired in the different environments (Hospital, Ambulatory and Home). Furthermore, a specific tool was developed for the automatic evaluation of the parameters acquired at the patient's home: in particular, a Web Service for ECG processing and the analysis of Heart Rate Variability (HRV) was implemented, which is a powerful method to control the degree of instability of CHF [8]. This is particularly important for CHF patients, whose condition of stability needs to be continuously monitored to prevent sudden aggravations. In fact, HRV is an index of autonomic balance and changes significantly in the case of heart failure.
The diagnostic and prognostic utility of this method has been extensively investigated [9-11] although the value of this technology in clinical practice still remains to be determined. The I.Pr. improves Ambulatory follow-up using a Service via Web which collects and processes, even daily, ECG and other physiological parameters of the patients.
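As a concrete illustration of the kind of HRV processing such a Web Service could perform, the sketch below computes two standard time-domain indices (SDNN and RMSSD) from a series of RR intervals; the function names and the synthetic data are ours, and the actual algorithms of the service are not described in the paper.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of the RR (NN) intervals, in ms."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Synthetic RR series (ms); a real service would extract it from the daily ECG.
rr = [800, 810, 790, 805, 795, 815, 800, 790]
print(round(sdnn(rr), 1))   # overall variability  -> 8.5
print(round(rmssd(rr), 1))  # beat-to-beat (vagal) -> 14.9
```

Reduced values of such indices over successive home recordings would flag a loss of autonomic balance and could trigger an earlier ambulatory control.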
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 758–761, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
These parameters can be sent to the referring center from home, via the web, through a PC or a mobile phone. The follow-up protocol prescribes several controls per year, depending on the four NYHA classes of CHF. The number of ambulatory controls in a year goes from a minimum of three to a maximum of eighteen [12]. With Home Monitoring the number of controls may in principle be increased, depending upon the severity of the case. II. METHODOLOGY This study is performed in two parts: the first assesses the clinical, patients' and management needs; the second performs a cost analysis of the ambulatory follow-up and an estimation of the integrated protocol costs.
A. Needs assessment Needs have been identified and organized in three categories: clinical, patients' and management. To define the clinical needs we studied national and international guidelines [12] for the care and the follow-up of CHF. We started from the hypothesis that the integrated follow-up has to satisfy the same criteria of stability as the ambulatory follow-up, as defined by the guidelines; the quality of care may then be improved by increasing the number of controls per year. Patients' needs have been assessed by using questionnaires to evaluate: the degree of difficulty in attending the ambulatory visits; the availability for integrative domiciliary visits; the familiarity with the Internet, PC and mobile phone; the time that the companion usually spends with the patient. The management needs were defined by auditing specialized cardiologists and managers of cardiologic ambulatories. For the identification of the needs we also used a graphic modelling language that allowed us to improve communication with the doctors: the chosen language is UML (Unified Modeling Language), which allows a process to be described using different diagrams.

B. Cost analysis strategy assessment The method used for the economic feasibility study is composed of three phases: identification, allocation and classification of the costs. The identification phase has the goal of identifying all the costs, starting from the analytical and graphical description of the profile of care; the tool used in this case study is again UML. Subsequently, every cost has been associated to a specific resource. From this phase, the cost-resource pairs are classified as fixed direct, fixed indirect or variable.

Table I: Cost analysis.
Phase            Aim                                                  Output
Identification   To identify all costs                                Costs
Allocation       To impute every cost to a specific resource          Cost-resource pairs
Classification   To organize costs according to the kind of resource  Fixed direct, fixed indirect and variable costs

Table II: Some exemplar personnel costs of ambulatory follow-up.
Personnel   Cost classification
Sanitary    Variable costs (consultant): 1. cardiologist 2. radiologist
            Fixed direct cost (dependent): 1. nurse
            Fixed indirect cost (dependent): 1. laboratory technician 2. radiology technician
Other       Fixed indirect cost (dependent): 1. administrative 2. cleaning staff 3. maintenance personnel

The costs of the web service supporting the I.Pr. have been identified in three categories: human resources, hardware and software. In our model, the cardiologic ambulatory already exists; maintenance and the required education are already included in the cost of the human resources; the service does not significantly increase the sanitary staff working hours; the patient is assisted by a relative trained ad hoc; and the expected life of the hardware and software is reasonably considered to be five years, which is the period that must be considered for depreciation. Under these hypotheses the additional costs of the web service can be considered fixed, while the annual cost of the hardware and software is 20% of the initial cost. A break-even analysis concludes the study. In the absence of suitable tariffs for telemedicine services, the proceeds of the I.Pr. have been kept unchanged compared with the ambulatory follow-up: at present in Italy, telemedicine services are not yet included in the regional health tariff lists [13]. Therefore it is useful to perform a break-even analysis to evaluate the volume of activities necessary for the sustainability of the new services. III. RESULTS AND DISCUSSION The case study has been carried out at the Private Hospital "Clinica Villalba del Prof. Umberto Bracale" in the
framework of a no-profit scientific and educational convention with the B.M.E Unit of University Federico II in Naples. The inter and multidisciplinary expert team, which is employed in the ambulatory activity in the hospital, participated with their different professionally to the prefeasibility study. Medical doctors support the preparation of the questionnaires for the definition of clinical and patients’ need and the administrative personnel support individuation of economical data. The most relevant clinical need are: absence of dyspnea, asthenia, abdominal pain; angina pectoris; ventricular arrhythmia; body weight variation, systolic blood pressure. From all the criteria referred by guidelines we focused our attention to those that can be usefully monitored by picking biomedical parameters at home even daily. Those criteria should be considered as outcome during all the follow-up. Furthermore an analysis of users needs has been performed by using questionnaires. In the table III we refer the results. From the analysis of the answers it results that the totality of the patients is not able to use any technology for tele-monitoring. For that reason we interviewed the accompaniers to mainly understand how long they spent with the patients and what is the degree of familiarity with technology for telecommunications. From the answers it is possible to conclude that the accompanier spends at least two hours every day with the patient and has enough familiarity with PC, mobile and internet to be trained to the use of the proposed web services. From this cross-section of all the performed needs analysis it has been possible to conclude that an Integrated Protocol of Follow-up can improve the quality of the care. Since we concluded that the I.Pr. could generate an improvement, we evaluated its sustainability by performing a cost analysis with the aim of evaluating the impact of the Web Services costs on the total costs of Ambulatory follow-up. 
In the Case Study the specific hypotheses are: the cardiology ambulatory is not dedicated only to CHF but is shared; it is dedicated to CHF for 12 hours/month; the technological resources exist and are already amortized; the costs are evaluated over 12 months for each patient; the number of admissions during a year and the health services for each admission depend on the NYHA classification. From the last hypothesis it follows that the variable costs and the proceeds change for each NYHA class. The results of the cost analysis are summarized in the break-even analysis: the figure shows that, for each NYHA class, the break-even point of the I.Pr. increases by only a few patients compared with the ambulatory follow-up alone.
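The break-even reasoning above can be sketched numerically. All figures below are hypothetical placeholders, not the paper's tariffs or costs; only the structure (yearly fixed costs, plus NYHA-dependent variable costs and proceeds per patient) follows the text:

```python
# Break-even sketch for the integrated follow-up protocol (I.Pr.).
# All monetary figures are HYPOTHETICAL placeholders, not the paper's data.
import math

def break_even_patients(fixed_costs, proceeds_per_patient, variable_cost_per_patient):
    """Smallest number of patients at which yearly proceeds cover yearly costs."""
    margin = proceeds_per_patient - variable_cost_per_patient
    if margin <= 0:
        raise ValueError("no break-even: margin per patient is not positive")
    return math.ceil(fixed_costs / margin)

# Hypothetical yearly figures per NYHA class (EUR): (proceeds, variable cost)
nyha = {"I": (900, 500), "II": (1200, 650), "III": (1600, 900), "IV": (2100, 1300)}
fixed = 6000  # hypothetical yearly fixed costs of the ambulatory

for cls, (proceeds, var_cost) in nyha.items():
    print(cls, break_even_patients(fixed, proceeds, var_cost))
```

Since both variable costs and proceeds change with the NYHA class, the break-even point is computed once per class, mirroring the per-class curves of the paper's break-even figure.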
L. Pecchia, L. Bisaccia, P. Melillo, L. Argenziano, M. Bracale

Table III: Users' needs evaluation. (Patients' average age is 60±5 years; #patients = 50, #accompanying persons = 35.)

Questions to patient / Answers (%):
- Who lives with you? My spouse 43.3; I live alone 30.0; With a nurse/other 13.4; With my children 6.7
- Who is your accompanying person? A relative 70.0; Nobody 30.0
- Do you think that organizing an ambulatory visit takes a long time? Yes 63.3; No 36.7
- How much time do you need to reach the hospital? < 1 hour 76.7; > 1 hour 23.3

Questions to patient's accompanier / Answers (%):
- How often do you see the patient? > weekly 66.7; < weekly 33.3
- How long do you stay with the patient? 2 h per day 86.7; All day 13.3

Questions to patients and accompaniers / Answers (%Pz / %Acc):
- What is your familiarity with a mobile phone? Scarce 93.3 / 26.7; Good 6.7 / 73.3
- What is your familiarity with a PC? Poor 93.3 / 26.7; Very good 6.7 / 73.3
- Do you surf the Internet? Yes 0.0 / 46.7; No 100.0 / 53.3
IV. CONCLUSIONS

The needs analysis shows that integrating ambulatory follow-up with Home Monitoring improves the Quality of Care by reducing discontinuity in the profile of care. The analysis of the questionnaire answers shows that patients and their accompaniers welcome the integration of ambulatory follow-up with ICT solutions. The cost analysis demonstrates that, under the considered hypotheses, the break-even point increases by only a few patients. However, it is necessary to take into account training for all the actors involved. Even if the implemented study needs further investigation, these preliminary results suggest that integrating traditional ambulatory services with new telematic solutions can increase the satisfaction of different needs without a remarkable growth of the costs.
Technology Assessment for evaluating integration of Ambulatory Follow-up and Home Monitoring
Fig. 1 Break-even analysis. It refers to a hospital ambulatory dedicated to CHF patients for 12 hours/month, over 12 months/year. The results show that integrating the ambulatory follow-up with a Web Service for Home Monitoring implies an average increase of the break-even point estimated at 35±2.6% of the number of patients.
ACKNOWLEDGMENT

The authors thank the General Direction of the Clinica Villalba of Prof. Umberto Bracale for their cooperation and for giving us the possibility to carry out all the activities necessary for the case study.
Authors: Marcello Bracale; Leandro Pecchia
Institute: University Federico II, Department of Electronic Engineering and Telecommunication, Biomedical Engineering Unit
Street: Via Claudio, 21 80025
City: Naples
Country: Italy
Email: [email protected]; [email protected]
2CTG2: A new system for the antepartum analysis of fetal heart rate

G. Magenes¹, M.G. Signorini², M. Ferrario², F. Lunghi³

¹ Dipartimento di Informatica e Sistemistica, Università di Pavia, Italy
² Dipartimento di Bioingegneria, Politecnico di Milano, Italy
³ S.E.A. Sistemi Elettronici Avanzati, Pavia, Italy
Abstract— Cardiotocography (CTG) was introduced in the early 1970s as a clinical test for checking fetal well-being during pregnancy and at the moment of delivery. The traditional approach is based on the detection of several time-domain parameters of the Fetal Heart Rate (FHR) signal, starting from the identification of a signal baseline. Convinced that the FHR really contains important indications about potentially dangerous fetal conditions, we set up a prototype system based on new algorithms and indices which can enhance the differences between normal and pathological fetal conditions. The basic characteristics of this system are: FHR sampled and recorded at 2 Hz; on-line traditional analysis by the incremental Mantel algorithm; extraction of accelerations, decelerations, FHR variability and related parameters; extraction of power spectral components related to different physiological control mechanisms; computation of FHR signal regularity indices through the Approximate Entropy algorithm.

Keywords— Fetal monitoring, Heart rate analysis, Nonlinear methods.
I. INTRODUCTION

Among the clinical tests performed in the antepartum period to assess the functional well-being of the fetus, Cardiotocography (CTG) is one of the most widespread methods to investigate the fetal condition non-invasively. Although CTG has led to a drastic reduction of intrapartum and early child mortality, some recent studies have pointed out that only very poor indications about fetus/newborn illness can be inferred from current CTG analysis [1]. This conclusion is apparently contradicted by the fact that the most reliable indicator of fetal condition is the FHR signal, upon which CTG is based. Most current clinical interpretations of the FHR signal, with some minor differences, consider the presence of a sinus rhythm of the FHR (the baseline), on which the physiological control mechanisms generate frequency changes in the heart activity. These changes, or events, are identified as accelerations and decelerations [2]. Up to now, the basic idea in automatic CTG analysis has been the extraction of the baseline, followed by the detection of the above-mentioned events, considering and measuring some morphological signal characteristics in the same way as the
clinicians do by eye inspection. The automatic classification of fetal states proposed by some commercial CTG systems (Sonicaid System 8002, OB TraceVue, the old 2CTG) is designed to match these criteria [3, 4]. As a further improvement, estimation of the short-term (STV) and long-term (LTV) variability has been considered. Other parameters, such as the Interval Index and Delta, have been derived from the previous ones in order to reinforce this statistical signal analysis [4]. The algorithmic approaches proposed so far did not lead to significant clinical improvements with respect to a qualitative analysis performed by an expert clinician, except for a reduction of the intra- and inter-observer variability [5]. The scoring systems based on morphological and time-domain parameters, aimed at providing some guidelines for reading and classifying the FHR signal, still lack reliability when the purpose is to compute figures and relate them to the fetal outcome [1]. The goal of the present paper is to describe the implementation of an automatic analysis system for fetal heart rate signals which integrates information coming from the traditional morphological and time-domain approaches, from frequency-domain analysis based on autoregressive estimation and, finally, from nonlinear analysis. The software design aims at improving clinical monitoring by providing analysis tools that implement advanced signal processing techniques in order to better classify fetal conditions.

II. MATERIALS AND METHODS

A. The computerized cardiotocographic system

Because of its great diffusion in clinical prenatal fetal monitoring, we decided to adopt a classical ultrasound-based CTG for our FHR analysis. A numerical (or computerized) cardiotocographic (2CTG) system is composed of two devices: a cardiotocograph, which records the fetal heart rate and the toco signal, and a microprocessor system, which analyzes and stores those signals.
The two parts can be physically separated, as in our case, where we acquired numerical data from a stand-alone fetal monitor equipped with a serial (or Bluetooth) interface communicating with a PC.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 781–784, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
In our system we can use all CTG fetal monitors compatible with the HP data protocol (HP series 135X, Corometrics 170, Philips 50A, ...) provided with a serial link. These fetal monitors use an autocorrelation technique to compare the demodulated Doppler signal of a heartbeat with the next one. Peak-detection software then determines the heart period (the equivalent of the RR period) from the autocorrelation function. With a peak-position interpolation algorithm, the effective resolution is better than 2 ms [6]. The resulting heart period is converted into a heart frequency as soon as a new heart event is detected and accepted. For historical reasons, almost all commercially available fetal CTG monitors display only the fetal heart rate expressed in beats per minute (bpm). The HP-like monitors produce a new FHR value in bpm every 250 ms and store it in a buffer. In the commercially available systems (e.g. OB TraceVue), the computer reads 10 consecutive values from the buffer every 2.5 s and determines the actual FHR as the average of the 10 values. In our 2CTG2 system, the software reads the FHR from the buffer at 2 Hz, raising the Nyquist frequency of the FHR signal to 1 Hz. Reading the FHR values every 0.5 s is a reasonable compromise between sufficient bandwidth and acceptable accuracy of the FHR signal.

B. Signal Recording & Preprocessing

Compared to standard Holter recordings, the buffering procedure greatly reduces the precision of the RR sequence generated by inverting the FHR signal (60000/FHR ms). Besides, the CTG device can erroneously lock onto the slower maternal heartbeat, even when the autocorrelation method is employed. This leads to an abrupt drop in the FHR signal and influences the evaluation of the variability indices. Therefore, a proper artifact detection technique has to be employed. The one we developed relies on the work of van Geijn et al.
[1980]; the main concept is that an acceleration of the heart rate develops more slowly than a deceleration does, so the limit for the acceptance of the point S(i+1) differs according to whether S(i+1) is smaller or greater than S(i). In detail, three requirements are set: (i) acceptance of FHR values which satisfy the criterion

    200·S(i) / (400 − S(i))  <  S(i+1)  <  200·S(i) / (114 + 0.43·S(i))        (1)

The whole series is processed several times and, at each run, the number of rejected points is counted; the process stops when an entire run discards no further points; (ii) a minimum of three intervals that qualify under (1) must be present in succession (S(i−1), S(i) and S(i+1)) for final acceptance of S(i+1); (iii) short intervals of valid points,
contained between invalid sequences, are rejected if their length is equal to or smaller than 20 points. The acceptance ranges for S(i+1) are comparable with those applied in commercial monitors (S(i) ± 20 bpm); nevertheless, the applied criterion is definitely more selective and precise. A quality index distinguishes three different quality levels of the FHR signal (optimal, acceptable and insufficient). The evaluation is based on the output of the autocorrelation procedure implemented in the HP1351A. Each FHR series was subdivided into 3-minute segments (360 points) after removing the bad-quality points at the beginning of the sequence.

C. Analysis parameters

After preprocessing, the CTG signals undergo the analysis procedure. The software computes a set of standard parameters related to the morphology of the signal (baseline, large and small accelerations per hour, decelerations per hour and contractions per hour) and to the time-domain characteristics of the FHR (FHR mean over a minute, FHR standard deviation, Delta FHR, Short Term Variability (STV), Long Term Irregularity (LTI), Interval Index (II)), as reported in [4]. The novelty resides in the computation of frequency-domain indices and of regularity/nonlinear parameters, following the results of our research group [7, 8]. In particular, among the regularity and nonlinear parameters, the Approximate Entropy (ApEn) [9] is included in the standard clinical version. In the research versions of our software we implemented some new nonlinear indices: Detrended Fluctuation Analysis (DFA) [10], Multiscale Entropy (MSE) [11] and Lempel-Ziv Complexity (LZC) [12].

D. Software implementation

The 2CTG2 lies in an intermediate position between a software tool for retrieving, analyzing and storing data from a CTG medical device and a stand-alone solution to handle a variety of data regarding the whole pregnancy period, mainly because it offers both functionalities.
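The acceptance criterion (1) and its asymmetry can be sketched as follows. This is an illustrative reimplementation of the formula, not the 2CTG2 source code (which is C/C++); the example FHR values are invented:

```python
# Sketch of the artifact-rejection criterion (1), after van Geijn et al. (1980):
# S(i+1), in bpm, is accepted only inside an asymmetric window around S(i),
# wider towards decelerations than towards accelerations.

def acceptance_window(s_prev):
    """Lower and upper acceptance bounds (bpm) for the next FHR sample."""
    lower = 200.0 * s_prev / (400.0 - s_prev)
    upper = 200.0 * s_prev / (114.0 + 0.43 * s_prev)
    return lower, upper

def is_accepted(s_prev, s_next):
    lo, hi = acceptance_window(s_prev)
    return lo < s_next < hi

# Around 140 bpm the window is roughly 108-161 bpm: a drop to 110 bpm is
# still plausible (deceleration), a jump to 170 bpm is rejected as artifact.
lo, hi = acceptance_window(140.0)
print(round(lo, 1), round(hi, 1))  # → 107.7 160.7
```

Note how the window around 140 bpm allows a larger downward excursion (about 32 bpm) than upward (about 21 bpm), reflecting the observation that accelerations develop more slowly than decelerations; a symmetric ±20 bpm rule, as in commercial monitors, cannot capture this.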
The main goal while developing the 2CTG2 software was indeed to achieve the maximum number of features without losing lightness and simplicity. The 2CTG2 was built with the Microsoft Visual C++ IDE and was designed to run on any Windows platform, from Windows 98 to the newest XP Pro versions. The application is divided into two main modules: the main interface and the mathematical library. The first is based on the Microsoft Foundation Classes and offers a simple user interface to handle the signals as well as the patient's personal and anamnestic information. The second is a collection of highly optimized signal processing algorithms implemented
in ANSI C and C++ routines. Looking at the development approach in more detail, the application is subdivided into three tiers: the data tier, the application tier and the user interface tier, according to the Windows DNA paradigm. The data tier consists of a set of classes that perform data extraction and handling from the database source. The application tier, also known as the "business logic", is the real added value of the 2CTG2, being composed of the analysis and data-manipulation modules. The user interface tier is the "presentation" layer, i.e. the component designed to interact with the user. The data layer was written to communicate with a Microsoft Access database. However, the application does not need Access to be installed on the operating system: it only needs the Microsoft Jet Engine, which is lighter and freely distributable. Separating the data tier from the business logic made the 2CTG2 easily extendable and flexible in the event of changes in the data structure or future release requirements. The 2CTG2 interacts with the database using SQL queries, so the data tier can be considered an interface layer between the application layer and potentially any relational database that supports the SQL language. By
simply extending this layer, or substituting it with another implementation, the 2CTG2 will be able to connect to local and remote DBMSs such as Oracle, MS SQL Server or MySQL, with no changes needed in the application and presentation layers. Another sub-module in the data tier is the file I/O layer. This module is a set of classes able to load and store patient and medical information in various file formats, in order to achieve maximum compatibility and to allow data exchange between various sources. The 2CTG2 software can read and write files compatible with OB TraceVue, old versions of 2CTG, Matlab and Excel. The second and most important tier is the application tier. It consists of a set of functionalities to manage the data provided by the data layer or coming directly from the connected medical device. The business logic contains the module responsible for the connection with the cardiotocograph. This was developed in a multithreaded way to take advantage of the time-sharing capabilities of modern operating systems. The 2CTG2 software is designed to perform real-time data acquisition and analysis while continuing to offer the user a fully functional interface.
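The layering described above (a data tier that exposes only SQL-backed methods, so the DBMS can be swapped without touching the business logic) can be sketched as follows. This is illustrative only: the real 2CTG2 is C++ over the Microsoft Jet Engine, while here sqlite3 stands in, and the table schema is invented:

```python
# Minimal sketch of the tiered architecture: the application tier talks
# only to the DataTier interface, never to the database directly.
import sqlite3

class DataTier:
    """Interface layer between the application tier and an SQL database."""
    def __init__(self, connection):
        self.conn = connection
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS exams (patient TEXT, fhr_mean REAL)")

    def save_exam(self, patient, fhr_mean):
        self.conn.execute("INSERT INTO exams VALUES (?, ?)", (patient, fhr_mean))

    def exams_for(self, patient):
        cur = self.conn.execute(
            "SELECT fhr_mean FROM exams WHERE patient = ?", (patient,))
        return [row[0] for row in cur]

class ApplicationTier:
    """Business logic: knows nothing about the DBMS behind the data tier."""
    def __init__(self, data_tier):
        self.data = data_tier

    def mean_fhr(self, patient):
        values = self.data.exams_for(patient)
        return sum(values) / len(values)

data = DataTier(sqlite3.connect(":memory:"))
data.save_exam("P001", 142.0)
data.save_exam("P001", 138.0)
app = ApplicationTier(data)
print(app.mean_fhr("P001"))  # → 140.0
```

Replacing `DataTier` with an implementation that issues the same SQL against Oracle or MS SQL Server would leave `ApplicationTier` untouched, which is exactly the flexibility the text claims for the 2CTG2 design.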
Fig. 1 The 2CTG2 graphical interface (panels: Actions, Parameters, CTG tracings), showing two simultaneous recordings coming from two different CTG monitors.
This allows the user to perform data acquisition from the cardiotocograph while storing the patient's personal data or even retrieving and analyzing old exams. Moreover, the software makes it possible to acquire simultaneously as many traces as there are connected cardiotocographs, through the identification of the CTG device. Both the analysis and data-acquisition modules were developed considering the need for an almost immediate response: in a standard case (1 hour of trace) the analysis and the graphical representation of the output are performed within the refresh interval of the video (20 ms). The third layer is the user interface; as already mentioned, it was developed using the Microsoft Foundation Classes (a set of classes to create standard Windows interfaces). It is a Multiple Document Interface, so many traces can be loaded and presented simultaneously, even during the acquisition process. Moreover, a user can compare different tracks of the same trace by opening multiple views of the same exam. The main trace window deserves a special mention: it displays the trace and lets the user scroll it when reviewing. It was built to reproduce the cardiotocograph paper roll; consequently, at a proper monitor resolution there is a perfect match between a centimeter drawn on the computer display and a centimeter printed on paper. A zoom command scales the drawing but maintains the image ratios, so that the trace never appears deformed on the screen, avoiding misinterpretation of accelerations and contractions by the physician. The user interface also offers a group of printing options, such as a compact print, a detailed print and a complex print of the analysis results.

III. RESULTS AND CONCLUSIONS

The 2CTG2 software has been extensively tested, mainly in two OB-Gyn University Clinics in Rome and Naples (Prof. D. Arduini and Prof. A. Di Lieto), over more than 2000 cases.
Each recording lasts at least 60 min, in order to include both activity and quiet periods of the fetus. For the majority of cases it was possible to collect the full patient information, consisting of CTG recording date, gestational age, diagnostic indication at the CTG date, identification of fetal sufferance, type and date of delivery, diagnosis at birth and Apgar scores. The software has been used to discriminate suffering IUGR fetuses from normal ones [7]. It also distinguishes severe IUGRs from small-for-gestational-age (SGA) fetuses [8]. The 2CTG2 software is a well-tested medical reality, since it is used in many Italian obstetric departments after a wide clinical test phase. The application seems to have reached its two main goals, ease of use as medical-device-integration software and scientific contribution, as it computes a detailed analysis of the tracings without losing speed and simplicity. So, a few years after the development of this tool began, we can consider the 2CTG2 a light but complete software package: it implements the features needed by a nurse who wants to perform a CTG exam on a pregnant woman in a crowded obstetric ward, and it is a solution for the doctor who wants to perform a complex real-time analysis of the recording, showing mathematical parameters that help the diagnostic process. Research is still in progress: new features concerning nonlinear parameters (DFA, MSE, LZC) will be implemented in a new release of the software, as well as advanced classifiers. The future of this tool is to increase its scientific interest by performing different kinds of analyses and to enhance its power in detecting parameters related to illness states.
REFERENCES

1. van Geijn HP (1996) Developments in CTG analysis. Baillieres Clin Obstet Gynaecol 10(2):185-209.
2. Mantel R, van Geijn HP, Caron FJM et al (1990) Computer analysis of antepartum fetal heart rate. Int J Biomed Comput 25:261-272.
3. Dawes GS, Moulden M, Redman CW (1995) Computerized analysis of antepartum fetal heart rate. Am J Obstet Gynecol 173(4):1353-1354.
4. Arduini D, Rizzo G et al (1993) Computerized analysis of fetal heart rate: I. Description of the system (2CTG). Matern Fetal Invest 3:159-163.
5. van Geijn HP, Lachmeijer AM, Copray FJ (1993) European multicentre studies in the field of obstetrics. Eur J Obstet Gynecol Reprod Biol 50(1):5-23.
6. Fetal Monitor Test—A Brief Summary, Hewlett-Packard, Boeblingen, Germany, 1995, pp 1-6.
7. Signorini MG, Magenes G, Cerutti S, Arduini D (2003) Linear and nonlinear parameters for the analysis of fetal heart rate signal from cardiotocographic recordings. IEEE Trans Biomed Eng 50(3):365-374.
8. Ferrario M, Signorini MG, Magenes G, Cerutti S (2006) Comparison of entropy-based regularity estimators: application to the fetal heart rate signal for the identification of fetal distress. IEEE Trans Biomed Eng 53(1):119-125.
9. Pincus SM (1995) Approximate entropy (ApEn) as a complexity measure. Chaos 5:110-117.
10. Peng CK, Havlin S, Stanley HE, Goldberger AL (1995) Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos 5:82-87.
11. Costa M, Goldberger AL, Peng CK (2002) Multiscale entropy analysis of complex physiologic time series. Phys Rev Lett 89(6).
12. Lempel A, Ziv J (1976) On the complexity of finite sequences. IEEE Trans Inf Theory 22(1):75-81.

Author: Giovanni Magenes
Institute: Dipartimento di Informatica e Sistemistica
Street: Via Ferrata 1
City: 27100 Pavia
Country: Italy
Email: [email protected]
Cardiac arrhythmias and artifacts in fetal heart rate signals: detection and correction

M. Cesarelli, M. Romano, P. Bifulco, A. Fratini

Dept. of Electronic Engineering and Telecommunication, Biomedical Engineering Unit, University "Federico II", Naples, Italy

Abstract— Cardiotocography is the most commonly used noninvasive diagnostic technique that provides physicians with information about fetal development (in particular about the development of the autonomous nervous system - ANS) and well-being. It allows the simultaneous recording of the Fetal Heart Rate (FHR), by means of a Doppler probe, and of the Uterine Contractions (UC), by means of an indirect pressure transducer. Currently, cardiotocographic devices apply autocorrelation techniques to the Doppler signal to recognize heart beats, so the evaluation of the inter-beat time interval is much improved. However, recorded FHR signals may contain artifacts, because of the possible degradation, or even loss, of the Doppler signal due to relative motion between the probe and the fetal heart, maternal movements, muscle contractions and other causes. Moreover, fetal cardiac arrhythmias can affect FHR signals; these arrhythmias are not an expression of the physiological behavior of the ANS. Both artifacts and cardiac arrhythmias represent outliers of the FHR signals, so they affect both time-domain and time-frequency signal analysis. Their detection and correction is therefore necessary before carrying out signal processing. In this work, an algorithm for the detection and subsequent correction of outliers (signal artifacts and fetal cardiac arrhythmias) was developed and tested, both on simulated and on real FHR series.

Keywords— fetal cardiac arrhythmias, local outliers, global outliers, median filter.
I. INTRODUCTION

Cardiotocography (CTG) is the most widespread indirect diagnostic technique to monitor fetal health during pregnancy and labor, and it is the only medical report with legal value in Italy. It allows the simultaneous recording of the Fetal Heart Rate (FHR) and the Uterine Contractions (UC). CTG provides physicians with information about fetal development and well-being; in particular, CTG permits assessing the maturation of the Autonomous Nervous System (ANS) of the fetus [1]. To assess fetal health and reactivity, clinicians evaluate specific signs in the FHR signal and, during labor, they also pay attention to the shape, intensity and frequency of the UC, correlating them with the changes induced in the FHR. The efficiency of this method depends on the expertise of the observer and lacks objectivity and reproducibility [2].
The general target of our research is to develop objective and quantitative analysis methods (both in the time domain and in the frequency domain) for physicians' decision support. It is well known that, in adults, Heart Rate Variability (HRV) is a noninvasive and quantitative means to investigate ANS activity (both in physiological and pathological conditions). Also for the fetus, the FHR Variability (FHRV) around its baseline could be the basis for a more objective analysis [3] and for a better knowledge of the ANS reactions. It is worth remembering that the FHR signal is recorded with an ultrasound Doppler probe. Currently, the Doppler technique involves autocorrelation to recognize heart beats, so the evaluation of the inter-beat time interval is much improved. However, recorded FHR signals often turn out noisy and may contain artifacts, because of the possible degradation, or even loss, of the Doppler signal due to relative motion between the probe and the fetal heart, maternal movements and other causes (in fact the device continuously provides an estimate of the signal quality level). Moreover, fetal cardiac arrhythmias, such as premature supraventricular depolarizations, premature ventricular depolarizations, non-conducted premature supraventricular depolarizations, parasystole [4] and others, can affect FHR signals. These arrhythmias are not an expression of the normal behavior of the ANS, and for their study specific techniques, such as 2D echography and blood flow measurements, are usually employed. Both artifacts and cardiac arrhythmias represent outliers of the FHR signals, so they affect both time-domain and time-frequency analysis. Their detection and correction is therefore necessary. In this work, an algorithm for the detection and subsequent correction of outliers (signal artifacts and fetal cardiac arrhythmias) was developed and tested.

II. MATERIAL AND METHODS

A.
Outlier definition

Statistically, an outlier is a data point that is an unusual observation or an extreme value in the data set, one which deviates so much from the other observations as to arouse suspicion that it was generated by a different mechanism [5, 6].
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 789–792, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The above can be considered a general definition of outlier, but a more specific and quantitative definition is necessary to detect outliers in the FHR signal. For this purpose, a recent formal definition can be used. This definition is distance-based, because it considers the distance between the object (which could be an outlier) and the other objects of the data set, as follows: an object p in a dataset D is a (pct, dmin)-outlier if at least a percentage pct of the objects in D lies at a distance greater than dmin from p [6]. Of course, a metric is necessary to express the distance. However, using this definition it is possible to capture only some kinds of outliers; since the definition takes a global view of the dataset, these outliers can be considered "global" outliers [6]. For many datasets, which exhibit a more complex structure, another kind of outlier has to be considered: objects that are outlying relative to their local neighborhoods, particularly with respect to the densities of those neighborhoods. These are regarded as "local" outliers [6]. This aspect is particularly important in FHR signals, so to detect local outliers in the FHR signal a working definition of outlier can be used, introducing the concept of time-dependent outlier. A time-dependent outlier is a data point that is an unusual observation or an extreme value in the data set which is not part of a time trend. As a time trend we consider two or more consecutive data points which move in the same general direction within a given statistical range [5]. Typically, in FHR signals, cardiac arrhythmias and short-time artifacts appear as local outliers, while signal loss appears as global outliers. Therefore, the proposed algorithm uses both the working definition and the formal distance-based one, for the detection of local and global outliers.
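The distance-based definition of a global outlier quoted above can be sketched directly. Absolute difference in bpm is used as the metric here, and the thresholds are illustrative, not values taken from the paper:

```python
# Sketch of the (pct, dmin)-outlier definition: p is a global outlier in D
# if at least a fraction pct of the objects in D lie at distance > dmin from p.

def is_global_outlier(p, data, pct, dmin):
    """Distance-based global-outlier test with |x - p| as the metric."""
    far = sum(1 for x in data if abs(x - p) > dmin)
    return far / len(data) >= pct

# Toy FHR samples in bpm; 75 bpm mimics a lock on the slower maternal beat.
fhr = [140, 141, 139, 142, 138, 140, 75]
print(is_global_outlier(75, fhr, pct=0.8, dmin=20))
```

With pct = 0.8 and dmin = 20 bpm, the 75 bpm sample is flagged (six of the seven samples lie more than 20 bpm away from it), while the samples near 140 bpm are not; a local-outlier test would instead compare each sample only against its temporal neighborhood.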
the occorence of ectopic beats, missed beats, bigeminal and/or trigeminal pattern and cambinations of these arrhythmias. Examples of artificially generated FHR series are shown in the figure 1.
B. Simulated FHR signals To test the developed algorithm, synthetic FHR signals were artificially generated, via software, using a slightly modified version of a method proposed by other authors [7]. Following that procedure, an artificial R-R tachogram with specific power spectrum characteristics is generated. The following model parameters were adapted to resemble real fetal cases. LF and HF bands of the FHRV power spectrum were considered to lie between 0.04 and 0.2 Hz, and 0.2 – 1 Hz, respectively. LF/HF power ratio was fixed to 5 and Standard Deviation of HF band to 0.03. Mean FHR was initially set at 140 bpm (within the range of normality, 120160 bpm). Then, a variable SD was considered, it was set at 1 in the first part of the signal, at 4 in the second part and at 2 in the last part. In addition, to obtain signals resembling other conditions, we simulated also some typical and frequent fetal cardiac arrhythmia. For example, we simulated
Fig. 1 Examples of artificially generated FHR series. From the top: #6 resembles physiological conditions; #7 resembles 13 PVDs (Premature Ventricular Depolarizations); #25 resembles examples of bigeminism (a zoom is shown immediately below); #27 resembles PVDs and missed beats. All numbers refer to an internal numbering. All simulated FHR signals had a duration of 25 minutes.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Cardiac arrhythmias and artifacts in fetal heart rate signals: detection and correction
C. Real CTG signals

We carried out other tests using real FHR signals. In this case, CTG traces were recorded in a clinical environment during normal daily practice from women (singleton physiological pregnancies) who did not take drugs and were close to delivery (33–42 gestation weeks). Apgar scores, birth weights and other information were collected in order to include in the analysis only CTGs regarding healthy fetuses. The CTG recordings have an average duration of about 30 minutes. Recordings with evident artifacts or fetal cardiac arrhythmias (recognized with the support of a medical team) were chosen for the analysis.

D. Algorithm description

The algorithm is included in a software package for CTG pre-processing, developed at our lab, which segments each recorded signal into a number of reliable continuous tracts (i.e. where the signal quality level is acceptable or good); with an appropriate procedure (described in the following) it removes possible outliers and then replaces unreliable signal tracts shorter than 4 seconds with linearly interpolated values. Finally, re-interpolating and re-sampling the values at a frequency of 4 Hz, an output FHR signal uniformly sampled and aligned with the UC signal (within 0.25 s) is obtained. Regarding the procedure to remove outliers (the algorithm developed in this work), it consists of two main steps: detection and correction. The detection phase, in order to be more robust, following some examples reported in the literature, is based on a double scan of the FHR signal. In the forward scan, every FHR sample which differs by more than an assigned threshold from the median computed on the previous 5 samples is marked as a "candidate outlier". In the backward scan, instead, every FHR sample which differs by more than an assigned threshold from the median computed on the following 5 samples is marked as a "candidate outlier".
In our algorithm, the threshold is fixed at 12 bpm for samples of good or acceptable quality, otherwise at 6 bpm (both values were chosen heuristically). The length of the median filter was set to 5 in order to make the result robust to up to 2 outliers out of 5 samples. Every FHR sample marked as a candidate outlier in both the forward and the backward scan is treated as a real local outlier. At this point, the algorithm looks for global outliers. As a first step, it checks whether it is possible to detect two groups of at least 4 consecutive local outliers lying on opposite-slope tracts of the FHR signal (let us call them "guard groups"). The samples
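The double-scan detection can be sketched as follows (an illustrative Python sketch, not the authors' implementation; the per-sample boolean `quality` array and all names are assumptions):

```python
import numpy as np

def candidate_local_outliers(fhr, quality, win=5):
    """Forward/backward median scan: a sample marked in BOTH scans is
    treated as a real local outlier. Thresholds of 12 bpm (good or
    acceptable quality) and 6 bpm otherwise follow the text."""
    n = len(fhr)
    thr = np.where(quality, 12.0, 6.0)
    fwd = np.zeros(n, dtype=bool)
    bwd = np.zeros(n, dtype=bool)
    for i in range(win, n):          # compare with median of previous 5
        if abs(fhr[i] - np.median(fhr[i - win:i])) > thr[i]:
            fwd[i] = True
    for i in range(n - win):         # compare with median of following 5
        if abs(fhr[i] - np.median(fhr[i + 1:i + 1 + win])) > thr[i]:
            bwd[i] = True
    return fwd & bwd

fhr = np.full(50, 140.0)
fhr[25] = 175.0                      # isolated ectopic-like spike
local = candidate_local_outliers(fhr, np.ones(50, dtype=bool))
print(np.flatnonzero(local))         # -> [25]
```

The fifth-order median makes the reference value insensitive to up to two outliers inside the window, which is why the isolated spike is flagged while its neighbours are not.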
between the guard groups are considered candidate global outliers. The algorithm then applies the formal distance-based definition, considering as outliers all samples which differ by more than 40 bpm from 95% of all samples of the FHR signal. If this condition is met, the algorithm searches for the first sample (A) not marked as a candidate outlier preceding the first guard group and the first sample (B) not marked as an outlier following the second guard group. All samples between A and B are considered real outliers. The correction phase is realized, for local outliers, by replacing each outlier with the result of a fifth-order median filter temporally centered on the outlier itself. For global outliers, the algorithm performs a linear interpolation from the first valid sample preceding the first guard group to the first valid sample following the second guard group. All corrected tracts are marked on the CTG trace.

III. RESULTS

The performance of the developed algorithm was assessed on 200 simulated FHR signals (with a random insertion of the most common fetal arrhythmias and some of their combinations) and on 25 real FHR signals. As an example, the following figures show some results. In all tested cases the algorithm showed satisfactory performance; in fact, it was able to detect and correct all outliers due to arrhythmias and/or artifacts in both simulated and real FHR signals.
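The two correction rules can be sketched as follows (a minimal Python sketch under the stated rules; `local_idx` and `global_runs` are hypothetical inputs listing local-outlier indices and inclusive index ranges of global-outlier runs):

```python
import numpy as np

def correct_outliers(fhr, local_idx, global_runs):
    """Local outliers: replace with a 5th-order median centred on the
    sample. Global runs: bridge by linear interpolation between the
    last valid sample before and the first valid sample after the run."""
    out = np.asarray(fhr, dtype=float).copy()
    for i in local_idx:
        lo, hi = max(0, i - 2), min(len(out), i + 3)
        out[i] = np.median(fhr[lo:hi])
    for a, b in global_runs:                       # run a..b inclusive
        ramp = np.linspace(out[a - 1], out[b + 1], b - a + 3)
        out[a:b + 1] = ramp[1:-1]
    return out

fhr = np.array([140, 140, 175, 140, 140, 0, 0, 150, 150], dtype=float)
fixed = correct_outliers(fhr, local_idx=[2], global_runs=[(5, 6)])
```

In this toy trace the spike at index 2 is replaced by the window median (140 bpm), while the two signal-loss samples are replaced by a linear ramp between the 140 bpm sample preceding and the 150 bpm sample following the run.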
Fig. 2 Examples of results obtained with the developed algorithm. From the top (simulated FHR series): 7-bis: FHR series #7, reported in Fig. 1, after correction of outliers; 25-bis: FHR series #25, reported in Fig. 1, after correction of outliers.
The developed algorithm is a non-linear algorithm consisting of two principal steps, detection and correction, to find outliers and replace them with samples which simulate the normal ANS control of cardiac rate. The detection phase is based on a double scan of the FHR series with two different thresholds while, for correction, a fifth-order median filter is used for local outliers and linear interpolation for global outliers. The algorithm showed satisfactory performance in all tests, using both simulated and real FHR signals. However, testing the algorithm on a larger set of real FHR signals, involving other conditions, would be very useful.
ACKNOWLEDGMENT

The authors would like to thank Ing. Giuseppe Longobardi for his kind and precious collaboration.
Fig. 3 Examples of results obtained with the developed algorithm. From the top (simulated FHR series): FHR series with global outliers; zoom on the global outliers; FHR series after correction.

IV. CONCLUSION

FHR signals can contain anomalous samples due to cardiac arrhythmias and/or artifacts. These samples represent outliers because they are not correlated with normal ANS behavior. Outliers affect both time domain and frequency domain analysis of FHRV, so their detection and removal is necessary. Nevertheless, their impulsive nature makes it impossible to remove them with a linear filter.

REFERENCES
1. M.G. Signorini, G. Magenes, S. Cerutti, D. Arduini (2003) Linear and nonlinear parameters for the analysis of fetal heart rate signal from cardiotocographic recordings. IEEE Trans. Biomed. Eng. 50(3):365-75
2. F. Figueras, S. Albela, S. Bonino, M. Palacio, E. Barrau, S. Hernandez, C. Casellas, O. Coll, V. Cararach (2005) Visual analysis of antepartum fetal heart rate tracings: inter- and intra-observer agreement and impact of knowledge of neonatal outcome. J Perinat Med 33(3):241-5
3. O. Sibony, J.P. Fouillot, M. Benaoudia, A. Benhalla, J.F. Oury, C. Sureau, P. Blot (1994) Quantification of the heart rate variability by spectral analysis of fetal well-being and fetal distress. Europ. J Obstet. Gynecol. and Reproductive Biology 54:103-108
4. M.L. Cabaniss, D. Karetnikov. Fetal monitoring interpretation. Lippincott Company
5. D.N. Lebrun (2003) Analysis of neonatal heart rate variability and cardiac orienting responses. Master of Engineering thesis, University of Florida
6. M.M. Breunig, H.P. Kriegel, R.T. Ng, J. Sander (2000) LOF: identifying density-based local outliers. Proc. ACM SIGMOD
7. P.E. McSharry, G.D. Clifford, L. Tarassenko, L.A. Smith (2003) A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans. Biomed. Eng. 50(3)

Author: Prof. Mario Cesarelli
Institute: Department of Electronic and Telecommunication
Street: via Claudio, 21
City: Naples
Country: Italy
Email:
[email protected]
Complexity Analysis of Heart Rate Control Using Symbolic Dynamics in Young Diabetic Patients M. Javorka1, Z. Trunkvalterova1, I. Tonhajzerova1, J. Javorkova2 and K. Javorka1 1
Comenius University, Institute of Physiology, Jessenius Faculty of Medicine, Martin, Slovakia 2 Clinic of Children and Adolescents, Martin Teaching Hospital, Martin, Slovakia
Abstract— Cardiovascular dysregulation and autonomic neuropathy are common complications of diabetes mellitus (DM). Although autonomic neuropathy is considered one of the late complications of DM, there are sensitive methods that can detect autonomic nervous system dysregulation even in the early phases of DM. There is an ongoing effort to apply methods based on nonlinear dynamics to improve the description and classification of different cardiac states. The aim of this study was to find out which heart rate variability parameters of symbolic dynamics differ in young patients with DM compared to a control group. Several parameters based on a 4-symbol encoding were used for quantification of heart rate variability and complexity. Our results suggest slightly reduced complexity (expressed by a marginally non-significant increase in the number of "forbidden words") even in young diabetic patients, pointing to another aspect of heart rate dysregulation in this group. In addition, we found a qualitative difference in the distribution of symbolic words, expressed by the parameter "wpsum02". Parameters of symbolic dynamics could be used (in combination with traditionally used linear HRV measures) to describe additional information in heart rate time series of patients with dysregulation.

Keywords— Heart rate variability, nonlinear dynamics, complexity, symbolic dynamics, diabetes mellitus.
I. INTRODUCTION

Cardiovascular dysregulation and autonomic neuropathy are common complications of diabetes mellitus (DM). It is very important to diagnose cardiac dysregulation, because there is a significant relationship between the autonomic nervous system and cardiovascular mortality, including the mortality of patients with this type of complication [1,2]. Although autonomic neuropathy is considered one of the late complications of DM, there are sensitive methods that can detect autonomic nervous system dysregulation even in the early phases of the disease [3,4,5,6]. Although the multicentric EURODIAB study found the presence of cardiac autonomic neuropathy in 19% of diabetics in the age group 15–29 years [7], relatively few studies have focused on autonomic neuropathy in young adults with type 1 DM.
For the diagnosis of cardiac autonomic neuropathy, the Ewing battery of cardiovascular tests or conventional time and frequency domain heart rate variability (HRV) methods are usually used [8]. The reduction of spontaneous HRV is regarded as one of the early signs of cardiac autonomic neuropathy [3]. The traditional techniques of data analysis in the time and frequency domains are rather simple and quick, but they are often not sufficient to characterize the complex dynamics of heart beat generation. Therefore, there is an ongoing effort to apply methods based on nonlinear dynamics to improve the description and classification of different cardiac states [9]. The aim of this study was to find out which HRV parameters of symbolic dynamics differ in young patients with DM compared to a control group.

II. METHODS

A. Subjects

In this study, we included a sample of 34 subjects subdivided into two groups. The first group (DM) consisted of 17 patients with type 1 DM (10 women, 7 men) aged 12.9–31.5 years (mean ± SEM: 22.4 ± 1.0 years). The mean duration of DM was 12.4 ± 1.2 years. The second group (Control) consisted of 17 healthy gender- and age-matched probands (mean age: 21.9 ± 0.9 years). All subjects gave their informed consent prior to examination. Subjects were instructed not to use substances influencing cardiovascular system activity (caffeine, alcohol) and not to smoke prior to examination. All subjects were investigated in a quiet room between 8 AM and 12 noon. The device VariaCardio TF4 (Sima Media, Olomouc, Czech Republic) was used for continuous beat-to-beat monitoring of heart rate. This device consists of a thoracic belt with integrated electrodes. The ECG signal was telemetrically transferred to a PC for subsequent detection of R waves and derivation of the R-R interval time series. During the measurement, subjects were kept under standardized conditions (supine position, rest, same time, same place) for 60 minutes.
We asked the probands to avoid voluntary movements and speaking as much as possible.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 766–768, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Between-groups comparison for parameter "wpsum02". Bars and error bars indicate group means and SEM, respectively
Fig. 2 Between-groups comparison for parameter "Shannon entropy". Bars and error bars indicate group means and SEM, respectively

Fig. 3 Between-groups comparison for parameter "forbidden words". Bars and error bars indicate group means and SEM, respectively

B. Data analysis

The HRV analysis was performed off-line, using dedicated software, on one time interval of the R-R interval time series. This segment (denoted in the figures as T) consisted of 3200 R-R intervals (1st to 3200th R-R interval). The concept of symbolic dynamics is based on a coarse-graining of the dynamics: the time series is transformed into a symbol sequence with symbols from a given alphabet. The transformation of the analysed interval (T) into a symbolic sequence was done as follows. The R-R interval time series x1, x2, x3, ..., xN was transformed into the symbol sequence s1, s2, s3, ..., sN, where each si belongs to the alphabet A = {0, 1, 2, 3}:

si = 0 if µ < xi ≤ µ + SD;
si = 1 if µ + SD < xi < ∞;
si = 2 if µ − SD < xi ≤ µ;
si = 3 if 0 < xi ≤ µ − SD,

where µ is the mean length of the R-R intervals and SD is the standard deviation of the analysed time series [10]. Several quantities characterize such symbol strings. The parameter "forbidden words" is the number of words of length 3 that never occur in the word distribution. A high number of forbidden words represents more regular and less complex behavior of the respective time series (or system); if the time series is rather complex, only a few forbidden words can be found. Parameters based on information theory are usually used to describe the distribution of words. We computed the Shannon entropy using the following formula:

Shannon entropy = −∑ p(i) log2 p(i),

where p(i) is the probability of occurrence of a given word and the sum is computed over all possible words (in our case 4³ = 64 word types). Larger values of Shannon entropy indicate higher complexity of the respective time series. The other computed parameter was "wpsum02", the percentage of words consisting only of the symbols "0" and "2" [11].
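The encoding and the three word statistics described above can be sketched as follows (an illustrative Python implementation, not the authors' software; `symbolic_hrv` is a hypothetical name):

```python
import numpy as np
from itertools import product

def symbolic_hrv(rr):
    """4-symbol encoding of an R-R series and word statistics over
    3-symbol words: forbidden-word count, Shannon entropy, wpsum02."""
    rr = np.asarray(rr, dtype=float)
    mu, sd = rr.mean(), rr.std()
    sym = np.select([rr > mu + sd,   # 1: above mean + SD
                     rr > mu,        # 0: (mean, mean + SD]
                     rr > mu - sd],  # 2: (mean - SD, mean]
                    [1, 0, 2], default=3)   # 3: at or below mean - SD
    words = [tuple(sym[i:i + 3]) for i in range(len(sym) - 2)]
    counts = {w: 0 for w in product(range(4), repeat=3)}   # 64 word types
    for w in words:
        counts[w] += 1
    n = len(words)
    probs = np.array([c / n for c in counts.values()])
    forbidden = int(np.sum(probs == 0))
    shannon = float(-np.sum(probs[probs > 0] * np.log2(probs[probs > 0])))
    wpsum02 = 100.0 * sum(c for w, c in counts.items()
                          if set(w) <= {0, 2}) / n
    return forbidden, shannon, wpsum02
```

A perfectly regular series uses a single word type, giving 63 forbidden words and zero entropy; for a constant series every symbol is 3, so wpsum02 is also zero.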
C. Statistics

Because of the Gaussian distribution of all assessed parameters, between-group comparisons were performed using two-sample t-tests. Values p<0.05 were considered statistically significant; results are presented as mean ± SEM.

III. RESULTS

Statistical analysis of between-group differences revealed that the parameter "wpsum02" was significantly higher in the DM group compared to the control group (Control: 48.7 ± 1.9, DM: 55.1 ± 2.0, p=0.026, Fig. 1). The parameter "Shannon entropy" tended to be lower in diabetics (Control: 4.6 ± 0.1, DM: 4.3 ± 0.1, p=0.069, Fig. 2). No significant between-group difference was found in the number of "forbidden words" (Control: 14.5 ± 1.7, DM: 18.6 ± 1.8, p=0.101, Fig. 3).
IV. DISCUSSION

Measures based on nonlinear dynamics are increasingly used in physiology, including HRV analysis. The main limitation of the traditionally used nonlinear methods (e.g. correlation dimension, largest Lyapunov exponent) is their requirement for long stationary signals, which seldom (or never) occur in biology. Therefore, new nonlinear parameters able to describe hidden information in shorter time series are sought; parameters of symbolic dynamics are one of them [11,12]. The concept of symbolic dynamics is based on a coarse-graining of the dynamics: the time series is transformed into a symbol sequence with symbols from a given alphabet. Some detailed information is lost in the process; however, the coarse dynamic behavior can be analysed. Encoding can be performed in an almost infinite number of ways; we performed the symbolic dynamics analysis using a modified version of the encoding of [11]. The analysis of the "forbidden words" count and of the "Shannon entropy" of the three-symbol word distribution enables analysis of the complexity of the heart rate control system. It is known that higher complexity is one of the principal features of a healthy control system [13,14]. There is only a limited number of studies applying symbolic dynamics methods to heart rate variability in pathologic conditions. A higher number of "forbidden words" was found in patients after myocardial infarction [11]. In addition, Shannon entropy was lower in patients with ventricular tachyarrhythmias [10]. Our results suggest slightly reduced complexity (expressed by a marginally non-significant increase in the number of "forbidden words") even in young diabetic patients, pointing to another aspect of heart rate dysregulation in this group. The contribution of the various types of words was described using the parameter "wpsum02".
The significantly higher value of this parameter in the DM group suggests the prevalence of R-R interval sequences consisting only of R-R intervals with lengths around the mean R-R interval length. This result confirms the concept of reduced richness of dynamics in the diseased group.

V. CONCLUSION

Our results suggest slightly reduced complexity of heart rate dynamics in young patients with diabetes mellitus. Parameters of symbolic dynamics could be used (in combination with traditionally used linear HRV measures) for a better description of heart rate dysregulation.
ACKNOWLEDGMENT This study was supported by grant VEGA No. 1/2305/05.
REFERENCES
1. Spallone V, Menzinger G (1997) Diagnosis of cardiovascular autonomic neuropathy in diabetes. Diabetes 46:S67-S76
2. Osterhues H-H, Grossmann G, Kochs M et al. (1998) Heart rate variability for discrimination of different types of neuropathy in patients with insulin-dependent diabetes mellitus. J Endocrinol Invest 21:24-30
3. Ziegler D (1994) Diabetic cardiovascular autonomic neuropathy: prognosis, diagnosis and treatment. Diabetes Metab Rev 10:339-383
4. Spallone V, Uccioli L, Menzinger G (1995) Diabetic autonomic neuropathy. Diabetes Metab Rev 11:227-257
5. Scaramuzza A, Salvucci F, Leuzzi S et al. (1998) Cardiovascular autonomic testing in adolescents with type I (insulin-dependent) diabetes mellitus: an 18-month follow-up study. Clin Sci 94:615-621
6. Sima AAF (2000) Does insulin play a role in cardiovascular autonomic regulation? Diabetes Care 23:724-725
7. Donaghue KC (1998) Autonomic neuropathy: diagnosis and impact on health in adolescents with diabetes. Horm Res 50:33-37
8. Rollins JS, Jenkins JG, Carson DJ et al. (1992) Power spectral analysis of the electrocardiogram in diabetic children. Diabetologia 35:452-455
9. Javorka M, Javorkova J, Tonhajzerova I et al. (2005) Heart rate variability in young patients with diabetes mellitus and healthy subjects explored by Poincaré and sequence plot. Clin Physiol Funct Im 25:119-127
10. Wessel N, Ziehmann C, Kurths J et al. (2000) Short-term forecasting of life-threatening cardiac arrhythmias based on symbolic dynamics and finite-time growth rates. Phys Rev E 61:733-739
11. Voss A, Kurths J, Kleiner HJ et al. (1995) The application of methods of non-linear dynamics for the improved and predictive recognition of patients threatened by sudden cardiac death. Cardiovascular Research 31:419-433
12. Kantz H, Schreiber T (2000) Nonlinear time series analysis. Cambridge University Press, Cambridge
13. Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci 88:2297-2301
14. Costa M, Healey JA (2003) Multiscale complexity analysis of complex heart rate dynamics: discrimination of age and heart failure effects. Comput Cardiol 30:705-708

Author: Michal Javorka
Institute: Comenius University, Institute of Physiology, Jessenius Faculty of Medicine
Street: Mala Hora 4
City: Martin, 036 01
Country: Slovakia
Email:
[email protected]
Joint Symbolic Dynamic of Cardiovascular Time Series of Rats D. Varga1, T. Loncar-Turukalo1, D. Bajic1, S. Milutinovic2, N. Japundzic-Zigon2 1
University of Novi Sad, Trg D. Obradovica 6, Novi Sad, Serbia 2 University of Belgrade, dr. J. Subotica 1, Belgrade, Serbia
Abstract – Insight into complex heart rate and blood pressure interactions reveals the most important aspects of autonomic control. Our main interest was the baroreceptor reflex (BRR), the most important autonomic cardiovascular reflex. We evaluated the joint symbolic dynamics of heart rate and blood pressure variations for assessing the BRR, opening the BRR loop at different levels using pharmacological blockade of β-adrenergic, α-adrenergic and M-cholinergic receptors. Experiments were done in conscious, telemetered, outbred male Wistar rats. The observed changes between experimental groups are promising for the use of the symbolic dynamics method in the assessment of impaired autonomic control of the cardiovascular system.

Keywords – symbolic dynamics, heart rate variability, blood pressure variability, baroreceptor reflex
I. INTRODUCTION

Short-term heart period variability (HPV) is in great part induced by the negative feedback response of the baroreceptor reflex (BRR) to internal or external perturbations of blood pressure (BP). This implies that bivariate analysis of HPV and blood pressure variability (BPV) may provide valuable information on autonomic cardiovascular control [1]. The BRR is the most important autonomic cardiovascular reflex. Its physiological role is to keep blood pressure in the homeostatic range, ensuring the necessary blood supply to end organs. Baroreceptors perceive information about BP changes, which is then transmitted via afferent fibres of the vagus nerve to the brain to be integrated. If BP decreases, the sympathetic branch of the BRR is activated to produce tachycardia and vasoconstriction, limiting the initial fall of BP. If, however, an increase in BP is perceived, the vagus nerve is activated to slow down the heart and oppose the initial increase in BP. Therefore, the BRR works as a negative feedback system that produces unidirectional changes of BP and heart period. In order to assess the BRR, we opened the BRR loop in conscious rats at different levels using drugs. The BRR loop was disrupted by blocking the effect of neurotransmitters released from the efferent fibers of the parasympathetic (vagus) nerve or the sympathetic nerves of the BRR on their postsynaptic receptors. Then we applied short-term joint
symbolic dynamics (JSD) to the heart period and blood pressure time series to quantify these changes. Developing techniques for evaluation of the BRR is of great interest, since BRR malfunctioning may be an early sign of cardiovascular disease [2].

II. MATERIALS AND METHODS

A. Animals and drugs

Animals: experiments were done in conscious male outbred Wistar rats (320–350 g) during daytime (10–14 h), housed separately in Plexiglas cages and kept under standard laboratory conditions with water and food ad libitum. Surgery: rats underwent a surgical procedure under combined 2% xylazine and 10% ketamine anesthesia, during which TA11 PA-C40 implants (Transoma Medical, DSI Inc., USA) were inserted into the aorta. After a full recovery period (10 days), rats were re-operated under halothane anesthesia (4% concentration in the induction chamber and 1.7% for maintenance under the mask) for quick insertion of a catheter into the jugular vein for drug injections. Two days later, rats were submitted to four different protocols. Protocol 1-PRA was designed to investigate the contribution of the vascular part of the sympathetic nervous system to the JSD of the cardiovascular signals. The vascular part was eliminated by selective blockade of the α1-adrenergic receptors in the blood vessel wall by prazosin (1 mg/kg i.v. bolus continued by 0.5 mg/kg/h infusion) in n=6 conscious rats. Protocol 2-METO was designed to investigate the contribution of the part of the sympathetic nervous system directed to the heart, under selective blockade of β-adrenergic receptors by metoprolol (2 mg/kg i.v. bolus continued by 1 mg/kg/h infusion) in n=6 conscious rats. In protocol 3-ATRO we investigated the contribution of the parasympathetic part of the autonomic nervous system by blocking the muscarinic receptors of the heart with atropine methyl bromide (1 mg/kg i.v. bolus, followed by 0.5 mg/kg/h i.v. infusion) in n=3 conscious rats.
Protocol 4-CNTRL was designed as a control group, in which saline (0.9% NaCl) was injected into n=9 rats (1 ml/kg i.v. bolus followed by 0.5 ml/kg/h i.v. infusion). All drugs were dissolved in saline (0.9% NaCl).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 773–776, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
B. Signal preprocessing

Continuous recording of the blood pressure pulse wave was done using the DSI radio-telemetry system. Pulse pressure was sampled at 1000 Hz. Systolic blood pressure was obtained by identifying the maxima of the pulse wave signal. A series of heart cycle durations, derived from the recorded pulse wave, was used to describe heart period variability. For each observed rat, the recorded time series of beat-to-beat intervals (BBI) and systolic blood pressure (SBP) consisted of up to 13 five-minute recordings made with two-minute breaks. From each time series, 1024 consecutive samples were chosen, the same ones for both the BBI and SBP series. Artifacts were detected and replaced with the mean value of the two neighboring samples. The frequently employed stationarity test for biological time series [3,4] was applied; time series that did not pass the test for both the mean and the standard deviation of the samples were rejected. The remaining study sample included 68 series for protocol 1-PRA, 43 for protocol 2-METO, 28 for protocol 3-ATRO and 50 for protocol 4-CNTRL.

C. Joint Symbolic Dynamics

The concept of symbolic dynamics allows a simplified description of the dynamics of a system with a limited number of symbols. From the matrix X, which contained the 1024-sample time series of beat-to-beat intervals xBBI and systolic blood pressure xSBP (X = [xnBBI, xnSBP], n = 1, ..., 1024), a new matrix of symbols S = [snBBI, snSBP], n = 1, ..., 1023, was formed, where

snBBI = 0 if x(n+1)BBI − xnBBI ≤ 0;  snBBI = 1 if x(n+1)BBI − xnBBI > 0.  (1)

snSBP was formed obeying the same rule as in (1): an increase of successive sample values is coded with '1', while a decrease or equality is coded with '0'. Subsequently, each symbol sequence was subdivided into words of length three, using a sliding window of size 3 shifted symbol by symbol. Altogether, there were 64 different word types (2³ × 2³ = 64), able to map the dynamics within four consecutive heart beats [1]. As a result, a word distribution density matrix W was obtained, containing the frequency of each of the 8 × 8 possible (BBI, SBP) word pairs:

W = [ (BBI000, SBP000) ... (BBI000, SBP111); ... ; (BBI111, SBP000) ... (BBI111, SBP111) ].  (2)
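This joint coding can be sketched in Python (hypothetical code consistent with Eqs. (1)–(2); reading each 3-symbol word as a binary number maps it to a row/column index 0–7):

```python
import numpy as np

def jsd_matrix(bbi, sbp):
    """Binary-code both series (1 = increase, 0 = decrease/no change),
    slide 3-symbol words over both, and count each (BBI word, SBP word)
    pair in an 8x8 matrix W."""
    s_bbi = (np.diff(bbi) > 0).astype(int)
    s_sbp = (np.diff(sbp) > 0).astype(int)
    W = np.zeros((8, 8), dtype=int)
    for i in range(len(s_bbi) - 2):
        wb = 4 * s_bbi[i] + 2 * s_bbi[i + 1] + s_bbi[i + 2]   # BBI word, 0..7
        ws = 4 * s_sbp[i] + 2 * s_sbp[i + 1] + s_sbp[i + 2]   # SBP word, 0..7
        W[wb, ws] += 1
    return W

# Identical BBI and SBP series -> every word pair lands on the diagonal of W
t = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
W = jsd_matrix(t, t)
```

With this indexing the word 001 maps to index 1 and 110 to index 6, so the symmetric and diametric probabilities would be obtained as (W[1,1] + W[6,6]) / N and (W[1,6] + W[6,1]) / N, with N = W.sum().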
To estimate the baroreflex response, we calculated the probabilities of word types with symmetric ([001,001], [110,110]) and diametric ([001,110], [110,001]) patterns, corresponding to the diagonals of matrix W [1]. Symmetric word types (JSDsym) refer to baroreflex activation (dropping SBP is associated with shortened BBI and increasing SBP with lengthened BBI). Diametric word types (JSDdiam) imply suppressed baroreflex activity. In addition, the Shannon entropy was calculated as a measure of complexity within W (JSDsh):

JSDsh = −∑(j=1..8) ∑(k=1..8) (Wj,k / N) log2(Wj,k / N).  (3)
Another JSD analysis was performed using the BBI time series only [5,6]. The samples of each BBI series were first normalized by subtracting the mean and dividing by the standard deviation, thus obtaining a new series expressed in adimensional units. Afterwards, the full range of dynamics of each normalized series was divided into six intervals, the quantization levels, and the normalized series were coded with symbols from 0 to 5 according to their values. The new series symBBI was divided into patterns of length three (symBBI(i), symBBI(i+1), symBBI(i+2)), i = 1, ..., 1022. All possible patterns were grouped into three groups: 1) patterns with no variations (0V, all symbols equal); 2) patterns with one variation (1V, two consecutive symbols equal and the remaining one different); and 3) patterns with two variations (2V, each symbol different from the previous one). The 2V patterns were further divided into two groups: a) patterns with two like variations (2LV, for example (3,4,5), (5,4,3)) and b) patterns with two unlike variations (2UV, such as (3,5,2), (3,0,4)) [6].

Statistics: results are presented as mean ± SEM. The Kruskal-Wallis test was used, with Dunn's method for multiple comparisons of the experimental groups with the CNTRL group. Differences were considered significant if p was below 0.05.

III. RESULTS

The mean heart rate (HR) and SBP of the rats obtained in the different protocols are given in Table 1. The effects of the drugs were noticeable through the basic changes (mean, SD) in HR and SBP. After prazosin (blockade of α1-adrenergic receptors), mean SBP was reduced and HR was increased. JSDsym was significantly reduced by prazosin in comparison with the CNTRL group (p<0.001), while neither JSDdiam nor JSDsh showed a significant difference (see Fig. 1).
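The BBI-only pattern classification described above can be sketched as follows (an illustrative Python sketch; equal-width quantization bins over the full range of the normalized series are an assumption, as the exact bin placement is not specified in the text):

```python
import numpy as np

def classify_patterns(bbi, levels=6):
    """Normalize, quantize to `levels` symbols (0..5) and classify each
    3-symbol window as 0V, 1V, 2LV or 2UV."""
    z = (np.asarray(bbi, dtype=float) - np.mean(bbi)) / np.std(bbi)
    edges = np.linspace(z.min(), z.max(), levels + 1)   # assumed equal-width bins
    sym = np.clip(np.digitize(z, edges) - 1, 0, levels - 1)
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2UV": 0}
    for a, b, c in zip(sym, sym[1:], sym[2:]):
        if a == b == c:
            counts["0V"] += 1                 # no variation
        elif a == b or b == c:
            counts["1V"] += 1                 # one variation
        elif (b - a) * (c - b) > 0:
            counts["2LV"] += 1                # two like variations (monotonic)
        else:
            counts["2UV"] += 1                # two unlike variations
    return counts

counts = classify_patterns([0.0, 10.0] * 10)  # strongly alternating toy series
```

In the alternating toy series every window is of type 2UV, the signature of beat-to-beat alternation; a smooth, slowly drifting series would instead accumulate 0V and 1V counts.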
Fig. 1 Bar graphs (mean + SEM) showing JSDsym, JSDdiam and JSDsh in different experimental conditions (* p<0.05, ** p<0.01 and *** p<0.001)

Fig. 2 Matrices W for different experimental protocols
Metoprolol (β1-blocker) lowered the mean HR and SBP, reduced JSDsym (p<0.05) and increased JSDdiam (p<0.001), while JSDsh remained similar to controls. Atropine (parasympathetic blockade) brought about a remarkable increase in mean HR and reduced JSDsym (p<0.05) and JSDsh (p<0.05), while JSDdiam tended to decrease as well. The matrices W for the different experimental protocols are shown in Fig. 2; they reflect the differences in word probabilities between the groups.
The analysis of BBI alone (see Fig. 3) revealed that both metoprolol (p<0.001) and atropine (p<0.001) decreased the number of 0V patterns, while the decrease of 0V produced by prazosin was not significant. The number of 1V patterns decreased in all drug protocols: prazosin (p<0.05), metoprolol (p<0.001) and atropine (p<0.001). The number of 2V patterns generally increased after treatment, and the increase was statistically significant after prazosin (p<0.05), metoprolol (p<0.001) and atropine (p<0.001). The changes in the numbers of 2LV and 2UV patterns are shown separately in the two lower graphs of Fig. 3.
Table 1 Heart rate and systolic blood pressure in different experimental protocols (* SD of all the rats within the same group, ** mean value of the SDs of individual rats in the same group)

Protocol   HR mean (bpm)   SD*     SD**    SBP mean (mmHg)   SD*    SD**
CNTRL      338.46          41.98   16.54   115.72            8.86   3.72
PRA        415.44          53.42   27.31   108.91            8.16   4.29
METO       318.67          34.12   13.27   114.18            6.29   4.91
ATRO       451.56          34.12   12.15   128.22            7.73   4.94

Fig. 3 Bar graphs (mean and SEM) showing the numbers of 0V, 2V, 2LV and 2UV words, respectively (* p<0.05, ** p<0.01 and *** p<0.001)
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
D. Varga, T. Loncar-Turukalo, D. Bajic, S. Milutinovic, N. Japundzic-Zigon
IV. DISCUSSION
In our experiments, blockade of α1 receptors with prazosin disrupted the efferent arterial part of the BRR. Consequently, the blood vessels dilate, peripheral resistance and BP decrease, while HR increases to limit the fall of BP. Blockade of β1 adrenergic receptors in the heart with metoprolol prevented the effect of noradrenaline released from sympathetic nerves, thus reducing the heart rate due to vagal dominance. Conversely, blockade of muscarinic receptors in the heart by atropine prevented the slowing of the heart induced by the vagal branch of the BRR and increased the heart rate due to sympathetic supremacy.
We further show that prazosin, metoprolol and atropine decreased JSDsym, i.e. decreased the number of simultaneous increases/decreases of BBI and SBP, indicating that the baroreflex is interrupted (open loop) and not functioning. Therefore, the method can be useful as a supplement to the simplest linear model of the BRR (the sequence method [1]) for the analysis of the interaction between HP and SBP. The word matrix W can further be used to identify over- and underrepresented words in different experimental groups.
Since the BRR does not contribute to bidirectional changes of HP and SBP, the experiment did not produce significant changes of JSDdiam. It is possible that bidirectional changes of SBP and HP may be induced by other mechanisms, perhaps cardiorespiratory reflexes or feed-forward BP and HR responses. The increase in JSDdiam caused by metoprolol, which produces dominance of vagal over sympathetic control of the heart, might reflect an increase in respiration-induced variability, or respiratory sinus arrhythmia. Considering the Shannon entropy (JSDshann), its logarithmic average over the probabilities of all the patterns does not reveal significant differences between groups. The occurrence of "smooth" regions of the signal is described by 0V and, to a lesser extent, by 1V patterns (Fig. 3).
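The sequence method mentioned above can be sketched as follows (a simplified illustration: typical implementations additionally apply thresholds on minimal SBP and BBI changes and allow a lag between SBP and BBI ramps, both omitted here; the function name is ours):

```python
import numpy as np

def brr_sequences(sbp, bbi, min_len=3):
    """Simplified sketch of the baroreflex sequence method: find runs
    of at least min_len beats in which SBP and BBI change in the same
    direction (both rising or both falling). Returns the lengths of
    the detected sequences in beats."""
    ds = np.diff(sbp)
    db = np.diff(bbi)
    # +1: both SBP and BBI rising; -1: both falling; 0: discordant step
    step = np.where((ds > 0) & (db > 0), 1,
                    np.where((ds < 0) & (db < 0), -1, 0))
    seqs, run_dir, run = [], 0, 0
    for d in step:
        if d != 0 and d == run_dir:
            run += 1
        else:
            if run_dir != 0 and run >= min_len - 1:
                seqs.append(run + 1)       # run counts beat-to-beat steps
            run_dir, run = d, (1 if d != 0 else 0)
    if run_dir != 0 and run >= min_len - 1:
        seqs.append(run + 1)
    return seqs
```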
1V patterns are actually the follow-ups of 0V patterns, since each string of identical symbols divided into 0V words is both preceded and followed by a 1V word; therefore, 1V patterns are not shown. Disrupted baroreflex control, caused by drugs that open the BRR loop, does not counteract perturbations to the cardiovascular system efficiently, thus lowering the "smoothness" of the signal and, consequently, increasing the perturbed regions (2V words). According to others [6], various mechanisms contribute to the complexity of the HR signal, and the dominant action of one of these mechanisms determines the reduction of complexity (i.e. an increase of regularity). It is possible that respiration-induced variability, which was not abolished by atropine in our experiments, may be responsible for the discrepancy of results. Even when series of 100 beats were extracted and compared, the same trend in the number of variations was observed. This is further supported by the numbers of 2LV and 2UV patterns, which are dominated by low- and high-frequency content, respectively (Fig. 3). The results confirm that withdrawal of the parasympathetic or sympathetic influence on the cardiovascular system brings significant changes in the JSD dynamics and in the number of variations. The number of 0V patterns generally decreased, indicating a disturbance of homeostasis, accompanied by an increase of 2V patterns corresponding to the perturbed balance.

V. CONCLUSION
The joint symbolic dynamics of cardiovascular time series may be used for the assessment of the BRR, of nonlinear interactions of HR and BP, and of the autonomic control of the cardiovascular system.
ACKNOWLEDGMENT
This study was supported in part by the Serbian Ministry of Science, Technology and Environmental Protection (grant No. 145062).
REFERENCES
1. Baumert M, Baier V, Truebner S et al. (2005) Short- and long-term joint symbolic dynamics of heart rate and blood pressure in dilated cardiomyopathy. IEEE Trans Biomed Eng 52:2112-2115
2. Mansier P, Clairambault J, Charlotte N et al. (1996) Linear and non-linear analyses of heart rate variability: a minireview. Cardiovasc Res 31(3):371-379
3. Bendat JS, Piersol AG (1986) Random data: analysis and measurement procedures. Wiley Interscience, New York
4. Karvajal R, Zebrowski J et al. (2002) Dimensional analysis of HRV in hypertrophic cardiomyopathy patients. IEEE Eng Med Biol 21(4):71-78
5. Guzzeti S, Borroni E, Ceriani E et al. (2006) Symbolic dynamics of very short (100 beats) heart period variability: a method to investigate cardiac autonomic modulation. Proc. ESGCO, Jena, Germany, pp 82-84
6. Porta A, Guzzeti S, Montano N et al. (2001) Entropy, entropy rate and pattern classification as tools to typify complexity in short heart period variability series. IEEE Trans Biomed Eng 48:1282-1290

Author: Tatjana Loncar-Turukalo
Institute: University of Novi Sad, Faculty of Technical Sciences
Street: Trg Dositeja Obradovica 6
City: Novi Sad
Country: Serbia
Email: [email protected]
Recurrence Quantification Analysis of Heart Rate Dynamics in Young Patients with Diabetes Mellitus
Z. Trunkvalterova1, M. Javorka1, I. Tonhajzerova1, J. Javorkova2, and K. Javorka1
1 Comenius University, Institute of Physiology, Jessenius Faculty of Medicine, Martin, Slovakia
2 Clinic of Children and Adolescents, Martin Teaching Hospital, Martin, Slovakia
Abstract— There is an ongoing effort to apply methods based on nonlinear dynamics to improve the description and classification of different states and diseases. Relatively few studies have focused on autonomic neuropathy in young adults with diabetes mellitus (DM) type 1. The aim of this study was to find out which of the heart rate variability parameters derived from the recurrence plot (recurrence quantification analysis parameters) differ in young patients with DM compared to a control group. We quantified various recurrence plot measures. Among the RQA measures based on diagonal lines in recurrence plots, we found a higher percentage of recurrence and of determinism and an increased maximal diagonal line length in the DM group. The parameter Trapping Time was also higher in the DM group than in control subjects. These results suggest reduced complexity and increased predictability of heart rate dynamics even in young patients with DM. RQA parameters should be used together with other HRV parameters for a better description of heart rate dysregulation in various patient groups. Keywords— Heart rate variability, nonlinear dynamics, recurrence plot, recurrence quantification analysis, diabetes mellitus.
I. INTRODUCTION
Cardiovascular dysregulation and autonomic neuropathy are common complications of diabetes mellitus (DM) [1,2]. Although the multicentric EURODIAB study found cardiac autonomic neuropathy in 19% of diabetics in the age group 15-29 years [3], relatively few studies have focused on autonomic neuropathy in young adults with type 1 DM. The reduction of spontaneous heart rate variability (HRV) is regarded as one of the early signs of cardiac autonomic neuropathy [4]. The traditional techniques of data analysis in the time and frequency domains are rather simple and fast, but they are often not sufficient to characterize the complex dynamics of heart beat generation. Therefore, there is an ongoing effort to apply methods based on nonlinear dynamics to improve the description and classification of different states and diseases [5]. Recurrence quantification analysis (RQA) is based on the analysis of recurrences in a dynamical system [6,7]. The main
advantage of recurrence plots is their applicability to nonstationary and even short time series, common in real physiological signals. The aim of this study was to find out which of the HRV parameters of RQA differ in young patients with DM compared to a control group.

II. METHODS
A. Subjects
A total of 34 subjects divided into 2 groups participated in this study. The first group (DM) consisted of 17 patients with type 1 DM (10 women, 7 men) aged 12.9-31.5 years (mean ± SEM: 22.4 ± 1.0 years). The mean duration of DM was 12.4 ± 1.2 years. The second group (Control) consisted of 17 healthy gender- and age-matched subjects (mean age: 21.9 ± 0.9 years). All subjects gave their informed consent prior to examination. Subjects were instructed not to use substances influencing cardiovascular system activity (caffeine, alcohol) and not to smoke prior to examination. All subjects were investigated in a quiet room between 8 and 12 AM. The device VariaCardio TF4 (Sima Media, Olomouc, Czech Republic) was used for continuous beat-to-beat monitoring of heart rate expressed as R-R intervals. During the measurement, subjects were under standardized conditions (supine position, rest, same time and place) for 60 minutes. We asked the probands to avoid voluntary movements and speaking as much as possible.

B. Data analysis
The HRV analysis was performed off-line using special software on one time interval of the R-R interval time series. This segment consisted of 3200 normal R-R intervals (1st to 3200th R-R interval). RQA was performed using the methods of Webber and Zbilut (1994) [6] and Marwan et al. (2002) [7]. We analysed the distribution of diagonal lines (measures %Rec (percentage of recurrence points), %Det (percentage of recurrence points forming diagonals), Lmax (maximal length of a diagonal)) and vertical lines (TT (Trapping Time - mean length of vertical
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 769–772, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
lines)) in recurrence plots. Several parameters of this analysis have to be fixed before performing the analysis (modified parameters of Mestivier (2001) [8]): the embedding dimension was set to 10, the time delay was 1 sample, the radius was 0.5 SD (standard deviation of the analysed signal) multiplied by the square root of the embedding dimension, the distance between vectors was quantified using the Euclidean norm, and the Theiler window was set to 15 samples to avoid time autocorrelation effects.

C. Statistics
Because of the non-Gaussian distribution of several analysed parameters, between-group comparisons were performed with the nonparametric Mann-Whitney U-test. Values of p<0.05 were considered statistically significant.

III. RESULTS
In Fig. 1, an example of RQA analysis with the corresponding recurrence plot is shown. Among the RQA measures based on diagonal lines in recurrence plots, we found a higher percentage of recurrence (%Rec, p=0.003, Fig. 2) and of determinism (%Det, p=0.017, Fig. 3) in DM. We also found an increased maximal length of the diagonal line in the DM group (Lmax, p=0.004, Fig. 4). In addition, the parameter TT was significantly higher in the DM group compared to control subjects (p=0.008, Fig. 5).
Fig. 2 Box plot for parameter %Rec
Fig. 3 Box plot for parameter %Det
Fig. 1 Representative example of RQA and corresponding recurrence plot
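The recurrence-plot measures described in the methods can be sketched as follows (a simplified sketch, not the authors' software: the parameters follow the paper, i.e. embedding dimension 10, delay 1, radius 0.5·SD·sqrt(m), Euclidean norm, Theiler window 15, while the minimal line length of 2 and the use of the full matrix size as the %Rec denominator are our assumptions):

```python
import numpy as np

def rqa(rr, m=10, tau=1, lmin=2, theiler=15):
    """Compute %Rec, %Det, Lmax (diagonal lines) and TT (vertical
    lines) from a recurrence plot of the R-R series rr."""
    rr = np.asarray(rr, float)
    n = len(rr) - (m - 1) * tau
    # time-delay embedding: rows are m-dimensional state vectors
    emb = np.column_stack([rr[i * tau:i * tau + n] for i in range(m)])
    radius = 0.5 * rr.std() * np.sqrt(m)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    rp = d <= radius
    # exclude the Theiler window around the main diagonal
    ii, jj = np.indices(rp.shape)
    rp[np.abs(ii - jj) <= theiler] = False
    rec = int(rp.sum())
    pct_rec = 100.0 * rec / rp.size

    def runs(lines):
        """Lengths (>= lmin) of consecutive True runs in each line."""
        out = []
        for line in lines:
            r = 0
            for v in line:
                if v:
                    r += 1
                elif r:
                    out.append(r); r = 0
            if r:
                out.append(r)
        return [l for l in out if l >= lmin]

    dlen = runs(np.diagonal(rp, k) for k in range(-n + 1, n))
    vlen = runs(rp.T)                       # columns = vertical lines
    pct_det = 100.0 * sum(dlen) / rec if rec else 0.0
    lmax = max(dlen) if dlen else 0
    tt = float(np.mean(vlen)) if vlen else 0.0   # Trapping Time
    return pct_rec, pct_det, lmax, tt
```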
Fig. 4 Box plot for parameter Lmax

Fig. 5 Box plot for parameter Trapping Time

IV. DISCUSSION
Based on the assumption that the cardiac control system is an example of a nonlinear biological deterministic system, nonlinear dynamics measures are increasingly used in HRV analysis. The main limitation of the traditionally used nonlinear methods (e.g. correlation dimension, largest Lyapunov exponent) is their requirement for long stationary signals, a condition that is only rarely met in biology. Therefore, new parameters based on nonlinear dynamics able to describe hidden information in shorter time series are sought. Parameters based on the recurrence plot can be used also for nonstationary time series [9].
The recurrence of states is one of the basic features of deterministic systems. The recurrence plot is a graphic representation of this phenomenon, and RQA makes it possible to quantify the information hidden in its structures [7,9].
The major finding of our study was the increase of %Rec, %Det and Lmax in young patients with DM. Referring to Gonzalez et al. (2000) [10], who found an elevation of the above-mentioned parameters after parasympathetic blockade by atropine, these results are in agreement with the concept of reduced parasympathetic activity in diabetics. Increased Lmax was also found in adult DM patients by Mestivier et al. [8], and this observation was confirmed in an animal model [11]. These results suggest reduced complexity and increased predictability of heart rate dynamics even in young patients with DM.
Although RQA was traditionally focused on diagonal lines in recurrence plots, recently developed parameters (e.g. Laminarity, Trapping Time) are computed from vertical lines [12]. The vertical lines reflect the persistence of a given state of the system for some time (the so-called trapping time). In this study we have shown that this phenomenon is more common in DM patients than in healthy subjects, again suggesting reduced heart rate dynamics complexity in the DM group.

V. CONCLUSION
Our results suggest significant differences in RQA measures between young patients with diabetes mellitus and healthy control subjects. RQA parameters should be used together with other HRV parameters for a better description of heart rate dysregulation in various patient groups.

ACKNOWLEDGMENT
This study was supported by grant VEGA No. 1/2305/05.

REFERENCES
1. Spallone V, Menzinger G (1997) Diagnosis of cardiovascular autonomic neuropathy in diabetes. Diabetes 46:S67-S76
2. Osterhues H-H, Grossmann G, Kochs M et al. (1998) Heart rate variability for discrimination of different types of neuropathy in patients with insulin-dependent diabetes mellitus. J Endocrinol Invest 21:24-30
3. Donaghue KC (1998) Autonomic neuropathy: diagnosis and impact on health in adolescents with diabetes. Horm Res 50:33-37
4. Ziegler D (1994) Diabetic cardiovascular autonomic neuropathy: prognosis, diagnosis and treatment. Diabetes Metab Rev 10:339-383
5. Javorka M, Javorkova J, Tonhajzerova I et al. (1995) Heart rate variability in young patients with diabetes mellitus and healthy subjects explored by Poincaré and sequence plot. Clin Physiol Func Im 25:119-127
6. Webber CL, Zbilut JP (1994) Dynamical assessment of physiological systems and states using recurrence plot strategies. J Appl Physiol 76:965-973
7. Marwan N, Wessel N, Meyerfeldt A et al. (2002) Recurrence plot based measures of complexity and its application to heart rate variability data. Phys Rev E 66:026702
8. Mestivier D, Dabire H, Chau NP (2001) Effects of autonomic blockers on linear and nonlinear indexes of blood pressure and heart rate in SHR. Am J Physiol 272:H1099-H1113
9. Schreiber T (1999) Interdisciplinary application of nonlinear time series methods. Physics Reports 308:1-64
10. Gonzalez JJ, Cordero JJ, Feria M et al. (2000) Detection and sources of nonlinearity in the variability of cardiac R-R and blood pressure in rats. Am J Physiol 279:H3040-H3046
11. Giudice PL, Careddu A, Magni G et al. (2002) Autonomic neuropathy in streptozotocin diabetic rats: effect of acetylcarnitine. Diabetes Res Clin Pract 56:173-180
12. Wessel N, Marwan N, Meyerfeldt A et al. (2001) Recurrence quantification analysis to characterise the heart rate variability before the onset of ventricular tachycardia. Lect Notes Comp Sci 2199:295-301

Author: Zuzana Trunkvalterova
Institute: Comenius University, Institute of Physiology, Jessenius Faculty of Medicine
Street: Mala Hora 4
City: Martin, 036 01
Country: Slovakia
Email: [email protected]
Speeding up the Computation of Approximate Entropy
G. Manis1 and S. Nikolopoulos2
1 University of Ioannina, Dept. of Computer Science, Ioannina, Greece
2 National Technical University of Athens, Dept. of Electrical and Computer Engineering, Athens, Greece
Abstract— Approximate entropy is a measure of regularity which finds application in many problems in biomedical engineering. One drawback of the method is its high complexity, which results in long execution times. The purpose of this paper is to alleviate this problem by examining three algorithms, two of which have never been suggested before for approximate entropy computation. In our experiments heart rate signals were analyzed using the three algorithms. The speedup achieved was significant.

Keywords— approximate entropy algorithm, heart rate variability.

I. INTRODUCTION
Approximate entropy (ApEn) was proposed by Pincus [1] as a measure of system complexity. It finds application in many scientific fields, including biomedical engineering, where it has been used for the analysis of various signals. In [2] ApEn is used to identify gender- and age-related differences in heart rate dynamics, in [3] to predict survival in heart failure, in [4] the authors present studies of hormone pulsatility from plasma concentration time series based on approximate entropy, while in [5] approximate entropy is applied to the electroencephalogram. Several further research papers are available in the literature from diverse scientific fields.
The computation of approximate entropy is a computationally intensive task. The time required for the straightforward implementation is proportional to N^2, where N is the size of the signal analyzed, and the author has not seen any other papers discussing how the computation of approximate entropy can be made faster. For a similar problem, the estimation of correlation dimension [6], an algorithm of complexity O(Np) has been discussed in the literature [7], where Np is the number of neighboring points. This algorithm requires space of complexity S(Nb^m), where Nb is the number of boxes used per dimension and m the window size, parameters which will be discussed more extensively later in this paper. We customized this algorithm for the computation of approximate entropy. Since this algorithm requires a large amount of memory S(Nb^m), which increases exponentially with m, we also propose a new algorithm which is again of complexity O(Np) but requires only S(N) memory. We used heart rate signals to evaluate all three algorithms experimentally. The last one proved much more efficient than the other two.
In the rest of the paper we give a definition of approximate entropy in section II. Sections III, IV and V describe the three algorithms and present experimental results for each of them. The last section summarizes this work.

II. APPROXIMATE ENTROPY
Given a time series

x = (x_1, x_2, \ldots, x_i, \ldots, x_N)

of size N, we select a window size m (pattern length) and construct a series of vectors

\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_i, \ldots, \bar{x}_{N-m+1})

of size N-m+1, where \bar{x}_i = [x_i, x_{i+1}, x_{i+2}, \ldots, x_{i+m-1}]. We also select a distance r. The distance of two vectors \bar{x}_i and \bar{x}_j is less than r when

|x_{i+k} - x_{j+k}| < r, \quad \text{for } 0 \le k \le m-1.

When this distance is smaller than r the vectors are considered similar, which we denote by |\bar{x}_i - \bar{x}_j|_m < r. The probability of a vector \bar{x}_i being similar to a vector \bar{x}_j is given by

C_i^m(r) = \frac{\sum_{j=1}^{N-m+1} \Theta(j)}{N-m+1}, \quad \text{where } \Theta(j) = \begin{cases} 1, & |\bar{x}_i - \bar{x}_j|_m < r \\ 0, & \text{otherwise.} \end{cases}

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 785–788, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
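A direct O(N^2) computation of these quantities, counting similar window pairs and estimating ApEn = ln(C^m(r)/C^{m+1}(r)) as in the basic algorithm of section III, can be sketched as follows (a sketch; the default r = 0.25*std follows the value used in the paper's experiments):

```python
import numpy as np

def apen_basic(x, m=2, r=None):
    """Direct pair-counting ApEn sketch: cm and cm1 count pairs of
    similar windows of length m and m+1, and ApEn is estimated as
    ln(cm/cm1). r defaults to 0.25 times the signal's SD."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.25 * x.std()
    n = len(x)
    cm = cm1 = 0
    for i in range(n - m + 1):
        for j in range(i + 1, n - m + 1):
            # Chebyshev distance between windows of length m
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) < r:
                cm += 1
                # extend the comparison to windows of length m+1
                # (only when both longer windows fit in the signal)
                if i + m < n and j + m < n and abs(x[i + m] - x[j + m]) < r:
                    cm1 += 1
    return float(np.log(cm / cm1))
```

Note that the m+1 check is nested inside the m check, since similarity at length m+1 implies similarity at length m.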
Approximate entropy is given by the formula

ApEn(m, r) = \ln \left[ \frac{C^m(r)}{C^{m+1}(r)} \right],

where C^m(r) is the mean of all C_i^m(r).

III. THE BASIC ALGORITHM
An algorithm of complexity O(N^2) for the calculation of the above follows:

cm = 0; cm1 = 0;
for i = 1 to N-m+1
    for j = i+1 to N-m+1
    begin
        if |x_i - x_j|_m < r then cm = cm + 1;
        if |x_i - x_j|_{m+1} < r then cm1 = cm1 + 1;
    end
apen = ln(cm / cm1);

This is the straightforward implementation and we will refer to it as the basic algorithm. The algorithm is very simple and requires only S(N) space of memory, but it can prove expensive for large values of N. In figure 1, execution times for the computation of C^m(r) for a heart rate signal are presented. The value of r was chosen as r = 0.25*std, where std is the standard deviation of the signal. The length of the signal is N = 81274 beats, which corresponds to approximately 24 hours of recording. Experiments were performed on a Pentium M725 CPU (1.60 GHz) with 512 MB of memory. As shown in the figure, for small values of m the computation time is approximately half an hour; for large values it can exceed 90 minutes.

Fig. 1 Execution times for the basic algorithm for the computation of Cm(r) for different window sizes. Even for small values of m the computation time is remarkable.

IV. THE BOX-ASSISTED ALGORITHM
The author has not seen any other faster algorithms for the computation of ApEn available on the internet or discussed in the literature. However, several papers are available for a similar problem, the estimation of correlation dimension (CDE). The first step in CDE is the phase space reconstruction from the time series, assuming an embedding dimension m and time lag τ, in a way similar to that of ApEn. We then compute the correlation dimension integral and plot log[C(m,r,τ)] vs. log[r], where r represents distances between points in the reconstructed phase space. The same procedure is repeated for different embedding dimensions. From the resulting figure, we choose a scaling region and compute the slope. If, when increasing the embedding dimension, we see that the slopes of this region tend to a particular value, then this value is considered as the correlation dimension. For a full description of the procedure please see [6].
A fast algorithm for the estimation of correlation dimension has been proposed in [7] and is called the box-assisted algorithm. The author has modified this algorithm and adapted it to the computation of approximate entropy. The algorithm is based on a simple idea. We consider the \bar{x}_i vectors in space. Then we divide the space into equal-sized subspaces. We do not check every possible pair of vectors for similarity, but only those located in neighboring subspaces. Figure 2 shows an example for m=2 and r=10 msec. Each vector \bar{x}_i = (x_i, x_{i+1}) is plotted and the space is divided into equal-sized boxes of side r. Points located inside a box are checked for similarity only with points located in neighboring boxes, i.e. points in box abcd are checked only with points in box ABCD. This reduces the computation time significantly, since the complexity is reduced to O(Np), where Np is the number of neighboring points. However, the memory required for this algorithm increases exponentially: the space complexity is S(Nb^m), where Nb is the number of boxes in every dimension.

Fig. 2 An example for the box-assisted algorithm. The space is divided into subspaces. Points in neighboring subspaces are checked for similarities.

The selection of the size of the boxes is critical since it affects the algorithm performance. As shown in figure 3, the best performance is achieved for small boxes with side equal to r. However, small boxes imply a large value of Nb and a large amount of memory. Figure 3 shows that in our experiments the available memory was enough when m=2 and m=3 for both small and large box sizes. For small box sizes (30 msec-60 msec) and m=4 the available memory was not enough and virtual memory was activated, resulting in significant overheads. It was not possible to run the algorithm for m=4 and box sizes smaller than 30 msec, or for m=5 and box sizes smaller than 60 msec.

Fig. 3 Execution times for the computation of Cm(r) with the box-assisted algorithm. For m=2 and m=3 the execution times are small. The available memory is not enough for m=4 and small boxes, resulting in significant overheads.

V. THE BUCKET-ASSISTED ALGORITHM
The main drawback of the box-assisted algorithm is the large amount of memory required for small boxes or large values of m. According to figure 3, the best performance is achieved when the space is divided into small boxes. We propose here an algorithm which requires space S(N) while the computational complexity remains O(Np).
We integrate the signal x and create a new signal X:

X = (X_1, X_2, \ldots, X_{N-m+1}),

so that

X_i = x_i + x_{i+1} + x_{i+2} + \ldots + x_{i+m-1} = \sum_{j=0}^{m-1} x_{i+j}.   (1)

Then we consider buckets B_h of equal size r. Each point X_i is mapped into the bucket B_h when

h = \lceil X_i / r \rceil.

Two vectors are similar when

|\bar{x}_i - \bar{x}_j|_m < r \iff |x_{i+k} - x_{j+k}| < r, \quad 0 \le k \le m-1.

When \bar{x}_i is mapped into bucket B_h, similar vectors are mapped into buckets B_{h-m}, B_{h-m+1}, \ldots, B_h, B_{h+1}, B_{h+2}, \ldots, B_{h+m}, since the size of each bucket is r and the distance of the sums of two similar vectors cannot be more than m*r due to the integration in (1). An example is shown in figure 4, where m=2 and r=10 msec. Points in bucket BC (solid lines) are checked for similarity with buckets between lines A and D (dashed lines).
Figure 5 compares execution times for all three algorithms. It is obvious that the last algorithm is the fastest and also requires a reasonable amount of memory. Box sizes are equal to r, except for the box-assisted algorithm for m=4 and m=5, where the box size is 60 msec and 90 msec respectively, for which values the best performance of the algorithm was achieved.

Fig. 4 Example for the bucket-assisted algorithm. Points between the solid lines are checked for similarity with points between lines A and D.
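The three counting schemes can be sketched and cross-checked against each other as follows (illustrative sketches of the neighbor search only, not the full ApEn computation; the function names are ours):

```python
from collections import defaultdict
from itertools import product
import math
import numpy as np

def pairs_brute(x, m, r):
    """O(N^2) reference: count pairs of similar windows of length m."""
    n = len(x) - m + 1
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if all(abs(x[i + k] - x[j + k]) < r for k in range(m)))

def pairs_box(x, m, r):
    """Box-assisted count: hash each window into an m-dimensional box
    of side r and compare it only with windows in the 3^m adjacent
    boxes (space grows as S(Nb^m))."""
    n = len(x) - m + 1
    boxes, keys = defaultdict(list), []
    for i in range(n):
        key = tuple(int(math.floor(x[i + k] / r)) for k in range(m))
        boxes[key].append(i)
        keys.append(key)
    count = 0
    for i in range(n):
        for off in product((-1, 0, 1), repeat=m):
            cell = tuple(k + o for k, o in zip(keys[i], off))
            for j in boxes.get(cell, ()):
                if j > i and all(abs(x[i + k] - x[j + k]) < r for k in range(m)):
                    count += 1
    return count

def pairs_bucket(x, m, r):
    """Bucket-assisted count: window sums X_i (eq. 1) are hashed into
    one-dimensional buckets of size r; similar windows can only lie
    in buckets h-m..h+m, so space stays S(N)."""
    n = len(x) - m + 1
    X = np.convolve(x, np.ones(m), mode="valid")   # window sums, length n
    buckets = defaultdict(list)
    hs = [math.ceil(Xi / r) for Xi in X]
    for i, h in enumerate(hs):
        buckets[h].append(i)
    count = 0
    for i in range(n):
        for h in range(hs[i] - m, hs[i] + m + 1):
            for j in buckets.get(h, ()):
                if j > i and all(abs(x[i + k] - x[j + k]) < r for k in range(m)):
                    count += 1
    return count
```

All three return the same count; they differ only in how many candidate pairs the final component-wise check has to examine.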
Fig. 5 Comparison of performance for the three algorithms. The bucket-assisted seems to be the fastest.

VI. CONCLUSIONS
We compared three algorithms for the calculation of the approximate entropy. The first one was the straightforward approach, which is of complexity O(N^2) and space complexity S(N). The second one has not been examined in the literature before for approximate entropy computation, but has been used for a similar problem, that of correlation dimension estimation; it is of complexity O(Np), where Np is the number of neighboring points, and of space complexity S(Nb^m), where Nb is the number of boxes used per dimension and m the window size. The last one, first proposed in this paper, is of complexity O(Np) but of space complexity S(N). Experimental results with heart rate signals showed that the last algorithm is much faster than the previous two.

ACKNOWLEDGMENT
This work was co-funded by the European Union in the framework of the project "Support of Computer Science Studies in the University of Ioannina" of the "Operational Program for Education and Initial Vocational Training" of the 3rd Community Support Framework of the Hellenic Ministry of Education, funded by 20% from national sources and by 80% from the European Social Fund (ESF).

REFERENCES
1. Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci USA 88:2297-2301
2. Ryan SM, Goldberger AL, Pincus SM, Mietus J, Lipsitz LA (1994) Gender- and age-related differences in heart rate dynamics: Are women more complex than men? J Am Coll Cardiol 24:1700-1707
3. Ho KKL, Moody GB, Peng CK, Mietus JE, Larson MG, Levy D, Goldberger AL (1997) Predicting survival in heart failure case and control subjects by use of fully automated methods for deriving nonlinear and conventional indices of heart rate dynamics. Circulation 96(3):842-848
4. Sparacino G, Bardi F, Cobelli C (2000) Approximate entropy studies of hormone pulsatility from plasma concentration time series: influence of the kinetics assessed by simulation. Ann Biomed Eng 28(6):665-676
5. Levy WJ, Pantin E, Mehta S, McGarvey M (2003) Hypothermia and the approximate entropy of the electroencephalogram. Anesthesiology 98(1):53-57
6. Grassberger P, Procaccia I (1983) Measuring the strangeness of strange attractors. Physica 9D:189
7. Grassberger P (1990) An optimized box-assisted algorithm for fractal dimensions. Physics Letters A 148(1,2):63-68

Author: George Manis
Institute: University of Ioannina, Dept. of Computer Science
Street: Douroutis Campus
City: Ioannina
Country: Greece
Email: [email protected]
Technical problems in STV indexes application
M. Cesarelli, M. Romano, P. Bifulco
Dept. of Electronic Engineering and Telecommunication, Biomedical Engineering Unit, University "Federico II", Naples, Italy

Abstract— Cardiotocography (CTG) is the most widely used diagnostic technique to monitor fetal health. Its usefulness is undoubted; however, several analysis methodologies have been proposed in recent years to improve its reliability and objectivity. To this end, great interest has been dedicated to the Variability of the Fetal Heart Rate (FHRV). In particular, Short Term Variability (STV), which reflects changes in consecutive beat-to-beat intervals, in general reveals autonomic nervous system functioning. Many groups of researchers have described time domain parameters for quantifying FHRV. However, there are marked differences in the methodologies used, making comparison difficult, and there are not many comprehensive studies of all the indices. This work aims to analyze the technical characteristics and problems related to the CTG application and comparison of some STV indexes proposed in the literature and/or used in the clinical environment. Nine different STV indexes were programmed and then compared by means of simulated FHR signals. In particular, we analyzed their dependence on the CTG sampling frequency and on the FHR mean. Moreover, the indexes were evaluated for their capability to return the imposed value of FHRV. The obtained results show that it can be preferable to use the standard deviation to evaluate STV. Keywords— short term variability, cardiotocography, SD.
I. INTRODUCTION

Cardiotocography is the most widely used indirect, noninvasive diagnostic technique in daily clinical practice to monitor fetal health, both in the antepartum and in the intrapartum period. Fetal Heart Rate (FHR) and Uterine Contractions (UC) are simultaneously recorded by means of an ultrasound Doppler probe (FHR signal) and an indirect pressure transducer (UC signal) placed on the maternal abdomen. Since its introduction in the 1960s, electronic fetal monitoring has led to a considerable reduction of perinatal morbidity and mortality. Nevertheless, because cardiotocographic traces (CTG) are assessed by visual inspection, there is still a very high intra- and inter-observer variation in the assessment of FHR patterns, which can lead to an incorrect evaluation of fetal status. To overcome this limit, several analysis methodologies have been proposed in recent years. For example, great interest has been dedicated to the variability of the FHR around its baseline (FHR Variability, FHRV), which can support more detailed and objective analyses. In particular, Short Term Variability (STV) is considered very important in the diagnostic phase. It refers to the continuous variation in the difference between successive inter-beat intervals [1]. In general, large variability reflects a healthy autonomic nervous system and healthy chemoreceptor, baroreceptor and cardiac responsiveness, while fetal hypoxia, congenital heart anomalies and stress cause decreased variability [1, 2]. For this reason, STV can represent a valid support to diagnose fetal health [3, 4]. In the past, many groups of researchers described time-domain parameters for quantifying FHRV using different mathematical techniques. However, there are marked differences in the methodologies used, making comparison difficult, and there are few comprehensive studies of all the mathematical indices [5]. Furthermore, even nowadays there still exist technical problems to be taken into account in the use of STV indexes; for example, the definitions of almost all the indexes were created based on RR intervals precisely determined from the direct fetal electrocardiogram [1]. This work aims to analyze characteristics and problems related to the CTG application and comparison of STV indexes. Nine different STV indexes, cited in the literature and/or used in clinical applications, were programmed and then compared by means of simulated FHR signals. For example, we analyzed their dependence on CTG sampling frequency (recall that most commercial cardiotocographs use fixed-interval FHR sampling instead of recording each heartbeat when it occurs) and on FHR mean. Moreover, the various indexes were evaluated for their capability to return the imposed value of FHRV.

II. MATERIAL AND METHODS

A. Selected indexes of STV

Seven STV indexes among those cited in the literature and/or used in clinical applications were selected for this work. The formulas needed to compute them were appropriately modified both to be programmed and to be adapted to the evenly sampled Doppler FHR signals provided by commercial cardiotocographic devices.
Recall, in fact, that cardiotocographs generally use a zero-order interpolation, that is, each sample is held constant until the next heart beat occurs, producing evenly sampled series. This process provides FHR data at fixed sampling instants, by delaying some samples and adding some duplicates (in the case of missed or undetected heart beats); so the exact
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 777–780, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
number of real beats is not available. For these evenly sampled FHR series, in each formula the number of beats was substituted with the FHR mean in the corresponding time interval, since this value was considered a good estimate of the number of beats. This substitution is particularly relevant in the case of Arduini's index (see Table 1), so we introduced another index, called Arduini modified (Arduini_mod), in order to make specific comparisons between the two cases. Regarding Zugaib's formula, we evaluated the number n of samples included in a time interval corresponding to 128 beats using the formula n = (128/M)·60·fs, where M is the FHR mean in 80 s and fs is the sampling frequency of the FHR signal (n = equivalent FHRmean). We chose 80 s because it is approximately the time needed for 128 beats at FHR = 100 bpm, i.e. the longest time interval that can contain 128 beats in non-severe fetal conditions. Then we added to the list the simple Standard Deviation index (SD), which, as is well known, measures the spread of the samples around their mean value. The formulas used are reported in Table 1.
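To make the two computations above concrete — the zero-order hold performed by the cardiotocograph and the n = (128/M)·60·fs sample count used for Zugaib's window — here is a small illustrative sketch (function names and test values are ours, not from any commercial device):

```python
import numpy as np

def zero_order_hold(beat_times_s, fhr_bpm, fs=4.0, duration_s=None):
    """Resample an uneven beat-event FHR series onto a fixed grid:
    each output sample holds the FHR value of the most recent beat,
    so missed beats simply duplicate the previous value."""
    beat_times_s = np.asarray(beat_times_s, dtype=float)
    fhr_bpm = np.asarray(fhr_bpm, dtype=float)
    if duration_s is None:
        duration_s = float(beat_times_s[-1])
    t_grid = np.arange(0.0, duration_s, 1.0 / fs)
    # index of the last beat occurring at or before each grid instant
    idx = np.searchsorted(beat_times_s, t_grid, side="right") - 1
    idx = np.clip(idx, 0, fhr_bpm.size - 1)
    return t_grid, fhr_bpm[idx]

def zugaib_window_samples(fhr_mean_bpm, fs_hz):
    """n = (128 / M) * 60 * fs: number of evenly spaced samples
    spanning approximately 128 beats at mean rate M."""
    return 128.0 / fhr_mean_bpm * 60.0 * fs_hz
```

At M = 100 bpm and fs = 4 Hz the window is 307.2 samples, i.e. about 77 s, consistent with the 80 s interval chosen above.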
Table 1 Formulas to compute STV indexes

Arduini [6]:
  (1/(k−1)) Σ_{i=1..k−1} |T(i+1) − T(i)|
  k = num. of samples in 60 s; computed over 60 s

Arduini_mod:
  (1/(m−1)) Σ_{i=1..n−1} |T(i+1) − T(i)|
  computed over 60 s

Dalton [7]:
  (1/n) Σ_{i=2..n} |T(i) − T(i−1)| / 2
  computed over 60 s

Organ [3]:
  (1/m) Σ_{i=1..n} |F(i+1) − F(i)|
  computed over 30 s

Sonicaid [8]:
  (1/h) Σ_{m=1..h} (R(m+1) − R(m)), with R_m = (1/m) Σ_{i=1..n} (T(i+1) − T(i))
  h = num. of sub-intervals in 60 s; n = num. of RR intervals in 3.75 s; m, in 3.75 s

Van Geijn [2]:
  IQR[ g_i (t(i) − t(i−1)) ]  (IQR = interquartile range, computed in 30 s)
  g_i = (180/(t_i − 320))^1.5 = weighting factor, where t_i = (t(i−1) + t(i))/2
B. Simulated FHR signals

To test the performance of the STV indexes, synthetic FHR signals were artificially generated, via software, using a slightly modified version of a method proposed by other authors [10]. Following that procedure, an artificial R-R tachogram with specific power spectrum characteristics is generated. Some model parameters were adapted to resemble real fetal cases. Mean FHR was initially set at 140 bpm (within the normality range, 120-160 bpm). A variable SD was considered: it was set at 1 in the first part of the signal, at 4 in the second part and at 2 in the last part. In addition, to obtain signals resembling other physiological conditions, we also simulated accelerations (defined as transient increases of the FHR from the baseline of at least 15 bpm lasting at least 15 s) by using Gaussian-like signal tracts (for more details of the algorithm, please refer to our previous publication [11]). Usually, commercial cardiotocographs (for example, the HP cardiotocographs) provide a zero-order interpolated version of the FHR series with a sampling frequency (fs) of 4 Hz. Moreover, in some computerized systems, in order to reduce the required memory space and computing time, the PC reads FHR values at a lower sampling frequency. For example, in some commercial systems the PC reads the buffer every 2.5 s and determines the actual FHR as the average of 10 values, corresponding to fs = 0.4 Hz. There is also software that reads the FHR values from the buffer every 0.5 s, that is, at fs = 2 Hz.
Yeh [4]:
  [ Σ_{i=1..n−1} (D(i) − Dave)² / (m−2) ]^{1/2}
  D_i = 1000 · (T(i) − T(i+1)) / (T(i) + T(i+1)), Dave = Σ_{i=1..n−1} D(i) / (n−1)

Zugaib [9]:
  (1/(n−1)) Σ_{i=1..n−1} |D(i) − Md|
  D_i = (T(i+1) − T(i)) / (T(i+1) + T(i)), Md = median of D_i; n = 128 beats¹ / equivalent FHRmean²

Standard Deviation:
  [ (1/(n−1)) Σ_{i=1..n} (F(i) − F̄)² ]^{1/2}
  F̄ = FHRmean

Note: n = num. of beats¹/num. of samples²; m = num. of beats¹/FHRmean²; 1: for real uneven FHR series, 2: for evenly sampled FHR series. T(i) represents the instantaneous inter-beat interval (also called in this paper RR interval) expressed in ms, while F(i), used in Organ's and SD formulas, is the instantaneous fetal heart rate expressed in beats per minute, calculated according to the equation: 60000/T(i).
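As an illustration, three of the table rows above can be transcribed directly for a real (uneven) RR series, where the number of beats is known (a sketch in the paper's notation; variable names are ours):

```python
import numpy as np

def arduini(rr_ms):
    """Arduini index: mean absolute difference of successive
    RR intervals T(i), in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    return float(np.mean(np.abs(np.diff(rr))))

def yeh(rr_ms):
    """Yeh index on D_i = 1000*(T(i)-T(i+1))/(T(i)+T(i+1)).
    For n beats the paper's divisor m-2 equals d.size - 1 used here."""
    rr = np.asarray(rr_ms, dtype=float)
    d = 1000.0 * (rr[:-1] - rr[1:]) / (rr[:-1] + rr[1:])
    return float(np.sqrt(np.sum((d - d.mean()) ** 2) / (d.size - 1)))

def sd_index(fhr_bpm):
    """Sample standard deviation of the FHR series F(i), in bpm."""
    return float(np.std(np.asarray(fhr_bpm, dtype=float), ddof=1))
```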
CTG traces recorded in clinical environments and stored in our database present these characteristics; so, to simulate the CTG device output (fs of 4 Hz) and the stored FHR signals (fs of 2 or 0.4 Hz), other algorithms were developed at our lab to provide a zero-order interpolated version of the artificial series (fs = 4 Hz) and its decimated versions (fs equal to 2 Hz and to 0.4 Hz).

C. Signal processing

For each configuration of the simulated signals (for example, FHRmean = 140 bpm, no accelerations), sets of 30 different FHR signals were generated (actually, 30 uneven FHR series, 30 FHR series evenly sampled at 4 Hz, 30 at 2 Hz and 30 at 0.4 Hz). Regarding STV computation, to follow the time course of the FHR signals point by point, STV indexes were computed using a sliding temporal window of appropriate length and 99% overlap. Therefore, the time-varying evolution of the STV indexes can be represented along with the FHR signals. Moreover, in order to highlight common features of the STV time courses, all data belonging to the same FHR signal sub-set were synchronously averaged, and the mean and standard deviation of the STV values in the three tracts of the FHR signals with different fixed variability were computed.
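The sliding-window evaluation described above can be sketched as follows (window length and the index function are parameters; this is a toy reimplementation, not the authors' code):

```python
import numpy as np

def sliding_stv(fhr, index_fn, fs=4.0, win_s=60.0, overlap=0.99):
    """Apply an STV index function over a sliding temporal window
    with the given overlap (99% overlap means the window advances
    by 1% of its length per step), returning one value per window."""
    fhr = np.asarray(fhr, dtype=float)
    n = int(win_s * fs)                            # samples per window
    step = max(1, int(round(n * (1.0 - overlap))))
    return np.array([index_fn(fhr[s:s + n])
                     for s in range(0, fhr.size - n + 1, step)])
```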
III. RESULTS

So far, we have carried out 840 tests, and we can conclude that, considering the whole FHR signal, all the analyzed indexes are capable of following the variations in STV. However, none returns the real value of the fixed variability (SD is the most precise), and they present substantial differences in magnitude, so for clinical comparison a standardization is necessary. Moreover, all the indexes except Arduini_mod and Dalton clearly depend on the sampling frequency of the FHR signal. See, as an example, Table 2 (note that in this first test we did not consider SD, and that we do not report results for an FHR sampling frequency of 0.4 Hz because they are not acceptable). Variable M is the mean computed for a sub-set of 30 uneven FHR signals; M4 and M2 refer to FHR signals evenly sampled at 4 and 2 Hz respectively. The three columns of the table refer to the three parts of the FHR signals with different fixed values of variability.

Table 2 Dependence on FHR sampling frequency (FHRmean = 140)

Index          tract 1 (SD=1)   tract 2 (SD=4)   tract 3 (SD=2)
Variable M (uneven series)
Arduini             1.33             5.20             2.64
Arduini_mod         1.33             5.20             2.64
Dalton             92.77           362.90           184.23
Organ               0.44             1.71             0.87
Sonicaid            0.39             1.54             0.78
VanGeijn            0.96             3.75             1.92
Yeh                 0.16             0.64             0.33
Zugaib           1.55 E-3            6 E-3            3 E-3
Variable M4 (4 Hz)
Arduini             0.77             3.01             1.53
Arduini_mod         1.33             5.18             2.63
Dalton             92.47           361.63           183.58
Organ               0.22             0.84             0.43
Sonicaid          8.1 E-3          32.5 E-3         16.4 E-3
Yeh              9.25 E-2            0.36             0.18
VanGeijn            0.26             1.03             0.53
Zugaib            0.9 E-3          3.52 E-3          1.8 E-3
Variable M2 (2 Hz)
Arduini             1.47             5.75             2.93
Arduini_mod         1.26             4.92             2.50
Dalton             88.36           345.12           175.57
Organ               0.21             0.81             0.41
Sonicaid          8.5 E-3          33.7 E-3         17.1 E-3
VanGeijn            1.03             4.03             2.07
Yeh              9.89 E-2            0.38          19.6 E-2
Zugaib           1.73 E-3          6.72 E-3         3.44 E-3
In addition, all the indexes except SD and Organ evidently depend on the FHR mean, as can be seen in Table 3. For brevity, Table 3 reports only results regarding the uneven FHR series. Finally, we evaluated whether the results provided by the STV indexes are affected by a floatingline (in particular, by the presence of accelerations). We noted that only SD follows the acceleration course, while the other indexes are only very slightly modified by the floatingline (Table 4).

IV. CONCLUSION

The obtained results proved that all the tested indexes (SD included) are capable of following the variations in FHRV, so they are adequate for a rough visual inspection of STV.
Table 3 Dependence on FHR mean

Index          tract 1 (SD=1)   tract 2 (SD=4)   tract 3 (SD=2)
FHRmean = 120 (Variable M)
Arduini             1.82             7.2              3.64
Arduini_mod         1.82             7.2              3.64
Dalton            108.4            430.32           217.38
Organ               0.44             1.73             0.88
Sonicaid            0.51             1.97             0.94
VanGeijn            0.84             3.53             1.73
Yeh                 0.21             0.8              0.42
Zugaib           1.81 E-3          7.19 E-3         3.63 E-3
SD                  0.94             3.88             2.08
FHRmean = 180 (Variable M)
Arduini             0.81             3.2              1.61
Arduini_mod         0.81             3.2              1.61
Dalton             72.89           287.31           144.9
Organ               0.45             1.77             0.88
Sonicaid            0.25             1.0              0.47
VanGeijn            1.36             5.92             2.8
Yeh                 0.11             0.44             0.23
Zugaib           1.23 E-3          4.86 E-3         2.42 E-3
SD                  0.96             3.95             2.09
Table 4 Dependence on floatingline

Index          tract 1 (SD=1)   tract 2 (SD=4)   tract 3 (SD=2)
FHR (Variable M)
Arduini             1.33             5.19             2.62
Arduini_mod         1.33             5.19             2.62
Dalton             92.8            361.68           182.43
Organ               0.44             1.71             0.86
Sonicaid            0.4              1.55             0.78
VanGeijn            0.96             3.75             1.89
Yeh                 0.16             0.64             0.32
Zugaib           1.55 E-3            6 E-3            3 E-3
SD                  1.0              3.8              1.95
FHR + floatingline (Variable M)
Arduini             1.34             5.1              2.61
Arduini_mod         1.34             5.1              2.61
Dalton             94.06           359.54           182.55
Organ               0.45             1.72             0.86
Sonicaid            0.4              1.54             0.78
VanGeijn            0.98             3.81             1.9
Yeh                 0.16             0.63             0.32
Zugaib           1.57 E-3            6 E-3            3 E-3
SD                  1.95             5.13             2.44
However, today's standard in perinatology is automated FHR analysis, and our results also proved that the influence of FHR characteristics, such as sampling frequency and FHR mean, on the variability values provided by the analyzed STV indexes (except SD) is significant. From a clinical point of view it is very important to take these problems, and the large differences in magnitude obtained with different indexes, into account, especially when comparisons are performed. In conclusion, we propose to use the simple SD to evaluate STV. In this case, however, an appropriate procedure to remove the floatingline from the FHR signals is necessary.
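The authors do not detail the floatingline-removal procedure; as one hedged possibility, a moving-average detrend could precede the SD computation (both the method and the window length below are our assumptions, not the authors' procedure):

```python
import numpy as np

def remove_floatingline(fhr_bpm, fs=4.0, win_s=30.0):
    """Subtract a moving-average estimate of the floatingline
    (baseline plus slow trends such as accelerations) so that the
    residual SD reflects short-term variability only.  Edge samples
    are biased by the implicit zero padding of np.convolve."""
    fhr = np.asarray(fhr_bpm, dtype=float)
    n = max(1, int(win_s * fs))
    baseline = np.convolve(fhr, np.ones(n) / n, mode="same")
    return fhr - baseline
```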
REFERENCES

1. Jezewski J, Wrobel J, Horoba K (2006) Comparison of Doppler ultrasound and direct electrocardiography acquisition techniques for quantification of fetal heart rate variability. IEEE Trans Biomed Eng 53(5):855-864
2. van Geijn HP, Jongsma HW, Haan J, Eskes (1980) Analysis of heart rate and beat-to-beat variability: interval difference index. Am J Obstet Gynecol 138(3):246-252
3. Organ LW, Hawrylyshyn PA, Goodwin JW, Milligan JE, Bernstein A (1978) Quantitative indices of short- and long-term heart rate variability. Am J Obstet Gynecol 130:20
4. Yeh S-Y, Forsythe A, Hon EH (1973) Quantification of fetal beat-to-beat interval differences. Obstet Gynecol 41(3)
5. Parer WJ, Parer JT, Holbrook RH, Block BSB (1985) Validity of mathematical methods of quantitating fetal heart rate variability. Am J Obstet Gynecol 153:402-409
6. (2000) Computer CardioTocoGrafo (2CTG). Operating manual
7. Dalton KJ, Dawes GS, Patrick (1977) Diurnal respiratory and other rhythms of fetal heart rate in lambs. Am J Obstet Gynecol 127:414
8. Redman CWG (2003) Sonicaid FetalCare. Clinical Application Guide. Oxford Instruments Medical Ltd
9. Zugaib M, Forsythe A, Nuwayhid B, Lieb S, Tabsh K, Erkkola R, Ushioda E, Brinkman C, Assali N (1980) Mechanisms of beat-to-beat variability in the heart rate of the neonatal lamb. I. Influence of the autonomic nervous system. Am J Obstet Gynecol 138(4):444-452
10. McSharry PE, Clifford GD, Tarassenko L, Smith LA (2003) A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans Biomed Eng 50(3)
11. Cesarelli M, Romano M, Bifulco P, Fedele F, Bracale M (2006) An algorithm for the recovery of fetal heart rate series from CTG data. Computers in Biology and Medicine (in press)

Author: Prof. Mario Cesarelli
Institute: Department of Electronic and Telecommunication
Street: via Claudio, 21
City: Naples
Country: Italy
Email: [email protected]
Note: for Tables 2, 3 and 4, the meaning of the variables is the same as in Table 1.
Artery movement tracking in angiographic sequences for coronary flow calculation
Hanna Goszczynska
Institute of Biocybernetics and Biomedical Engineering PAS, 4, Trojdena str., Warsaw, Poland

Abstract— The method of coronary flow measurement previously developed by the author [1,2], based on densitometric analysis of coronarographic images, faces not only problems connected with the densitometric analysis itself but also the problem of cyclic movement of the measurement field (a fragment of artery or of cardiac muscle). The introduced system of automatic measurement-field tracking not only reduces the manual effort of the operator in setting the cross-sectional line in the image sequence, but also makes it possible to obtain the densitometric curve with a time resolution equal to that of the analyzed frame sequence. Most works on this subject analyze curves taken from images corresponding to an indicated phase of the heart movement and then interpolated with a defined fitting curve. The implemented algorithm, based on template matching, makes it possible to trace the results of automatic detection of indicated characteristic points within the artery structure, to correct false matches, and to use the obtained movement trajectories for analysis both of the sequence range in which the X-ray indicator is absent from the tested artery fragment (and thus not visible on the screen) and of the cardiac muscle region close to the characteristic point, which enables estimation of the degree of cardiac muscle perfusion from coronarographic images. The relatively simple algorithm performs its task well (the estimated error of the automatic analysis is less than 11%) and is acceptable for routine clinical testing owing to the short frame-sequence analysis time (a few minutes).

Keywords— moving object tracking, similarity measures, image registration, coronary flow rate measurements
I. INTRODUCTION

Patient motion artifacts, failure to hold the breath, and the natural movement of organs cause great problems in many contemporary medical image diagnostic methods, such as Digital Subtraction Angiography, image registration, support of therapeutic intervention, and morphological and functional analysis (e.g. flow or perfusion estimation based on angiographic image sequences). Many possible types of motion artifacts, and techniques to avoid and correct them by means of digital image processing, are reported in the literature. The main problems are the construction of a 2D geometrical transformation that accounts for an originally 3D deformation, finding the correspondence between two images using only
the grey-level information, and the efficient computational application of this correspondence [3]. Finding the correspondence (similarity) between two images is the most important element of the techniques mentioned above and has been applied in both static and dynamic approaches. The present work shows the application of a correspondence-finding technique to the densitometric analysis performed on the image of a moving artery segment in X-ray image sequences of coronary arteries, in order to estimate coronary flow by analyzing the injected indicator concentration. The calculation of the indicator concentration from a radiographic image is based on image brightness measurements and the Lambert-Beer law. The value V_ROIk of the indicator in the volume V_ROI viewed as a region of interest (ROI) consisting of m x n pixels can be expressed as:

$$V_{ROIk} = -\frac{a}{s}\sum_{m,n\in ROI}\ln\frac{D(m,n)}{D_0(m,n)} \qquad (1)$$

where D and D_0 are the image brightness values with and without indicator at the same site and exposure parameters, s is the X-ray attenuation coefficient, and a is the area of one pixel. Denoting

$$A_{ROI}(t) = -\sum_{m,n\in ROI}\ln\frac{D(m,n,t)}{D_0(m,n,t)} \qquad (2)$$

the equation for the densitometric curve is obtained. Eqs. 1 and 2 are the basis for densitometric coronary flow measurements using the Stewart-Hamilton equation from indicator-dilution theory. Coronary blood flow Q can be expressed as the ratio of the mass M of the injected indicator to the area under the curve of indicator concentration c versus time:

$$Q = \frac{M}{\int_0^{\infty} c(t)\,dt} \qquad (3)$$
or, using Eq. 1 (where g is the density of the undiluted indicator):

$$Q = \frac{M}{\int_0^{\infty} g\,\dfrac{V_{ROIk}(t)}{V_{ROI}}\,dt} = -\frac{s\,V_{ROI}}{g\,a}\cdot\frac{M}{\int_0^{\infty} A_{ROI}(t)\,dt} \qquad (4)$$

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 793–797, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
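Numerically, Eq. 3 amounts to dividing the injected mass by the area under the sampled concentration curve; a minimal sketch (the sampling rate, units and test values are illustrative assumptions):

```python
import numpy as np

def stewart_hamilton_flow(mass, c_t, fs_hz):
    """Q = M / integral_0^inf c(t) dt (Eq. 3), with the indicator
    concentration curve c sampled uniformly at fs_hz and the
    integral evaluated by the trapezoidal rule."""
    c = np.asarray(c_t, dtype=float)
    area = float(np.sum(0.5 * (c[1:] + c[:-1])) / fs_hz)
    return mass / area
```

For example, with M = 40 mg and a constant concentration of 2 mg/ml over 10 s, the area is 20 mg·s/ml and the flow is 2 ml/s.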
Apart from the problems connected with densitometric testing, there is the additional problem of the cyclic movement of the measurement field (a fragment of artery or an area of cardiac muscle), which changes both the position and the shape of the given part of the artery. For the clinical study of the flow measurement method presented in [1], analysis of over 10 thousand images was needed, so an automatic aid for cross-sectional line positioning was strongly indicated.

Fig. 1 a) Positioning of the artery cross-sectional line on the left coronary artery (LCA) image, b) densitometric measurements
II. METHOD

Goszczynska et al. [1] proposed to calculate the Q value using only the analysis of the densitometric curves A(t) for a defined artery cross-sectional line marked on all the images in the coronarographic sequence. The image grey levels are measured along the same cross-sectional line together with its adjacent background (Fig. 1a). Figure 1b shows the brightness curve Da(x) along the cross-sectional line and the approximated background image intensity distribution curve Db(x). The Da(x) and Db(x) curves allow calculation of the A(t) curve, proportional to the amount of the indicator present under the measuring window (i.e. the segment d1-d2, which is the inner diameter of the artery) at time t:

$$A(t) = \int_{x=d_1}^{d_2} \ln\!\left(\frac{D_a(x,t)}{D_b(x,t)}\right) dx \qquad (5)$$

where x is the pixel number along the cross-sectional line in the digitized space. Fig. 2b presents an example of a densitometric curve A(t) for a sequence of coronary images. For the calculation of the densitometric curve A(t), over 100 cross-sectional lines must be marked (Fig. 2a).

Fig. 2 Cross-sectional lines marked on parts of chosen images of an example sequence (a); densitometric curve A(t) for the whole sequence (b)

Known methods of movement detection and measurement in images may generally be split into two categories [3,4,5]: optical flow, and template matching based on image similarity measures. Below we describe a method of automatic movement-trajectory determination for the tested artery fragment based on template matching, followed by setting the cross-sectional line coordinates in each frame of the coronarographic sequence. Within the analyzed sequence, three periods can be indicated for the chosen fragment of artery: before contrast appearance (Ta), contrast filling (Tc) and wash-out (Tp) (Fig. 2b). The movement analysis for the indicated fragment of artery was split into two stages: artery movement detection in the period Tc, and correction of false detections. Similarity analysis was applied between chosen fragments of the original frames and a chosen template, i.e. the fragment of artery as it appears in a frame where the arteries are well visible (at the end of contrast injection). The sum of squared differences C_SKR was used (R, S are the compared image fragments with area m⋅n, Fig. 5):

$$C_{SKR} = \sum_{i=0}^{m-1}\sum_{j=0}^{n-1} \left(R(i,j) - S(i,j)\right)^2 \qquad (6)$$

Fig. 3 Determining the template and searching windows

To find, within the frame I(t+1), the area similar to the given area R within the frame I(t), the area S within the frame I(t+1), having the same coordinates and size as the
area R in the frame I(t), is defined; it is then enlarged by values k_max and l_max not less than the maximal movement of the selected fragment between two frames (Fig. 3). Searching for the area within the frame I(t+1) most similar to the chosen area R within the frame I(t) consists of moving the reference area R over the enlarged area S and measuring the coefficient of mutual correlation (0 ≤ k ≤ k_max, 0 ≤ l ≤ l_max):
$$C(k,l) = \frac{\displaystyle\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} R(i,j)\,S(i+k,j+l)}{\sqrt{\displaystyle\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} R^2(i,j)\;\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} S^2(i+k,j+l)}} \qquad (7)$$

The maximal value Cmax of this coefficient defines the location of the area most similar to the reference area. The indexes k and l for which the coefficient reaches its maximum determine the coordinates of the searched area (Fig. 3). Other similarity measures have also been applied:

• variance-normalized correlation (I_R, I_S are the mean brightness values in the windows R and S):

$$C = \frac{\displaystyle\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} (R(i,j)-I_R)(S(i,j)-I_S)}{\sqrt{\displaystyle\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} (R(i,j)-I_R)^2\;\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} (S(i,j)-I_S)^2}} \qquad (8)$$

• morphological mutual correlation C_M:

$$C_M = \sum_{i=0}^{m-1}\sum_{j=0}^{n-1} \min\left(R(i,j),\,S(i,j)\right) \qquad (9)$$

• mutual information coefficient C_v [6]:

$$C_v = -\sum_{i=0}^{I}\sum_{j=0}^{J} P(g_i,g_j)\,\log\frac{P(g_i,g_j)}{P(g_i)\,P(g_j)} \qquad (10)$$

where P(g_i) is the probability of occurrence of grey level i, and P(g_i,g_j) is the probability that a pixel has grey level i in one frame while the pixel with the same coordinates has grey level j in the other frame. For the flow measurements, false detections during the period Tc were corrected manually with the help of time-related interpolation. To set the movement trajectory in the periods Ta and Tp, manual time extrapolation was used.

III. RESULTS

Figs. 1 and 4 show examples of proper matching for all frames within the Tc range.

Fig. 4 Automatic detection of the cross-sectional line for 3 images from a coronarographic sequence of the LCA

There were isolated false matches within the material analyzed in [1,2], such as the one shown in Fig. 5 – mismatching of template R within window S of frame 59. For this particular sequence, measurements were made to compare the results of template matching with the other similarity measures. Fig. 6 shows the x(t) and y(t) curves for the point P(x,y) (Fig. 5), computed for the following metrics: mutual correlation (Eq. 7), normalized correlation (Eq. 8) and morphological correlation (Eq. 9). The dotted curve shows the coordinates of point P set manually.

Fig. 5 Results of template matching in images 15 and 59

Fig. 6 Curves x(t) (a) and y(t) (b) for the point P(x,y) (Fig. 5)
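The exhaustive search described by Eqs. 6–8 can be sketched as follows (a toy implementation of the C_SKR and variance-normalized-correlation matching; function names are ours):

```python
import numpy as np

def ssd(r, s):
    """Sum of squared differences C_SKR (Eq. 6); lower is better."""
    return float(np.sum((r - s) ** 2))

def norm_corr(r, s):
    """Variance-normalized correlation (Eq. 8); higher is better."""
    r = r - r.mean()
    s = s - s.mean()
    return float(np.sum(r * s) / np.sqrt(np.sum(r * r) * np.sum(s * s)))

def best_match(template, window, score=ssd, maximize=False):
    """Slide `template` over every offset (k, l) of the enlarged
    search `window` and return the best-scoring offset and score
    (minimum for SSD, maximum for correlation measures)."""
    m, n = template.shape
    best_kl, best_v = (0, 0), None
    for k in range(window.shape[0] - m + 1):
        for l in range(window.shape[1] - n + 1):
            v = score(template, window[k:k + m, l:l + n])
            if best_v is None or (v > best_v if maximize else v < best_v):
                best_v, best_kl = v, (k, l)
    return best_kl, best_v
```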
During the tests described in [1], in case of false matching within the Tc period (very rare and concerning single frames only), the correction was made by replacing the densitometric data, i.e. the A(t) values, with those obtained from the nearest properly matched frames. Fig. 7 shows a frame of an image sequence of the right coronary artery (RCA) and the movement trajectory of
Fig. 7 Image of RCA (a); P(x,y) trajectory for one cardio cycle, frames 100-121 (b)

Fig. 8 Curves x(t), y(t) for point P for all frames before (a) and after (b) time extrapolation in the Ta and Tp periods
point P(x,y) for one cardio cycle. Fig. 8 shows the curves x(t), y(t) for point P for all frames before and after time extrapolation, obtained by repeating the data of the last cardio cycle of the Tc period within the Tp period and the first cardio cycle of the Tc period within the Ta period.

IV. DISCUSSION

The main problems of the implemented method for automatic tracking of the chosen fragment of artery by template matching are false matches and time consumption. An algorithm was also tested in which the template changes during the sequence search, each template being the result of the previous search. Such an algorithm, although intuitively better, contains a trap: after a single mismatch, the template changes completely. Apart from the problem of finding the best template (or templates), and of choosing the optimal window R size (m, n) and the optimal size of the searching area (k_max, l_max), it cannot be excluded that detection faults could also be reduced by adopting stronger similarity criteria. Analyzing the diagrams in Fig. 6, one cannot identify a clear advantage of one algorithm over another in terms of matching precision, although the morphological correlation has a sharper matching peak than the linear correlation. Based on visual inspection of the diagrams in Fig. 6, the lowest errors occur for the normalized correlation coefficient.
Proper matching during the Tp period is essential in the case of perfusion estimation for the area of cardiac muscle around the indicated artery, whose movement is estimated on the basis of the closest artery; the density changes of such an area are observed within the Tc and Tp periods of the artery. An example of the prediction of a single cardio cycle from the previous one is shown in Fig. 8. The trajectory analysis may also be used for estimating changes in the duration of cardio cycle phases, and as a basis for modeling epicardial strains. To estimate the error due to automatic data collection, the suitability of the following measures connected with automatic tracking of the indicated cross-sectional line was analyzed: the coordinates of the cross-sectional line ends, the value of the A field (Fig. 1b), and the value of the area under the curve A(t) (Fig. 2b). The area under the densitometric curve was chosen as the most suitable. Densitometric analyses of the same artery segment were performed for automatic and manual positioning of the cross-sectional lines. The error caused by automatic data collection was defined as:

$$error = \frac{max_c - min_c}{min_m} \qquad (11)$$
where max_c and min_c are the maximal and minimal area values for the "automatic" lines and min_m is the minimal area value for the "manual" lines; the error has been estimated at less than 11% [2].

V. CONCLUSION

Apart from the occasional mismatching, the implemented algorithm allowed automatic collection of the data used for testing the method of blood flow measurement in the coronary arteries based on the analysis of diluted indicator. The estimated error of the method does not exceed 11%. The implemented method of automatic tracking of the measurement area not only reduces the manual effort of the operator in positioning the cross-sectional line in the frame sequence, but also allows easy computation of the densitometric curve with a time resolution equal to that of the tested sequence. The algorithm also allows tracing the results of automatic detection of characteristic points within the artery structure, correction of mismatches, and use of the obtained movement trajectories to analyze both the sequence range in which there is no contrast within the tested fragment of artery and the fragment of cardiac muscle surrounding the characteristic point. Other possible solutions to the problem include the following:
• movement estimation on the basis of the phase difference in Fourier space,
• testing the effectiveness of using the binary frame after skeletonization as the template frame,
• use of specialized signal processors,
• inclusion of a search strategy in the matching algorithm.
REFERENCES

1. Goszczynska H, Kowalczyk L, Rewicki M (2006) Clinical study on the coronary flow measurements method based on the coronarographic images. Biocybernetics and Bioengineering 26:63-73
2. Goszczynska H, Podsiadly-Marczykowska T (2006) Flow rate calculation method based on coronarographic images: method error estimation. IFMBE Proc 14:1-4
3. Meijering E (2000) Image Enhancement in Digital X-ray Angiography. Doctoral thesis, University Medical Center, Utrecht
4. Coatrieux J-L et al. (1995) 2D and 3D Motion Analysis in Digital Subtraction Angiography. In: Ayache N (ed) Computer Vision, Virtual Reality and Robotics in Medicine, Springer, CVRMed
5. Konrad J (2000) Motion Detection and Estimation. In: Bovik A (ed) Handbook of Image & Video Processing, Academic Press, San Diego, pp 207-225
6. Buzug TM, Weese J, Strasters KC (1998) Motion detection and motion compensation for digital subtraction angiography image enhancement. Philips J Res 51:203-229

Author: Hanna Goszczynska
Institute: IBBE PAS
Street: 4, Ks. Trojdena str.
City: Warsaw
Country: Poland
Email: [email protected]
Automatic cell detection in phase-contrast images for evaluation of electroporation efficiency in vitro
Marko Usaj1, Drago Torkar2, Damijan Miklavcic1
1 University of Ljubljana, Faculty of Electrical Engineering, SI-1000 Ljubljana, Slovenia
2 Jozef Stefan Institute, Computer System Department, SI-1000 Ljubljana, Slovenia
Abstract— In electroporation research we often need to know the percentage of electroporated cells under different experimental conditions. Manual counting of cells in digital images is time-consuming and subjective, especially in phase-contrast images. In this paper we present an automatic cell counting method based on optimizing the parameters of the ITCN (Image-based Tool for Counting Nuclei) algorithm to fit training data derived from user or expert counts. Comparing the results of automatic cell counting with the user's manual counting, an average agreement of 94.21% was achieved.

Keywords— Automated cell counting, phase contrast images, ITCN algorithm, optimization, electroporation.
I. INTRODUCTION

Electroporation (also termed electropermeabilization) is an efficient method for transiently increasing cell membrane permeability. It is characterised by high reproducibility and reversibility. It is highly efficient and does not contaminate the cells with chemical, viral or toxic additives, as several other methods do [1]. Membrane permeabilization is obtained by applying a sufficiently strong external electric field to the cells [2, 3]. By selecting the electric pulse parameters (amplitude, duration, number) we can obtain transient permeabilization which does not affect cell viability, i.e. reversible electroporation. Electroporation is used in many biomedical applications; the most interesting at present are electrochemotherapy of tumors [4] and gene electrotransfer [5]. For the detection of adherent cell permeabilization in vitro, different fluorescent dyes are used [6]. In our experiments we use propidium iodide (PI), a polar fluorescent dye. PI is membrane impermeant and generally excluded from viable cells, but it readily diffuses through a permeabilized cell membrane. Once inside the cell, it binds to DNA [6]. The percentage of permeabilized cells is usually determined by manual counting in phase-contrast images (total number of cells) and the corresponding fluorescence images (number of permeabilized cells); Figure 1 shows an example of these images. Manual counting of the cells is very time-consuming, especially because the statistical approach and the repetition and independence of the experiments mean that we deal with a large number of images. Manual counts are also not very objective: we deal with intra-person and inter-person variation in the number of counted objects. Cell detection in phase-contrast images was found to be the major problem of our task. A typical phase-contrast image of adherent cells has the following characteristics: 1) noise and artifacts; 2) various cell shapes and overlapping cell intensities; 3) inner points within a cell with lower intensity; 4) cells sticking together, so that cell boundaries are not clearly visible [7]. All of these decrease the contrast between cells and background. In addition, equipment-related factors which contribute to the quality of the image, such as uneven illumination and electronic or optical noise, also play an important role in the effective segmentation of a digital image [8]. Several digital image segmentation techniques have been investigated, such as contour-based, region-based and mixed contour-based methods, histogram-based and minimum-error thresholding, and the watershed algorithm [7-10]. None of the above methods can be used directly on cell images [7]. The adopted segmentation approach must be robust to these problems in order to ensure that reliable information about the number of cells is obtained [8]. An algorithm based on a three-parameter cell model, implemented in the ITCN (Image-based Tool for Counting Nuclei, Center for Bio-Image Informatics, University of California) tool, was found to be appropriate. In this paper we introduce a procedure to set the ITCN algorithm's parameters so as to obtain the most accurate cell count possible in a phase-contrast image.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 851–855, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Images captured during an electroporation experiment on adherent cells: phase-contrast image of the cells (left) and corresponding fluorescence image (right).
II. MATERIALS AND METHODS

A. Cell culture

The cell line V-79 (Chinese hamster lung fibroblasts; ECACC, England, EU) was used in the experiments. Cells were grown in Eagle's minimum essential medium supplemented with 10% fetal bovine serum (Sigma, USA) at 37 °C in a humidified 5% CO2 atmosphere in an incubator (Kambic, Slovenia). The images were captured using a cooled CCD camera (Visicam 1280, Visitron, Germany) mounted on a fluorescence microscope (Zeiss AxioVert 200, objective ×20, Zeiss, Germany) and MetaMorph 5.0 software (Molecular Devices Corporation, PA, USA), with an exposure time of 100 ms. During the experiment we captured 40 images, which form our test data. The cell images have a resolution of 640×512 pixels with 256 grey levels (8 bits per pixel).

B. Automatic cell detection

For automatic cell detection, the algorithm implemented in the ITCN tool was used. The algorithm is described in detail in [11] and requires three parameters: (1) cell diameter, (2) minimal cell distance and (3) filter threshold. Objects (cell nuclei) in the image are detected using a template matching approach in which the object model is convolved with the image and the correlation is evaluated at every position. Within ITCN, an inverted Laplacian of Gaussian (LoG) is used as a blob detector, with its diameter proportional to the mean nucleus size. The result of the convolution is a smooth continuous image in which object centers appear as local extrema, which are detected using the minimal cell distance. The procedure is depicted in Figure 2. We also tested the influence of basic image preprocessing
Fig. 2 Cell nuclei detection procedure.
on the final results. For that purpose we used the MATLAB function for contrast-limited adaptive histogram equalization (CLAHE). CLAHE operates on small regions of the image, called tiles, rather than on the entire image. We specified 15 tiles per row and per column, giving a total of 225 tiles, which corresponds to the maximum number of cells in the test images. The contrast of each tile is enhanced so that the histogram of the output region approximately matches the histogram specified by the 'Distribution' parameter; we specified a bell-shaped histogram.

C. Parameter optimization

Due to the small number of free parameters (3), an exhaustive optimization approach was used. The search space was constrained and equally discretised (Table 1):

Table 1 Parameter boundaries and number of samples

Cell diameter (pixels):    [25..35],      11 values
Minimal distance (pixels): [15..25],      11 values
Threshold:                 [0.12..0.20],   9 values
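The detection step described above (convolution with an inverted LoG template, then local-extremum picking constrained by a minimal distance and a threshold) can be sketched roughly as follows. This is an illustrative reimplementation, not the ITCN code: the diameter-to-sigma mapping, the zero-mean kernel normalisation and the threshold scale (relative to the normalised response) are our assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def inverted_log_kernel(diameter):
    """Inverted Laplacian-of-Gaussian blob template; sigma is tied to the
    expected cell diameter (assumed diameter ~ 2*sqrt(2)*sigma)."""
    sigma = diameter / (2 * np.sqrt(2))
    r = int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    s2 = (x**2 + y**2) / (2 * sigma**2)
    log = (1 - s2) * np.exp(-s2) / (np.pi * sigma**4)
    return -(log - log.mean())            # invert and make zero-mean

def detect_cells(img, diameter, min_dist, threshold):
    """Convolve with the template, normalise the response to [0, 1],
    then keep pixels that are local maxima within a min_dist window
    and exceed the filter threshold."""
    resp = convolve(img.astype(float), inverted_log_kernel(diameter))
    resp = (resp - resp.min()) / (np.ptp(resp) + 1e-12)
    peaks = (resp == maximum_filter(resp, size=min_dist)) & (resp > threshold)
    return np.argwhere(peaks)             # (row, col) of detected centres

# synthetic check: two dark Gaussian "cells" on a bright background
yy, xx = np.mgrid[0:80, 0:80]
img = np.ones((80, 80))
for cy, cx in [(20, 20), (55, 60)]:
    img -= 0.8 * np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 3.0**2))
pts = detect_cells(img, diameter=9, min_dist=9, threshold=0.9)
```

The zero-mean kernel gives zero response on flat background, so only blob-like intensity dips survive the threshold.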
The optimization procedure took the following steps:
1.1. a training set (6 images) was randomly selected out of the 40 images;
1.2. a training set (3 images) was selected out of the 40 images by the expert;
2. the number and positions of the cells in those images were determined by hand;
3. in the case of the training data from point 1.2, additional adaptive histogram equalization was performed;
4. for each parameter set of the LoG filter, the image processing of the training set was performed and the number of cells determined; this was compared with the number of real cells (counted by hand), and the mean squared error was computed and stored;
5. the winning set was the one that minimized the mean squared error;
6. in case of multiple winners, a verification was done: each local extremum in the resulting image was checked against the stored cell positions. If there was a match it was labeled TRUE+, otherwise FALSE+. Similarly, we obtained TRUE- and FALSE- cases. The set with the fewest FALSE+ and FALSE- cases was then selected.

We tested the quality of the real cell counts by determining the inter-person error. Ten images were processed (cells counted by hand) by seven people from the Biocybernetics laboratory of the Faculty of Electrical Engineering, University
of Ljubljana. The results were compared to the counts obtained by the reference person. The results of automatic cell detection are presented using the error computed with Equation (1):

e = (Nm - NITCN) / Nm    (1)

where e is the relative number-of-cells error, Nm is the real number of cells, and NITCN is the number of cells obtained by ITCN.
Fig. 4 Evaluation of the algorithm on all images: correlation between manual cell counting and the automated method for all three training sets defined by the user.
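The exhaustive search over the discretised parameter grid described in Section C can be sketched as below. A toy stand-in detector (`toy_detect`, hypothetical and not the ITCN detector) replaces the image processing so that the known optimum can be verified:

```python
import numpy as np
from itertools import product

def grid_search(detect, train_imgs, true_counts,
                diameters, distances, thresholds):
    """Exhaustive search over the parameter grid; the winning triple
    minimises the mean squared error between detected and hand-counted
    cell numbers on the training images."""
    best, best_mse = None, np.inf
    for d, m, t in product(diameters, distances, thresholds):
        counts = [detect(img, d, m, t) for img in train_imgs]
        mse = float(np.mean((np.asarray(counts) - np.asarray(true_counts)) ** 2))
        if mse < best_mse:
            best, best_mse = (d, m, t), mse
    return best, best_mse

# toy detector whose "count" depends linearly on the parameters,
# so the optimum of the search is known exactly
def toy_detect(img, d, m, t):
    return img * d + 10 * m + t

train = [1.0, 2.0]
truth = [toy_detect(x, 30, 20, 0.16) for x in train]
thresholds = [0.12 + 0.01 * i for i in range(9)]      # 0.12 .. 0.20
best, mse = grid_search(toy_detect, train, truth,
                        range(25, 36), range(15, 26), thresholds)
```

With 11 x 11 x 9 = 1089 combinations the search is cheap; with a real detector, each evaluation costs one full image-processing pass per training image, which is why the authors call the approach computationally expensive.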
III. RESULTS

A. Three random training sets

In Figure 3, the results of our automated method run with the three optimal parameter sets obtained from the three randomly selected training sets are compared with the manual counts. The results based on training sets one and two show good correlation with the manually counted cell numbers; the results based on the third training set show a somewhat worse correlation. These observations are confirmed in Table 2. The average relative error with SD over all three training sets is 8.77% ± 6.65%.

Table 3 Trend equation and correlation coefficient between manual counting and automated counting (user-defined training sets)

training data   trend equation   correlation coefficient R2   relative error [%]
1               y = 0.988x       0.938                        5.61 ± 5.71
2               y = 1.008x       0.944                        5.80 ± 6.05
3               y = 1.006x       0.935                        5.97 ± 5.69
Fig. 3 Evaluation of the algorithm on all images: correlation between manual and automatic cell counting for all three training sets.

Table 2 Trend equation and correlation coefficient between manual counting and automated counting (random training sets)

training data   trend equation   correlation coefficient R2   relative error [%]
1               y = 0.988x       0.921                        7.06 ± 5.73
2               y = 1.022x       0.931                        7.47 ± 6.45
3               y = 0.842x       0.846                        11.76 ± 7.65

B. User-defined training sets and histogram equalization

In Figure 4, the results of our automated method run with the three optimal parameter sets obtained from the three user-defined training sets are compared with the manual counts. The results of the automated method show good correlation with the manual counts. These observations are confirmed in Table 3. The average relative error with SD over all three training sets is 5.79% ± 5.82%. For an additional evaluation of our method, we determined the relative error between the user and seven test persons (coworkers). The average relative error with SD was 9.51% ± 4.19%, which is larger than in most cases of automatic counting.
IV. DISCUSSION

First of all, there is no standard method of cell counting against which automated methods can be compared. In the absence of such a standard, expert opinion, with all of its associated subjectivity, represents the standard by which automated methods must be judged [12]. The accuracy of an automated counting method refers to how faithfully the method replicates the expert's count. The main part of the procedure is the selection of the training data, which makes the method robust to different images (depending on quality and object characteristics), but this is also its
critical part. An "unlucky" selection of training data produces poor results, as can be seen in Figure 3 for the third set of training data. We concluded that random selection is not the best way to select training data, and we propose an alternative selection method. With regard to the ITCN parameters, the training data have to be representative in two ways: 1) cell shape, which is connected to the parameter "cell size", and 2) cell density, which is connected to the parameter "minimal cell distance". Furthermore, observation of our test images showed that images with many cells carry information about low values of the minimum cell distance and cell size, because the cells are close together and therefore smaller than in images with few cells; the latter images carry information about high values of the minimum cell distance and cell size. For all of these reasons, we selected three training images based on cell number (high, medium, low). With adaptive histogram preprocessing included, we achieved better results than with randomly selected training data, despite using fewer training images. All three experiments systematically produce the same outliers. There was under-counting, caused by very dense cells that were difficult to distinguish even for the user, and there was also over-counting (double counting of one cell), caused by abnormally large cells and bad focus. If we take into account that the effect of electroporation is also influenced by cell density, and that abnormally large cells in a population show changed viability, we can say that our method warned us that those images (cells) are untypical for our experiment, and for that reason we cannot include them in our results. The phase-contrast images used in testing our method were of quite similar quality.
Because we use training data, we have to emphasize that it is necessary to obtain images of similar quality that do not differ dramatically in illumination and focus. We propose that one set of training data be selected for each experiment. With appropriate training data our method achieves good results. This is even clearer if we compare the inter-person error to the systematic error: the average relative error between the expert and our method (5.79%) is smaller than the inter-person error (9.5%). The error can be decreased significantly if the illumination, the focus, the cell density (not too dense) and the cell morphology (not too large) are carefully controlled during image acquisition. With the outliers removed (5 to 7 images), the average relative error drops to 4.04% ± 3.60%. Our exhaustive search optimization is a primitive one and takes substantial computational time. Its greatest advantage at the moment is that the user has to count only the training images, which in our case means 3 instead of 40 images. The next challenge is to obtain results quickly, if possible in real time. We are planning to employ more sophisticated heuristic optimization procedures, such as genetic algorithms and ant-colony optimization.

V. CONCLUSION

The presented procedure in its current state (basic optimization, basic image preprocessing) produces reasonably good results. If the training set images are selected by a user or expert, the mean error is 5.79%, which is less than the inter-person error (9.5%). The human counting time is substantially reduced, since only the cells in the training set images have to be counted.
ACKNOWLEDGMENT

This research was supported by the Slovenian Research Agency (ARRS). The authors thank Jiyun Byun, Center for Bio-Image Informatics, University of California, for providing the MATLAB source code and for her help in tuning the algorithm's parameters. The authors also thank the students and employees of the Laboratory of Biocybernetics for manual counting of cells, and especially Alenka Macek Lebar for helpful and fruitful discussions during the preparation of this work.
REFERENCES
1. Rols M P (2006) Electropermeabilization, a physical method for the delivery of therapeutic molecules into cells. Biochimica et Biophysica Acta 1758:423-428
2. Macek Lebar A, Sersa G, Cemazar M, Miklavcic D (1998) Elektroporacija. Med Razgl 37:339-354
3. Kotnik T, Macek Lebar A, Kanduser M, Pucihar G, Pavlin M, Valic B, Miklavcic D (2005) Elektroporacija celicne membrane: teorija in poskusi in vitro. Med Razgl 44:81-90
4. Sersa G, Kranjc S, Cemazar M (2000) Improvement of combined modality therapy with cisplatin and radiation using electroporation of tumors. Int J Radiat Oncol Biol Phys 46:1037-1041
5. Pavselj N, Preat V (2005) DNA electrotransfer into the skin using a combination of one high- and one low-voltage pulse. J Controlled Release 106:407-415
6. Kotnik T, Macek Lebar A, Miklavcic D, Mir L M (2000) Evaluation of cell membrane electropermeabilization by means of a nonpermeant cytotoxic agent. Biotechniques 28:921-926
7. Chen Y, Biddell K, Sun A, Relue P A, Johnson J D (1999) An automatic cell counting method for optical images. BMES/EMBS Proc., Atlanta, USA, pp 819
8. Haralick R M, Shapiro L G (1985) Survey: image segmentation techniques. Computer Vision, Graphics, and Image Processing 29:100-132
9. Wu K, Gauthier D, Levine M D (1995) Live cell image segmentation. IEEE Transactions on Biomedical Engineering 42(1)
10. Abriz-Colin F, Torres-Cisneros M, Avina-Cervantes J G, Saavedra-Martinez J E (2006) Detection of biological cells in phase-contrast microscopy images. MICAI Proc., Mexico 2006
11. Byun J, Verardo M R, Sumengen B, Lewis G P, Manjunath B S, Fisher S K (2006) Automated tool for nuclei detection in digital microscopic images: application to retinal images. Molecular Vision 12:949-960
12. Hawkins N, Self S, Wakefield J (2006) The automated counting of spots for the ELISpot assay. Journal of Immunological Methods 316:52-58

Author: Marko Usaj
Institute: University of Ljubljana
Street: Trzaska 25
City: 1000 Ljubljana
Country: Slovenia
Email: [email protected]
Battery powered and wireless Electrical Impedance Tomography Spectroscopy Imaging using Bluetooth
A.L. McEwan1 and D.S. Holder1
1 Medical Physics and Bioengineering, University College London, London, UK
Abstract— A recently developed Electrical Impedance Tomography Spectroscopy (EITS) system, the UCL Mk2.5, was modified to connect to a PC using a Bluetooth radio replacement for an RS232 cable, with power supplied by a 12 V DC battery. The battery-powered wireless system was more robust to variation in contact impedance when used in multifrequency time difference imaging. Additional advantages are safety, through complete isolation, and improved freedom of movement for the subject, which may confer reduced movement artifact and improved electrode contact.

Keywords— EITS, instrumentation, battery powered, wireless, CMRR, noise, imaging, Bluetooth.
I. INTRODUCTION

Electrical Impedance Tomography (EIT) is a medical imaging method in which images are produced rapidly using electrodes placed around the body. Electrical current is injected through one set of electrodes at frequencies ranging from DC to 10 MHz and the resulting potentials are recorded from the other electrodes. EIT is currently limited by instrumentation errors such as stray capacitance, common mode voltages, crosstalk, and load and frequency dependence. A wireless connection between the PC and an EIT system reduces errors such as noise coupled from the power supply and other common mode errors [1]. The purpose of this paper was to measure the imaging improvements in impedance measurement with a battery-powered wireless EIT system, the UCL Mk2.5 [2]. Several authors have suggested the use of a distributed system [3,4] or active electrodes [5,6,7] to mitigate the degradation of signal due to the stray capacitance introduced by the cables used in EIT measurements [9,10]. For example, a recent active-electrode-based system achieved a CMRR of 100 dB, although the differential amplifier and demodulators were not included on the electrode [7]. Wireless EIT is a novel method to reduce common mode errors: it isolates parts of the system by replacing cabling with a wireless link. A recent system from the Korean Impedance Imaging Resource Centre (KIIRC) used an RF wireless link between the system and the PC [11]. This system used up to 32 parallel voltmeter channels, each with an FPGA for demodulation, so that the power supply requirements were too great for battery-powered operation. As a result, a power supply cable still had to be employed.

Expected improvements in common mode error, bandwidth and crosstalk in the UCL Mk2.5 system were previously estimated using circuit simulations and cable measurements [1,2]. Two systems were considered: isolation of the system from the PC with a wireless data link, and active electrodes that operate from independent power supplies. Circuit simulations suggested a 20 dB improvement in common mode error for the PC-isolated system and a 30 dB improvement for the active electrode system, with no degradation in bandwidth or crosstalk [1]. Measured results of the PC-isolated system showed a noise decrease from 0.1% to 0.05% at most frequencies on a resistor phantom and in vivo. The CMRR improved by 2 dB at 20 Hz but showed no improvement above 160 Hz [2]. A significant source of common mode voltage is variation in contact or skin impedance between electrodes, which may vary by 15-20% on the same subject [2]. These impedances combine with the stray capacitance to produce common mode voltages and differential voltage errors. An improvement in stray capacitance may not be seen in a saline tank imaging experiment in which equal contact impedances are used. One method of approximating the real situation is to introduce variation in the contact impedance of the electrodes of a saline tank using discrete components [12]. This paper presents results from a prototype system to assess the benefits of a wireless link between the PC and the EIT system. Bluetooth was used because it is a mature radio technology and drop-in serial cable replacement modules are readily available, requiring no re-design of the system. Battery power was used to ensure complete isolation. The improvement in performance was assessed using a saline-filled tank with discrete components to simulate variations in skin impedance.

II. MATERIALS AND METHODS

A.
System Description

The UCL Mk2.5 is based on a single channel of the Sheffield Mk3.5 system [13] and measures impedance at 30 frequencies between 20 Hz and 1.6 MHz, with a multiplexer
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 798–801, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
to address up to 64 electrodes. The Mk2.5 EITS system conveniently uses a single Universal Serial Bus (USB) cable to provide communication and power from a PC. This connection was replaced by two 6 V batteries providing a 12 V DC supply and a wireless serial cable replacement, the CB-OEMSPA13i from ConnectBlue (www.connectblue.se). This serial port adaptor uses the Bluetooth wireless protocol. The serial connection of the CB-OEMSPA13i was connected to a MAX232 level converter (www.maxim-ic.com), which was used in place of the IL712 isolator (www.nve.com) in the existing Mk2.5. The 12 V DC battery supply was converted to +/-5 V by the existing TEN5 DC-DC converter (TracoPower) in the Mk2.5.

B. Tank Imaging

Cylindrical pieces of banana (dia = 2 cm, l = 5 cm) were introduced into a cylindrical tank (dia = 9 cm, h = 6 cm) containing 0.1% NaCl, with 16 stainless steel electrodes (dia = 1 cm). The ground electrode was placed in the centre of the tank. 50 frames of a polar (diametric) protocol of 448 measurements were acquired and all frames were averaged. Skin impedance was simulated by a parallel capacitor and resistor and a series resistor matched to 1% (Fig. 1), connected to each electrode (Zc). 68 Ω resistors were used in
series with the skin impedance models on electrodes 2, 4, 7, 11 and 16 to introduce a variation in skin impedance of 15% (ΔZc). This test simulates the variation in contact impedance previously found on the human head [2]. The banana was located with its centre at 1.5 cm and 2.5 cm along the diagonal between electrodes 7 and 15. Four multifrequency difference images were acquired and reconstructed using a linear SVD algorithm [14] for each of the wired and wireless configurations. The images showed the impedance difference from the reference conductivity of 0.1% NaCl (0.167 S/m). The four images were: a) banana location 1, equal Zc; b) banana location 1, ΔZc; c) banana location 2, equal Zc; d) banana location 2, ΔZc. Reference frames with the same ΔZc were used for the ΔZc experiments. The impedance of the banana was measured directly using an HP 4284A Impedance Analyser (www.agilent.com). The spectra of the direct HP measurements were compared with the spectrum of the pixel at the centre of the banana for the four images (a, b, c, d), in wired and wireless modes. The centre frequency was calculated as the 3 dB point of this spectrum for comparison.

III. RESULTS

A. Tank Imaging

The images (Fig. 3) in wired and wireless modes appear similar. The more centrally located position 2 showed less impedance change. Differences were seen in the spectrum of the pixel at the centre of the banana (Fig. 2, Fig. 4). In both positions the spectrum of the wireless configuration changed less with variation in contact impedance.
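The linear SVD reconstruction of [14] is, in essence, a truncated-SVD pseudo-inverse applied to a linearised sensitivity matrix: small singular values are discarded to regularise the ill-conditioned inverse problem. A minimal sketch (the "sensitivity matrix" here is random toy data, not a real EIT Jacobian):

```python
import numpy as np

def tsvd_reconstruct(J, v, k):
    """Solve v ~ J c for the conductivity-change vector c using a
    truncated-SVD pseudo-inverse: only the k largest singular values
    are retained."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:k] = 1.0 / s[:k]
    return Vt.T @ (inv_s * (U.T @ v))

# toy check with a well-conditioned system: keeping all singular
# values reproduces the exact least-squares solution
rng = np.random.default_rng(1)
J = rng.random((6, 3)) + np.eye(6, 3)   # stand-in sensitivity matrix
c_true = np.array([0.1, -0.2, 0.05])
v = J @ c_true                          # noiseless boundary measurements
c_hat = tsvd_reconstruct(J, v, k=3)
```

In a real EIT problem the truncation level k (or an equivalent singular-value threshold) trades resolution against noise amplification.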
Fig. 1: Skin impedance model based on measurements on the human head (Zc) [2].
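The contact-impedance model of Fig. 1 (a series resistor feeding a parallel RC branch) can be evaluated across frequency as below. The component values are illustrative guesses read from the partly garbled figure labels (200 Ω series, 430 Ω parallel; the 22 nF capacitance in particular is an assumption) and should not be taken as the paper's calibrated values.

```python
import numpy as np

def z_contact(f, r_series=200.0, r_parallel=430.0, c_parallel=22e-9):
    """|Zc| of a series resistor plus a parallel RC branch at
    frequency f (Hz). All component values are illustrative only."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    z_rc = r_parallel / (1 + 1j * w * r_parallel * c_parallel)
    return np.abs(r_series + z_rc)
```

With these values the magnitude falls from r_series + r_parallel at low frequency, where the capacitor is open, towards r_series alone at high frequency, where the capacitor shorts the parallel resistor; this frequency dependence is what makes mismatched contact impedances produce frequency-dependent common mode errors.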
Fig. 2: Spectra from the pixel at the centre of the banana when imaging with the banana at position 1.
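The centre-frequency comparison described in the Methods (the 3 dB point of a spectrum) can be computed as in this sketch, assuming the 3 dB point is taken relative to the lowest-frequency value:

```python
import numpy as np

def centre_frequency(freqs, mag):
    """First frequency at which `mag` falls 3 dB (a factor of sqrt(2)
    in amplitude) below its lowest-frequency value, with linear
    interpolation between the two bracketing samples."""
    target = mag[0] / np.sqrt(2.0)
    below = np.where(mag <= target)[0]
    if below.size == 0 or below[0] == 0:
        return None                    # spectrum never crosses -3 dB
    i = below[0]
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mag[i - 1], mag[i]
    return f0 + (target - m0) * (f1 - f0) / (m1 - m0)

# sanity check: single-pole low-pass response with a 1 kHz corner
freqs = np.linspace(1.0, 5000.0, 5000)
mag = 1.0 / np.sqrt(1.0 + (freqs / 1000.0) ** 2)
fc = centre_frequency(freqs, mag)
```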
Fig. 3: Time difference images at 2.5 kHz of the banana in two positions, in wired (USB) and battery-powered wireless modes, with constant Zc and with ΔZc (contact impedance imbalance).

Fig. 4: Spectra from the pixel at the centre of the banana when imaging with the banana at position 2.

IV. CONCLUSIONS

Improvements in imaging were found with the wireless system, which is more robust to variation in contact impedance. This is probably due to reduced stray capacitance, the largest source of error in multi-frequency EIT measurements. However, the comparison of centre frequencies showed that the wireless mode does not consistently produce a better estimate in the two positions. This approach has the additional advantages of better safety through complete isolation and improved ease of use in ambulatory patients, and it may yield reduced movement artifact and improved electrode contact, as the system is more mobile. The use of a single wireless connection for the entire circuit only removes the stray capacitance associated with the power and signal leads to a base box, but it may still be the case that this will offer greater improvements when tests are conducted in clinical subjects. In these cases, however, it will be difficult to define the true signal precisely. We plan to evaluate this in human subjects. In the future, we also plan to evaluate a system with independently floating active electrodes, in which the benefits in common mode error may be expected to be greater.

ACKNOWLEDGMENT

This work was supported by an Action Medical Research Fellowship. We would also like to thank Pete Milnes, Rebecca Yerworth, Alexander Birkett and Michael Fill for assistance with the instrumentation.
REFERENCES
1. McEwan A and Holder D S (2006) Instrumentation improvements in battery powered, wireless EITS. IFMBE Proceedings of the World Congress of Medical Physics, Seoul, Korea, Aug 27-Sept 1, 2006
2. McEwan A, Romsauerova A, Yerworth R, Horesh L, Bayford R and Holder D (2006) Design and calibration of a compact multi-frequency EIT system for acute stroke imaging. Physiol Meas 27:S199-S210
3. McEwan A, Yerworth R J, Milnes P, Brown B H and Holder D S (2004) Wireless EIT. XII International Conference on Bioimpedance and Electrical Impedance Tomography, Gdansk, Poland
4. Jossinet J, Trillaud C, Risacher F and McAdams E T (1993) A high frequency electrical impedance tomograph using distributed parallel input channels. Med Prog Technol 19:167-72
5. Boone K G and Holder D S (1996) Current approaches to analogue instrumentation design in electrical impedance tomography. Physiol Meas 17:229-47
6. Jossinet J, Tourtel C and Jarry R (1994) Active current electrodes for in vivo electrical impedance tomography. Physiol Meas 15 Suppl 2a:A83-A90
7. Rigaud B, Yue G H, Chauveau N and Morucci J P (1993) Experimental acquisition system for impedance tomography with active electrode approach. Medical and Biological Engineering and Computing 593-99
8. Li J H, Joppek C and Faust U (1996) Fast EIT data acquisition system with active electrodes and its application to cardiac imaging. Physiol Meas 17 Suppl 4A:A25-A32
9. York T (1996) Custom silicon for tomographic instrumentation. Measurement Science and Technology 7:308-15
10. Brown B H (2003) Electrical impedance tomography (EIT): a review. J Med Eng Technol 27:97-108
11. Oh TI, Lee JS, Woo EJ and Seo JK (2005) Multi-frequency EIT and TAS. Conference on Biomedical Applications of Electrical Impedance Tomography, University College London
12. Schlappa J, Annese E and Griffiths H (2000) Systematic errors in multi-frequency EIT. Physiol Meas 21:111-18
13. Wilson A J, Milnes P, Waterworth A R, Smallwood R H and Brown B H (2001) Mk3.5: a modular, multi-frequency successor to the Mk3a EIS/EIT system. Physiol Meas 22:49-54
14. Vauhkonen M, Lionheart W R, Heikkinen L M, Vauhkonen P J and Kaipio J P (2001) A MATLAB package for the EIDORS project to reconstruct two-dimensional EIT images. Physiol Meas 22:107-11
Author: Alistair McEwan
Institute: University College London
Street: Gower St
City: London
Country: UK
Email: [email protected]
Classification of Prostatic Tissues using Feature Selection Methods
S. Bouatmane1, B. Nekhoul1, A. Bouridane2 and C. Tanougast3
1 Faculté des Sciences de l'Ingénieur, Université de Jijel 18000, Algeria
2 School of Computer Science, Queen's University Belfast, Belfast BT7 1NN, United Kingdom
3 LIEN, Université Henri Poincaré - Nancy I, BP 239, 54506 Vandoeuvre-Lès-Nancy, France
Abstract— This paper proposes the use of sequential feature selection for the classification of prostatic tissues. The technique aims to classify microscopic samples taken by needle biopsy for the purpose of prostate cancer diagnosis. Four major classes, representing different grades of abnormality from normal to cancer (Stroma, BPH, PIN, PCa), have to be discriminated. To achieve this, the same feature vector, based on texture measurements, was derived for each class; Haralick features were used to describe the textures. Sequential forward selection (SFS) and sequential backward selection (SBS) were used to reduce the dimensionality of the generated feature vector to a manageable size. Tests carried out using the k-nearest neighbor (kNN) method have shown that the feature selection algorithms SFS and SBS can significantly improve classification performance.

Keywords— Prostate cancer diagnosis, classification, texture, SFS, SBS, kNN.
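As a rough sketch of the pipeline named in the abstract, the following implements sequential forward selection wrapped around a leave-one-out kNN classifier. All data here are synthetic and hypothetical; Haralick feature extraction from the tissue images is omitted, and the wrapper criterion (leave-one-out accuracy) is our assumption.

```python
import numpy as np

def knn_accuracy(X, y, feats, k=3):
    """Leave-one-out accuracy of a k-NN classifier restricted to the
    feature subset `feats` (Euclidean distance, majority vote)."""
    Xs = X[:, feats]
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                        # leave sample i out
        nn = np.argsort(d)[:k]
        correct += np.bincount(y[nn]).argmax() == y[i]
    return correct / len(y)

def sfs(X, y, n_select, k=3):
    """Sequential forward selection: greedily add the feature that
    most improves the leave-one-out k-NN accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        scores = [(knn_accuracy(X, y, selected + [f], k), f)
                  for f in remaining]
        _, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# synthetic two-class data: feature 0 is informative, 1 and 2 are noise
rng = np.random.default_rng(0)
y = np.array([0] * 20 + [1] * 20)
X = np.column_stack([
    y * 5.0 + 0.1 * rng.standard_normal(40),   # discriminative feature
    rng.standard_normal(40),                   # noise
    rng.standard_normal(40),                   # noise
])
chosen = sfs(X, y, n_select=1)
```

Sequential backward selection is the mirror image: start from the full feature set and greedily remove the feature whose deletion hurts the criterion least.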
I. INTRODUCTION

Over the last decade prostate cancer has become one of the most commonly diagnosed cancers in the male population. About 32,000 men die from it every year in the United Kingdom alone. However, some methods of diagnosis do exist, such as the prostate-specific antigen (PSA) blood test. If this blood test is positive, the urologist often advises that a needle biopsy, in which a tiny piece of tissue is removed from the prostate, be analysed under a microscope by a pathologist to see if there is a cancer [1][2]. The pathologist usually examines the textures and structures present in the samples to make a diagnosis [3]. Since human assessment is more subjective than objective, and in order to reduce the error rate of diagnosis, the idea of introducing computer vision techniques is gaining increasing support. The aim is to use automatic classifiers as a quantitative basis for diagnosis by applying image processing techniques to perform quantitative measurements of relevant features that can discriminate between different grades of malignancy. Significant progress has already been made in the use of computer vision techniques for cytology, e.g. in developing cervical screening devices [4]. However, very little work has been done in applying such techniques to histology. The reason for this is essentially the complexity of histopathological images. Roula [5] used multispectral images to classify prostate samples. Texture features and structural features were used to describe the images. The feature vectors for each band were then combined to give a large-dimension vector. Principal component analysis was used to reduce the dimensionality of the combined feature vector to a manageable size. Tests were assessed using a supervised classical linear discrimination method and very attractive results were obtained.

This paper is organized as follows: Section 2 gives a brief overview of the analysis involved in a biopsy. The feature vector used for class discrimination is explained in Section 3. Section 4 is concerned with the feature selection algorithms used, while Section 5 presents results and discussion. Finally, Section 6 gives a brief summary of the paper.

II. IMAGE ACQUISITION
When analysing a biopsy, four major groups have to be discriminated:
• Stroma (normal muscular tissue).
• Benign Prostatic Hyperplasia: BPH (a benign condition).
• Prostatic Intraepithelial Neoplasia: PIN (a precursor state for cancer).
• Prostatic Carcinoma: PCa (abnormal tissue development).

These samples were routinely assessed by two experienced pathologists and graded histologically as showing Stroma, BPH, PIN and PCa. From these whole sections, sub-images were taken for analysis. In a real application, only a tiny piece of tissue is removed. Thus, the whole sections at our disposal constitute a large amount of data, which was used both for the training and testing steps of the analysis.

III. FEATURE VECTOR

A. Texture features

To identify prostatic patterns, texture features are needed as a discriminative measurement for the samples. Haralick [6] assumed that texture information is sufficiently identified by a matrix indexed by grey levels, whose elements represent the frequency of having two defined grey levels separated by a defined distance in a defined direction. This matrix is called the co-occurrence matrix.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 843–846, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Co(i, j, d, \theta) = \alpha    (1)

The above equation means that there are α pairs of pixels having i and j, respectively, as grey levels and separated by the cylindrical co-ordinate [d, θ]. The values of d for which the GLCM is computed depend on the nature of the texture. Small d values are suitable for fine textures, whereas larger distances are needed to measure coarse textures. For an image of 256 grey levels (Ng = 256), there would be 65536 feature elements to use as a measure for the textures. Therefore the direct use of the co-occurrence matrix is computationally intensive and as such is not practical. Instead, the textures are represented by deriving some more meaningful measurements. A set of features was proposed by Haralick to characterise the homogeneity, the coarseness, the periodicity and the linearity of textures. These features are defined as follows:

Angular Second Moment:

ASM = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j)^2    (2)

Contrast or difference moment:

CON = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i-j)^2 P(i,j)    (3)

Dissimilarity:

DIS = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} |i-j| P(i,j)    (4)

Correlation:

COR = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i-\mu_x)(j-\mu_y) P(i,j) / (\sigma_x \sigma_y)    (5)

where \mu_x, \mu_y, \sigma_x, \sigma_y are the means and the variances of the row and column sums, respectively.

Entropy or randomness:

ENT = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) \log P(i,j)    (6)

Inverse Difference Moment:

IDM = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) / (1 + (i-j)^2)    (7)

B. Structural features

The use of texture features alone is not enough to capture the complexity of the patterns in prostatic neoplasia. The classification of stroma is relatively simple because of its homogeneous nature at low resolution. BPH and PCa present more complex structures, as both can contain glandular areas and nuclei clusters as well. The glandular areas are smaller in regions exhibiting PCa and the nuclear clusters are much larger. The PIN pattern is an intermediate state between BPH and PCa. It appears that accurate classification requires the quantification of these differences. Segmenting the glandular and the nuclear areas could achieve this quantification, as the glandular areas are lighter compared to the surrounding tissue, while the nuclear clusters are darker. From the segmented images, two features, f1 and f2, can be computed [5] as follows:

f_1 = N / W^2    (8)

f_2 = G / W^2    (9)

where G is the number of pixels segmented as glandular area, N the number of pixels classified as nuclear area, and W the size of the analysis window. These two features quantify how much nuclear cluster and glandular area are present in the samples.

IV. SEQUENTIAL FORWARD SELECTION AND SEQUENTIAL BACKWARD SELECTION

The total number of features used is 128. The 6 texture and 2 structural features are computed across all the multispectral bands, giving a total of 8 for each band. 16 channels have been used, which makes the total number of features 8 × 16 = 128. The analysis of such a large vector is computationally intensive. Furthermore, the accurate estimation of statistical parameters requires the ratio (number of variables / number of samples) to be as low as possible [7]. Therefore a data reduction step, such as feature selection, is required. The problem of feature selection is defined as follows: let Y be the original set of features, with cardinality n, and let d represent the desired number of features in the selected subset X, X ⊆ Y.
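The co-occurrence matrix of Eq. (1) and the Haralick features of Eqs. (2)–(7) can be sketched in Python as follows (a minimal illustration using NumPy on a toy image; the names `glcm` and `haralick` are ours, not from the paper):

```python
import numpy as np

def glcm(img, d=1, theta=0.0, levels=8):
    """Grey-level co-occurrence matrix Co(i, j, d, theta), Eq. (1):
    counts pixel pairs with grey levels i and j separated by the
    displacement (d, theta), then normalises so entries act as P(i, j)."""
    dy = int(round(d * np.sin(theta)))
    dx = int(round(d * np.cos(theta)))
    co = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                co[img[y, x], img[y2, x2]] += 1
    return co / co.sum()

def haralick(P):
    """The six texture features of Eqs. (2)-(7) from a normalised GLCM."""
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    px, py = P.sum(1), P.sum(0)                      # row / column sums
    mx, my = (np.arange(n) * px).sum(), (np.arange(n) * py).sum()
    sx = np.sqrt(((np.arange(n) - mx) ** 2 * px).sum())
    sy = np.sqrt(((np.arange(n) - my) ** 2 * py).sum())
    nz = P > 0                                       # avoid log(0) in ENT
    return {
        "ASM": (P ** 2).sum(),                                  # Eq. (2)
        "CON": ((i - j) ** 2 * P).sum(),                        # Eq. (3)
        "DIS": (np.abs(i - j) * P).sum(),                       # Eq. (4)
        "COR": ((i - mx) * (j - my) * P).sum() / (sx * sy),     # Eq. (5)
        "ENT": -(P[nz] * np.log(P[nz])).sum(),                  # Eq. (6)
        "IDM": (P / (1.0 + (i - j) ** 2)).sum(),                # Eq. (7)
    }

img = np.random.randint(0, 8, (32, 32))    # toy 8-level image
feats = haralick(glcm(img))
```

In practice the image would be quantised to Ng grey levels first, and the matrix computed for several (d, θ) pairs, as the paper notes that the appropriate d depends on the coarseness of the texture.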
Let the feature selection criterion function for the set X be represented by J(X). Without any loss of generality, let us consider a higher value of J to indicate a better feature subset. Since we are maximizing J(·), one possible criterion function is (1 − p_e), where p_e denotes the probability of error. The use of the probability of error as a criterion function makes feature selection dependent on the specific classifier used and on the size of the training and test data sets. Formally, the problem of feature selection is to find a subset X ⊆ Y such that

|X| = d  and  J(X) = \max_{Z \subseteq Y, |Z| = d} J(Z)    (10)

This procedure can reduce not only the cost of recognition, by reducing the number of features that need to be collected, but in some cases it can also provide a better classification accuracy due to finite sample size effects [8]. The process of choosing an appropriate criterion function is known as feature evaluation. The main goal of this process is to measure the "goodness" of a subset produced by some generation procedure. In this context, goodness means the capability of a feature subset to distinguish the different class labels and the ability to provide compact and maximally distinct descriptions of every class. There are two approaches: wrapper and filter. A wrapper uses the classifier algorithm itself to evaluate the usefulness of features, while a filter evaluates features according to heuristics based on general characteristics of the data. In this work, Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) were used. The idea of SFS is to select the best single feature and then add one feature at a time which, in combination with the already selected features, maximizes the criterion function. SBS starts with all the features and successively deletes one feature at a time [8]. To assess the "goodness" of a feature subset, two criterion functions were used: the Mahalanobis distance and the one-nearest-neighbour classifier (1-NN), which correspond to the filter and wrapper approaches, respectively. The Mahalanobis distance is defined as follows:

distance = (\mu_i - \mu_j)^t \Sigma^{-1} (\mu_i - \mu_j)    (11)

where \mu_i and \mu_j are the mean vectors of classes i and j, respectively, while \Sigma is the total covariance matrix.

V. RESULTS AND DISCUSSION

The analysis has been carried out on 592 sample images (128 by 128 pixels), chosen to reflect different grades of malignancy in prostatic tissues. They have been assessed by two experienced pathologists and labelled into 4 groups: 165 cases of Stroma, 106 cases of BPH, 144 cases of PIN, and 177 cases of PCa. The k-nearest-neighbour method has been used as a supervised classification algorithm, applied on the feature vector after SFS and SBS. The assessment of the classification results has been made using the leave-one-out test method, which is a special case of the cross-validation test. In the cross-validation test, the data set of size n is randomly divided into m disjoint sets of equal size n/m. The classifier is trained m times, each time with a different set used as a validation set. The estimated performance is simply the mean of these m errors. In the leave-one-out test method, m is set to n [9]. The application of the two algorithms SFS and SBS has shown that the first 15 selected features give a better classification (lower overall classification error). SFS and SBS have comparable performance, but better results have been obtained with SFS. Tables 1 and 2 present the best results obtained by the application of SFS with the Mahalanobis distance and the 1NN criterion functions, respectively, using the 15 selected features. The use of the 1NN classifier as criterion function gives a better classification compared to the Mahalanobis distance, and a comparable one, for almost all the prostate cancer classes, with respect to the classification carried out by Roula's method mentioned above (Table 3) [5].
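The SFS procedure with the 1-NN wrapper criterion evaluated by leave-one-out, as described above, can be sketched as follows (a minimal sketch on synthetic data; the function names and the toy two-feature data set are illustrative, not from the paper):

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-NN classifier: each sample is
    classified by its nearest neighbour among all the other samples,
    so m (the number of folds) equals n (the number of samples)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)          # a sample may not vote for itself
    return np.mean(y[D.argmin(axis=1)] == y)

def sfs(X, y, d, criterion=loo_1nn_accuracy):
    """Sequential forward selection: start from the empty set and
    repeatedly add the single feature that, combined with the features
    already selected, maximises the criterion function J."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < d:
        scores = [(criterion(X[:, selected + [f]], y), f) for f in remaining]
        _, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# toy data: two Gaussian classes; feature 0 is informative, feature 1 is noise
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal([4, 0], 1, (40, 2))])
y = np.repeat([0, 1], 40)
subset = sfs(X, y, d=1)     # SFS keeps the informative feature
```

SBS is the mirror image: start from the full feature set and repeatedly delete the one feature whose removal maximises J. A filter variant would replace `loo_1nn_accuracy` with a Mahalanobis-distance criterion as in Eq. (11).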
Table 1 Cross validation table for the Mahalanobis distance criterion function

Classified as:   BPH   PCa   PIN   Stroma   Error
BPH              94    5     0     7        11.32%
PCa              2     171   3     1        3.38%
PIN              2     3     131   7        9.02%
Stroma           5     0     0     160      3.03%
Overall                                     6.08%
Table 2 Cross validation table for the 1NN classifier criterion function

Classified as:   BPH   PCa   PIN   Stroma   Error
BPH              101   0     0     5        4.71%
PCa              1     174   2     0        1.69%
PIN              0     2     137   5        4.86%
Stroma           5     0     0     160      3.03%
Overall                                     3.37%
Table 2 shows a very low classification error rate. It shows that cancer samples have been accurately classified; this is due to the fact that PCa has, among all the present patterns, the most regular texture and spatial nucleus distribution. BPH and PIN present the highest error rates. Conversely, these patterns present the highest variability: glandular areas can be of different sizes and shapes, which makes some cases difficult to classify even for an experienced pathologist. Nonetheless, the overall classification rate is still very satisfactory. It appears that the wrapper approach provides a better estimate of accuracy for a feature subset, but involves a computational overhead.

Table 3 Cross validation table for Roula's method [5]

Classified as:   BPH   PCa   PIN   Stroma   Error
BPH              100   3     2     1        5.66%
PCa              1     174   1     1        1.69%
PIN              1     4     133   6        7.63%
Stroma           11    1     1     152      7.87%
Overall                                     5.57%
VI. ROC ANALYSIS

Receiver Operating Characteristic (ROC) curves are often used to assess the performance of classifiers in clinical practice. To plot a ROC curve, the classifier should be tested using different parameters, resulting in different values of the false alarm (false positive, FP) and sensitivity (true positive, TP) rates. Since we use a kNN classifier, instead of producing a 0-1 decision for each case we have modified the algorithm to produce a numeric rating. For the 1NN and 3NN classifiers, we used the Euclidean distance from the query (test instance) to the nearest neighbours of the positive class. The kNN classifier output is more reliable if this distance is low, so a positive verification has been declared when the output value is smaller than an acceptance threshold. By varying the threshold in the classification task, different values of FP and TP are obtained. This is achieved by considering BPH as the negative diagnosis while PCa and PIN form the positive diagnosis outcome (see Fig. 1).

Fig. 1 ROC curves obtained (1NN and 3NN classifiers)

VII. CONCLUSION

This paper has addressed the application of sequential feature selection to the classification of prostate microscope samples. The main idea was to derive relevant texture and structural features; sequential feature selection is then applied to reduce the number of features and select the relevant ones. The k-nearest neighbour classifier was applied to assess the classification. Results have shown a good classification rate for all four classes in general, with an overall accuracy above 96%.

REFERENCES

1. McNeal J. E.: Normal and Pathologic Anatomy of Prostate. Supplement to Urology 17(3) (1981) 11-16
2. Bostwick D.G., Eble J.N.: Urologic Surgical Pathology. Mosby Year Book, Inc. (1997)
3. McNeal J. E.: Prostate Histology for Patho. Raven Press (1992) pp. 749-763
4. Keenan S. J., Diamond J., McCluggage W.G., Bharucha H., Thompson D., Bartels P.H., Hamilton P.W.: An Automated Machine Vision System for the Histological Grading of Cervical Intraepithelial Neoplasia (CIN). Journal of Pathology 192 (2000) 351-362
5. Roula M. A.: Machine Vision and Texture Analysis for the Automated Identification of Tissue Patterns in Prostatic Tumours. PhD Thesis, School of Computer Science, Queen's University Belfast (2004)
6. Haralick R.M.: Statistical and Structural Approaches to Texture. Proc. of the IEEE 67 (1979) 786-804
7. Jain A.K., Duin R.P.W., Mao J.: Statistical Pattern Recognition: A Review. IEEE Trans. Pattern Analysis and Machine Intelligence 22(1) (2000) 4-37
8. Jain A.K., Zongker D.: Feature Selection: Evaluation, Application, and Small Sample Performance. IEEE Trans. Pattern Analysis and Machine Intelligence 19(2) (1997) 153-158
9. Fukunaga K., Kessell D.L.: Estimation of Classification Error. IEEE Transactions on Computers 20 (1971) 1521-1527
Estimation method for brain activities influenced by the blood pulsation effect

W. H. Lee1, J. H. Ku1, H. R. Lee1, K. W. Han1, J. S. Park1, J. J. Kim2, I. Y. Kim1, and S. I. Kim1

1 Department of Biomedical Engineering, Hanyang University, Seoul, Korea
2 Institute of Behavioral Science in Medicine, Yonsei University Severance Mental Health Hospital, GyeongGi-Do, Korea
Abstract— BOLD T2*-weighted MR images reflect cortical blood flow and oxygenation alterations. fMRI studies rely on the detection of localized changes in BOLD signal intensity. Since fMRI measures the very small modulations in BOLD signal intensity that occur during changes in brain activity, it is also very sensitive to small signal intensity variations caused by physiologic noise during the scan. Due to the complexity of the movement of various organs associated with the heart beat, it is more important to reduce cardiac-related noise than other physiological noise, which can be handled with relatively simple methods. Therefore, a number of methods have been developed for the estimation and reduction of cardiac noise in fMRI studies, but each method has limitations. In this study, we propose a new estimation method for brain activities influenced by the blood pulsation effect, using regression analysis between the blood pulsation signal and the corresponding slice of fMRI. We found that the right anterior cingulate cortex, right olfactory cortex and left olfactory cortex were largely influenced by the blood pulsation effect with the new method. These areas lie mostly on the structure of the anterior cerebral artery in the brain. This supports the validity of our method, which is easier to apply in practice and reduces the computational burden compared with the retrospective method.

Keywords— fMRI, blood pulsation, PPG, estimation
I. INTRODUCTION

Functional magnetic resonance imaging (fMRI) is the use of MR imaging to noninvasively map human brain function without the use of exogenous contrast agents [1]. The basis of this technique is the blood oxygenation level dependent (BOLD) contrast that derives from the fact that deoxyhemoglobin acts as an endogenous paramagnetic contrast agent [2]. Therefore, changes in the local concentration of deoxyhemoglobin within the brain lead to alterations in the magnetic resonance signal. Neuronal activation within the cerebral cortex leads to an increase in blood flow without a commensurate increase in oxygen extraction [3]. fMRI studies rely on the detection of localized changes in BOLD signal intensity [4]. Since fMRI measures the very small modulations in BOLD signal intensity that occur during changes in brain activity, it is also very sensitive to small signal intensity variations caused by physiologic noise during the scan [5]. Artifacts due to gross physiological noise have been recognized as one possible source of false activation [6]. The components of physiologic noise are head movement, fluctuations in cerebral metabolism, cerebral blood flow, cerebral blood volume, respiration, and cardiac pulsation [7]. Due to the complexity of the movement of various organs associated with the heart beat, it is difficult, if not impossible, to describe these motions in an exact quantitative manner [4]. Therefore, a number of methods have been developed for the estimation and reduction of cardiac noise in fMRI studies. Recently, increasing attention has also been focused on detecting functional connectivity in the resting state [8]. Because the signal change due to cortical function is slight, typically ranging from 1 to 2%, BOLD signal changes are more sensitive to physiologic noise in the resting state than in experimental fMRI [9]. There have been several studies on the estimation and reduction of cardiac noise. These include the band-rejection filter [10], the adaptive filter [11], image-based retrospective correction (RETROICOR), k-space based retrospective correction (RETROKCOR), and navigator-echo based correction (DORK) [4,12,13]. Filtering approaches require that the cardiac and respiratory signals not be aliased, something typically not achieved with the repetition times used for whole-head fMRI [7]. The navigator echo technique only reduces the phase-encoding artifacts in a single image and does not directly correct the image-to-image variations; therefore, it is not applicable to echo-planar imaging (EPI) [14]. Retrospective approaches have involved the use of independently measured physiologic cycles with retrospective binning, but they have the limitation of requiring extra subject setup for the monitoring equipment and more cumbersome post-processing [4]. In this study, we propose a new estimation method, which does not require cumbersome post-processing, for brain activities influenced by the blood pulsation effect.

II. METHOD

A. Subjects

Twelve healthy volunteers (average age: 23.08, range: 21–31, SD: 3.46), 10 male and 2 female, were recruited for this study. All were free of neurological or psychiatric illness.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 839–842, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
B. Data Acquisition

For each subject, whole brain images were obtained and the cardiac cycle was monitored while the subject was at rest for 5 minutes. Imaging was done on a 1.5-T MRI system (Sigma Eclipse, GE Medical Systems). BOLD signals were obtained using an EPI sequence (gradient echo, 64×64×30 matrix with 3.75×3.75×5-mm spatial resolution, TE: 14.3, TR: 2 s, FOV: 240 mm, slice thickness: 5 mm, FA: 90, number of slices: 30). A series of high-resolution anatomical images was also acquired with a fast spoiled gradient echo sequence (256×256×116 matrix with 0.94×0.94×1.50-mm spatial resolution, FOV: 240 mm, thickness: 1.5 mm, TR: 8.5 s, TE: 1.8 s, FA: 12, number of slices: 116). The cardiac cycle was monitored using a fiber-optic photo pulse sensor placed on the left index finger to measure the photoplethysmography (PPG). The cardiac signals were recorded at a rate of 200 samples/s using an MP100 system (BIOPAC Co.).

C. Data Analysis
1. Preprocessing

Data analysis was conducted with AFNI (Analysis of Functional NeuroImages, ver. 2006_06_30_1332), freeware developed by R.W. Cox [15]. For each task, the first five time points in all the time series data were discarded to eliminate the fMRI signal decay associated with magnetization reaching equilibrium. All remaining fMRI data were co-registered to the first remaining time sample to correct for the confounding effects of small head motions during task performance. For all subjects, head motion was less than 1 mm throughout. Then, 'spike' values were corrected in the 3D+time input dataset using the despike routine provided in AFNI. Further processing included temporal smoothing (three-point low-pass filter: 0.15×(a−1) + 0.7×(a) + 0.15×(a+1)) as well as detrending to remove constant, linear and quadratic trends from the time series data, followed by estimation of the blood pulsation effect. Spatial normalization was performed to transform into Talairach space using the Montreal Neurological Institute (MNI) N27 template provided in AFNI (bilinear interpolation, spatial resolution: 2×2×2 mm³). Further processing included spatial smoothing (Gaussian filter with 9-mm full-width at half-maximum [FWHM]). For the group analysis, we performed a one-sample t-test at p < 0.001.

2. Method for estimating blood pulsation effect

In order to estimate the blood pulsation effect, we hypothesize that there is no timing difference between the arrival of the pressure wave at the finger and at the cerebral tissue [9], and that blood mostly flows in the inferior-to-superior direction. The key idea in estimating blood pulsation effects on the fMRI signal is to consider the blood pulsation signal measured by the PPG device together with the corresponding slice of fMRI. First of all, the PPG signal is resampled at intervals of 2/30 s (= TR / number of slices in a volume) to synchronize with each slice acquisition time, and the whole brain volume dataset is separated into per-slice datasets. The first value of the resampled PPG signal then indicates the blood pulsation influencing the first slice, the next value indicates the blood pulsation effect on the second slice, and so on. In this manner, we estimated the blood pulsation effect of each slice using regression analysis between the resampled blood pulsation signal and the fMRI signal change of the corresponding slice. Finally, the separated per-slice result datasets were reunited into a whole brain volume dataset (Fig. 1).

The retrospective estimation method was used to verify the efficiency of the new method. To get an estimate of the shape of the cardiac cycle-induced change, the data for each voxel are fit to the Fourier series (1) [4]:

f(\theta) = a_0 + \sum_{n=1}^{3} (a_n \cos(n\theta) + b_n \sin(n\theta))    (1)

III. RESULTS

The areas influenced by the blood pulsation effect were the right anterior cingulate cortex, the left olfactory cortex and the right olfactory cortex (Fig. 2). In superior and posterior regions of the brain the effects of cardiac activity are undetected. Fig. 3 shows the blood pulsation effect at the ACC estimated using the retrospective method; the shape of the BOLD signal change is analogous to the cardiac cycle.

Fig. 1 Estimating blood pulsation effect (the PPG signal is filtered and resampled per slice, and each resampled PPG series is deconvolved against the corresponding slice time series, for slices 1 to 30)
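The slice-wise estimation described above can be sketched as follows (a toy illustration with a synthetic PPG and one synthetic voxel time series; TR = 2 s, 30 slices and a 200 samples/s PPG follow the paper, while all names and the simulated signals are our own assumptions):

```python
import numpy as np

TR, N_SLICES, N_VOL, FS = 2.0, 30, 150, 200      # scan and PPG parameters

# synthetic input: PPG sampled at 200 Hz (~66 bpm pulse wave)
rng = np.random.default_rng(0)
t_ppg = np.arange(int(N_VOL * TR * FS)) / FS
ppg = np.sin(2 * np.pi * 1.1 * t_ppg)

def slice_regressors(ppg, fs, tr, n_slices, n_vol):
    """Resample the PPG at each slice's acquisition time: slice s of
    volume v is acquired at t = v*TR + s*(TR/n_slices), i.e. the PPG is
    sampled every TR/n_slices seconds and split slice by slice."""
    dt = tr / n_slices                            # 2/30 s between slices
    idx = (np.arange(n_vol * n_slices) * dt * fs).astype(int)
    resampled = ppg[idx]
    return resampled.reshape(n_vol, n_slices).T   # one regressor per slice

def pulsation_beta(slice_ts, regressor):
    """Least-squares regression of a slice time series on the resampled
    PPG; the slope quantifies the blood pulsation effect in that slice."""
    A = np.column_stack([np.ones_like(regressor), regressor])
    beta, *_ = np.linalg.lstsq(A, slice_ts, rcond=None)
    return beta[1]

regs = slice_regressors(ppg, FS, TR, N_SLICES, N_VOL)
# toy voxel in slice 0 whose signal partly follows the pulse wave, plus noise
ts = 100 + 2.0 * regs[0] + rng.normal(0, 0.5, N_VOL)
beta = pulsation_beta(ts, regs[0])
```

In a real analysis this regression would be run for every voxel of every slice, and the per-slice statistical maps recombined into a whole-brain volume as in Fig. 1.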
Fig. 2 Brain activity influenced by blood pulsation effects: right anterior cingulate cortex [3, 27, 4], left olfactory cortex [-5, 7, -7], right olfactory cortex [14, 5, -7]

IV. DISCUSSION

In this paper a new method for estimating blood pulsation effects has been proposed. Experimental results have shown that this method detects the blood pulsation effect without cumbersome post-processing. Therefore, estimating the blood pulsation effect in each slice of every volume during data analysis is easier to apply in practice and reduces the computational burden compared with the retrospective method. The right anterior cingulate cortex, left olfactory cortex and right olfactory cortex lie along the anterior cerebral artery in the anterior interhemispheric fissure in the frontal lobes [9]. The areas exhibiting the blood pulsation effect are generally proximal to the major arteries.

In superior regions of the brain the effects of cardiac activity are undetected. This is expected due to the smaller vessels present in these regions and the dampening that occurs along the cerebral circulatory system, reducing the periodic pressure and velocity fluctuations within the minor arterial branches [9]. Moreover, in posterior regions of the brain the effects of cardiac activity are not strong. This is expected because cardiac effects depend on the location of the slice in the superior-inferior direction. During the systolic phase of the cardiac cycle, the sudden pressure increase within the cerebral vasculature causes an intracranial pressure wave that moves along the cerebral arterial tree in a fraction of a second [16,17]. As a result, the arteries in the frontal lobe expand first, and the arteries in the posterior parts of the brain expand subsequently. This delayed influx of new blood is the reason why the effect of cardiac activity is undetected in posterior regions. The proposed method is simple and does not require cumbersome post-processing, but it has the limitation that it may not correctly reflect blood pulsation effects governed by artery distribution and structure. In a next study we will correct the blood pulsation effect in the resting-state network and compare it with the uncorrected network. In resting-state functional connectivity studies, the anterior cingulate cortex is a particularly important brain region [18], yet it is influenced by blood pulsation. Therefore, resting-state networks involving the anterior cingulate cortex should take the blood pulsation effect into account.
ACKNOWLEDGMENT This work was supported by grant No. (R01-2005-00010963-0) from the Basic Research Program of the Korea Science & Engineering Foundation.
Fig. 3 Graph of the estimated BOLD signal change over the cardiac cycle for a voxel in the ACC

REFERENCES

1. Ogawa S, Tank DW, Menon R, Ellermann JM, Kim SG, Merkle H et al. (1992) Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proceedings of the National Academy of Sciences of the United States of America 89:5951-5955
2. Frahm J, Bruhn H, Merboldt KD, Hanicke W (1992) Dynamic MR imaging of human brain oxygenation during rest and photic stimulation. J Magn Reson Imaging 2:501-505
3. Fox PT, Raichle ME (1986) Focal physiological uncoupling of cerebral blood flow and oxidative metabolism during somatosensory stimulation in human subjects. Proceedings of the National Academy of Sciences of the United States of America 83:1140-1144
4. Hu X, Le TH, Parrish T, Erhard P (1995) Retrospective estimation and correction of physiological fluctuation in functional MRI. Magn Reson Med 34:201-212
5. Friston KJ, Williams S, Howard R, Frackowiak RS, Turner R (1996) Movement-related effects in fMRI time-series. Magn Reson Med 35:346-355
6. Hajnal JV, Myers R, Oatridge A, Schwieso JE, Young IR, Bydder GM (1994) Artifacts due to stimulus correlated motion in functional imaging of the brain. Magn Reson Med 31:283-291
7. Barry RL, Menon RS (2005) Modeling and suppression of respiration-related physiological noise in echo-planar functional magnetic resonance imaging using global and one-dimensional navigator echo correction. Magn Reson Med 54:411-418
8. Jiang T, He Y, Zang Y, Weng X (2004) Modulation of functional connectivity during the resting state and the motor task. Human Brain Mapping 22:63-71
9. Dagli MS, Ingeholm JE, Haxby JV (1999) Localization of cardiac-induced signal change in fMRI. NeuroImage 9:407-415
10. Biswal B, DeYoe AE, Hyde JS (1996) Reduction of physiological fluctuations in fMRI using digital filters. Magn Reson Med 35:107-113
11. Deckers RH, van Gelderen P, Ries M, Barret O, Duyn JH, Ikonomidou VN et al. (2006) An adaptive filter for suppression of cardiac and respiratory noise in MRI time series data. NeuroImage 33:1072-1081
12. Glover GH, Li TQ, Ress D (2000) Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magn Reson Med 44:162-167
13. Pfeuffer J, Van de Moortele PF, Ugurbil K, Hu X, Glover GH (2002) Correction of physiologically induced global off-resonance effects in dynamic echo-planar and spiral functional imaging. Magn Reson Med 47:344-353
14. Hu X, Kim S-G (1994) Reduction of physiological noise in functional MRI using navigator echo. Magn Reson Med 31:495-503
15. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29(3):163-173
16. Feinberg DA, Mark AS (1987) Human brain motion and cerebrospinal fluid circulation demonstrated with MR velocity imaging. Radiology 163:793-799
17. Greitz D (1993) Cerebrospinal fluid circulation and associated intracranial dynamics. A radiologic investigation using MR imaging and radionuclide cisternography. Acta Radiologica 386:1-23
18. Greicius MD, Krasnow B, Reiss AL, Menon V (2003) Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. Proceedings of the National Academy of Sciences of the United States of America 100:253-258

Author: Jeonghun Ku, Ph.D.
Institute: Department of Biomedical Engineering, Hanyang University
City: Seoul
Country: Korea
Email: [email protected]
Evaluation of peptides tagged nanoparticle adhesion to activated endothelial cells K. Rhee1, H.J.Moon2, K.S. Park2 and G. Khang3 1
Myongji University/Department of Mechanical Engineering, Professor, Yongin, Korea 2 KIST/Biomedical Research Center, Research Scientist, Seoul, Korea 3 Kyung Hee University/Department of Biomedical Engineering, Professor, Yongin, Korea

Abstract— Various nanoparticles have recently been developed for diagnostic and therapeutic purposes. We aim to develop polymer nanoparticles functionalized with peptides that bind specifically to atherosclerosis. Hydrophobically modified glycol chitosan (HGC) nanoparticles are used as carriers, and a peptide showing specific binding to atherosclerotic plaque is screened using phage display. We have developed atherosclerosis-specific nanoparticles and examined their binding characteristics on activated endothelial cells in vitro. The peptide-tagged nanoparticles bound more avidly to activated endothelial cells than to unactivated endothelial cells, and their adhesion increased with increasing nanoparticle concentration.

Keywords— nanoparticle, atherosclerosis, target specificity, molecular imaging
I. INTRODUCTION

Various nanoparticles have recently been developed for diagnostic and therapeutic use owing to advances in nanotechnology [1,2]. Nanoparticles made of novel materials, including quantum dots [3], polymers [4] and magnetofluorescent particles [5], can be functionalized by surface modification in order to bind to the surface of target cells and tissues. Target specificity of nanoparticles can enhance the efficiency of drug delivery and diagnosis while reducing side effects such as toxicity and immunogenicity. Specific binding can be achieved by conjugating nanoparticles with peptides. Peptides that bind specifically to a disease can be screened by phage display without prior knowledge of the target molecules expressed on the target cells [6]. We aim to develop polymer nanoparticles functionalized with peptides that bind specifically to atherosclerosis. Hydrophobically modified glycol chitosan (HGC) nanoparticles are used as carriers because of their blood compatibility, adequate circulation retention time, and high endothelial cellular uptake [7]. HGC can also imbibe hydrophobic drugs and release them in a sustained manner [8]. A peptide showing specific binding to atherosclerotic plaque is screened using phage display. In this study, we have developed atherosclerosis-specific nanoparticles and
examined their binding characteristics on activated endothelial cells in vitro.

II. MATERIALS AND METHODS

A. Endothelial cell culture and activation

Bovine aortic endothelial cells (BAECs; a kind gift from Dr. B.H. Lee, Kyungbuk National University) were cultured in tissue culture flasks in Dulbecco's Modified Eagle Medium (DMEM-low glucose). Unless otherwise stated, all other reagents were from Sigma-Aldrich (St. Louis, MO). BAECs were grown to monolayers. Growth medium was aspirated, and the cell monolayer was washed with Dulbecco's phosphate buffered saline (DPBS without Ca2+ and Mg2+). After aspirating the DPBS, 3.0 ml of trypsin-EDTA was added to the flask. After a 5 minute incubation, cells were detached. The cells were flushed with a 10 ml pipette several times and transferred to a 15 ml tube. The cells were centrifuged at 1200 rpm for 3 min, and the supernatant was discarded. The cells were resuspended in the medium and seeded in 8-well and 24-well cell culture clusters at densities of 1×10^4 and 5×10^4 cells per well, respectively. Cells were allowed to grow for 1 day and monitored for uniformity. Recombinant human tumor necrosis factor-α (TNF-α) was obtained from Chemicon International (Millipore Corp., Billerica, MA). Cells were treated with 10 ng/mL of TNF-α for 6 h for activation.

B. Preparation of fluorophore-labeled antibodies

Mouse anti-human CD62E (E-selectin), CD54 (ICAM-1) and CD106 (VCAM-1) antibodies were obtained from Chemicon International (Millipore Corp., Billerica, MA). Each antibody (500 μg) was separately transferred into a Centricon (MWCO 100 kDa) containing 1 mL DPBS (pH 7.4). Antibody solutions were concentrated by centrifugation at 1,500 rpm for 20 min in a cooled chamber (4 °C). After two more washes with DPBS at 4 °C, the concentrated solutions were transferred into 1.5 mL microcentrifuge tubes and the final volume was adjusted to 1 mL by adding cold DPBS.
An NHS-activated cyanine dye (Cy™ 5.5 mono NHS ester) was obtained from Amersham Bioscience (GE
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 835–838, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Healthcare, Piscataway, NJ). 1 mg/mL Cy™ 5.5 mono NHS ester in anhydrous DMSO (10 mg/mL) was added. The reaction was performed at 4 °C in the dark. After 5 h, each solution was transferred into a Centricon (MWCO 100 kDa) and unreacted dye was removed by centrifugation at 1,500 rpm (40 min, 4 °C). After washing with another 6 mL DPBS per Centricon, the concentrated antibody solutions were transferred into 1.5 mL microcentrifuge tubes and the volume adjusted to 1 mL. Labeled antibodies were further purified on a desalting column (D-Salt™ polyacrylamide 6000, 10 mL bed; Pierce Biotechnology, Inc., Rockford, IL). Cold DPBS (pH 7.4) was used as the eluent. Each 1 mL elution fraction was collected and its UV absorbance measured at 695 nm (A695) for Cy5.5. The elution fractions in which the absorbance peaks of protein and dye overlapped were pooled and concentrated to 1 mL using a Centricon (MWCO 100 kDa). Labeled antibodies were stored at 4 °C until further experiments.

C. Adhesion molecule expression

Prepared cells were seeded (1×10^5) in 60 mm dishes and cultured for 3 days. As controls, dishes treated with 25 μL DPBS were also prepared. After 4 h, the media of all dishes were removed and BAECs were rinsed twice with DPBS. To each dish, 2 mL of formaldehyde (FA)/glutaraldehyde (GA) solution (4 and 0.02 %, v/v, in DPBS) was added and incubated for 5 min at room temperature. After washing twice with DPBS, 2 mL of DAPI solution in PBS (3 μM) was applied to each dish and BAECs were incubated for 5 min at RT. Stained cells were rinsed twice with DPBS, and each dish was filled with 2 mL of HEPES-buffered Krebs' (HK) solution (5 mM KCl, 1 mM NaH2PO4, 1 mM MgCl2, 130 mM NaCl, 2 mM CaCl2, 5 mM NaHCO3, 10 mM HEPES, pH 7.4) containing 1 g/L D-glucose and 1% (w/v) bovine serum albumin (BSA). Simultaneously, 5 μg of antibody was added. Incubation was performed for an hour at room temperature.
BAECs were rinsed twice with DPBS and post-fixed with 4 mL of FA/GA solution (4 and 0.05 %, v/v, in DPBS) for 5 min at room temperature. All samples were stored at –4 °C until fluorescence microscopic observation.

D. Preparation of AP-1 peptide-tagged HGC-Cy5.5 conjugate

HGC (degree of substitution = 12%) was synthesized in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS), as described previously [9]. Peptide sequences with seven motifs were screened by phage display. A phage
library containing random peptides was screened for binding to atherosclerotic plaques. After three rounds of screening, the DNA inserts of the selected phage clones were sequenced and translated into the corresponding peptide sequences electronically. Of these, the most frequently occurring peptide was named AP-1 and chosen for further study. Hydrophobically modified glycol chitosan (HGC, 180 mg, 3.6 μmol) was dissolved in 6 ml of anhydrous DMSO (dimethyl sulfoxide), and SMCC [N-succinimidyl-4-(maleimidomethyl)cyclohexanecarboxylate, 18.05 mg, 54 μmol] was added. The reaction was allowed to proceed at room temperature for 12 h. The solution was then dialyzed against distilled water using a dialysis membrane (MWCO 12,000~14,000, Spectrum®) for 1 day and lyophilized to obtain product 1 (HGC-MCC) (144 mg, yield = 79.6%). MCC-HGC (50 mg, 0.99 μmol) and Cyanine 5.5 (2.25 mg, 1.99 μmol) were dissolved in 4 ml DMSO and reacted for 12 h. The mixture was then dialyzed (MWCO 12,000~14,000, Spectrum®) for 1 day and freeze-dried. The yield of product 2 was nearly 100%. MCC-HGC-Cy5.5 conjugate (40 mg, 0.78 μmol) was dispersed in a mixture of 10 mM PBS (pH 6.7)/DMSO (10 ml, 9:1) and AP-1 peptide (25 mg, 21.5 μmol) was added. The mixture was stirred at room temperature for 1 day, then dialyzed against distilled water and freeze-dried.

E. Particle binding on BAECs: fluorescence microscopy

1 mg of nanoparticles was dissolved in 1 ml of distilled water and sonicated three times using a probe-type sonifier (Sigma Ultrasonic Processor, GEX-600). Nanoparticle suspensions of different concentrations (0, 25, 50, 100 μg/ml) were prepared. The medium in the 8-well clusters was replaced with the nanoparticle suspensions and incubated for one hour. At the end of the incubation period, the nanoparticle suspension was removed from the wells and the cells were rinsed three times with DPBS.
2 mL of FA/GA solution (4 and 0.05 %, v/v, in DPBS) was added and incubated for 10 min at room temperature. After washing twice with DPBS, 2 mL of DAPI solution in PBS (pH 7.4, 3 μM) was applied to each well and cells were incubated for 5 min. After washing with DPBS, gel mount aqueous mounting medium (G0918) was added. The wells were covered with coverglasses and sealed with nail polish for fluorescence microscopic observation. Fluorescence microscopy was carried out on an Axioskop 2 FS plus microscope (Carl Zeiss, Thornwood, NY). All images were obtained using a 40× water immersion objective ("Achroplan" IR 40×/0.8W). Bright-field (BF) images were captured using a phase-contrast filter with 100 ms exposure time. Exposure times for DAPI and Cy5.5 were 1 s and 100 ms, respectively. Acquired
images were colored and exported in TIFF format with the aid of AxioVision software (Carl Zeiss). Without further modification, image montages were prepared with NIH ImageJ software (ver. 1.36).

III. RESULTS AND DISCUSSION

Activation of endothelial cells by TNF-α was confirmed by adhesion molecule expression on the endothelial cells. Adhesion molecules (ICAM-1, VCAM-1, E-Selectin) were detected by fluorophore-labeled antibodies in the activated endothelial cells, while no fluorescence was seen in the unstimulated (control) cells. Nanoparticles (HGC and HGC-AP) were reacted with activated and unactivated endothelial cells at different nanoparticle concentrations. The nanoparticles without AP-1 peptides bound neither to the activated nor to the unactivated BAECs at particle concentrations of 25, 50 and 100 μg/ml. The binding of nanoparticles tagged with AP-1 peptides (HGC-AP) to the BAECs depended on cytokine activation and on the concentration of nanoparticles in the medium. The binding of HGC-APs to the activated BAECs increased with increasing concentration. Figures 1 and 2 show fluorescence microscopic images of nanoparticle adhesion on the BAECs. The blue spots show the nuclei of endothelial cells and the red spots show nanoparticles bound to endothelial cells. HGC-APs did not bind significantly to the BAECs at low concentrations (0, 25 μg/ml), but showed some binding at concentrations of 50 and 100 μg/ml for both activated and unactivated cells. This implies that AP-1 peptides may bind to endothelial cells whether they are activated or not. However, HGC-APs bound more avidly to the activated endothelium at the same concentration. Therefore, HGC-APs can bind specifically to the activated endothelium within a certain concentration range.
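The reported concentration dependence is qualitative. As an illustration only, saturable adhesion of this kind is often summarized with a Langmuir-type binding model; the sketch below uses made-up Bmax and Kd values, not parameters fitted to these experiments:

```python
# Illustrative Langmuir-type saturation model of particle adhesion.
# b_max and kd_ug_ml are hypothetical values, NOT fitted to the paper's data.

def langmuir_binding(conc_ug_ml, b_max=1.0, kd_ug_ml=50.0):
    """Fraction of available surface sites occupied at a given
    particle concentration (Langmuir isotherm)."""
    return b_max * conc_ug_ml / (kd_ug_ml + conc_ug_ml)

# Evaluate at the concentrations used in the experiments
for c in (0, 25, 50, 100):
    print(c, round(langmuir_binding(c), 3))
```

Such a model reproduces the observed pattern: negligible binding at low concentration and increasing, eventually saturating, binding as concentration grows.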
Fig. 1 Fluorescence microscopic observation of nanoparticle binding on unactivated endothelial cells for different concentrations of HGC-AP: (a) 0 μg/ml, (b) 25 μg/ml, (c) 50 μg/ml, (d) 100 μg/ml
IV. CONCLUSIONS

We conjugated chitosan nanoparticles with atherosclerotic plaque-binding peptides and studied their binding characteristics on endothelial cells in vitro. The peptide-tagged nanoparticles bound more avidly to activated endothelial cells than to unactivated endothelial cells, and their adhesion increased with increasing nanoparticle concentration. Since the peptide-tagged nanoparticles showed specific binding within certain concentration ranges, they could be used as target-specific imaging probes and drug delivery carriers.
Fig. 2 Fluorescence microscopic observation of nanoparticle binding on TNF-α activated endothelial cells for different concentrations of HGC-AP: (a) 0 μg/ml, (b) 25 μg/ml, (c) 50 μg/ml, (d) 100 μg/ml
ACKNOWLEDGMENT

This work was supported by grant R01-2006-00010269-0 from the Basic Research Program of the Korea Science & Engineering Foundation.
REFERENCES
1. Whitesides GM (2003) The 'right' size in nanotechnology. Nat Biotechnol 21:1161-1165
2. Weissleder R, Kelly K, Sun EY et al (2005) Cell-specific targeting of nanoparticles by multivalent attachment of small molecules. Nat Biotechnol 23:1418-1423
3. Gao X, Nie S (2003) Molecular profiling of single cells and tissue specimens with quantum dots. Trends Biotechnol 21:371-373
4. Muro S, Dziubla T, Qui W et al (2006) Endothelial targeting of high-affinity multivalent polymer nanocarriers directed to intercellular adhesion molecule 1. J Pharmacol Exp Ther 317:1161-1169
5. Kircher MF, Weissleder R, Josephson L (2004) A dual fluorochrome probe for imaging proteases. Bioconjug Chem 13:242-248
6. Kelly AK, Nahrendorf M, Yu AM et al (2006) In vivo phage display selection yields atherosclerotic plaque targeted peptides for imaging. Mol Imaging Biol 8:201-207
7. Park JH, Cho YW, Chung H et al (2003) Synthesis and characterization of sugar-bearing chitosan derivatives: aqueous solubility and biocompatibility. Biomacromolecules 4:1087-1091
8. Kim YH, Jang YW, Cho H et al (2003) Biodistribution and antitumor efficacy of doxorubicin-loaded glycol chitosan nanoaggregates by EPR effects. J Control Release 91:135-145
9. Kwon S, Park JS, Cho YW et al (2003) Physicochemical characteristics of self-assembled nanoparticles based on glycol chitosan bearing 5-cholanic acid. Langmuir 19:10288-10193
Author: Kyehan Rhee
Institute: Myongji University
Street: Cheoingu, Namdong san 38-2
City: Yongin, Kyunggido
Country: Republic of Korea
Email:
[email protected]
Evaluation of Tomographic Reconstruction for Small Animals using micro Digital Tomosynthesis (microDTS) D. Soimu, Z. Kamarianakis and N. Pallikarakis University of Patras, Dept. of Medical Physics, Patras 26500, Greece

Abstract— Significant advances in the development of transgenic and knockout animal models of human disease have made whole-animal imaging an important new application for micro-CT. In many studies of genetically altered animals, investigators require a non-destructive 3D technique to characterize the phenotype of the animal. However, a fundamental limitation which should be considered, especially in experiments involving imaging the same animal over time, is the inherent use of ionizing radiation, which may approach the lethal dose for small rodents. Digital tomosynthesis (DTS) is a fast, low-dose 3D imaging approach which yields images with excellent in-plane resolution, though low plane-to-plane resolution. A stack of DTS slices can be reconstructed from a single limited-arc scan, with typical scan angles ranging from 10°-60° and an acquisition time of less than 10 seconds. This study evaluates the reconstructed tomograms for a small animal imaging system using µCT and µDTS, for three different DTS scan angles (20°, 40°, and 60°). The resulting DTS slices show soft-tissue contrast approaching that of full cone-beam CT.

Keywords— small animal imaging, tomosynthesis, µCT, 3D reconstruction
I. INTRODUCTION

The rapid growth in genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in small animal imaging. Micro-computed tomography (µCT) systems have in recent years become powerful imaging modalities for small animals. The earliest reported systems used x-ray image intensifiers as detectors [1], though this approach limits spatial resolution. Over the past three years, tremendous progress has been made in x-ray detectors, hardware, real-time volumetric CT algorithms, and computing techniques. A volumetric µCT cone-beam fluoroscopic system with multiple x-ray sources had become feasible by 2001 [2]. A recently reported prototype µCT system based on a CMOS flat-panel detector [3] has been successfully demonstrated in small animal imaging, showing the advantage of CMOS-based µCT for whole-body imaging of small animals as large as a laboratory rat. However, a fundamental limitation which should be considered, especially in experiments involving imaging the same animal over time, is the inherent
use of ionizing radiation, which may approach the lethal dose for small rodents. In addition, systems designed for small animals are usually optimized for slightly reduced spatial resolution, typically with 50–100 µm voxel spacing. The image noise is proportional to (Δx)^-2 (for isotropic voxel spacing Δx) if the X-ray exposure to the animal is held constant [1]. Thus, extremely high-resolution imaging might necessitate unacceptably high whole-body X-ray doses for live animals. DTS is a fast, low-dose alternative imaging approach which falls between the two extremes of 2D radiographic imaging and fully 3D CBCT [4]. The 2D radiographic approach is fast and low-dose when a kilovoltage source is used, but does not yield adequate soft-tissue information. Full CT, on the other hand, provides a true volumetric image and excellent soft-tissue information, but incurs a relatively high imaging dose, similar to conventional CT (2-10 Gy [5]), and an acquisition time of more than 1 minute due to gantry speed limitations. A stack of DTS slices can be reconstructed from a single limited-arc scan, with typical scanning angles ranging from 10°-60°. The reconstruction quality is very high within DTS slices reconstructed parallel to the central projection image of a scan, but the slice-to-slice resolution is compromised by the limited scan angle. Thus, a stack of coronal section images can be rendered from a DTS scan centered about the antero-posterior view, but high-quality sagittal or axial views cannot be reconstructed from the same scan. By changing the scanning view, another stack of high-quality DTS slices can be acquired. This study evaluates the reconstructed tomograms for a small animal imaging system using µCT and µDTS, for three different DTS scan angles (20°, 40°, and 60°). The resulting DTS slices show soft-tissue contrast approaching that of full cone-beam CT.

II. MATERIALS AND METHODS

A. Phantoms

For this study, noise-free projections of two simulated phantoms (an analytical test phantom and a voxelized mouse phantom) were used. The first phantom, used to test the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 826–829, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
contrast sensitivity, was a simplified version of the clock phantom. It consists of a sphere containing a set of 8 balls with varying densities providing background contrast variations of 10%, 15%, 20%, 25%, 30%, 35%, 40% and 50%. These inner spheres are placed symmetrically above and below the isocenter, arranged in a clockwise fashion and gradually offset in the z-direction. Micro-CT projections of a digital (voxelized) phantom approximating the mouse (Figure 1) were also simulated. The phantom was derived from a digital mouse phantom [6]. The densities of the tissues composing the phantom were as follows: body 1.05 g/cm³, intestines 1.03 g/cm³, substance filling the intestines 0.3 g/cm³, spine 1.42 g/cm³, other bones (the hips) 1.92 g/cm³. The phantom was discretized onto a 256 × 256 × 512 matrix with a voxel size of 0.5 mm. The phantom was completely contained within the x-ray beam of the simulated µCT/µDTS system.

Fig. 1 The digital mouse phantom (derived from [6])

B. X-ray CT Simulations

The objects/organs of the two phantoms were set to model the distribution of attenuation coefficients at a 50 keV photon beam (approximating the mean energy of the x-ray spectrum produced by a radiotherapy simulator at 100 kVp). Cone-beam projection data were simulated from the two phantoms using Simphan, an in-house software tool for radiographic imaging investigations [7, 8]. This investigative software tool can be used to simulate the entire radiological process, including the imaged object, imaging modalities, operating parameters, and beam transport. It provides sufficient accuracy and flexibility for use in a wide range of approaches, being of particular help in the design of an experiment and in conducting first-level trials. We used simulated data because they are particularly useful in studying specific effects, being free of the distortions and other inaccuracies inherent to radiographic units. For the low-contrast test phantom, projection data of 512×512 pixels with a resolution of 0.28 mm were obtained for a source-to-isocenter distance of 1000 mm and a source-to-detector distance of 1300 mm. System dimensions in the case of the mouse phantom were as follows: source-to-detector distance 500 mm, source-to-centre-of-rotation distance 400 mm. The detector consisted of 1024 × 1024 elements with a pixel size of 0.5 mm. In both cases, three hundred sixty projections were computed over a full circle, using an acquisition step of 1°.

C. Reconstruction of µCT/DTS acquisition
Tomographic reconstructions were visually evaluated for the simulated phantoms using a filtered-backprojection algorithm [9, 10] (cosine pre-weighting of the projections, ramp filtering along the detector lines, and backprojection). In this study, the ramp filter was combined with an apodization window, such as the Hamming window, to attenuate high-frequency noise. The full µCBCT 3D volume of the digital mouse was reconstructed into a 256×256×256 array with a pixel width and slice thickness of 0.5 mm. Fully 3D and DTS slices were reconstructed for both phantoms from a subset of µCBCT projection images acquired with a limited rotation of the gantry. The experiment was repeated for DTS scan angles of 20°, 40° and 60°, and for different scanning views in the case of the mouse phantom. In any µCT system, several factors affect the spatial resolution of the reconstructed images/volumes: the inherent resolution of the X-ray detector, geometric magnification, focal spot size, stability of the rotation mechanism, and the filtering method used during filtered-backprojection reconstruction.

III. RESULTS

The simulation results are shown in figures 2 and 3. Figure 2 shows the central reconstructed tomograms of the low-contrast phantom for acquisition arcs of 60°(b), 40°(c) and 20°(d), and the corresponding slice reconstructed using full CB acquisition and the FDK algorithm. In all reconstructed images, the 10% contrast variation inside the sphere is observed. Figure 3 shows different views of the reconstructed digital mouse tomograms for the three different DTS reconstruction angles, compared with the full-scan slice. The reconstructions were performed on a Pentium 4 2.8 GHz computer, using the IDL language. Using the full scan, it took about 19.5 seconds to reconstruct a 256×256 slice of
Fig. 2 Central reconstructed tomograms of the low-contrast phantom, for acquisition arcs of 60°(b), 40°(c) and 20°(d), and the corresponding slice reconstructed using full CB acquisition and the FDK algorithm
the mouse phantom, while for DTS both data acquisition and reconstruction times are considerably shorter. Reconstruction times in this case are 5.5 s, 3.9 s and 2.2 s for 60°, 40° and 20° respectively.

IV. DISCUSSION

There are numerous exciting applications for µCT in small animal laboratory investigation. However, a fundamental limitation which should be considered, especially in experiments involving imaging the same animal over time, is the inherent use of ionizing radiation. In scans that combine both high resolution and low noise, the X-ray exposure could approach the lethal dose for small rodents (~6 Gy). This impact should be considered in any longitudinal experiment with live animals. In this study we evaluated the potential use of (micro) DTS in small animals, for three different DTS scan angles: 20°, 40°, and 60°. The resulting DTS slices show soft-tissue contrast approaching that of full cone-beam CT. Although the resolution remains quite poor due to the rather large pixels of the detector used, soft tissue can be clearly distinguished from bone: lungs, heart and intestines can be identified on the DTS slices, as shown in figure 3. With proper administration of X-ray contrast agents, organs such as the brain, lungs, spleen, liver, kidneys, and colon may also be visible.
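The filtering step of the reconstruction used above (a ramp filter combined with a Hamming apodization window to attenuate high-frequency noise) can be sketched as follows. This is a minimal NumPy illustration, not the authors' IDL implementation, and it omits the cosine pre-weighting and backprojection steps:

```python
import numpy as np

def apodized_ramp(n):
    """Frequency-domain ramp filter |f| multiplied by a Hamming window.
    The window equals 1 at zero frequency and ~0.08 at Nyquist, so
    high-frequency noise is strongly attenuated; the DC bin is zero."""
    f = np.fft.fftfreq(n)                     # frequencies in cycles/sample
    ramp = np.abs(f)
    hamming = 0.54 + 0.46 * np.cos(2.0 * np.pi * f)
    return ramp * hamming

def filter_row(row):
    """Ramp-filter one detector row, i.e. the filtering step of
    filtered backprojection, done by FFT multiplication."""
    spectrum = np.fft.fft(row) * apodized_ramp(row.size)
    return np.real(np.fft.ifft(spectrum))
```

In a full FDK reconstruction each detector row of each (cosine pre-weighted) projection would be filtered this way before being backprojected; for DTS, only the projections within the chosen limited arc contribute.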
Fig. 3 Different views of the reconstructed digital mouse tomograms, for the three different DTS reconstruction angles, compared with the full-scan slice
Although DTS brings most of the inner organs into focus, the overall relative contrast of the DTS tomogram is estimated to be lower. This is due to blurring caused by out-of-focus structures overlaid in the tomosynthetic plane. Further post-processing of the tomosynthetic image, e.g. noise removal via mask subtraction of (filtered) µDTS, can improve the image quality and the ability to detect specific organs and tumors. The shape distortions present in limited-arc reconstructions introduce problems (axial elongations) when defining contours of otherwise discernible structures. Further optimization involves choosing imaging parameters that minimize the impact of tomosynthesis artifacts. Various types of artifacts may arise in any type of geometric tomography due to incomplete sampling of frequency space. Their nature depends strongly on the type of tube motion, the deblurring algorithm chosen, the total tube angle, the number of projection images, the number of reconstructed planes, and the type of tissue being imaged. The resulting 2D/3D micro-tomographic reconstruction can be further automatically registered with the initial µCT
using a standard intensity-based 2D/3D or 3D/3D registration technique, in situations that require imaging the same animal within a short interval of time.

V. CONCLUSIONS

Significant advances in the development of transgenic and knockout animal models of human disease have made whole-animal imaging an important new application for micro-CT. In many studies of genetically altered animals, investigators require a non-destructive 3D technique to characterize the phenotype of the animal. The potential of µDTS was evaluated using two simulated phantoms for various situations (low contrast, small animals). Based on the above observations, we can state that µDTS can be an important imaging method, especially in experiments involving imaging the same animal over time.
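The intensity-based registration mentioned above can be illustrated with a deliberately minimal scheme: exhaustively search integer image shifts for the one maximizing normalized cross-correlation (NCC). Real 2D/3D or 3D/3D registration uses rigid or affine transforms and iterative optimizers; this toy sketch only conveys the idea:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_shift(fixed, moving, max_shift=5):
    """Exhaustive search over integer (dy, dx) shifts of `moving`
    that maximize NCC against `fixed` (circular shifts via np.roll)."""
    best_score, best = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic check: displace an image by a known shift and recover it
rng = np.random.default_rng(0)
img = rng.random((32, 32))
moved = np.roll(np.roll(img, 3, axis=0), -2, axis=1)   # known shift (3, -2)
print(best_shift(img, moved))                          # prints (-3, 2)
```

The recovered shift is the one that undoes the applied displacement, where NCC reaches its maximum of 1.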
ACKNOWLEDGMENT The authors would like to express their thanks to Dr. W.P. Segars for the digital mouse phantom and Dr. K. Bliznakova for her valuable assistance concerning the Simphan tool. We also thank the PENED 2003 programme for funding the above work.
REFERENCES
1. Holdsworth DW et al (1993) A high resolution XRII-based quantitative volume CT scanner. Med Phys 20:449-462
2. Liu Y et al (2001) Half-scan cone-beam CT fluoroscopy with multiple X-ray sources. Med Phys 28:1466-1471
3. Lee SC, Kim HK et al (2003) A flat-panel detector based micro-CT system: performance evaluation for small-animal imaging. Phys Med Biol 48:4173-4185
4. Dobbins JT, Godfrey DJ (2003) Digital X-ray tomosynthesis: current state of the art and clinical potential. Phys Med Biol 48:R65-106
5. Rehani B, Golding S et al (2000) Managing X-ray dose in computed tomography: ICRP special task force report. Ann ICRP 30:7-45
6. Segars P et al (2004) Development of a 4D digital mouse phantom for molecular imaging research. Mol Imaging Biol 6(3):149-159
7. Lazos D, Kolitsi Z, Pallikarakis N (2000) A software data generator for radiographic imaging investigations. IEEE Trans Inf Technol Biomed 4:76-79
8. Bliznakova K (2003) Study and development of software simulation for X-ray imaging. PhD thesis, University of Patras, Greece
9. Feldkamp LA, Davis LC, Kress JW (1984) Practical cone-beam algorithm. J Opt Soc Am A 1(6):612-619
10. Badea C, Kolitsi Z, Pallikarakis N (2001) Image quality in extended arc filtered digital tomosynthesis. Acta Radiologica 42:244-249
Address of the corresponding author:
Author: Delia Soimu
Institute: Dept. of Medical Physics, School of Medicine, University of Patras
City: Rio 26500, Patras
Country: Greece
Email:
[email protected]
Lung Surface Classification on High-Resolution CT using Machine Learning S. Busayarat1 and T. Zrimec1,2 1
School of Computer Science and Engineering, University of New South Wales, Sydney, Australia 2 Centre for Health Informatics, University of New South Wales, Sydney, Australia
Abstract— The lung surface is the result of lung segmentation, the very first step of any lung image analysis. Dividing the lung surface into multiple parts according to its anatomical features provides a better understanding of the lung and its functions. This paper presents an automatic classification of the lung surface on high-resolution CT images. The entire lung surface is divided into four classes according to the parietal pleura surrounding the lung. 2D shape analysis in multiple views is used to detect the pleural reflection lines, which divide the lung surface into smaller sections. Two machine learning classifiers, namely C4.5 and ripple-down rules (RDR), are used to classify the surface sections. After training on 30 real patient scans and testing on 10 scans, the results show that the method is able to classify the lung surface with 92% accuracy.

Keywords— lung surface, classification, HRCT, machine learning, ripple-down rules.
I. INTRODUCTION

Computerized medical image analysis of HRCT has been an active research area in the past decade. In lung HRCT image analysis, the problem of lung segmentation has been intensively investigated, and many automatic methods have achieved high accuracy and reliability. The next step is to automatically recognize different regions of the lung according to the lung anatomy and its functions. In the lung HRCT imaging domain, a few different approaches to lung localization (determining locations inside the lung) have been investigated. The most common approach uses automatically detected pulmonary fissures to divide the lung into lobes [1]. This approach is the most common because fissures are physical boundaries between lobes and are generally visible on HRCT. However, dividing the lung into lobes does not give enough anatomical information in some situations. Moreover, due to possible disease processes, automatic fissure detection has not achieved the same reliability as lung segmentation. In this paper, we propose a new approach to lung localization by classifying lung surfaces. Knowledge about the parietal pleurae surrounding the lung is used to classify the lung surface into four different parts. The parietal pleura is a continuous membrane that lines the pulmonary cavities. It consists of four parts, namely the costal, mediastinal, diaphragmatic and cervical pleurae. A clear definition of each pleura is given in [2]. The pleurae are separated from each other by lines of pleural reflection, lines along which the lung surface rapidly changes its slope. In this paper, we propose an automatic method of dividing the lung surface into four parts named after the corresponding pleurae. The method analyzes the lung surface in 2D but in multiple views, including axial, coronal and sagittal views. 2D curvature analysis is used to detect the lines of pleural reflection. Two machine learning algorithms are used to develop rules for distinguishing between the different lung surfaces.

II. METHOD OVERVIEW

An overview of our automatic lung surface classification is shown in Fig. 1. The input to the system is a series of axial HRCT images. The images are analyzed by two parallel modules. The Axial-View Module divides the lung surface into a number of small surface sections in axial view. The Sagittal-view and Coronal-view modules provide additional information that is obscured in the axial view. The system detects the cervical-costal and mediastinal-diaphragmatic borders using the coronal and sagittal images, respectively. The surface sections are classified into the four groups according to their properties. We experiment with two different classifiers, namely RDR and C4.5. Both classifiers were previously trained on the Reference Standard, which was constructed by an expert.
Fig. 1 System Overview
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 822–825, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Lung Surface Classification on High-Resolution CT using Machine Learning
III. AXIAL-VIEW ANALYSIS
A. Lung Segmentation

A number of works have reported on lung segmentation [3]. The most common and successful approaches use thresholding and active contouring. We combine both approaches to achieve high reliability and a smooth contour. We also add a main-bronchi-removal step prior to the segmentation to standardize the lung surface near the mediastinum. This step is important because including the main bronchi in the segmentation result causes a significant change in surface curvature, which in turn would make the curvature analysis more difficult. The result of the lung segmentation is a 3D lung surface, which yields a 2D lung contour on axial images (see Fig. 2-left).

B. Lung Surface Partitioning

The goal of the lung surface partitioning is to detect the lines of pleural reflection. This essentially divides the entire lung surface into smaller sections where the surface curvature rapidly changes. We perform the curvature analysis in 2D on the axial plane to find the reflection points. We chose not to do it in 3D mainly to reduce computational complexity. Generally, the curvature of a planar curve y = f(x) is:

$k = \frac{y''}{(1 + y'^2)^{3/2}} \approx y''$    (1)
In our case, the slope y' is small compared to unity, so the curvature can be approximated by y''. The reflection points are defined as points on the 2D lung contour whose curvature exceeds a pre-defined threshold τ (τ = 0.41). The detected reflection points are shown in Fig. 2-middle. A section of the lung contour between two reflection points is called a lung surface section (see Fig. 2-right).
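The thresholding step above can be sketched as follows. This is a minimal illustration under the paper's small-slope approximation, not the authors' implementation; it assumes the contour segment is given as y-values sampled at unit x spacing, so that a second finite difference approximates y'':

```python
import numpy as np

def reflection_points(y, tau=0.41):
    """Indices along a contour segment y = f(x) (unit x spacing) where
    the curvature, approximated by y'' per the small-slope assumption
    of Eq. (1), exceeds the threshold tau."""
    y = np.asarray(y, dtype=float)
    ypp = np.diff(y, n=2)   # second finite difference ~ y''
    # diff(n=2) shortens the array by 2; shift indices back to y's frame
    return np.flatnonzero(np.abs(ypp) > tau) + 1
```

A sharp kink in the contour (e.g. a single spike) produces a large second difference at and around the kink, while a flat or gently sloping segment produces none.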
IV. ADDITIONAL-VIEW ANALYSIS

Using more than one two-dimensional view for a three-dimensional shape analysis of an object is common. Some shape information is more apparent in one view than another. In our case, we use representative images in the sagittal and coronal views to improve the analysis of the axial view. Most HRCT scans, including our data, are reconstructed only in the axial plane, so assuming that sagittal and coronal images are available would compromise the versatility of the system. Both the sagittal and coronal images used in this work are therefore generated from the series of axial images. Fig. 3 (a, c) shows examples of the generated images.

A. Sagittal View Analysis

The sagittal view provides better visualization of the pleural reflection line between the mediastinum and the diaphragm. In the sagittal view, the pleural reflection line is the bottom-right corner of the lung, which is automatically identified using the following shape analysis. Using the axial-view lung segmentation algorithm and projecting the result onto the sagittal view, a binary lung image is generated (see Fig. 3-b). The lung contour is jagged because of the low vertical resolution (15 mm slice gaps in our case). By connecting the corner points on the contour, the lung is smoothed and its shape profile is restored. To identify the pleural reflection point, we look for the point, starting from the lowest part of the lung on the right side, where the slope starts to change significantly. The lung contour between the lowest part of the lung and the reflection point is defined as the diaphragm (the white arrow in Fig. 3-b) and can be projected back onto the axial-view images.

B. Coronal View Analysis

The coronal view provides the best visualization of the rib structure, which is required for identifying the cervical pleura. According to [2], the cervical pleura is a cup-shaped pleural dome above the first rib. Using the coronal image,
Fig. 2 Axial-view Analysis. From left to right, an axial lung HRCT with lung contour, with surface reflection points and a lung surface section.
Fig. 3 Additional-view analysis. From left to right, (a) sagittal image with reflection point (arrow), (b) result of the sagittal-view analysis, (c) coronal image with 1st rib (arrow) and (d) result of the coronal-view analysis.
S. Busayarat and T. Zrimec,
the ribs appear as high-density bars next to the outer contour of the lung (white arrow in Fig. 3-c). Using a bone-density threshold of 500 HU, the ribs are extracted from the image. The part of the lung above the first rib line is defined as the cervical surface (see Fig. 3-d).
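The rib-based definition of the cervical surface can be sketched as follows. This is a minimal illustration, not the authors' code; the function names are hypothetical and the coronal image is assumed to be given in Hounsfield units:

```python
import numpy as np

def rib_mask(coronal_hu, threshold=500):
    """Binary mask of bone-density pixels on a coronal image in
    Hounsfield units, per the 500 HU threshold used in the text."""
    return coronal_hu > threshold

def first_rib_row(mask):
    """Row index of the topmost bone pixel; the lung surface above
    this row is labeled cervical. Returns None if no bone is found."""
    rows = np.flatnonzero(mask.any(axis=1))
    return int(rows[0]) if rows.size else None
```

In practice the first rib would need to be distinguished from other bones (e.g. the clavicle); the sketch only shows the thresholding and the above-the-rib rule.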
V. LUNG SURFACE CLASSIFICATION

Each lung surface section resulting from the axial-view analysis is classified as a cervical, costal, mediastinal or diaphragmatic surface, based on a set of attributes describing its shape and location. The diaphragm border and the first rib resulting from the additional-view analyses are also used in the attribute calculation. Machine learning is used to build the classifier. In this paper, we consider two machine learning algorithms, C4.5 and ripple-down rules (RDR).

A. Attribute Calculation

To describe the lung section attributes and how they are calculated, a mathematical description of a lung section is provided. A lung surface section is represented by a set of X-Y coordinates outlining its shape and a slice index indicating the slice to which the section belongs. In mathematical form:
S = ({(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, sliceIndex)

Two sets of attributes are used for the classification. The first set consists of four shape-related attributes, as shown in Table 1. The other set consists of five location-related attributes, namely X, Y, Z, Distance_from_1st_Rib and Distance_from_Diaphragm. The former three refer to positions in the lung coordinate system; the latter two refer to distances from the two reference slices. Fig. 4 illustrates the lung coordinate system, the value range of each axis and the two reference slices.
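The section representation and its attribute vector can be illustrated as follows. This is a hedged sketch: `attribute_vector` and the precomputed reference-slice distances are assumptions for illustration, not the authors' code; only the Length formula comes from Table 1:

```python
import math

def section_length(points):
    """Length attribute from Table 1: summed Euclidean distance
    between consecutive (x, y) points of a surface section."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def attribute_vector(section, slice_index, dist_rib, dist_diaphragm):
    """Location attributes for one section: centroid X/Y, slice index
    Z, and distances to the two reference slices. dist_rib and
    dist_diaphragm are assumed precomputed by the additional-view
    analyses."""
    xs = [p[0] for p in section]
    ys = [p[1] for p in section]
    return {
        "Length": section_length(section),
        "X": sum(xs) / len(xs),
        "Y": sum(ys) / len(ys),
        "Z": slice_index,
        "Distance_from_1st_Rib": dist_rib,
        "Distance_from_Diaphragm": dist_diaphragm,
    }
```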
Fig. 4 Locations relative to the lung and the two reference slices.
In the training phase, an expert manually selects the correct class for each instance used for training. The training instances are stored in attribute-relation file format with the expert-selected class attached. We use the WEKA implementation of C4.5 [5]. 2,052 instances of lung surface sections were used for training. The generated decision tree is then used to classify lung surfaces on new HRCT scans. A 3D visualization of the classified surfaces of a lung is shown in Fig. 5.

C. Ripple-down Rules Classifier

Ripple-down rules (RDR) [6] is an incremental knowledge acquisition methodology. In the RDR framework, the human expert's knowledge is acquired based on the current context and is added incrementally. When the expert creates a new rule, he or she focuses only on classifying cases corresponding to a particular context, not on classifying all cases belonging to a class. This makes the knowledge acquisition easier from the expert's point of view. In this work, RDR is used to acquire the knowledge of which attribute(s) an expert uses to differentiate different types of lung surface. The attribute set is the same as the one used for C4.5.

Table 1 Lung surface section shape attributes
B. C4.5 Classifier

An attribute-based inductive machine learning algorithm, C4.5, is used for this work. C4.5 is a decision tree generating algorithm. At each node of the tree, the algorithm selects the attribute that splits the training data most effectively. A more detailed description of C4.5 can be found in [4]. C4.5 was selected for this work because of its ability to work with continuous attributes and because its classification rules are readable by humans.
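C4.5's attribute selection is driven by information gain. The following minimal sketch (not the WEKA implementation used in the paper; the toy data and attribute names are invented) shows the criterion for one candidate split of a continuous attribute:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a class-label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr, threshold):
    """Gain of splitting continuous attribute `attr` at `threshold`,
    the criterion C4.5 uses to pick the test at each tree node."""
    left = [l for r, l in zip(rows, labels) if r[attr] <= threshold]
    right = [l for r, l in zip(rows, labels) if r[attr] > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + \
                (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Toy data: sections low in the lung (large Y) are diaphragmatic.
rows = [{"Y": 0.1}, {"Y": 0.2}, {"Y": 0.9}, {"Y": 0.8}]
labels = ["costal", "costal", "diaphragmatic", "diaphragmatic"]
gain = information_gain(rows, labels, "Y", 0.5)  # perfect split: 1.0 bit
```

A split that separates the classes perfectly reduces the class entropy to zero, so its gain equals the entropy of the full label set.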
Fig. 5 Surface classification result in 3D
Length — Length of the section line in pixels:
$\sum_{i=1}^{n-1} \sqrt{(x_i - x_{i+1})^2 + (y_i - y_{i+1})^2}$

Average Curvature — Average curvature of all points in the section:
$\frac{1}{n} \sum_{i=1}^{n} \frac{d^2 y}{dx^2}(x_i)$

Average Concavity — Average negative (concave) curvature of all points in the section:
$\frac{1}{n} \sum_{i=1}^{n} \begin{cases} \frac{d^2 y}{dx^2}(x_i) & \text{if } \frac{d^2 y}{dx^2}(x_i) < 0 \\ 0 & \text{if } \frac{d^2 y}{dx^2}(x_i) \ge 0 \end{cases}$

Curve Direction — Average direction of the normal vectors of all points in the section:
$\text{angle} = \frac{1}{n-1} \sum_{i=1}^{n-1} \tan^{-1}\!\left(\frac{y_{i+1} - y_i}{x_i - x_{i+1}}\right)$
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Lung Surface Classification on High-Resolution CT using Machine Learning
The accuracy advantage of the RDR may be attributed to the additional expert input that justifies the need for every new rule. The RDR scheme also improves the quality of the training by forcing the expert to be consistent: RDR asks the expert to compare a new case with a previous case that he/she classified differently before a new rule is added. It is important to note, however, that RDR's training time is significantly greater than C4.5's.
Fig. 6 Accuracy comparison between C4.5 and RDR classifiers (%accuracy vs. number of training scans)
Fig. 7 Rule size comparison between C4.5 and RDR classifiers (rule size vs. number of training scans)

In the RDR training phase, an expert selects the correct class for a lung surface, similarly to the machine learning counterpart. However, he or she only needs to do so for the cases that were misclassified by the current rule set. For each misclassification, the expert needs to specify why the surface section should belong to a certain class by comparing it to a case existing in the rule base. After visually comparing the two cases, the expert selects a subset of the attributes that differentiate the two cases. A new RDR rule is then constructed accordingly.

VI. RESULTS AND DISCUSSION

Ten HRCT scans from ten different subjects, independent of the scans used for training, were used for evaluating the performance. The total number of lung surface sections used for testing was 808. The ground truth for the test data was generated by an expert. The performances of the two classifiers at various training stages were compared in two aspects: classification accuracy (see Fig. 6) and size of the classification rules (see Fig. 7). The results show that the RDR classifies lung surfaces more accurately than C4.5 (92% vs. 88% when fully trained). The RDR also reaches its maximum accuracy earlier. However, the classification rules from C4.5 are much more concise (55 vs. 150 nodes when fully trained), and the rule growth rate of C4.5 is also lower.
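The incremental, exception-driven structure of RDR can be sketched with a minimal single-classification rule tree. This is an illustrative toy, not the authors' implementation; the conditions and class thresholds are invented:

```python
class RDRNode:
    """Minimal ripple-down rule: if `cond` fires, the conclusion holds
    unless a more specific exception rule fires in turn."""
    def __init__(self, cond, conclusion, default=None):
        self.cond, self.conclusion = cond, conclusion
        self.exceptions = []      # consulted only when this rule fires
        self.default = default    # fallback when the condition fails

    def classify(self, case):
        if self.cond(case):
            for exc in self.exceptions:
                verdict = exc.classify(case)
                if verdict is not None:
                    return verdict
            return self.conclusion
        return self.default

# Knowledge is added incrementally: a misclassified case prompts the
# expert to attach an exception that distinguishes it from the stored
# case using a subset of the attributes.
root = RDRNode(lambda c: True, "costal")
root.exceptions.append(RDRNode(lambda c: c["Z"] <= 2, "cervical"))
root.exceptions.append(RDRNode(lambda c: c["Y"] > 0.8, "diaphragmatic"))
```

A case that fires no exception keeps the original conclusion, which is why each new rule only has to handle the context that was misclassified.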
VII. CONCLUSION

We have presented a new method for automatic classification of lung surfaces. The surface classes are based on the parietal pleurae surrounding the lung. The method automatically detects the lines of pleural reflection using 2D images in three different views. After dividing the surface into smaller sections, rules automatically generated by C4.5 and RDR are used for the classification. The evaluation with 808 test cases indicates high accuracy (92%) and robustness of the method. The RDR and C4.5 comparison demonstrates their relative strengths and weaknesses. The RDR approach is capable of achieving more accurate classification. On the other hand, the C4.5 approach requires less training time and produces more concise rules. The same framework may be applied to other knowledge acquisition problems in the medical imaging domain.
REFERENCES

1. Zhang L (2002) Atlas-driven lung lobe segmentation in volumetric X-ray CT images. PhD Thesis, University of Iowa.
2. Moore KL, Dalley AF (1999) Clinically Oriented Anatomy. Lippincott Williams and Wilkins, Canada.
3. Hu S, Hoffman EA, Reinhardt JM (2001) Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images. IEEE Trans. Medical Imaging 20(6):490–498.
4. Mitchell TM (1997) Machine Learning. McGraw-Hill, Singapore.
5. Witten I, Frank E (2000) Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann.
6. Compton P, Jansen R (1990) A philosophical basis for knowledge acquisition. Knowledge Acquisition 2:241–257.

Author: Sata Busayarat
Institute: University of New South Wales
Street: Anzac Pde, Kensington
City: Sydney
Country: Australia
Email: [email protected]
Markov Chain Based Edge Detection Algorithm for Evaluation of Capillary Microscopic Images

G. Hamar¹, G. Horvath¹, Zs. Tarjan² and T. Virag³

¹ Budapest University of Technology and Economics/Department of Measurement and Information Systems, Budapest, Hungary
² Policlinic of the Hospitaller Brothers of St. John of God in Budapest/Department of Rheumatology II., Budapest, Hungary
³ Kokabura Ltd., Budapest, Hungary
Abstract— Nailfold capillaroscopy is a non-invasive, simple examination that provides exact information for the assessment of the microcirculation. The peripheral blood circulation is very sensitive to certain illnesses, e.g. autoimmune diseases and diabetes. In many cases a pattern specific to certain illnesses exists, so this test is capable of differentiating between them. Our aim is to develop a computer-aided evaluation system for capillary microscopic images. The first step of the evaluation is the detection of capillaries, which is done by edge detection. Classical edge detectors produced unsatisfactory output, so we tried a different approach, which is able to take into account not only the local properties of the image but also the relations between pixels. Testing the algorithm, we found that with a suitable post-processing procedure the capillaries shown in the picture can be detected robustly; hence our procedure is applicable as the first step of the evaluation of these images. Keywords— Edge detection, medical image processing, diagnostics.
I. INTRODUCTION

Capillary microscopic examination means examining the smallest vessels of the human organism, the capillaries. The peripheral blood circulation is very sensitive to certain illnesses, e.g. autoimmune diseases and diabetes. In many cases deformations in the blood circulation can be observed before other symptoms, so capillary microscopic tests play an important role in the early identification of these diseases [1]. Today the main problem is that there is no cheap, easily accessible instrument that is capable not only of image or video recording but also supports computer-aided evaluation. This is important because an exact and objective evaluation requires much more time than the examination itself, which is why quantitative measures are rarely used. The first problem of image evaluation is the detection of capillaries; this is essential for any further image processing steps. We have to achieve a high hit rate with a low number of false positive hits, despite the low image quality. In this paper we introduce an edge-detection method
which can solve this problem with relatively high performance.

II. MATERIALS AND METHODS

A. Capillary microscopic images

The capillary microscopic pattern of a healthy patient has been examined by many researchers, hence it is precisely defined in the medical literature. As one can see in Figure 1(a), the vessels are arranged into rows; they are regular hairpin shaped, with the same orientation. A capillary loop has two parallel stems: a thinner one called the arterial section and a wider one called the venous section. They are connected by a winding part called the apical section. The most important parameters that can be extracted from the picture are: the arrangement of the vessels, the sizes of the hairpin (length, distance of the two stems, diameters), the shape of the capillaries, the linear density, the occurrence of micro-hemorrhages and the visibility of the SVP (Subpapillary Venous Plexus). In certain diseases the healthy pattern changes. In many cases the regular arrangement breaks up. If the vessels become dilated, they are called giant- or mega-capillaries according to their size. The hairpin shape can also change: the medical literature classifies the modified shapes into the following groups: meandering, bushy, ball, tortuous and ramified. The linear density decreases in general, in certain cases micro-hemorrhages can be observed, and the visibility of the SVP increases.

B. The method of image recording

There is a generally accepted method for capillary microscopic examinations [2], [3], and we have followed it. Our present database was created with a stereo microscope at 100x magnification. We used paraffin oil to increase the transparency of the skin, and a cold light source (Intralux 6000). The direction of the light was approximately 45º.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 818–821, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
C. Edge detection
As can be seen in Figure 1(a), the capillaries do not have sharp edges; the image is very blurred and noisy. These properties mainly come from the recording method, because the capillaries are observed through the skin. Classical edge detectors such as the Sobel, Laplace and Canny operators gave poor results. It was very hard to separate real edges from noise, and the edge detectors found not only the borders but the whole area of the capillaries, because the intensity changes continuously in the direction perpendicular to the capillary. Considering these properties, we decided to search not for the border but for the centre line. It can be detected by calculating the second derivatives of the image. The efficiency of these methods was higher, but not sufficient for our problem. The centre line of the capillaries can be located more robustly if we use not only the local properties but also the relations between pixel locations, as described in later sections.

D. "Walking" algorithm

An exact definition can be given for the centre points of capillaries using the second derivatives of the image. Let us consider the image as a 3D surface, where the intensity is the third dimension. Capillaries are valleys on this surface, and a centre line is the bottom of a valley. Our task is to give a definition for these bottom points. A point is a bottom point of a valley if the curvature of the image is significantly different in two perpendicular directions, and the point is a local minimum along a line perpendicular to the direction of the valley. The direction of the valley can be calculated with the following method. Let f(x,y) be the intensity of the image at point (x,y), and let A_{x0,y0} be the matrix of second derivatives at the point (x0, y0):
$A_{x_0,y_0} = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2}\Big|_{x_0,y_0} & \dfrac{\partial^2 f}{\partial x \partial y}\Big|_{x_0,y_0} \\[1ex] \dfrac{\partial^2 f}{\partial x \partial y}\Big|_{x_0,y_0} & \dfrac{\partial^2 f}{\partial y^2}\Big|_{x_0,y_0} \end{bmatrix}$    (1)
The eigenvector of A belonging to the smaller eigenvalue points in the direction of the valley. Knowledge of the direction of the vessels can be used to create an iterative algorithm that walks along the vessel from a given point. During the walk the algorithm makes small steps in the direction of the valley, which is calculated from a small neighborhood of the current point. The pseudo code of NEXT_POINT(f,x,y,v0), which calculates one step of the walk, is the following:
1. Calculate A at point (x,y)
2. v ← the normalized eigenvector of A belonging to the smaller eigenvalue
3. if v0^T·v < 0 then v ← −v
4. v0 ← v
5. p ← (x,y)^T + v·const
6. l ← section through p, perpendicular to v
7. q ← minimum point of f from the points of l
8. return q

Lines 1–2 calculate the direction of the valley from the second derivatives, as described above. There are two possible ways to move along a valley, so we must choose between them (lines 3–4). For this decision we need the direction of the previous step (v0). To avoid cycles, the direction is chosen so that the angle between two subsequent steps is less than 90º. The performance of the algorithm can be improved if we search for the local minimum of the section perpendicular to the estimated direction of the valley, because this technique decreases the probability of leaving the capillary (lines 6–7). During a walk the algorithm performs steps one after the other; the end point of one step is the start point of the next:

WALKING_ALGORITHM(f)
1. s ← START_POINT()
2. v0 ← 0
3. w ← (s)
4. repeat
5.   s1 ← NEXT_POINT(f,s,v0)
6.   v0 ← s1 − s
7.   s ← s1
8.   w ← s appended to w
9. until STOP_CONDITION()
10. return w
Lines 1–2 perform the initialization. The v0 vector is the direction of the previous step; before the first step it is irrelevant and can therefore be set to 0. In the repeat–until iteration (lines 4–9) the NEXT_POINT procedure calculates the next point of the walk, and in line 8 each calculated point is added to the walk. The performance of this method depends highly on two procedures: START_POINT and STOP_CONDITION. Their implementations can contain complex conditions and must be based on the properties of the images used. The main problem of this approach is that it is very difficult to implement these procedures so that they give good results on a high variety of images.
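Lines 1–2 of NEXT_POINT can be sketched with finite differences and an eigendecomposition. A minimal illustration, not the authors' code; it assumes the image is a 2D array f[y, x] and (x, y) is an interior integer pixel:

```python
import numpy as np

def valley_direction(f, x, y):
    """Direction of the valley at pixel (x, y): the eigenvector of the
    second-derivative matrix of Eq. (1) that belongs to the smaller
    eigenvalue, estimated with central finite differences."""
    fxx = f[y, x + 1] - 2 * f[y, x] + f[y, x - 1]
    fyy = f[y + 1, x] - 2 * f[y, x] + f[y - 1, x]
    fxy = (f[y + 1, x + 1] - f[y + 1, x - 1]
           - f[y - 1, x + 1] + f[y - 1, x - 1]) / 4.0
    A = np.array([[fxx, fxy], [fxy, fyy]])
    w, v = np.linalg.eigh(A)   # eigenvalues in ascending order
    return v[:, 0]             # eigenvector of the smaller eigenvalue
```

For a valley running along the x axis (curvature only in y), the smaller eigenvalue is the one for the x direction, so the returned vector points along the valley, as required by the walking step.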
E. Random walk
Both procedures can be circumvented by using a probabilistic approach. Instead of starting the walk from one or more specified positions, we can start it from a random position. Instead of following each walk separately, we can calculate the probability of staying in a certain position after the 1st, 2nd, ..., Mth step. This is similar to the approach used by the early Google for web page ranking [4]. In the initial state we assume that we can be at any pixel position with the same probability. During an iteration of the algorithm we modify this probability distribution according to the walk. If i1, i2, ... are the pixels from which we can step to j with the walking algorithm, then the probability of staying at pixel position j in the (k+1)th iteration is:
$P_{k+1}[j] = P_k[i_1] \cdot p_{i_1,j} + P_k[i_2] \cdot p_{i_2,j} + \ldots + P_k[i_n] \cdot p_{i_n,j}$    (2)
where p_{i,j} is the probability of stepping from i to j if we are at pixel position i. Our assumption is that if we enter a point of a valley during the walk, we stay in the valley with high probability. Outside a valley the direction of a step is nearly random, so the probability of staying at these points is uniform and therefore relatively low. After some iterations the probability of staying at a pixel belonging to a valley becomes significantly higher than at other pixels. This problem can be formalized as a finite-state homogeneous Markov chain. Let us label the pixel positions with positive integers from 1 to N. Let Xk be a random variable, where Xk = i means that the walk is at pixel position i ∈ {1, 2, ..., N} after the kth step. Let X be the stochastic process X = (X0, X1, ..., XM), and let p_{i,j} = P(X_{k+1} = j | X_k = i), that is, the probability of moving to pixel position j if we are at pixel position i. If this probability is independent of k for every i and j, then X is a homogeneous Markov chain. If P_k = (P(X_k = 1), P(X_k = 2), ..., P(X_k = N)) is the probability distribution of X_k, and T = [p_{i,j}] is the state transition matrix of the Markov chain, then P_{k+1} can be calculated with a vector–matrix multiplication:
$P_{k+1} = P_k T$    (3)
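The iteration of Eq. (3) can be exercised on a toy transition matrix. A hedged numpy sketch, not the authors' C++ implementation; the 3-state matrix is invented, and in practice T would be stored in a sparse format (e.g. scipy.sparse.csr_matrix) to keep each iteration cheap:

```python
import numpy as np

def iterate_walk(T, n_iter=100):
    """Iterate P_{k+1} = P_k T (Eq. 3) starting from the uniform
    distribution. T must be row-stochastic; with a sparse matrix type
    each iteration touches only the nonzero entries."""
    n = T.shape[0]
    p = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        p = p @ T   # row vector times transition matrix
    return p

# Tiny 3-pixel toy: state 1 acts as a valley bottom that keeps the
# walk with high probability; states 0 and 2 mostly leak into it.
T = np.array([[0.2, 0.8, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.8, 0.2]])
p = iterate_walk(T, n_iter=50)   # converges near [0.1, 0.8, 0.1]
```

The mass concentrates on the valley state, which is exactly the effect the algorithm exploits to make valley pixels stand out in the final probability image.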
P0 is the uniform distribution, so that P0[i] = 1/N for every i. The matrix T can be derived from the NEXT_POINT procedure. From pixel position (x, y) we can move into a small neighborhood of two possible destinations according to the procedure. The probability of moving to the points of this neighborhood is calculated according to a 2D Gaussian distribution, and each probability is written to a position in the matrix. From every pixel position we have to move somewhere, so that $\sum_j p_{i,j} = 1$ for every i; that is, the sum of
every row in T must be 1.

F. Complexity

The matrix T is very large: its number of elements is N², where N is the number of pixels. However, T is sparse, because it has 2L non-zero elements in every row, where L is the number of pixels in the neighborhood of a pixel. The complexity of the matrix–vector multiplication is proportional to the number of non-zero elements in T, so the runtime complexity of one iteration is O(NL) and the complexity of the whole algorithm is O(NLM), where M is the number of iterations. In our experiment we implemented the algorithm in C++ and used a Pentium IV 3 GHz computer with 1 GB RAM. With 647x435 resolution images, an L = 5x5 neighborhood and M = 100 iterations, the procedure terminated in 52 s.

III. EXPERIMENTAL RESULTS

The resulting probability distribution can be viewed as a grey-scale image after scaling the pixel values to the 0–255 interval. Figure 1 shows the result on a typical capillary microscopic image. We ran the procedure for 10, 20 and 100 iterations. As can be seen, with an increasing number of iterations the noise is reduced, but the vague capillaries also begin to disappear. It is difficult to precisely measure the efficiency of the algorithm, because there is no exact reference with which it could be compared; we can only use the opinion of a human observer. We applied the algorithm to 30 images of variable quality. After a post-processing step (edge detection, noise filtering and edge connection) the results were compared with the human's opinion. The algorithm detected 91% of the vessels visible in the images.

IV. CONCLUSION

We have introduced a novel method of edge detection that uses a different edge definition than traditional edge detectors: a definition based not directly on the local properties of the image, but one that utilizes the relation between pixel positions, which is calculated using local properties.
We have tested our solution on a capillary microscopic image set, and we found that we can achieve higher performance than with other edge detectors. In our research we examined a special type of edge, but our method can easily be modified to detect other types,
such as step edges. Generalizing the algorithm remains a future research problem.

Fig. 1 Result of the edge detection: (a) original image; (b) after 10 iterations; (c) after 20 iterations; (d) after 100 iterations
ACKNOWLEDGEMENT

This research was supported by the EU, GVOP grant 3.3.3-05/2.-2006-01-0130/3.0.

REFERENCES

1. Bollinger A, Fagrell B (Eds.) (1990) Clinical Capillaroscopy. Hogrefe & Huber Publishers.
2. Dolezalova P, Young SP, Bacon PA, Southwood TR (2003) Nailfold capillary microscopy in healthy children and in childhood rheumatic diseases: a prospective single blind observational study. Annals of the Rheumatic Diseases, pp 444–449.
3. Tarjan Zs, Koo E, Toth P, Ujfalussy I (2001) Capillary microscopic examinations (kapillarmikroszkopos vizsgalatok). Magyar Reumatologia 42:207–211 (in Hungarian).
4. Page L, Brin S, Motwani R, Winograd T (1999) The PageRank citation ranking: bringing order to the web. Tech. Rep., Stanford University.
Author: Gabor Hamar
Institute: Budapest University of Technology and Economics
Street: Magyar tudosok korutja 2.
City: Budapest
Country: Hungary
Email: [email protected]
Measuring Red Blood Cell Velocity with a Keyhole Tracking Algorithm

C.C. Reyes-Aldasoro, S. Akerman and G.M. Tozer

Cancer Research UK Tumour Microcirculation Group, Academic Unit of Surgical Oncology, Royal Hallamshire Hospital, The University of Sheffield, Sheffield, S10 2JF, U.K.

Abstract— A tracking algorithm is proposed to measure the velocity of red blood cells traveling through microvessels of tumors growing in skin flaps implanted on mice. The tracking is based on a keyhole model that describes the probable movement of a segmented cell between contiguous frames in a video sequence. When a history of movements exists, the past and present positions and a predicted landing position define two regions of probability with a keyhole shape. This keyhole is used to determine whether cells in contiguous frames should be linked to form tracks. Pre-processing segments cells from the background, and post-processing joins tracks and discards links that could have been formed due to noise or uncertainty. The algorithm presents several advantages over traditional methods such as kymographs or particle image velocimetry: manual intervention is restricted to the thresholding, several vessels can be analyzed simultaneously, the algorithm is robust to noise, and a wealth of statistical measures can be obtained. Two tumors with different geometries were analyzed; average velocities were 211±136 μm/s (mean±std) with a range of 15.9–797 μm/s, and 89±62 μm/s with a range of 5.5–300 μm/s, respectively, which is consistent with previous results in the literature. Keywords— Red Blood Cell Tracking, Blood Velocity
I. INTRODUCTION

The analysis of red blood cell (RBC) velocity is of interest in different areas such as cochlear blood flow [1], cerebral microvessels [2] and tumor vasculature [3]. Despite its importance, the off-line measurement of velocity has been restricted to 1D or 2D cross-correlation or even manual measurements of distances over a screen [3]. Particle Image Velocimetry (PIV) [4] relies on the 2D cross-correlation between small windows of interest within an image, which observe the relative movement of the intensities inside the window between frames. This analysis is restricted to simple geometries, like a single vessel or at most a branching point, since more complicated geometries could yield incorrect results due to aliasing or other artifacts. Kymographs [5] (sometimes called space-time images) rely on 1D cross-correlation of manually traced lines over an image in consecutive frames. This analysis is thus restricted to a single straight line and does not consider orientations, only relative movement.
In this paper, tracking is understood as tracing the course or 2D movements of individual RBCs from frame to frame. For this purpose, RBCs need to be segmented and their positions identified. We then propose a tracking algorithm based on a keyhole model that describes the movement of RBCs traveling within the vasculature of tumors and links RBCs in contiguous frames to form tracks that span the analyzed frames. The algorithm requires minimal user intervention and is capable of analyzing complex vascular networks.

II. MATERIALS AND METHODS

A. Window chambers and RBC labeling
Window chambers were implanted on male SCID mice (12-16 week-old, 28-32 g) under general anesthesia using i.p. injection of fentanyl citrate (0.8 mg kg-1) and fluanisone (10 mg kg-1; Hypnorm) and midazolam (5 mg kg-1; Hypnovel). Surgical procedures are described in [6]. Donor red blood cells were obtained by cardiac puncture from anaesthetized male SCID mice into a heparinised syringe and labeled with the fluorescent dye DiI (Molecular Probes, Cambridge Biosciences, UK). The labeling method follows [7].

B. Intravital Microscopy
Intravital microscopy was carried out with an inverted Nikon Eclipse E600FN fluorescence microscope with a x2.5 zoom. The microscope was set up to view the tumor preparations under epi-fluorescence illumination, using a 100 W mercury arc lamp, for measurement of red blood cell velocity. Fluorescence was set up to excite at 550 nm and detect the emission at 565 nm from the labeled red blood cells using a custom-made fluorescence cube (Nikon, UK). Video observations were recorded in digital format using a Sony DSR-30P digital videocassette recorder at 25 fps.

C. Description of the tracking algorithm
The tracking algorithm consisted of three main steps: pre-processing, which transformed the acquired videos into a sequence of suitable binary images containing segmented objects; tracking, which consisted of determining parent-child relationships between objects in contiguous frames; and post-processing, which eliminated over-splitting of tracks and removed links that could have resulted from noise or uncertainty.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 810–813, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Fig. 1 Pre-processing of the images from input frames to binary images. (a) Sample frame where several RBCs can be identified together with labels, noise and artifacts. (b) Mean image. (c) Pixel-to-pixel subtraction of (a) and (b). (d) Thresholded binary image of (c); 10 objects can be identified.
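The pre-processing pipeline illustrated in Figure 1 (mean image, subtraction, quad-tree reduction and thresholding, as described in the text) can be sketched in a few lines. This is an illustrative reconstruction under our own naming, not the authors' code; the threshold is the single user-supplied parameter.

```python
import numpy as np

def preprocess(frames, threshold):
    """Segment bright moving objects (RBCs) from a video sequence.

    frames: array of shape (n_frames, rows, cols) of grayscale intensities.
    threshold: manually chosen intensity cutoff (the only user input).
    Returns a binary array of shape (n_frames, rows // 2, cols // 2).
    """
    frames = frames.astype(float)
    # Mean image: average every pixel over all frames; static structures
    # (labels, illumination inhomogeneity) survive, moving cells blur out.
    mean_image = frames.mean(axis=0)
    # Subtracting the mean image removes most static artifacts.
    residual = frames - mean_image
    # One quad-tree level: average each 2x2 block into one pixel,
    # halving both dimensions and smoothing locally.
    n, r, c = residual.shape
    residual = residual[:, : r // 2 * 2, : c // 2 * 2]
    reduced = residual.reshape(n, r // 2, 2, c // 2, 2).mean(axis=(2, 4))
    # Threshold to obtain binary images of candidate cells.
    return reduced > threshold
```

The resulting binary frames would then be labeled (e.g. by connected components) and the object centroids extracted, as the paper describes.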
Fig. 2 RBC keyhole movement model. (a) It is assumed that between consecutive frames a RBC can move in any direction, any distance. (b) Without movement history, the only possible assumption is that its landing position will be within a circular region. (c, d) A predicted position is made assuming constant velocity and direction; this creates two probable regions, a wedge (c) and a circle (d), which when combined resemble a keyhole.

First, it was necessary to remove artifacts such as intensity inhomogeneity due to the acquisition process, noise, and the labels that had been superimposed on the images. A simple yet powerful way of removing these artifacts is using a mean image [8], which was obtained by averaging the intensity values of every pixel over all the frames to be analyzed. This mean image was then subtracted from every frame, removing most of the artifacts. To reduce the computational complexity and smooth the resulting images, a standard Quad Tree [9] averaging was performed. A quad tree averages the intensities of 4 neighboring pixels into a single pixel in a new image whose dimensions in rows and columns are half of the original. Besides the size reduction, local smoothing is performed. Next, a suitable threshold was selected to segment the objects (that is, RBCs) from the background; this is the only manual intervention required from the user (Figure 1). Once the binary images were obtained, they were labeled, that is, a unique label was assigned to each object. Finally, the centroids of the objects were obtained, together with the distances separating them from their neighbors, if any. We propose a keyhole model to perform the tracking of the RBCs. The model arose from the movements of RBCs:
the most probable step for a RBC that moved from frame t-1 to frame t is to follow the direction of the previous steps, with the same velocity, into frame t+1. If we assume that a child RBC will move with exactly the same direction and velocity as its parent, we can predict its landing position in the next frame. Of course, this would not cover changes in speed, turns in vessels or even simple movements within a wide vessel. We therefore defined two regions of probability: a narrow wedge (60° wide) oriented towards the predicted landing position, and a truncated circle (300°) that complements the wedge; together they resemble a keyhole (Figure 2). The radius of the wedge is longer (3 x the parent-child distance) than that of the circle (1 x the parent-child distance) to capture objects that increased speed. The circle in turn captures RBCs that changed direction, but only those that are relatively close to the parent. In this way, parent-child relationships are restricted to objects that are relatively close to their parents or that follow the previous movements. This model can only be assumed if there is a previous history of movements of the RBCs; otherwise the only assumption is that the predicted landing position of an object will be within a certain distance of the former object, that is, a circular region centered at the parent. This of course introduces uncertainty in the relationships assigned, but this will be tested later in the post-processing.

__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________

Once all segmented RBCs have been examined for possible parent-child relationships, a reduced number of them will have formed a series of tracks of different lengths. Post-processing consists of three steps: analysis of the first link, linking of disjoint tracks and removal of short tracks. First, for every track, the first RBC will have been assigned as a parent without any previous history of movements, so it is possible that it was incorrectly assigned to the track. A simple way of ensuring that the top RBC (time t) does belong to the track is to analyze the movement backwards, that is, apply the same keyhole model using the child (t+1) and grandchild (t+2) to generate a keyhole. If the top RBC lands inside the keyhole, it remains part of the track; otherwise it is removed. Next, in some cases, perhaps due to noise or incorrect segmentations, the path of a single RBC that should form 1 track can be split into 2. These tracks can be linked with a backwards analysis in a similar way as explained before: for every existing track, generate a keyhole with its top two RBCs; if the last node of another track lands within the keyhole, then link the tracks. The last post-processing step is to remove short tracks: tracks that have more than 3 RBCs are retained and the rest are removed under the assumption that they may have been generated by noise of the segmentation process.
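The keyhole test described above could be sketched as follows. This is our illustrative reconstruction (function and variable names are ours): a 60° wedge of radius 3d aimed along the previous step, plus a complementary circle of radius d around the parent, where d is the previous parent-child step length.

```python
import math

def inside_keyhole(grandparent, parent, candidate):
    """Decide whether `candidate` (frame t+1) may be the child of
    `parent` (frame t), given `grandparent` (frame t-1).

    Points are (x, y) tuples. Implements the keyhole model:
    - a 60-degree wedge, radius 3*d, aimed along the previous step;
    - a complementary truncated circle, radius d, centered at the parent;
    where d is the distance from grandparent to parent.
    """
    dx, dy = parent[0] - grandparent[0], parent[1] - grandparent[1]
    d = math.hypot(dx, dy)
    cx, cy = candidate[0] - parent[0], candidate[1] - parent[1]
    dist = math.hypot(cx, cy)
    if d == 0:
        # Stationary history: fall back to a small circle (our choice).
        return dist <= 1.0
    # Angle between the previous step and the parent-to-candidate vector.
    angle = abs(math.atan2(cy, cx) - math.atan2(dy, dx))
    angle = min(angle, 2 * math.pi - angle)
    if angle <= math.radians(30):      # inside the 60-degree wedge
        return dist <= 3 * d
    return dist <= d                   # inside the truncated circle
```

For example, for a RBC that moved from (0, 0) to (10, 0), a candidate at (35, 0) lies inside the wedge (distance 25 < 30), while one at (0, 15) falls outside both regions.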
Fig. 3 Tracks obtained for two different tumors. Each individual RBC track is presented as a line with colors representing the velocities. It can be seen that the velocity in some vessels is consistently faster (red) than in others (blue). In (a) it is easy to notice how some vessels carry RBCs that travel much faster than in others, while in (b) the RBCs travel slower before the branching point and then accelerate in the separate branches.
III. RESULTS

The velocity of RBCs through two different tumors was analyzed; a series of tracks with their corresponding average velocities is presented in Figure 3. The vasculature can be observed clearly from the tracks, together with the varying velocities of the RBCs that travel through the tumors. In Figure 3 (a) it can be seen that there are several rather straight and narrow vessels with few curves, while in Figure 3 (b) the majority of the RBCs travel through a wider vessel that reaches a branching point and then follow two different paths. At this branching point, the velocity of the RBCs increased significantly. In 3 (a) the velocity of different vessels can be quite different: a "fast" vessel can carry RBCs that travel at 500-700 [μm/s] (long straight vessel on the left), while a "slow" vessel can have RBCs moving at 15-120 [μm/s] ("J"-shaped vessel in the top center). In 3 (b) the RBCs that travel through the main vessel present velocities in the range 170-300 [μm/s]. Since the tracks can have different lengths, spanning from 4 to 172 frames, the mean value was obtained as a weighted average of the track velocities by their lengths:

E(x) = Σ_i [velocity(i) × length(i)] / Σ_i length(i)

where E(x) is the expected value of the velocity, or its mean value, and i indexes the tracks; the standard deviation is std = √(E(x²) − E(x)²). The final velocity results (mean±std) for the two tumors are 211±136 [μm/s] with a range 15.9-797 [μm/s], and 89±62 [μm/s] with a range 5.5-300 [μm/s], respectively. These results are consistent with previous reports in the literature [3].
A further advantage of the tracking algorithm is that the tracks are inherently 3D vectors [rows x columns x time], and therefore they can be plotted from different angles, which can reveal information that is not visible in a traditional 2D time projection like the ones in Figure 3. In Figure 4, the tracks of the same tumors are presented with different view angles. First, in 4 (a) the tracks of the first tumor are presented in a "lateral" projection, where the vertical axis represents time going upwards and the horizontal axis represents the rows; this 2D plot is projected onto a column plane. The time activity of the RBCs is highlighted in this view: slow tracks have a higher slope than the faster tracks, which tend to be horizontal. There is even one RBC that seems to be trapped in its position and is represented by a vertical track that starts around second 8 on the left-hand side of the graph. In Figure 4 (b) we have selected a few tracks for clarity. It is now easier to distinguish the paths of the RBCs, most of which travel left-to-right, which would correspond to down-up in Figure 3 (a). Notice the RBC
that changes direction in the middle of its path. Figure 4 (c) presents the tracks of the second tumor in a 3D plot where rows and columns form a base plane and time goes upwards. The majority of the RBCs travel through a wide vessel that then branches left and right. In Figure 3 (b) all these tracks appear stacked on top of each other and it is hard to distinguish their paths. Some of the tracks on the left branch change direction very abruptly. The tracks on the right-hand side are slower than those in the center. Finally, since the movement between every frame is recorded for each RBC, it is possible to obtain a wealth of information, such as angle and distance per frame, cumulative distance or distance from origin, not just average velocity.

Fig. 4 Tracks from 2 tumors with different observation angles. While the tracks in Figure 3 are projected in time, tracks in (a) are projected onto a column plane; faster tracks have lower slopes than slower tracks. (b) A reduced number of tracks from (a); notice the track that changes direction in the middle of its path. (c) Tracks presented in 3D with x-y dimensions (rows and columns) together with time on the z-axis.

IV. CONCLUSIONS

A tracking algorithm has been presented. The algorithm relies on a keyhole model that describes the probable movement of a red blood cell (RBC) through the vasculature of tumors. The algorithm requires minimal user intervention and is able to track RBCs simultaneously in several straight or tortuous vessels without the use of cross-correlation. The results provide a wealth of information describing the movement of the RBCs through the vasculature, not just the traditional mean and standard deviation of the velocity. A general impression of the geometry of the tumor vessels can also be obtained. The algorithm includes a series of noise reduction steps that provide better results.

REFERENCES
1. Prazma J, Carrasco VN, Garrett CG, Pillsbury HC (1989) Measurement of cochlear blood flow: intravital fluorescence microscopy. Hear Res 42(2-3):229-36
2. Tsukada K, Sekizuka E, Oshio C, Tsujioka K, Minamitani H (2004) Red blood cell velocity and oxygen tension measurement in cerebral microvessels by double-wavelength photoexcitation. J Appl Physiol 96(4):1561-8
3. Tozer GM, Prise VE, Wilson J, Cemazar M, Shan S, Dewhirst MW, Barber PR, Vojnovic B, Chaplin DJ (2001) Mechanisms associated with tumor vascular shut-down induced by combretastatin A-4 phosphate: intravital microscopy and measurement of vascular permeability. Cancer Res 61:6413-6422
4. Sugii Y, Nishio S, Okamoto K (2002) In vivo PIV measurement of red blood cell velocity field in microvessels considering mesentery motion. Physiol Meas 23(2):403-16
5. Waterman-Storer CM, Desai A, Bulinski JC, Salmon ED (1998) Fluorescent speckle microscopy, a method to visualize the dynamics of protein assemblies in living cells. Curr Biol 8(22):1227-30
6. Papenfuss HD, Gross JF, Intaglietta M, Treese FA (1979) A transparent access chamber for the rat dorsal skin fold. Microvasc Res 18:311-318
7. Unthank J, Lash J, Nixon J, Sidner R, Bohlen H (1993) Evaluation of carbocyanine-labeled erythrocytes for microvascular measurements. Microvasc Res 45:193-210
8. Japee SA, Ellis CG, Pittman RN (2004) Flow visualization tools for image analysis of capillary networks. Microcirculation 11(1):39-54
9. Gaede V, Günther O (1998) Multidimensional access methods. ACM Computing Surveys 30(2):170-231

Author: Constantino Carlos Reyes-Aldasoro
Institute: The University of Sheffield
Street: K Floor, Royal Hallamshire Hospital
City: Sheffield
Country: U.K.
Email: [email protected]
Methods for Automatic Honeycombing Detection in HRCT Images of the Lung
T. Zrimec1,2 and J. Wong2
1 Centre for Health Informatics, University of New South Wales, Sydney, Australia
2 School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
Abstract— Honeycombing in High-Resolution CT (HRCT) indicates the presence of a number of diseases involving fibrosis of the lung. Honeycombing is difficult to detect due to its textural and structural appearance, which changes with the progression of the diseases. Structure-based and texture-based methods, developed for detecting the honeycombing pattern, are presented and compared. Machine learning is used to generate rules for honeycomb detection using examples of its appearance in HRCT images, provided by radiologists. The effectiveness of each method was evaluated using cross validation on 16692 examples of regions with and without honeycombing from 42 images of 8 patients. Keywords— Lung HRCT, Computer Aided Diagnosis, Honeycombing, Disease Pattern Recognition.
I. INTRODUCTION

Honeycombing in High-Resolution Computed Tomography (HRCT) is an indicator of many diseases leading to end-stage pulmonary fibrosis. Pathologically, honeycombing is defined by the presence of small air-containing cystic spaces with thickened walls composed of dense fibrous tissue [1]. Visually in HRCT, the air spaces appear as roughly circular, dark patches, and the walls which surround these patches are white (see Fig. 1). As honeycomb cysts are usually clustered together, the pattern has the characteristic appearance of "honeycombing" (see Fig. 1). This pattern is produced by a limited number of diseases and is indicative of Idiopathic Pulmonary Fibrosis (60% to 70% of cases), asbestosis, sarcoidosis, and scleroderma [1]. Various automated detection algorithms have been developed to detect honeycombing together with other lung disease patterns. Uppaluri et al. [2] developed an adaptive multiple-feature method (AMFM) to assess 22 independent texture features to classify six different tissue patterns in a CT image. The features with the greatest ability to discriminate between the different patterns were chosen and a Bayesian classifier was trained using the selected features. The system was evaluated on six images and was compared with the manual classification done by three radiologists. Uchiyama et al. [3] proposed a scheme in which the lung was segmented and divided into 32x32 contiguous regions of interest (ROIs). Twelve features were calculated for each ROI: six features from an ROI of size 32x32 and six from an ROI of size 96x96. An Artificial Neural Network was used to classify each ROI. The system was able to automatically detect images containing abnormalities.
In most previous research, honeycombing was detected using texture-based pattern recognition. However, in the medical literature [1] honeycombed cysts are said to show a "reticular pattern". Reticulation occurs when there is thickening of the interstitial fibre network of the lung caused by a disease process. This results in an increase in reticular (linear or curvilinear) lung opacities as seen on HRCT (see Fig. 1). Consequently, we have explored alternative methods for detecting honeycombing, based also on its structure.
Fig. 1. Honeycombing pattern in an HRCT image (top and left) and normal bronchovascular structures that are similar to the honeycombing pattern (right).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 830–833, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
In this paper, we present two methods for detecting honeycombing. The first method generates potential regions with honeycombing by finding clusters of honeycomb-like cysts. The classification into honeycombing or nonhoneycombing is based on the image features calculated for those regions. The second method uses image features, calculated on a moving window, to represent the textural properties of the regions with and without honeycombing. Machine learning was used in both cases to automatically generate classification rules for recognizing honeycombing. The characteristics of each algorithm, their performance, as well as advantages and disadvantages are discussed.
II. DETECTION OF HONEYCOMBING

A. Image pre-processing and representation
The data for honeycombing detection come from a radiology practice. The HRCT lung protocol produces volume data with image resolution 512x512 and slice thickness 1.0 mm. The data are stored as DICOM 16-bit grayscale images, with the pixel intensity proportional to tissue density. The first step in automatic disease pattern detection is to segment the lungs from the background. The lung boundaries were determined using adaptive thresholding to segment the darker regions in the image, which represent the air-filled lung. Morphological operators were used to include structures within the lung which have a high attenuation (appear brighter in HRCT images). Active contour snakes were used to generate smooth lung contours. To capture the texture of the lung parenchyma, many texture-based feature detectors were implemented [4]. The set of features includes first order and second order texture attributes, grey level differences and directional second order attributes. These features enable the system to capture both the global appearance of the diseased regions and local information. Shape and position were also used to describe cysts.

B. Automatic generation of rules for classification
Rules for classifying lung regions were built automatically using a machine learning algorithm trained on example regions containing honeycombing and regions without honeycombing. Examples with honeycombing were provided by radiologists, who used a specially developed tool for marking images. Figure 2 shows an example of an image with marked regions with different disease patterns. J48, the Weka [5] implementation of a decision tree-induction algorithm, was used for learning. The input to an inductive learning algorithm is a set of classified examples represented by a set of attributes. The result of learning is a classification tree in which the most informative attributes are used to determine the correct class.
Fig. 2 Regions with diseases marked by a radiologist.

C. Structure-Based Algorithm
This algorithm is based on detecting honeycomb cysts and their spatial arrangements. The cysts, which appear as dark circular patches in HRCT images, are detected by growing regions seeded from pixels with sufficiently low intensity. The regions are grown in eight equally spaced directions, and each region stops growing when the gradient is greater than a threshold value. A region is only a candidate cyst if it satisfies certain criteria, such as having a roughly circular shape and not being too large. This detection method was adapted from algorithms devised to detect similar-looking structures such as bronchi (see Fig. 1, bottom right). An early version of the algorithm was presented in [6]. The characteristic "honeycombing" appearance is the result of cystic airspaces occurring in clusters, so after detecting the presence of honeycomb cysts, the potential regions with honeycombing were detected using clustering. The clusters are formed using Euclidean distance and the cluster boundaries are created using active contour snakes. For each candidate region, the texture attributes as well as region attributes were calculated. To prepare training examples, the potential regions with honeycombing were then manually classified by trained radiologists into honeycombing and non-honeycombing. The attribute representations of the regions and the correct classes were used for learning.
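The seeded growth just described can be sketched as follows. This is an illustrative reconstruction under our own naming; the gradient threshold, maximum radius and the circularity criterion are placeholders, not the paper's actual values.

```python
import math

def grow_cyst_candidate(image, seed, grad_thresh, max_radius=20):
    """Grow a candidate cyst region from a dark seed pixel.

    From the seed, step outwards along 8 equally spaced directions and
    stop each ray when the intensity gradient exceeds grad_thresh
    (i.e. the ray hits the bright cyst wall). Returns the 8 ray lengths,
    or None if the region is rejected (too large, or far from circular).
    """
    rows, cols = len(image), len(image[0])
    radii = []
    for k in range(8):
        ang = k * math.pi / 4
        dr, dc = math.sin(ang), math.cos(ang)
        prev = image[seed[0]][seed[1]]
        r = 0
        while r < max_radius:
            y = int(round(seed[0] + (r + 1) * dr))
            x = int(round(seed[1] + (r + 1) * dc))
            if not (0 <= y < rows and 0 <= x < cols):
                break
            if abs(image[y][x] - prev) > grad_thresh:
                break  # gradient too steep: hit the cyst wall
            prev = image[y][x]
            r += 1
        radii.append(r)
    # Reject regions that grew too large or are far from circular
    # (illustrative criterion: longest ray at most twice the shortest).
    if max(radii) >= max_radius:
        return None
    if min(radii) == 0 or max(radii) / max(min(radii), 1) > 2.0:
        return None
    return radii
```

For a synthetic dark disk surrounded by a bright wall, all eight rays stop at the wall and the candidate is accepted; a seed in an open bright area grows without bound and is rejected.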
J48 produced rules for recognizing honeycombed and non-honeycombed regions (see Fig. 3).

Fig. 3 Potential and detected honeycombing regions with the structure-based method: top – clusters of potential honeycombing; bottom – classified regions (red – honeycombing, green – non-honeycombing).

D. Texture-Based Algorithm
Due to the visual appearance of honeycombing in HRCT, textural attributes were used to describe the pattern. In contrast to the existing texture-based approaches, we experimented with a much larger set of textural attributes and included regional attributes. Using a moving window, a set of attributes was calculated for each region of interest (ROI) within the lung. Regions of two different sizes, 7x7 pixels and 15x15 pixels, were used. This enables us to capture the characteristics of small and large honeycomb cysts. For each ROI, the set of first order texture features, second order texture features, and gray-level difference features was calculated. The first order texture features measure the gray-level distribution within the ROI. The specific features calculated were: mean HU, variance, skew, kurtosis, energy, and entropy. The second order features describe the spatial distribution of the gray-levels within these ROIs. To do this, a co-occurrence matrix was calculated, which specifies the frequency of a particular gray-level occurring near another gray-level. The calculated features were: energy, entropy, contrast, homogeneity, correlation and inverse difference moment. The gray-level difference features measure the distribution of the difference of pairs of gray-levels within the ROI. The co-occurrences of the gray-levels for four different directions, 0, 45, 90 and 135 degrees, were calculated. Descriptions of the gray-level difference distribution in the ROI were also calculated. As honeycombing occurs predominantly in the lung periphery, attributes for regional information, such as the proportion of the region that lies in the peripheral, intermedial and central region [3], were calculated. Each ROI was described by a vector with sixty-four features.
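As an illustration of the first-order features named above, a minimal computation over one ROI might look like this (a pure-Python sketch under our own naming; the paper's exact definitions and normalizations may differ):

```python
import math
from collections import Counter

def first_order_features(roi):
    """First-order texture features of a flat list of pixel values:
    mean, variance, skewness, kurtosis, and histogram energy/entropy."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    std = math.sqrt(var)
    # Standardized third and fourth moments (0 for a constant ROI).
    skew = (sum((v - mean) ** 3 for v in roi) / n) / std ** 3 if std else 0.0
    kurt = (sum((v - mean) ** 4 for v in roi) / n) / std ** 4 if std else 0.0
    # Energy and entropy of the gray-level histogram.
    counts = Counter(roi)
    probs = [c / n for c in counts.values()]
    energy = sum(p * p for p in probs)
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "variance": var, "skew": skew,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}
```

The second-order features would be computed analogously from a gray-level co-occurrence matrix rather than from the histogram.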
Fig. 4 An example of the results from the texture-based method, detecting honeycombing in the image from Fig. 1: top – original image; bottom – pixels classified as honeycombing (red – honeycombing cell wall, green – honeycombing cell).
Training examples for learning were generated from the honeycombing regions marked by radiologists. The decision tree classifier generated by J48 was applied to classify the pixels of each ROI within the lung. The results of the texture-based detection algorithm are shown in Figure 4.

III. RESULTS

The experiments were performed on forty-two images from eight patients, using tenfold cross-validation to evaluate each method. The structure-based approach has a high specificity (i.e. it correctly classified non-honeycombing pixels as non-honeycombing). On tenfold cross-validation this method showed 91.0% accuracy, 69.4% sensitivity and 94.5% specificity. The algorithm fails when the cysts are too small to be detected reliably or merge with other nearby cysts due to the partial volume effect. This occurs when a structure in the lung is too small to be resolved in HRCT images, causing pixels in the area to be blurred. In the case of honeycombing, this affected cyst detection. The results of the cell-based detection algorithm are shown in Figure 3. The texture-based approach has a high rate of honeycombing detection; however, the classifier also misclassified many non-honeycombing regions as honeycombing. After tenfold cross-validation the texture-based method showed 88.2% accuracy, 96.7% sensitivity and 86.8% specificity.

IV. CONCLUSIONS

We presented and compared two different methods for detection of the honeycombing pattern. The new structure-based method showed detection accuracy higher than the texture-based method. Since this algorithm can report the number of honeycomb cysts within a region, it is more useful in practice. However, the texture-based method is more appropriate for calculating the percentage of the lung affected by the disease, because it can detect smaller cysts, which are sometimes missed by the structure-based method. We believe that further improvement of the structure-based method and combining it with the texture-based approach can lead to better detection results. In the absence of common gold-standard data sets, it is very difficult to compare the results of our algorithms with the results reported in the literature; currently everyone uses different approaches and different test sets. Accurate detection and quantification of the presence of honeycombing can help radiologists in early disease detection. We are currently developing visualization tools for 3D display of a lung affected by a disease. We are also experimenting with combining both methods to achieve better results. An example of a 3D visualization of the detected honeycombing on a whole lung is shown in Fig. 5.

Fig. 5 A 3D visualization of detected honeycombing on a whole study; lung boundaries – blue, areas with honeycombing – red.
ACKNOWLEDGMENT We thank Peter Wilson and Michael Jones for their medical knowledge and assistance. This research was supported by the Australian Research Council.
REFERENCES
1. Webb WR, Muller NL, Naidich DP (2001) High-Resolution CT of the Lung, Lippincott Williams & Wilkins, Philadelphia, 3rd ed.
2. Uppaluri R, Hoffman EA, Sonka M, Hartley PG (1999) Computer Recognition of Regional Lung Disease Patterns, American Journal of Respiratory and Critical Care Medicine 160: 648–654
3. Uchiyama Y, Katsuragawa S, Abe H, Shiraishi J et al (2003) Quantitative computerized analysis of diffuse lung disease in high-resolution computed tomography, Medical Physics 30(9): 2440–2454
4. Haralick RM (1979) Statistical and Structural Approaches to Texture, Proceedings of the IEEE, Vol 67, pp. 786–804
5. Witten IH, Frank E (2005) Data Mining: Practical machine learning tools and techniques, Morgan Kaufmann, San Francisco, 2nd ed.
6. Wong SJ, Zrimec T (2006) Classification of Lung Disease Pattern Using Seeded Region Growing, AI 2006: Advances in Artificial Intelligence 4304/2006, pp. 233–242

Address of the corresponding author:
Author: Tatjana Zrimec
Institute: Centre for Health Informatics, UNSW
Street: Kensington
City: Sydney
Country: Australia
Email: [email protected]
Sampling Considerations and Resolution Enhancement in Ideal Planar Coded Aperture Nuclear Medicine Imaging
D.M. Starfield, D.M. Rubin and T. Marwala
School of Electrical & Information Engineering, University of the Witwatersrand, Johannesburg, South Africa

Abstract— Coded apertures have an advantage over collimators in nuclear medicine diagnostics, in that under certain conditions there is no trade-off between the resolution and the efficiency of a given system. A problem known as the 'partial volume effect' typically limits image resolution, and coded aperture sampling considerations show that the effect remains applicable to decoded coded aperture images. Finite sources or distributed objects, however, lessen the severity of the effect, and allow the system resolution to be enhanced by decreasing the dimensions of the transparent coded aperture elements, without affecting the open fraction of the aperture material. Computer simulation results for idealised infinitely thin and completely opaque coded apertures are presented. Discrete point sources are simulated, and a two-dimensional digital Shepp-Logan phantom is used to test the proposed methodology. The results show that the resolution is enhanced, and that a root-mean-square error measurement decreases from 28 % to 23 %. In the presence of increased point spread function blur, for a theoretical gamma camera having a sigma of 1.27 pixels, the same measurement decreases from 34 % to 25 %.

Keywords— Nuclear medicine imaging, Coded apertures, Partial volume effect, Sampling considerations, Resolution enhancement
I. INTRODUCTION

The collimators used in planar nuclear medicine imaging are limited by a trade-off, namely that resolution can be improved, but at the cost of efficiency. A second possible approach is to multiplex the gamma-rays by means of coded apertures, which consist of multiple transparent elements arrayed in an opaque material. Coded apertures have been applied successfully under the far-field conditions of astrophysics [1], and have the ability to improve the signal-to-noise ratio (SNR) of the system [2]. The near-field geometry of nuclear medicine causes artifacts to arise, but research has shown that these near-field artifacts can be limited in extent [3, 4]. More particularly, the inherent trade-off between the resolution and efficiency of collimators is less problematic with coded apertures. This paper examines the limitations that are imposed on coded aperture image resolution, and proposes a methodology by which the resolution for a given system can be increased.
II. METHOD The partial volume effect: Consider a point source that is projected through an infinitely small pinhole onto a perfect detector. If the projection is recorded by a single pixel of the detector, the representation will be correct. If the projection falls on the boundary between neighbouring pixels, the counts of radioactivity will be distributed equally between those pixels. The total number of counts remains unchanged, but the measured peak is no longer representative of reality. This problem is known as the ‘partial volume effect’ [5], and is related to the digitisation of an analogue signal. The solution is to increase the radius of the pinhole, such that the projection of the point source illuminates an area that corresponds to at least 2 x 2 pixels of the detector [6]. In this manner, one pixel is always fully illuminated, and the measured peak will be correct. Each of the multiple transparent elements of a coded aperture acts as an independent hole, with each casting a projection of the source onto the detector. As such, the partial volume effect is applicable to the recorded coded aperture image [6]. The situation with decoded images is less intuitive.

Coded aperture sampling considerations: In order to decode the overlapping projections, it is necessary to consider the situation from another perspective, namely that of each point of the source projecting the coded aperture pattern onto the detector [6]. Provided that the pattern has specific properties, the encoded data can be uniquely decoded. Image acquisition is theoretically modelled by convolution, and correlation with the original coded aperture pattern returns the image to intelligibility [6]. The coded aperture pattern is a discrete binary array [2]. The decoding procedure therefore operates not in the continuous domain, but rather on the sampled measured data.
This means that for a point source, a perfect reconstruction would be achieved if the projected coded aperture pattern were sampled as a single array of impulses or delta functions, all equal in amplitude, as indicated in Figure 1. The closest approximation to projecting a set of impulses would be with the use of an idealised infinitely thin but completely opaque coded aperture, having infinitely small pinholes.
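The encode-by-convolution, decode-by-correlation cycle can be sketched in one dimension with a uniformly redundant array built from quadratic residues. This is an illustrative analogue, not the authors' 2-D ray-tracing simulator; the prime p = 11, the source position, and all variable names are choices made here for the sketch:

```python
import numpy as np

p = 11                                              # prime, p ≡ 3 (mod 4)
qr = {(i * i) % p for i in range(1, p)}             # quadratic residues mod p
a = np.array([1.0 if i in qr else 0.0 for i in range(p)])  # binary aperture, a[0] = 0
g = 2 * a - 1                                       # balanced decoding array

obj = np.zeros(p)
obj[4] = 1.0                                        # point source at position 4

# Encoding: circular convolution of the object with the aperture pattern
enc = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(a)))

# Decoding: circular correlation of the encoded data with the decoding array
dec = np.array([sum(enc[(x + i) % p] * g[i] for i in range(p)) for x in range(p)])

# The aperture/decoder cross-correlation is (p+1)/2 at lag 0 and -1 elsewhere,
# so dec is a scaled copy of the object sitting on a flat pedestal.
```

For this array the decoded output is (p+1)/2 at the true source position and a constant −1 elsewhere, illustrating why correlation with a suitably chosen pattern "returns the image to intelligibility".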
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 806–809, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 An illustration of a projected array of impulses, all equal in amplitude, measured by specific pixels of the detector

Fig. 2 An illustration of idealised perfectly aligned aperture patterns projected onto a detector, for varying illumination areas: (a) 2 x 2 area; (b) 1 x 1 area; (c) < 1 x 1 area. The dotted lines represent detector pixel boundaries
Consider three parallel planes representing the source, the coded aperture, and the detector. If the grids representing the discrete points of the source, the pinholes of the coded aperture, and the pixels of the detector are all perfectly aligned, a perfect detector will measure the desired sets of impulses in the pattern of the coded aperture. If these three grids are not perfectly aligned, the shift causes the impulses to fall away from the centre of each pixel. The resultant interpolation is equivalent to the measurement of overlapping impulse patterns. The impulses of an individual pattern will ideally have the same amplitudes, but this amplitude is reduced as a result of the partial volume effect. If the infinitely small pinholes are replaced with transparent coded aperture elements, such that each element illuminates a 1 x 1 pixel area of the detector, blur is then increased, but the system becomes less susceptible to the partial volume effect. Regardless of the alignment of the three grids, the partial volume effect is completely removed by applying the technique of illuminating a 2 x 2 pixel area of the detector [6]. However, the increased area of the projection results in the measurement of neighbouring impulse patterns, and thus in further blurring of the reconstructed image. System limitations: Gamma camera pixel size clearly sets the first limit on image resolution – this being the closest spacing at which samples can be obtained. The partial volume effect sets the second limit, as its solution requires the illumination of an area corresponding to 2 x 2 pixels of the detector. The point spread function (PSF) of the detector further contributes to these limitations. Transparent coded aperture elements are designed with respect to the pixel size of a specific gamma camera. Suboptimal patterns can be remedied without affecting the open
fraction, as this remains constant for a given family of coded apertures. The dimensions of the transparent elements can be decreased for a higher resolution system, and the number of elements in the array can be increased, such that both the field-of-view and the open fraction of the material are maintained. There is no trade-off between resolution and imaging efficiency, provided that the illuminated area is not below that of a single detector pixel. At this sampling threshold the number of elements in the array can no longer be increased. The concept is illustrated in Figure 2. Apart from the partial volume effect, a coded aperture designed to illuminate a 1 x 1 pixel area of the detector gives a resolution that is optimal without compromising efficiency. Optimising resolution: Consider the scenario of a source having a finite spatial extent, with the system maintaining ideality in all other respects. A finite source increases the illuminated area, and assists with countering the partial volume effect – as would be the case for the effectively continuous objects that are imaged in nuclear medicine. While a grid representing discrete points of the source may be useful for computational purposes, multiple grids of varying shifts would be necessary in order to represent continuity. A realistic source coupled with an optimal coded aperture not only limits the partial volume effect, but also allows for the enhancement of system resolution. Testing: Validation of the theory was carried out by means of a ray-tracing computer simulator. The coded apertures were taken as being infinitely thin and completely opaque, with the aim of omitting the artifacts that are introduced by the use of realistic apertures. A perfect detector PSF was used, unless otherwise stated. Discrete point sources, both perfectly aligned and misaligned, were investigated for varying illumination areas of the detector, together with the simulation of distributed objects. 
A PSF was also applied to the encoded distributed images, so as to allow for testing of the methodology under the presence of increased blur.
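The worst-case partial volume behaviour described above can be sketched in one dimension by computing the overlap of a square illumination footprint with a unit pixel grid. This is a toy illustration under assumptions made here (footprint positions, grid size), not the authors' ray tracer:

```python
import numpy as np

def pixel_counts(center, width, n_pix=8):
    """Overlap of a 1-D illumination footprint [center - w/2, center + w/2]
    with each unit detector pixel (pixel k spans [k, k + 1))."""
    lo, hi = center - width / 2, center + width / 2
    edges = np.arange(n_pix + 1)
    return np.clip(np.minimum(hi, edges[1:]) - np.maximum(lo, edges[:-1]), 0, None)

aligned = pixel_counts(3.5, 1.0)   # 1 x 1 footprint centred on a pixel
shifted = pixel_counts(4.0, 1.0)   # 1 x 1 footprint on a pixel boundary (worst case)
wide    = pixel_counts(4.0, 2.0)   # 2 x 2-style footprint, same misalignment
```

The aligned 1 x 1 footprint yields a full-amplitude peak; shifting it onto a boundary halves the peak while conserving total counts (the partial volume effect); the doubled footprint always fully covers at least one pixel, so its peak is preserved regardless of alignment.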
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
III. RESULTS The results are based on near-field imaging conditions. Accorsi’s method for the reduction of near-field artifacts [3] was applied to all images. Point sources: Two digital point sources were positioned on the image diagonal. The source grid was perfectly aligned with respect to the rest of the system. A 1 x 1 area projection (Figure 3(a)) gives a sharper image than a 2 x 2 area projection (Figure 3(d)). The peak intensities are measured correctly in both cases. The worst-case partial volume effect is obtained by shifting only the upper source by half a pixel along both axes. The effect is clearly visible for a 1 x 1 area projection (Figure 3(b)). The peak of the shifted source remains unaffected for a 2 x 2 area projection (Figure 3(e)), but the peak of the stationary source is lower by comparison. Finite sources were represented by superimposing a second point source grid over the first, shifted one quarter of a pixel along both axes. The partial volume effect is less severe for a 1 x 1 area projection (Figure 3(c)), relative to Figure 3(b). For a 2 x 2 area projection (Figure 3(f)) the peak of the stationary source has increased, relative to Figure 3(e), but the blur remains. Distributed objects: A two-dimensional slice of the digital Shepp-Logan phantom [7] was used for the simulation of distributed objects (Figure 4). The phantom was represented computationally as a grid of point sources, shifted by half a
pixel along both axes for worst-case alignment. The results are quantified by means of a root-mean-square error (RMSE), which is computed over the entire image, and is based on the percentage by which pixels differ from the pixels of the phantom [8]. An aperture with infinitely small pinholes is not practical in terms of efficiency, but gives a reconstruction that is close to perfect (Figure 5). Table 1 summarises the results. With reference to the infinitely small holes, a 1 x 1 area projection is blurred (Figure 6), but gives a sharper image and a lower RMSE than a 2 x 2 area projection (Figure 7). A blurring PSF having σ = 1.27 pixels was then applied to the encoded images, prior to decoding. Although the blur makes any resolution improvement difficult to discern visually, a 1 x 1 area projection (Figure 8) gives a lower RMSE than a 2 x 2 area projection (Figure 9). IV. DISCUSSION The simulation results show that an idealised coded aperture having infinitely small pinholes, used in conjunction with a gamma camera having a perfect PSF, does not give a perfect image. The reasons are twofold. Firstly a worst-case alignment of the system grids was used. Secondly, a nearfield imaging geometry means that for a single point source, the projected impulse array will no longer have impulses of equal amplitudes [4]. This is one cause of near-field artifacts. Coded apertures having finite transparent elements make it possible to adjust image resolution without affecting system efficiency, provided that the elements illuminate an area that is not below that of a single detector pixel. This allows a 2 x 2 area projection to be replaced with a 1 x 1 area projection – a methodology that both enhances resolution, and reduces the RMSE measurement. Significant blur minimises the resolution improvement. Nevertheless, the RMSE indicates that a 1 x 1 area projection remains preferable. 
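A percentage root-mean-square error of this kind can be sketched as follows. The exact normalisation used in [8] is not spelled out here, so normalising by the phantom's peak value is an assumption of this sketch:

```python
import numpy as np

def rmse_percent(img, ref):
    """RMSE of img relative to the reference phantom ref, computed over the
    entire image and expressed as a percentage of the reference peak value."""
    scale = ref.max()
    return 100.0 * np.sqrt(np.mean(((img - ref) / scale) ** 2))
```

A perfect reconstruction gives 0 %, and a uniform 10 % shortfall in every pixel gives 10 %, matching the intuition behind the figures quoted in Table 1.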
For a distributed source it can be said that the partial volume effect has no significant influence on the results. Finally, it is recommended that the theory be validated by means of clinical gamma camera phantom studies.

Fig. 3 Simulation results for discrete point sources positioned on the image diagonal, both aligned and misaligned, together with finite sources: (a) point sources, aligned (1 x 1 area); (b) point sources, upper misaligned (1 x 1 area); (c) finite sources, upper misaligned (1 x 1 area); (d) point sources, aligned (2 x 2 area); (e) point sources, upper misaligned (2 x 2 area); (f) finite sources, upper misaligned (2 x 2 area)

Table 1 Summary of the change in RMSE for a distributed source, as a function of the transparent element projection area

Projection area     Gamma camera PSF    RMSE (%)
Infinitely small    Perfect             16
1 x 1               Perfect             23
2 x 2               Perfect             28
1 x 1               σ = 1.27            25
2 x 2               σ = 1.27            34
Fig. 4 2D digital Shepp-Logan phantom

Fig. 5 Perfect PSF (infinitely small area), RMSE of 16 %

Fig. 6 Perfect PSF (1 x 1 area), RMSE of 23 %

Fig. 7 Perfect PSF (2 x 2 area), RMSE of 28 %

Fig. 8 σ = 1.27 PSF (1 x 1 area), RMSE of 25 %

Fig. 9 σ = 1.27 PSF (2 x 2 area), RMSE of 34 %

V. CONCLUSIONS The trade-off between the resolution and efficiency of a nuclear medicine imaging system does not exist with coded apertures, provided that each transparent element of the aperture projects onto an area corresponding to at least one pixel of the detector. The solution to the ‘partial volume effect’, which is associated with the digitisation of an analogue signal, requires illuminating a 2 x 2 pixel area of the detector. The resolution of a coded aperture imaging system is therefore limited by the gamma camera pixel size, by the partial volume effect, and by the gamma camera point spread function. The simulation results presented in this paper have shown that resolution can be enhanced by illuminating a 1 x 1 pixel area of the detector. This has been quantified by a root-mean-square error measurement. Furthermore, the partial volume effect has less influence on sources of finite dimensions, and the results have shown no significant influence on distributed sources such as those imaged in nuclear medicine diagnostics.

REFERENCES
1. In 't Zand J (1996) Coded aperture imaging in high-energy astronomy. Laboratory for High Energy Astrophysics (LHEA) at http://lheawww.gsfc.nasa.gov/docs/cai/coded_intr.html. Last date of access: 30-03-2004
2. Accorsi R, Gasparini F, Lanza R (2001) A coded aperture for high-resolution nuclear medicine planar imaging with a conventional Anger camera: experimental results. IEEE Transactions on Nuclear Science 48(6):2411–2417
3. Accorsi R, Lanza R (2001) Near-field artifact reduction in planar coded aperture imaging. Applied Optics 40(26):4697–4705
4. Starfield DM, Rubin DM, Marwala T (2006) Near-field artifact reduction using realistic limited-field-of-view coded apertures in planar nuclear medicine imaging. IFMBE Proc. vol. 14, World Congress on Medical Physics and Biomedical Engineering, Seoul, South Korea, 2006, pp 1558–1561
5. Cherry SR, Sorenson JA, Phelps ME (2003) Physics in nuclear medicine, 3rd ed. Saunders, Philadelphia
6. Accorsi R (2001) Design of near-field coded aperture cameras for high-resolution medical and industrial gamma-ray imaging. PhD Thesis, Massachusetts Institute of Technology
7. Shepp LA, Logan BF (1974) The Fourier reconstruction of a head section. IEEE Transactions on Nuclear Science 21(3):21–43
8. Choi Y, Koo J-Y, Lee N-Y (2001) Image reconstruction using the wavelet transform for positron emission tomography. IEEE Transactions on Medical Imaging 20(11):1188–1193

Address of the corresponding author: D.M. Starfield, School of Electrical & Information Engineering, Private Bag 3, University of the Witwatersrand, Johannesburg, WITS 2050, South Africa
[email protected]
Stochastic Rank Correlation – A novel merit function for dual energy 2D/3D registration in image-modulated radiation therapy

W. Birkfellner

Center for Biomedical Engineering and Physics, Medical University Vienna, Vienna, Austria

Abstract— Image-modulated radiation therapy is a field of considerable interest for both the clinical and the research community. By using on-board imaging equipment, patient pose and tumor position can be monitored during the course of therapy. 2D/3D registration is a key technology for achieving this goal. In a recent study of FluoroCT/CT registration, we showed that conventional cross correlation (CC), together with repeated use of conventional local optimization algorithms, provides an optimal measure for slice-to-volume registration of monoenergetic CT imaging data. If the required linear relationship between corresponding pixel pairs is violated (e.g. by using X-rays of different energy, or by varying detector characteristics), CC becomes an unreliable measure of image similarity. A more general merit function such as normalized mutual information (NMI) serves better in such a case, but suffers from local minima caused by the sparse population of the joint histograms. We present a novel merit function for 2D/3D registration named stochastic rank correlation (SRC), which is well suited to intramodal dual-energy imaging. Here, the rank correlation coefficient is computed for pairs of X-ray (or portal) images and the corresponding digitally rendered radiographs. Since computing rank correlation requires an ordering of gray values in each iterative registration step, this approach by itself cannot be considered useful. We show that, by subsampling the image using a random selection of a small number of image pixels, a valid estimate of the rank correlation coefficient can be computed. A first evaluation of SRC is given on a set of simulated and clinical image data sets. In terms of accuracy, stability, numerical behaviour, and time requirements, SRC is a highly competitive new paradigm for 2D/3D registration. It is independent of varying grayscale content in intramodal images, and is therefore ideally suited for dual energy imaging in image-modulated radiation therapy.

Keywords— 2D/3D, Image Guided Therapy, Image Modulated Radiation Therapy, Image Registration.

Author: Wolfgang Birkfellner
Institute: Medical University Vienna
Street: Waehringer Guertel 18-20
City: Vienna
Country: Austria
Email:
[email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 834, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Texture Classification of Retinal Layers in Optical Coherence Tomography

M. Baroni¹, S. Diciotti¹, A. Evangelisti², P. Fortunato³ and A. La Torre³

¹ Department of Electronics & Telecommunications, University of Florence, Firenze, Italy
² Department of Systems and Computer Science, University of Florence, Firenze, Italy
³ Department of Oto-Neuro-Ophthalmological Surgical Sciences, University of Florence, Firenze, Italy

Abstract— This work investigates the ability of texture analysis to discriminate retinal tissue layers in the images provided by Optical Coherence Tomography (OCT). This relatively new imaging technology allows non-invasive visualization of the retinal layers. Their segmentation is a prerequisite for any computer method that aims to objectively extract valuable information regarding the condition and progression of disease and therapy. Since the regularities of biological tissue can be captured by texture analysis in a straightforward way, a computer approach based on co-occurrence matrices and artificial neural networks (ANN) is proposed for the classification and analysis of single retinal layers. A subset of ten normal eyes was used for the training phase, and another subset of ten normal eyes was used for testing the system performance. For the inner retinal layers, accuracy was 79%, specificity about 71% and sensitivity 87%. Slightly lower values were obtained for the outer retinal layers. These preliminary results suggest that this approach may be useful as a prototype system for the quantitative characterization of retinal tissue.

Keywords— Optical Coherence Tomography, Image segmentation, Texture analysis, Neural networks.
I. INTRODUCTION Optical Coherence Tomography (OCT) is a non-invasive, high resolution, high sensitivity technique [1, 2] that uses the interference patterns of low coherence laser light to image subsurface tissue structure, and that is particularly suited to transparent tissue such as the eye [3]. Interpretation of retinal OCT images is basically qualitative, with the exception of the measurement of retinal thickness and of other simple geometric features, such as the size of the optic nerve or the depth of macular holes. On the other hand, quantification of other aspects of the structure and status of retinal tissue can improve clinical applications, such as the discrimination among various grades of pathological severity, as well as the follow-up of therapeutic procedures. Intensity levels of the OCT signal represent different optical properties of retinal tissue (reflectivity); unfortunately, they do not always correspond to structural differences [3, 4]. At the same time, the structural status of cellular layers is apparent in the fine-scale organization of
grey levels of OCT images, which have a spatial resolution of about 10 μm. The regular spatial repetition of grey level patterns is usually referred to as texture. For this reason a classical method for texture analysis, the co-occurrence matrices [5], has been considered. In a previous work [6], retinal layers were segmented by a multistep edge detection computer system, based on dynamic programming, and were then quantitatively described with texture analysis. In this research, texture analysis is first performed on OCT images. Descriptors of the co-occurrence matrices of grey levels [5] have then been used to discriminate among the two main retinal layers and the background. These data have been given as input to an artificial neural network (ANN). The system has been trained by means of the segmentation given by an expert observer, and its performance has been tested. II. METHODS A. OCT imaging OCT is an imaging modality similar to echography that uses light instead of ultrasound. Infrared light (wavelength λ ∼ 800 nm, bandwidth Δλ ∼ 20 nm, power less than 1 mW) is not visible and so does not disturb patients during examination. It can be focused on the retina, and is partially absorbed and partially back-reflected by tissue interfaces having different optical properties. Instead of measuring time delays, which would be impossible at the speed of light, the distances of the reflecting interfaces are measured through the correlation between the backscattered light and another light beam reflected by a reference mirror at a known distance (low coherence interferometry) [2]. In Time Domain OCT instruments, the reference mirror moves at a constant speed, so that a reflectivity signal is measured along a line across the retina (axial scan). By deflecting the scanning beam laterally, a number of these A-scan lines are obtained, covering some millimetres on the retina. Finally, the A-scans are placed side by side to form an image matrix (B-scan).
Axial resolution (∼10 μm) is limited by the coherence length of a continuous laser, or by the pulse duration of a pulsed laser; whereas lateral resolution (∼20 μm) depends on both
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 847–850, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
light diffraction and beam focusing, just as in microscopy. Usually, the A-scan signal is over-sampled in order to achieve about 4 μm pixel spacing, based on the speed of the detector electronics and computer. In contrast, the lateral (longitudinal) pixel spacing depends on the length of the lateral scan, which can be chosen from 2 to 10 mm, as the number of A-scans is usually predetermined in order to limit the overall acquisition time (no more than 1 s) and hence to minimise motion artifacts. The intensity level of the light reflected by retinal tissue is very low; however, OCT has a very high sensitivity: the raw signal has a maximum level of –50 dB, with respect to the incident light power, and a minimum level of –95 dB. Because of such a large dynamic range, images are displayed with a logarithmic scale, and false colour is commonly used to better discriminate tissue details. A typical retinal cross-section obtained by OCT imaging is shown in Fig. 1. A-scans are aligned column-wise in rectangular bitmaps, and the retinal layers are clearly visible as longitudinal bright and dark bands. In fact, tissue reflectivity is strongly directional [7]: axial structures, like bipolar cells and photoreceptors, produce lower backscatter than longitudinal ones, such as the nerve fiber and plexiform layers. As the eye has a spherical geometry, the retinal layers towards the centre are usually called inner, whereas the others, towards the choroid, are called outer. As far as this work is concerned, only two retinal layers are considered: the inner retina (IR), enclosing the retinal nerve fiber layer, inner ganglion cells and inner plexiform layer, and the outer retina (OR), including the outer plexiform and photoreceptor layers (thicker in the fovea). In fact, they represent the most important features in clinical applications. Moreover, the available OCT image quality is not always sufficient to resolve the other, more subtle cellular layers.
Fig. 1 A retinal OCT image with the labels indicating the retinal layers as defined in this study. RNFL = Retinal Nerve Fiber Layer; IR = Inner Retina, including inner ganglion cells and inner plexiform layer; OR = Outer Retina, including outer plexiform and inner photoreceptor layers (thicker in fovea); HRC = Hyper Reflective Complex, including the Retinal Pigment Epithelium fused with Bruch’s membrane and the choriocapillaris.

A StratusOCT scanner (Carl Zeiss Meditec, California, USA) was used to perform OCT imaging, and 20 normal eyes were considered in this study. A pathological eye was also analysed. Horizontal B-scans of 6 mm length, across the fovea, were acquired with the regular line protocol, choosing the OCT image in which the foveal area is best delineated, according to the fundus image provided by the OCT instrument. B. Texture analysis OCT images are qualitatively similar to ultrasound images, and many processing methods build on that experience. In particular, for processing retinal images one must take into account that A-scans are acquired individually. This fact, and the multiplicative nature of speckle noise [8], makes the application of 2D isotropic filters problematic. On the other hand, simple 1D edge detectors cannot always yield coherent retinal boundaries [4, 6]. Various denoising, edge-detection and filtering techniques have been employed [9-11]. Specifically, anisotropic diffusion [12], as improved in [11], exhibits very attractive properties, such as smoothing parallel to the direction of retinal structures and enhancement perpendicular to them. However, rather than edge-based segmentation, region classification appears the more appropriate choice for identifying the two main retinal layers, IR and OR. Moreover, it is less sensitive to pathological changes in retinal structure, as in the case of fluid cysts, which show stronger edges than the retinal boundaries. Finally, image description in terms of texture features, which is our objective, is inherently region based. Various approaches have been proposed for describing texture [13-15]. In retinal OCT images, texture appears fine and grained, without well-defined texture primitives. Therefore, statistical approaches seem more appropriate than syntactic ones. In this work, both first- and second-order grey level statistics, estimated through the histogram and the co-occurrence matrices, were investigated. In the latter case, multi-scale analysis can easily be performed.
In fact, it is uncommon that only a single scale is appropriate all over the image. Each entry of a co-occurrence matrix is the frequency with which grey levels a and b occur in pixels p and q separated by a distance d in direction θ, so that a small set of such matrices must be obtained for each ROI of an image I(p_x, p_y):

C_{θ,d}(a, b) = |{(p, q) ∈ J : I(p_x, p_y) = a ∧ I(q_x, q_y) = b}| / N,

with J = {(p, q) ∈ ROI(θ) : p_x − q_x = d, p_y − q_y = d} and N the number of pixel pairs considered.
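A direct, unoptimised reading of this definition can be sketched as follows. It assumes the ROI has already been quantised to `levels` grey values, and takes the image y-axis as increasing downwards, so that θ = 0° is the longitudinal direction and θ = 90° the axial one; these conventions and the function name are choices of this sketch:

```python
import numpy as np

def glcm(roi, d, theta, levels):
    """Normalised grey-level co-occurrence matrix of an integer-valued ROI
    for pixel pairs at distance d along direction theta (degrees)."""
    dy = int(round(d * np.sin(np.radians(theta))))
    dx = int(round(d * np.cos(np.radians(theta))))
    C = np.zeros((levels, levels))
    h, w = roi.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:     # count only pairs inside the ROI
                C[roi[y, x], roi[y2, x2]] += 1
    n = C.sum()
    return C / n if n else C

roi = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 2]])
C_h = glcm(roi, 1, 0, 3)                        # horizontal pairs, d = 1
```

For this tiny ROI the six horizontal pairs are (0,0) twice, (0,1) twice and (2,2) twice, so the normalised matrix has 1/3 in each of those three cells.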
Fig. 2 Parametric images of horizontal (upper) and vertical (lower) correlation, scale 4, for a normal retinal OCT.
Fig. 3 Normalized values of texture correlation vs. 100 random ROI of IR (crosses), OR (circles) and background (dots).
They are computed for two scales (d chosen from 1 to 4 pixels) and for two directions (longitudinally, θ = 0°, and axially, θ = 90°). Another important issue is the choice of the size of the window (ROI) over which the co-occurrence matrices are computed: small windows may lose statistical significance, while large windows may include inhomogeneous tissue. Therefore, different sizes were used in our experiments (from 7x7 to 21x21). In order to capture relevant properties of texture, the following five parameters [5] were derived from the co-occurrence matrices: energy, entropy, contrast, inverse moment, and correlation. The texture correlation (Fig. 2), which is high in flat regions and low in structured texture, is defined as:
cor = ( Σ_{a,b} (a · b) C_{θ,d}(a, b) − μ_x μ_y ) / (σ_x σ_y),

with μ and σ the mean and standard deviation along the rows and columns of C. Finally, the mean and variance of the grey level histograms were also used as first-order texture parameters. In summary, vectors of 18 features, or fewer, were given as input to the classifier.
C. ANN classifier Among other classifiers, feed-forward ANNs [16] exhibit several advantages, as they can be easily trained. As already said, three classes were predefined: IR, OR and background. The ANN assigns a class membership to each pixel, based on its local texture features. To this aim, the retinal layers of our set of OCT images were manually segmented by a clinical operator; ten randomly chosen images were used as a training set, and the other ten as a test set. A random sampling of 100 pixels for each class and for each image was performed to obtain the feature vectors. Only the feed-forward ANN architecture was considered, with standard gradient descent back-propagation learning. Several simple modifications were tried in order to improve the training of the ANN. These include changes in the ANN parameters and different features given as inputs. The lowest training error and the best testing performance were achieved with a 7x7 window size and with a smaller set of features: grey level mean, and vertical and horizontal texture correlation, with d = 2 and 4. Feature selection was accomplished by simple inspection of parameter plots, like in Fig. 3. The final ANN has five input, seven hidden and three output neurons, and was trained with a 0.1 learning rate and 5000 iterations.

III. RESULTS A typical result of texture classification of a normal OCT image, belonging to the test set, is shown in Fig. 4. In both the training and test sets there were some discrepancies between the computed and manual outer boundaries of OR, whereas only small errors occurred for the other interfaces. Results for the three classes are shown in Table 1.

Fig. 4 A normal OCT (upper) and its ANN texture classification (lower): the texture classified as similar to IR is displayed in light grey; OR-like texture is in dark grey and background is black. The manually traced retinal boundaries are shown in white.
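A minimal numpy version of such a 5-7-3 feed-forward network with batch gradient descent back-propagation might look as follows. The synthetic, well-separated 5-feature data here stand in for the real texture vectors, and the initialisation and data parameters are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 well-separated classes, 100 samples each, 5 features
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 5)) for c in (-2.0, 0.0, 2.0)])
y = np.repeat(np.arange(3), 100)
T = np.eye(3)[y]                                   # one-hot targets

# 5-7-3 network: sigmoid hidden layer, softmax output
W1 = rng.normal(0.0, 0.5, (5, 7)); b1 = np.zeros(7)
W2 = rng.normal(0.0, 0.5, (7, 3)); b2 = np.zeros(3)

def forward(X):
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))       # hidden activations
    Z = H @ W2 + b2
    E = np.exp(Z - Z.max(axis=1, keepdims=True))   # numerically stable softmax
    return H, E / E.sum(axis=1, keepdims=True)

lr = 0.1                                           # learning rate, as in the paper
for _ in range(5000):                              # batch gradient descent
    H, P = forward(X)
    dZ = (P - T) / len(X)                          # softmax cross-entropy gradient
    gW2, gb2 = H.T @ dZ, dZ.sum(axis=0)
    dH = (dZ @ W2.T) * H * (1.0 - H)               # back-propagate through sigmoid
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = forward(X)[1].argmax(axis=1)                # training-set predictions
```

On data this cleanly separable the network reaches near-perfect training accuracy; the real texture vectors overlap far more, which is why the paper reports accuracies in the 70-90 % range.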
Texture classification of the OCT image of a pathological eye can be seen in Fig. 5, where edematous tissue in IR and OR is classified respectively as OR and background. This is a paradigmatic example, suggesting possible applications of the method.

Table 1 Results for the test set

              IR        OR        Background
Sensitivity   87.0 %    70.5 %    87.3 %
Specificity   71.4 %    74.9 %    98.5 %
Accuracy      79.2 %    72.7 %    92.9 %
REFERENCES

1. Huang D, Swanson EA, Lin CP, et al. (1991) Optical coherence tomography. Science 254:1178–1181
2. Brezinski M (2006) Optical Coherence Tomography: Principles and Applications. Academic Press, New York
3. Schuman JS, Puliafito C, Fujimoto JG (2004) Optical Coherence Tomography of Ocular Diseases. 2nd ed. Thorofare, NJ
4. Ray R, Stinnett SS, Jaffe GJ (2005) Evaluation of image artifact produced by optical coherence tomography of retinal pathology. Am J Ophthalmol 139:18–29
5. Haralick RM (1979) Statistical and structural approaches to texture. Proc IEEE 67:786–804
6. Baroni M, Fortunato P, La Torre A (2007) Towards quantitative analysis of retinal features in OCT. Med Eng Phys 29:432–441
7. Knighton RW, Huang XR (1999) Directional and spectral reflectance of the rat retinal nerve fiber layer. Invest Ophthalmol Vis Sci 40:639–647
8. Schmitt JM, Xiang SH, Yung KM (1999) Speckle in optical coherence tomography. J Biomed Optics 4:95–100
9. Koozekanani D, Boyer K, Roberts C (2001) Retinal thickness measurements from optical coherence tomography using a Markov boundary model. IEEE Trans Med Imaging 20:900–916
10. Shahidi M, Wang Z, Zelkha R (2005) Quantitative thickness measurement of retinal layers imaged by optical coherence tomography. Am J Ophthalmol 139:1056–1061
11. Cabrera Fernández D, Salinas HM, Puliafito CA (2005) Automated detection of retinal layer structures on optical coherence tomography images. Opt Express 13:10200–10216
12. Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell 12:629–639
13. Weszka JS, Dyer CR, Rosenfeld A (1976) A comparative study of texture measures for terrain classification. IEEE Trans Syst Man Cybern SMC-6
14. Wu C-M, Chen Y-C, Hsieh K-S (1992) Texture features for classification of ultrasonic liver images. IEEE Trans Med Imag 11:141–152
15. Ojala T, Pietikäinen M, Harwood D (1996) A comparative study of texture measures with classification based on feature distributions. Pattern Recogn 29:51–59
16. Bishop C (1995) Neural Networks for Pattern Recognition. Oxford Univ Press, New York
17. Bocchi L, Coppini G, De Domenicis R, Valli G (1997) Tissue characterization from X-ray images. Med Eng Phys 19:336–342
18. Binder T, Sussner M, Moertl D, et al. (1999) Artificial neural network and spatial temporal linking for automated endocardial contour detection on echocardiograms. Ultrasound Med Biol 25:1069–1076
Fig. 5 A pathologic retinal OCT, with macular edema and vitreo-retinal traction (upper), and its classification result (see legend of Fig. 4).
IV. DISCUSSION AND CONCLUSIONS ANN classification of texture parameters from co-occurrence matrices is not new [17, 18]; to our knowledge, however, it had not yet been applied to retinal OCT images. It is worth noting that the proposed method uses texture analysis to classify tissue status rather than to segment cellular layers. For retina segmentation, further refinement of the classified map would be necessary, through the integration of edge detection and topology information. In Fig. 5, for example, edematous areas can easily be merged into IR or OR regions. In conclusion, quantitative analysis of retinal layers may represent a promising tool for improved monitoring of patients, earlier detection of pathology, and more precise treatment protocols.
Address of the corresponding author:

Author: Maurizio Baroni
Institute: Department of Electronics & Telecommunications, University of Florence
Street: via S. Marta 3
City: Firenze
Country: Italy
Email:
[email protected]
Using Heuristics for the Lung Fields Segmentation in Chest Radiographs

D. Gados and G. Horvath
Budapest University of Technology and Economics, Department of Measurement and Information Systems, Budapest, Hungary
Abstract— Cancerous diseases, chief among them lung cancer, are a serious medical problem all over the world. Early diagnosis based on chest radiographs could notably lower mortality. The efficiency of computers makes it possible to facilitate the work of radiologists with a CAD system. First, however, the region of interest, i.e. the lung fields, must be determined. Lung segmentation in our sense differs from the trend in the literature (where the area hidden by the heart is ignored), because the left border of the left lung is located beneath the heart. In this paper we describe a method, based mainly on heuristics and rules, that can be used to find the contours of the lung. The algorithm is divided into five main steps: (1) finding some parameters of the lungs without long processing, (2) determining the usual lung contours, (3) finding the mediastinum, (4) finding the lower border of the left lung and (5) applying a model to achieve better results, as a refinement. Keywords— Lung, Chest, Radiographs, X-Ray, CAD system.
I. INTRODUCTION Chest radiography is the most frequently used method for medical examination and screening of the lungs. Although there are many other imaging techniques, chest radiography is relatively cheap and quite fast, so it will remain the most common procedure for the next several years. Over the past century the risk of tuberculosis notably drove the installation of chest screening centers all over the world. Thanks to these great efforts tuberculosis gradually receded, and the screening centers were closed in many countries. Since the 1990s, however, the number of tuberculosis infections has appreciably grown, and the number of lung cancers is increasing as well. Nowadays cancer accounts for 20–30 per cent of all deaths in developed countries; moreover, lung and bronchus cancer makes up only 12–13 per cent of cancer incidence, yet 30 per cent of cancer mortality. The problem culminates in Eastern Europe, especially in Hungary, which holds first place in lung cancer deaths among men in the world [1]. The cancer statistics call attention to the necessity of early diagnosis, which can be achieved by regular screenings. To cope with the remarkable number of patients, a CAD system is needed to reduce the workload of radiologists. Such a computer program should discover the structures located in the images and understand their meanings (i.e. lung fields, the ribcage, the clavicle, and many others), and it should contain image enhancement features (such as subtracting the ribcage or reducing the shadow of the heart); but first the region of interest (ROI) must be determined. In this case that means segmenting the lung fields, or in other words finding their contours. Several difficulties arise: large interpersonal anatomical variations can be noticed in the pictures; the tube voltage and the amount of inhaled air also have a remarkable effect on the images (for example, the visibility of the bones mainly depends on the former, while the latter affects the contrast of the lungs); moreover, chest radiographs are projection pictures, so anatomical objects are superimposed [2].

II. HEURISTICS AND RULES FOR SEGMENTATION

Rule-based approaches are popular in lung field segmentation. The main reason is that rules solving smaller parts of a difficult task can be combined in any order, giving notable freedom. Moreover, rules can easily express human knowledge about the problem. Of course there is a difficulty: the output of each step of the solution is the input of another, so errors may accumulate. Lung segmentation in our sense differs from the trend in the literature (where the area hidden by the heart is ignored), because the left border of the left lung is located beneath the heart (anatomically some part of the lung lies behind the heart, which cannot be observed in X-ray images without image processing). Thus in general the lung contours coincide with the usual contours, but the inner border of the left lung is much fainter than elsewhere. The heuristic which gives an acceptable approximation of the lung fields is organized as follows [3]:

• determination of some parameters of the picture and the lungs,
• approximation of the usual lung fields (as accounted for in the literature),
• finding the mediastinum, which gives the opportunity to find the left border of the left lung,
• finding the diaphragm, so that stomach gases can be excluded from the lung fields,
• combining the results to obtain the lung contours.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 802–805, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
First the area outside the body must be excluded. We may use a flood-fill algorithm from the picture borders with a threshold: all pixels from which the picture border can be reached only through pixels whose intensity is higher than the threshold are excluded. Now we can find the line of the backbone. Taking the expected value of each row (intensities are transformed into probabilities), we obtain an x-coordinate for each row. Smoothing these x-coordinates yields a continuous line which approximates the backbone. This approximation is not accurate enough, but the two lungs (i.e. the right and the left lung) are separated quite well. Sometimes the arm of the patient is visible too, so about 10% of the picture width on each of the left and right sides must be cut and ignored. Similarly we can find the lung centers, taking the expected value for the left and right parts of the picture (separated by the backbone mentioned above, with the area outside the body excluded). From these centers almost all points of the contours are visible (not all, because the lungs are concave); neglecting this, we can assign an angle and a distance to each contour point. We observed that the outer borders of the lung coincide with local minima: the intensities of these pixels are smaller than the intensities of points closer to the picture borders. So from the picture borders we can mark all points whose intensity is smaller than the previous ones. This is done horizontally, down from the top of the picture, and at 45 and 135 degree angles from the left and right sides of the image. To obtain a good heuristic which gives the inner borders too, a transformation of the picture is needed: from the vertical center line of the picture, toward both the left and the right, we add small but increasing values to the pixel intensities.
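The row-wise expected-value step for the backbone line can be sketched in NumPy as follows (a minimal illustration; the function name and the moving-average smoothing window are our own choices, not from the paper):

```python
import numpy as np

def backbone_line(img, smooth=15):
    """Approximate the backbone as the smoothed row-wise expected x-coordinate.

    Each row's intensities are normalised into a probability distribution;
    the expected value of the column index under that distribution gives
    one point of the line, which is then smoothed by a moving average.
    """
    h, w = img.shape
    row_sums = img.sum(axis=1, keepdims=True).astype(float)
    row_sums[row_sums == 0] = 1.0            # guard: empty rows
    probs = img / row_sums                   # per-row column probabilities
    expected_x = probs @ np.arange(w)        # expected x-coordinate per row
    kernel = np.ones(smooth) / smooth        # moving-average smoothing
    return np.convolve(expected_x, kernel, mode="same")
```

On a radiograph the bright vertical band of the spine dominates each row's distribution, so the expected x-coordinate tracks the backbone; the same computation, restricted to the left or right half, yields the lung centers described above.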
The points which were marked lie outside the lung fields. Fortunately we have two points which are guaranteed to be located in the lung fields: the lung centers. From these points we may run a flood-fill algorithm to register the points which were not marked as minimal before. In the obtained region there may be several “islands” (small areas located entirely in the lungs whose points were all marked as minimal). With the flood-fill algorithm we can set these areas to be parts of the lung. Using these rules we can obtain a first approximation of the lung fields (very similar to the task accounted for in the literature). The result is shown in Figure 1 (the picture is inverted). It is far from faultless. On the basis of this initial approximation, a better solution which gives the inner and lower borders of the left lung correctly can be made. Moreover, the error might be corrected by applying a model, which is described below.
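The flood-fill from a lung center can be sketched as a breadth-first traversal (a minimal sketch assuming 4-connectivity; the paper does not specify the connectivity used):

```python
from collections import deque

def flood_fill(marked, seed):
    """Collect the 4-connected region of unmarked pixels around a seed.

    `marked` is a 2-D list of booleans (True = pixel was marked as a
    local minimum / excluded); the returned set of (row, col) pairs
    approximates one lung field when seeded at a lung center.
    """
    h, w = len(marked), len(marked[0])
    region, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or (y, x) in region or marked[y][x]:
            continue
        region.add((y, x))
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region
```

The “islands” of the text are marked components that are not reachable from the picture border; a second pass over the complement of the returned region can re-label them as lung, as described above.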
Fig. 1 Points of the lung contours with the lower corners

The right border of the mediastinum coincides with the left border of the left lung. So if we can find the mediastinum, we can find the inner border of the left lung as well. The mediastinum is a dark, quite homogeneous area in the middle of the picture, so the histogram of the middle of the picture may be used. The height of the window equals the picture height, but its width is 40% of the image width. We look for an intensity which is higher than 35–55% of the pixel intensities in this window (40% is a good value for this parameter). From the lower end of the vertical center line of the picture, a flood-fill algorithm marks the pixels below the intensity mentioned above. These pixels approximately determine the mediastinum (see Figure 2). Its right border may then be found easily, and it gives the left border of the left lung.
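The percentile threshold over the central window can be sketched as follows (the window placement and the 40% default follow the description above; the function name is illustrative):

```python
import numpy as np

def mediastinum_threshold(img, width_frac=0.40, pct=40):
    """Intensity threshold for the mediastinum.

    Takes a full-height window centred horizontally, covering
    `width_frac` of the image width, and returns the `pct`-th
    percentile of its intensities (35-55% works; 40% is reported
    as a good value).
    """
    h, w = img.shape
    x0 = int(w * (1.0 - width_frac) / 2.0)
    window = img[:, x0:x0 + int(w * width_frac)]
    return np.percentile(window, pct)
```

The returned value is the intensity below which the flood-fill from the lower vertical center line marks mediastinum pixels.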
Fig. 2 The mediastinum
rules for the lower corners, which are the nearest pixels of the first approximation to the lower corners of the picture). Using these rules we can obtain the second approximation of the lungs (see Figure 4), which is nearly acceptable; of course some refinement should be done, as described in the following section.

III. APPLYING A LUNG MODEL

A model of the lung contours can be obtained using PCA (Principal Component Analysis) [5]. Let $\mathbf{x}_i$ describe the i-th object example (right or left lung in this case, traced by humans to obtain the model), which contains n x-coordinates and n y-coordinates, i.e. $\mathbf{x}_i = (x_1^{(i)}, y_1^{(i)}, \dots, x_n^{(i)}, y_n^{(i)})^T$. The covariance matrix of the examples is
Fig. 3 Classification to find the diaphragm
To find the diaphragm we can use a classification based on the contrast. The contrast between two regions is defined as [4]:

$$c = \gamma \frac{A - B}{A + B}, \qquad (1)$$
where A and B are the mean pixel intensities of the two regions, and γ is a suitable parameter. Using (1) we can evaluate the contrast at each pixel in the picture. In this case A and B are mean intensities along a line at a 135 degree angle (over some distance), the two regions being separated by the pixel actually evaluated. Applying a threshold to the results, we obtain a classification like that in Figure 3. From the dark classes we select the one which best approximates the diaphragm (this is done by
Fig. 4 The second approximation of the lung contours
$$S = \frac{1}{s-1} \sum_{i=1}^{s} (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^T, \qquad (2)$$

where s is the number of examples and $\bar{\mathbf{x}}$ is their mean. The eigenvectors of S belonging to the largest eigenvalues form the matrix

$$\Phi = [\varphi_1, \varphi_2, \dots, \varphi_t]^T, \quad t < n. \qquad (3)$$

Thus the vector describing the lung can be approximated as

$$\hat{\mathbf{x}} = A(\mathbf{x} - \bar{\mathbf{x}}) + \bar{\mathbf{x}}, \qquad (4)$$

where

$$A = \Phi^T \Phi, \qquad (5)$$

which is easier to evaluate than the original approximation rule, since no eigenvalues are taken into account. The result is shown in Figure 5.
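Eqs. (2)-(5) can be sketched in NumPy as follows (a minimal illustration under the shape-vector convention above; function and variable names are ours):

```python
import numpy as np

def fit_shape_model(X, t):
    """Fit the PCA shape model of Eqs. (2)-(5).

    X : (s, 2n) array, one training contour per row (x1, y1, ..., xn, yn).
    Returns the mean shape and the matrix A = Phi^T Phi, where the rows
    of Phi are the eigenvectors of the sample covariance matrix S that
    belong to its t largest eigenvalues.
    """
    x_mean = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # Eq. (2): unbiased, factor 1/(s-1)
    eigval, eigvec = np.linalg.eigh(S)       # eigenvalues in ascending order
    top = np.argsort(eigval)[::-1][:t]
    Phi = eigvec[:, top].T                   # Eq. (3): t leading eigenvectors as rows
    return x_mean, Phi.T @ Phi               # Eq. (5)

def approximate(x, x_mean, A):
    """Eq. (4): project a raw contour onto the model subspace."""
    return A @ (x - x_mean) + x_mean
```

Since A = ΦᵀΦ is an orthogonal projection onto the span of the leading eigenvectors, contours already lying in that subspace are reproduced exactly, while noisy contours are pulled toward plausible lung shapes.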
Fig. 5
IV. RESULTS
V. CONCLUSIONS
The model was made from 63 images, and the algorithm was tested on a set of 62 other pictures. The results (for lung classification correctness) are shown in Table 1, where A stands for Accuracy, Sn for Sensitivity, Sp for Specificity and Ω for the Overlap measure [2], [6]:

$$A = \frac{TP + TN}{TP + TN + FP + FN} \qquad (6)$$

$$Sn = \frac{TP}{TP + FN} \qquad (7)$$

$$Sp = \frac{TN}{TN + FP} \qquad (8)$$

$$\Omega = \frac{TP}{TP + FP + FN} \qquad (9)$$
We have proposed a new algorithm for lung field segmentation which can also find the lung border hidden by the heart (we recall that our definition of the lung contours differs from the one accounted for in the literature). The results are acceptable, but naturally some refinement should be done in the future.
REFERENCES
where TP is the number of true-positive, TN the number of true-negative, FP the number of false-positive and FN the number of false-negative pixels.
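Eqs. (6)-(9) translate directly to code; a minimal sketch from the four confusion counts:

```python
def segmentation_scores(tp, tn, fp, fn):
    """Pixel-wise classification scores of Eqs. (6)-(9)."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),  # Eq. (6)
        "sensitivity": tp / (tp + fn),                   # Eq. (7)
        "specificity": tn / (tn + fp),                   # Eq. (8)
        "overlap":     tp / (tp + fp + fn),              # Eq. (9)
    }
```

Note that the overlap measure Ω (Jaccard index) penalises both kinds of error in one number, which is why it is lower than accuracy in Table 1.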
Table 1 Results

Algorithm         Area         A      Sn     Sp     Ω
Heuristic         Left lung    96 %   90 %   98 %   82 %
                  Right lung   98 %   88 %   99 %   87 %
                  Both         95 %   90 %   97 %   85 %
With lung model   Left lung    97 %   85 %   99 %   83 %
                  Right lung   97 %   85 %   99 %   84 %
                  Both         94 %   85 %   99 %   83 %
1. Cancer Facts & Figures 2004, American Cancer Society, USA
2. Ginneken B van (2001) Computer-Aided Diagnosis in Chest Radiography. Ph.D. thesis, University Medical Center Utrecht, Wageningen
3. Gados D (2006) Mellkasröntgen felvételek elemzése (Tüdőárnyék körülhatárolása) [Analysis of chest X-ray images (delineation of the lung shadow)]. MSc thesis, Horvath G and Horvath A (advisors), Budapest University of Technology and Economics, Department of Measurement and Information Systems, Budapest
4. Sonka M, Hlavac V, Boyle R (1998) Image Processing, Analysis and Machine Vision. Brooks-Cole
5. Cootes TF, Taylor CJ (1999) Statistical Models of Appearance for Computer Vision. Technical report, University of Manchester, Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering, Manchester
6. Ginneken B van, Stegmann MB, Loog M (2006) Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database. Medical Image Analysis 10:19–40

Author: D. Gados
Institute: Budapest University of Technology and Economics, Department of Measurement and Information Systems
Street: Magyar Tudósok Körútja 2/A
City: Budapest
Country: Hungary
Email:
[email protected]
Web-based Visualization Interface for Knee Cartilage

C.-L. Poh 1,3, R.I. Kitney 1 and R.B.K. Shrestha 2
1 Imperial College London, London, UK
2 University of Southern California, Los Angeles, USA
3 Nanyang Technological University, Singapore
Abstract— Osteoarthritis (OA) of the knee can be described as the degradation and loss of articular cartilage. Adequate visualization of cartilage is paramount in allowing accurate and clinically meaningful assessment of cartilage surface morphology and thickness. In this paper we present a web-based user interface that allows the visualization of quantitative results (i.e., cartilage thickness) derived from MR knee images. The use of web-based technology has allowed greater access to the interface and clinically useful interactive functions for the viewing of data (i.e., cartilage thickness WearMap and MR images). Keywords— Osteoarthritis, Web-based, Visualization Interface, Cartilage Wear.
I. INTRODUCTION Osteoarthritis (OA) of the knee can be described as the degradation and loss of articular cartilage [1]. In order to carry out effective diagnosis and treatment, there is a need to understand the progression of cartilage loss and to study the effectiveness of therapeutic interventions. Adequate visualization of cartilage is paramount in allowing accurate assessment of cartilage morphology and thickness. Magnetic Resonance Imaging (MRI) allows multiplanar analysis of the knee joint anatomy, as well as of the cartilage and the status of the underlying bone. Because of the excellent soft tissue contrast that can be achieved by MRI, it is possible to visualize cartilage thickness and surface distribution in 2D and 3D, derived from image post-processing (e.g., segmentation) [2]. The use of an interface is important to allow data to be analyzed in a manner that can aid diagnosis. The use of web-based information and communication technology (ICT) is becoming increasingly important in medicine. Specifically, web-based technologies allow the clinician to gain universal access to a patient’s data in real time anywhere across the enterprise (e.g., the hospital) using PCs, Macs, etc. [3]. Consequently, web-based technologies (e.g. a universal web-browser and standard communication formats) will be used to access and view a wide range of data. In this paper we present a web-based user interface that allows the visualization of quantitative results (cartilage
thickness) derived from MR images. The interface was developed using web-based languages.

II. METHODS

A. Web-based Interface Design

A fully web-based interface was developed using Scalable Vector Graphics (SVG) and JavaScript. SVG is a recently introduced graphics file format based on XML [4]. It allows user interaction via JavaScript and inherits the features of XML (e.g. ease of storage and dissemination). Fig. 1 shows the design of the interface. The interface is divided into three main sections: the WearMap window, the Image window and the Navigator control. The 2D cartilage thickness map (i.e. the WearMap) is displayed in the WearMap window, and the MR images are displayed in the Image window. In order to provide browsing and zooming functions for the WearMap and the images, two Navigators are implemented. The original full view of the WearMap and of the images is displayed in each navigator, respectively. Each navigator has a rectangular dragger which serves as a navigation tool to view the image. By interactively moving the dragger with the mouse within the navigator, different regions of the image
Fig. 1. Layout of the web-based visualization interface (WearMap window, Image window and Navigator control).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 814–817, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
can be viewed. This method of viewing allows the user to zoom and examine details without losing the overall view of the image. Hence, the location of the enlarged image in relation to the original-size image is known. WearMaps were shown to be useful in visualizing cartilage thickness [2]. However, while viewing the WearMap there is a need to relate the WearMap information directly to the relevant original 2D MR images. The Trackback function was shown to be an effective tool for allowing any suspected region on the 2D WearMap to be immediately examined on the original MR image [2]. In our interface the Trackback function was implemented using a web-based language, i.e. JavaScript. This allows the function to be performed via a web-browser. Using the coordinates of the point of interest on the WearMap, the Trackback function displays the original MR image that corresponds to that point. This is performed by clicking the mouse at a point on the map. For example, a suspected local cartilage defect observed in the thickness map can be traced back to the corresponding cartilage surface points on the parent MR image. This method of viewing maintains geometric integrity between the WearMaps and the radiological data.

B. Creation of Cartilage Thickness WearMaps

Segmentation and measurement of the femoral articular cartilage are required in order to generate the WearMap from the MR images. The segmentation, measurement and creation of the WearMap were implemented in Matlab [5]. The femoral articular cartilage in the MR images is segmented using a semi-automated segmentation method developed in previous work [5]. The segmentation process uses a radii search threshold method. For images where the contrast of the cartilage is not well-defined, the results are manually edited to achieve satisfactory results. The thickness of the cartilage is then measured automatically using the segmented images.
Thickness measurements are made from the inner cartilage boundary (cartilage-subchondral bone interface) in a direction normal to the inner boundary toward the outer cartilage (cartilage-synovium interface) boundary. The points along the inner cartilage are detected at a 4° angular increment. Thickness values for all points are plotted as a contour plot, in terms of their angular and slice position, to outline the 2-D shape of the cartilage. The thickness values for each point are represented by a color, which is defined by a color scheme. A linear color scheme is generally used to represent the cartilage thickness [2, 6]. In our implementation we developed a log scale color scheme to allow better contrast between thin and normal
region of the cartilage. This enables the focus to be drawn to regions that are thin, which are represented by other colors. Cartilage thickness maps are generally generated using raster image formats, e.g., JPEG, PNG, TIFF, etc. However, raster image formats lose definition when the images are zoomed and limit the incorporation of interactive features. In order to overcome these problems, we generate the WearMap in SVG format. This allows interactive functions to be implemented using JavaScript. More importantly, the SVG format allows zooming of the images without loss of definition [4]. A Matlab program was written to export the WearMap generated in Matlab to SVG format. The WearMap is first calculated using the ‘imcontour’ function provided by Matlab. This function generates contours that correspond to different thickness values of the cartilage. In order to produce the WearMap, the SVG path data type is used to represent the contours generated by the ‘imcontour’ function (each path represents a contour). Hence, the SVG WearMap comprises a number of SVG paths that represent the thickness values of the cartilage.

C. WearMap Interactive Features

Because the WearMap is generated in SVG format, it is possible to add interactive functions to the WearMap. Two interactive functions are implemented: highlighting regions with the same thickness, and displaying the thickness value range for different locations of the WearMap.

1) Thickness Highlight: The ability to focus on a region of the WearMap, e.g., where it is thin, will likely aid the study of the cartilage WearMap. Hence, a Thickness Highlight function was implemented. This allows the regions of the WearMap within the same thickness range to be highlighted. This is achieved by changing the visibility of the contours.

2) Thickness Display: While studying the WearMap, there is often a need to know the value of the cartilage thickness at specific locations of the WearMap.
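The contour-to-SVG-path export can be sketched as follows (an illustrative Python sketch rather than the authors' Matlab program; the `data-thickness` attribute name is our assumption for how a thickness range might be coded per contour):

```python
def contours_to_svg(contours, width, height):
    """Serialise thickness contours as SVG <path> elements.

    `contours` maps a thickness-range label (e.g. "1.0-1.17 mm") to a
    list of (x, y) vertices of one contour; the label is stored in a
    data attribute so that a mouse-over handler can show the thickness
    range, as the Thickness Display function does.
    """
    paths = []
    for label, points in contours.items():
        # SVG path data: moveto the first vertex, lineto the rest
        d = "M " + " L ".join(f"{x:.1f} {y:.1f}" for x, y in points)
        paths.append(f'<path d="{d}" fill="none" stroke="black" '
                     f'data-thickness="{label}"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">\n'
            + "\n".join(paths) + "\n</svg>")
```

Because each thickness level is its own `<path>`, toggling a path's visibility implements Thickness Highlight, and reading its data attribute on mouse-over implements Thickness Display.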
Hence, the Thickness Display function, which displays the range of the cartilage thickness at specific locations of the WearMap, was developed. Because SVG is a graphics language based on XML, the thickness range for each contour is coded into the file. This enables the thickness value range at specific locations of the WearMap to be displayed by moving the mouse over the map.

D. 3D Visualization

3D models have been shown to be important in understanding the complex anatomy present in tomographic imaging data (e.g. MR knee images). 3D visualization of the
model removes the need for the user to mentally visualize the 3D anatomical structure from image slices. Hence, 3D visualization is incorporated into our interface. A 3D model of the cartilage is generated from the segmented MR images, using the 3D reconstruction program described in our previous work [7]. The program was developed using the Visualization Toolkit (VTK) [8]. The 3D surface model is reconstructed by means of a marching cubes algorithm and is generated in Virtual Reality Modeling Language (VRML) format. VRML is the standard for transmitting 3D content over the web. This allows the 3D model to be displayed by a standard web-browser with a VRML plug-in.
III. RESULTS A web-based interface was developed using SVG and JavaScript. The interface was viewed on a web-browser, i.e., Internet Explorer, with a 3rd party SVG plug-in, i.e. Adobe SVG viewer, on a standard personal computer. A set of MR knee images was acquired. The images and results (i.e. cartilage thickness WearMap) were successfully viewed with the interface. Using the Navigator function implemented for the interface, it is possible to pan and zoom the image/WearMap, whilst maintaining the overall view of the original image.
Fig. 2. Highlighting regions with the same thickness. (a) WearMap displaying all thicknesses. (b) WearMap displaying only the region with thickness between 1 and 1.17 mm.
A. SVG WearMaps

The MR knee images were segmented using the semi-automated method described in [5] to delineate the femoral articular cartilage. The thickness of the cartilage was measured from the segmented images. The WearMap was generated in the SVG format and interactive functionalities were implemented. Consequently, it is possible to highlight regions on the WearMap with the same thickness (i.e. the same color coding) using the Thickness Highlight function (see Fig. 2). This is performed by moving the mouse over the color bar, which makes only the region with the same color (i.e. thickness) visible. This approach assists the user in identifying regions of wear using the WearMap. The range of cartilage thickness at different locations of the WearMap was displayed by means of the Thickness Display function. This was achieved by coding the thickness range for each contour within the SVG file. Hence, by moving the mouse over the map, the thickness value range at different locations of the WearMap can be retrieved and displayed. This removes the need to refer back to the color bar to determine the thickness range for the region of interest.
There is often a need to be able to directly relate the WearMap information to the relevant 2D MR images of the joint. Hence, the Trackback function was implemented via JavaScript. This allows the interface to be fully web-based. The function allows the MR image which corresponds to a
Fig. 3. Viewing interface displaying the WearMap and the corresponding MR image using the Trackback function.
Web-based Visualization Interface for Knee Cartilage
scheme. This approach draws the user’s attention to regions where possible cartilage degeneration is occurring. This paper relates to a project on the development of advanced clinical information systems (CIS) [7]. These systems facilitate the storage, display and manipulation of multiple data types across the “Biological Continuum” (BC) – the continuum which comprises systems, viscera, tissue, cells, proteins and genes. Data at the various levels of the BC need to be handled and utilized in an integrated and efficient manner. The web-based interface presented in this paper was developed to visualize data at the visceral level of the BC.

Fig. 4. Screenshot showing the viewing interface displaying the 3D model of the cartilage, using the Cortona VRML plug-in.
ACKNOWLEDGMENT
location on the WearMap to be displayed in real time. This is performed by clicking on the WearMap with the mouse. Fig. 3 shows a screenshot of the interface displaying the WearMap with an MR image corresponding to a specific point on the WearMap. Referring to the figure, the 3D model of the cartilage can be visualized by clicking on the “view 3D” button of the interface (see Fig. 4).
The authors are grateful for the partial financial support of the EU Similar Network of Excellence in the execution of this project.
IV. DISCUSSIONS & CONCLUSIONS In this paper we have presented a fully web-based user interface which allows the viewing of quantitative results (cartilage thickness measurements) derived from MR images. The interface was implemented in SVG and JavaScript. As a result, it can be viewed in a web-browser with an SVG plug-in on a standard PC. The advantage of using a web-based interface is that the client workstation hardware can be platform independent, as long as the web-browser is supported. This allows greater access to the interface for the viewing of data. The Trackback function allows the user to reference back to the parent MR images. This method of viewing removes the need for the user to mentally visualize the exact location of the thickness measurement within the original MR image. In addition, geometric integrity between the WearMaps and the radiological data is maintained. Hence, the focal thickness on the cartilage surface can be ascertained using the WearMap and directly compared to the original MR images. An important feature of the WearMap is that it is possible to study the thickness of a 3D cartilage in one single view, as opposed to the current practice of viewing the MR images slice by slice. The use of the WearMap is further enhanced using a log scale color
REFERENCES

1. Pelletier J-P, Martel-Pelletier J (2003) Therapeutic targets in osteoarthritis: from today to tomorrow with new imaging technology. Ann Rheum Dis 62:79–82
2. Cashman PMM, Kitney RI, Gariba MA et al (2002) Automated techniques for visualization and mapping of articular cartilage in MR images of the osteoarthritic knee: a base technique for the assessment of microdamage and submicro damage. IEEE Transactions on Nanobioscience 1:42–51
3. Claesen S, Kitney RI, Shrestha RB et al (2003) Web based clinical information systems. Presented at IFMBE Proceedings WC2003 “World Congress on Medical Physics and Biomedical Engineering”, Sydney, Australia
4. Quint A (2003) Scalable vector graphics. IEEE Multimedia 10:99–102
5. Poh CL, Kitney RI (2005) Viewing interfaces for segmentation and measurement results. Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, pp 5132–5135
6. Cohen ZA, McCarthy DM, Kwak SD et al (1999) Knee cartilage topography, thickness, and contact areas from MRI: in-vitro calibration and in-vivo measurements. Osteoarthritis and Cartilage 7:95–109
7. Poh CL, Kitney RI, Shrestha RB (in press) Addressing the future of clinical information systems – web-based multi-layer visualization. IEEE Trans Inform Technol Biomed
8. Schroeder W, Martin K, Lorensen W (1998) The Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics, 2nd ed. Prentice Hall, NJ

Professor Richard I Kitney
Department of Bioengineering
Imperial College London
Exhibition Road
London SW7 2BX
[email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A Model of Flow Mechanical Properties of the Lung and Airways
B. Kuraszkiewicz1, T. Podsiadly-Marczykowska1 and M. Darowski1
1 Institute of Biocybernetics and Biomedical Engineering, PAS, Warsaw, Poland
Abstract— The paper describes a lung model which illustrates the pressure-volume-flow relationships in the lungs. The model includes three airway segments in series; the resistance of one of them is a function of transmural pressure and a constant related to airway compressibility. The model can be used to obtain IVPF curves and flow-volume curves, and various assumptions concerning the distribution of airway resistance, the magnitude of lung elastic recoil and other factors can be tested with it.
Fig. 1. Model of the lung and airway (Palv - alveolar pressure, Ppl - intrapleural pressure, PL - intrabronchial pressure; k1, k2, k3 - parameters of the segments).
Keywords— flow-volume curves, isovolume-pressure-flow curves (IVPF), a model of lung and airway, transmural pressure, airway segments
I. INTRODUCTION
Hyatt, Schilder and Fry described the experimental observation that, at constant lung volume, expiratory flow increases as the driving pressure is increased until a critical level is reached; a further increase in driving pressure does not result in an increase in expiratory flow [1,2,3]. These phenomena are demonstrated in isovolume pressure-flow (IVPF) curves [2,3].
The primary purpose of the present study is to see whether simple physical processes, which can be solved by model analysis, might explain a number of experimental observations on lung and airway dynamics in terms of the pressure-volume-flow relationships. Definitions and symbols used in the present study are:
Ppl – intrapleural pressure [cmH2O]
Palv – alveolar pressure [cmH2O]
Pst(l) – static recoil pressure of the lung [cmH2O]
PL – intrabronchial pressure in general [cmH2O]
Ptm – transmural pressure across the airway [cmH2O], Ptm = PL - Ppl
R – resistance of the airway [cmH2O/l/s]
Ptm’ – critical transmural pressure [cmH2O]
V – volume of the lung [l]
V̇ or Q – flow rate [l/s]
Palv’ – alveolar pressure at the moment that the airway narrows [cmH2O]
V’ – volume of the lung at the moment that the airway narrows [l]
A scheme of the model used in the analysis is presented in Fig. 1.
It consists of a single elastic lung with an airway connected to it. The assumptions made in this model are as follows:
1. Flow through the airway is assumed to be laminar under all conditions.
2. The airway is separated into three parts, as illustrated in Fig. 1. The distal segment (alveolar side) has a variable resistance to flow, which varies with the state of lung inflation. The intermediate segment has a negligible resistance until the transmural pressure across the airway at the end of the distal segment reaches a critical level (the critical transmural pressure, Ptm’). If the transmural pressure exceeds this critical level, the resistance of the segment increases as a linear function of Ptm. The proximal segment has a fixed resistance, which is independent of lung volume.
3. Elastic recoil pressure of the lung varies with lung volume.
4. Separation of the airway in the model has no anatomical basis and is only specified in terms of pressure in general.
The equations which describe the static and dynamic behaviour of the model are:

Ppl + Pst(l) = Palv                                                   (1)
Palv / (R1 + R2 + R3) = V̇                                             (2)
V’ = V - ∫ V̇ dt                                                       (3)
Pst(l) = f(V)                                                         (4)
R1 = k1 / V                                                           (5)
R2 = k2(Ptm), where k2 = 0 for Ptm ≤ Ptm’ and k2 > 0 for Ptm > Ptm’   (6)
R3 = k3                                                               (7)
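As a numerical illustration of Eqs. (1)-(7), the sketch below solves Eq. (2) for the flow at a fixed lung volume by bisection. The parameter values k1 = 6, k2 = 20, k3 = 0.5 and Ptm' = -5 cmH2O are those discussed later in the paper; the sign convention for Ptm and the value Pst = 10 cmH2O are assumptions made for the sketch:

```python
def ivpf_flow(palv, volume, k1=6.0, k2=20.0, k3=0.5, pst=10.0, ptm_crit=-5.0):
    """Flow (l/s) at a fixed lung volume for a given alveolar pressure (cmH2O).

    Implements Eqs. (2), (5)-(7): R1 = k1/V, R3 = k3, and R2 = 0 until
    the transmural pressure at the end of the distal segment exceeds
    Ptm', after which R2 grows linearly with slope k2.  Sign convention
    (an assumption of this sketch): the compressive transmural pressure
    is Ptm = flow*R1 - Pst, so narrowing starts once flow*R1 - Pst > Ptm'.
    """
    r1, r3 = k1 / volume, k3

    def total_pressure(q):              # driving pressure needed for flow q
        ptm = q * r1 - pst
        r2 = k2 * max(0.0, ptm - ptm_crit)
        return q * (r1 + r2 + r3)

    lo, hi = 0.0, palv / (r1 + r3)      # flow is bounded by the R2 = 0 case
    for _ in range(100):                # bisection: total_pressure rises with q
        mid = 0.5 * (lo + hi)
        if total_pressure(mid) < palv:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these numbers the computed IVPF curve is linear at low driving pressures and flattens into a near-plateau once the critical transmural pressure is reached; lowering k2 well below 20 visibly weakens the plateau, in line with the broken line of Fig. 2.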
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 871–874, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Indices 1, 2 and 3 denote the distal, intermediate and proximal segments, respectively. Parameter k1 is the reciprocal of the conductance-to-volume ratio in the distal segment. It was assumed that the linear relationship between airway conductance and lung volume, which has been reported by others, mainly reflects changes of airway conductance with the state of lung expansion in the small bronchi (the distal segment in the model). The value of k1 used in the present study was obtained from the literature [1,4]. The value of k2 cannot be obtained experimentally and is therefore assumed.
Fig. 3. Effects of increasing k3 on the IVPF curve.
II. STATIC ANALYSIS OF THE MODEL
The analysis of the model was carried out under static and dynamic conditions. The term “static” is used here for those conditions where time is neglected, and the term “dynamic” for those conditions where the time factor is introduced. The static analysis was carried out, in most cases, with at least one of the model parameters held constant. This avoids the complex graphic presentation of a three-dimensional diagram and also helps in understanding the physiological mechanisms of the pressure-flow-volume relationship. For instance, to obtain IVPF (isovolume-pressure-flow) curves, the lung volume was held constant. Figure 2 shows a typical IVPF diagram obtained from the model. Preliminary trials disclosed that, in order to obtain an IVPF curve consistent with the values of Qmax and Palv’ experimentally observed in normal persons with a normal time constant, the value of Ptm’ must be -5 cmH2O. However, such a simplified assumption need not be made. If other factors, such as traction forces in the surrounding tissues, elasticity, or smooth muscle tone, act on the airway wall, the actual transmural pressure at the locus of airway narrowing cannot be specified. In the model which has been considered, this Ptm’ was assumed to remain unchanged at different lung volumes; a constant value has been used in the following experiments. The dotted line in Fig. 2 is the IVPF curve obtained from the model if Ptm’ is set near zero. In this case, the observed values of V̇max (=Qmax) and Palv’ were both too low. If flow through the airway is assumed to be turbulent, the pressure-flow relationship is curvilinear until Qmax is reached. The value of Qmax was slightly lower and the value of Palv’ higher than with the assumption of laminar flow. Preliminary trials also disclosed that the value of k2 must be more than 20. If a value less than 20 is used, the IVPF curve does not show any plateau, as shown by the broken line in Figure 2. This may suggest that in the model, 1 cmH2O of positive transmural pressure is associated with an increase of more than 20 cmH2O/l/s resistance in the intermediate segment. The effect of increasing downstream resistance (k3) was also tested; the results are shown in Figure 3. There was no
Fig. 2. Effects of Ptm’ and k2 on the IVPF curve.
Fig. 4. IVPF curves at different lung volumes. A: normal; k1=6, Pst(l) =20cmH2O at TLC (total lung capacity). B: obstructed; k1=12, Pst(l) =10cmH2O at TLC.
Fig. 5. A: Relationship between the alveolar pressure at the moment that the airway narrows (Palv’) and the upstream resistance (k1) at constant lung volume. B: Relationship between the lung volume at the moment that the airway narrows (V’) and the upstream resistance (k1) at constant Palv’.
effect on Qmax but a significant effect on Palv’. At a given k1, Palv’ is higher when k3 is increased. This suggests that the airway is less collapsible when k3 is high. With values of Ptm’ = -5 cmH2O and k2 = 20, IVPF curves at different lung volumes were computed in the model; the results are shown in Figure 4. It may be seen that there is a linear relationship between the alveolar pressure and the flow until Qmax is reached. As soon as the airway starts narrowing, there is no further increase of expiratory flow even though the driving pressure is further increased. Figure 5A shows the effect of increasing upstream resistance (k1 in the model) on Palv’. It demonstrates that, at a constant lung volume, airway narrowing (in the intermediate segment) occurs at lower alveolar pressures as k1 is increased. Since alveolar pressure reflects the breathing effort, another way of expressing this is that the higher k1, the smaller the effort at which the intermediate segment collapses. Figure 5B illustrates the same data, with Palv’ instead of volume as a parameter. It may be seen that as k1 is increased, V’ decreases. This suggests that with the same effort (fixed Palv’) the airway collapses at a higher lung
Fig. 6. Assumed pattern of alveolar pressure for forced expiration.
Fig. 7. Expired volume, flow rate and alveolar pressure plotted against time; forced expiration. A - normal, B - obstructed.
volume. Palv’ can thus be interpreted to represent an effective alveolar pressure generated by the respiratory muscles, in the sense that any energy spent on increasing the alveolar pressure beyond Palv’ is wasted as far as the mechanics of breathing is concerned.
III. DYNAMIC ANALYSIS OF THE MODEL
The major difficulty in analysing the dynamic behaviour of the model lies in the fact that information on respiratory muscle pressure is still not experimentally available in terms of its magnitude and its time-course pattern. Therefore, in the following analysis, the pattern of alveolar pressure shown in Figure 6 was assumed and used as the driving force; note that the driving pressure does not affect the flow rate once it exceeds Palv’. Figure 7 shows the expired volume, flow rate, and applied alveolar pressure on the ordinate and time in seconds on the abscissa. It clearly demonstrates that the flow reaches Qmax at the point indicated by the arrow in the figure and thereafter decreases despite a further increase in the driving pressure. Figure 7B shows the volume and flow patterns obtained from the model assuming an obstructed state. It may be seen that narrowing of the airway occurs at an earlier stage of expiration and at a higher lung volume than in the normal state. In the model analysis, it is possible to perform some studies which are difficult to do under actual experimental conditions. For instance, Figure 8 is a plot of the alveolar pressure and the lung volume at a constant flow rate. If a subject starts expiration with a constant flow rate, he/she can expire with this flow until the inflexion point in the figure is reached. Beyond this point, he/she cannot continue to expire with this flow rate even when the alveolar pressure rises to almost infinite values. Consequently, the flow must
be reduced. Therefore, the line connecting all the inflexion points (dotted line in the figure) represents the maximal effective alveolar pressure generated by the respiratory muscles at different lung volumes.

Fig. 8. Iso-flow-volume-pressure curves at different constant flow rates. A - normal, B - obstructed.

IV. CONCLUSIONS
In conclusion, most experimental observations on the pressure-flow-volume relationships of the lung reported so far can be explained by introducing the simple assumption that the resistance to gas flow somewhere in the airway increases as a function of the transmural pressure across the airway. As soon as the airway narrows, an opposing force to reopen the airway acts at the narrowed locus, and these two forces are balanced by a negative feedback process. This effect results in fixation of the transmural pressure at Ptm’.

REFERENCES
1. Bouhuys A, Jonson B (1967) Alveolar pressure, airflow rate and lung inflation in man. J Appl Physiol 22:1086-1100
2. Fry DL, Hyatt RE (1960) Pulmonary mechanics. A unified analysis of the relationship between pressure, volume and gas flow in the lungs of normal and diseased human subjects. Amer J Med 29:672-689
3. Fry DL (1985) Theoretical considerations of the bronchial pressure-flow-volume relationships with particular reference to the maximum expiratory flow volume curve. Phys Med Biol 3:174-195
4. Pride NB, Permutt S, Riley RL, Bromberger-Barnea B (1976) Determinants of maximum expiratory flow from the lungs. J Appl Physiol 32:646-662
Author: Bozena Kuraszkiewicz
Institute: Institute of Biocybernetics and Biomedical Engineering
Street: Trojdena 4
City: 02-109 Warsaw
Country: Poland
Email:
[email protected]
Acetabular forces and contact stresses in active abduction rehabilitation
H. Debevec1, A. Kristan2, B. Mavcic1, M. Cimerman2, M. Tonin2, V. Kralj-Iglic3, and M. Daniel1,4
1 Laboratory of Physics, Faculty of Electrical Engineering, University of Ljubljana, Slovenia
2 Department of Traumatology, University Medical Center Ljubljana, Slovenia
3 Laboratory of Clinical Biophysics, Faculty of Medicine, University of Ljubljana, Slovenia
4 Laboratory of Biomechanics, Faculty of Mechanical Engineering, Czech Technical University in Prague, Czech Republic

Abstract— Operative fixation of fragments in acetabular fracture treatment is not strong enough to allow weight bearing before the bone is healed. In some patients even passive or active non-weight-bearing exercises could lead to dislocation of fragments and posttraumatic osteoarthritis. Therefore, early rehabilitation should avoid loading the acetabulum in the regions of fracture lines. The aim of the paper is to estimate acetabular loading in non-weight-bearing upright, supine and side-lying leg abduction. Three-dimensional mathematical models of the hip joint reaction force and the contact hip stress were used to simulate active exercises in different body positions. The absolute values of the hip joint reaction force and the peak contact hip stress are the highest in unsupported supine abduction (1.3 MPa) and in side-lying abduction (1.2 MPa), lower in upright abduction (0.5 MPa) and the lowest in supported supine abduction (0.2 MPa). The results are in agreement with the clinical guidelines as they indicate that upright abduction should be commenced first.

Keywords— acetabular fracture, biomechanics, hip contact stress, rehabilitation.
I. INTRODUCTION
Acetabular fractures are produced by high-energy injuries that often cause dislocation of the fragments with gaps and steps [1]. The goal of operative treatment of such fractures is to restore acetabular anatomy with perfect fragment reduction and stable fixation in order to enable early joint movement [2],[3]. The fixation of the fragments is not strong enough to allow weight bearing before the bone is healed [4],[5], and in some patients even physical therapy with initial passive motion and continued active exercises without weight bearing could lead to dislocation of fragments and early posttraumatic osteoarthritis [2]. Early physical therapy of patients with acetabular fractures therefore requires careful selection of exercises in order to prevent excessive loading of the injured acetabular region. Current guidelines for nonoperative management of acetabular fractures and postoperative management of surgical procedures in the acetabular region recommend initial bed rest followed by passive motion in the hip joint. Initial active non-weight-bearing exercises commence a few days after surgery and include active flexion, extension and abduction in the hip in the upright position. The same set of exercises in supine or side-lying positions is usually postponed until 5-14 days postoperatively. Partial weight-bearing with stepwise progression usually starts 6 weeks postoperatively and full weight bearing is eventually allowed at 10 weeks [6]. Recently, interesting information was obtained by direct measurements of acetabular contact pressures during rehabilitation exercises in subjects with pressure-instrumented partial endoprostheses, where it was found that acetabular pressures may not follow the predicted rank order corresponding to the commonly prescribed temporal order of rehabilitation activities [7],[8]. Due to the technical complexity and invasiveness of direct contact stress distribution measurement, various mathematical models for calculation of the hip joint loading force and contact stress distribution in the hip joint have been proposed [9]-[16]. Recently, a mathematical model has been developed that enables computation of the contact stress distribution at any given position of the acetabulum and also allows simulation of different body positions and variations in pelvic morphology [10]-[12]. The aim of the paper is to compare acetabular loading in non-weight-bearing upright, supine and side-lying leg abduction by using a muscle model for computation of the hip joint reaction force and a previously developed mathematical model of contact hip stress distribution. With this knowledge, the range of motion and body position during active exercises can be suggested that would prevent excessive loading of particular acetabular regions and displacement of fracture fragments.
II. METHODS
Biomechanical estimation of the hip joint loading was based on a mathematical model for computation of the hip joint reaction force and a previously developed model for computation of the contact stress distribution in the hip articular surface. The force model assumes that the abduction exercise is performed slowly, i.e.
the dynamic effects related to motion can be neglected and therefore the static calculation for given position of the leg is considered.
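The full model solves three-dimensional force and torque equilibrium with 27 muscle units and inverse optimization; purely as an illustration of the static reasoning, a planar sketch with a single equivalent muscle is given below. The lever-arm values are hypothetical; only the body weight (800 N) and leg-weight fraction (0.161) come from the paper:

```python
import math

BW = 800.0            # body weight [N] (value used in the paper)
W_LEG = 0.161 * BW    # weight of the leg [N] (fraction from the paper)
L_CG = 0.30           # hip-to-leg-centre-of-gravity distance [m] (assumed)
ARM_MUSCLE = 0.04     # effective muscle moment arm [m] (assumed)

def hip_load(position, theta_deg):
    """Rough hip joint load [N] at abduction angle theta, planar statics.

    Moment equilibrium about the hip centre: the equivalent muscle force
    balances the gravitational moment of the leg.  The joint reaction is
    approximated as muscle force + leg weight (a crude colinear
    simplification, not the vector sum of the full model).
    """
    th = math.radians(theta_deg)
    if position == "upright":        # leg hangs vertically at theta = 0
        arm_gravity = L_CG * math.sin(th)
    elif position == "side-lying":   # leg horizontal at theta = 0
        arm_gravity = L_CG * math.cos(th)
    else:
        raise ValueError(position)
    f_muscle = W_LEG * arm_gravity / ARM_MUSCLE
    return f_muscle + W_LEG
```

Even this crude sketch reproduces the qualitative rank order reported later in Fig. 2a: the load grows with abduction angle when upright, falls when side-lying, and side-lying loading exceeds upright loading near the neutral position.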
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 915–918, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Table 1 Muscles included in the musculoskeletal model of the hip joint

No. Muscle                  No. Muscle
1   adductor brevis         15  gluteus minimus 3
2   adductor longus         16  iliacus
3   adductor magnus 1       17  pectineus
4   adductor magnus 2       18  piriformis
5   adductor magnus 3       19  psoas
6   gemelli inf. et sup.    20  quadratus femoris
7   gluteus maximus 1       21  biceps femoris long
8   gluteus maximus 2       22  gracilis
9   gluteus maximus 3       23  sartorius
10  gluteus medius 1        24  semimembranosus
11  gluteus medius 2        25  semitendinosus
12  gluteus medius 3        26  tensor fasciae latae
13  gluteus minimus 1       27  rectus femoris
14  gluteus minimus 2
In the computation of the hip joint reaction force (R), the equilibrium equations of the forces and torques acting on the lower leg are solved. The body weight is taken to be 800 N and the weight of the leg is taken to be 0.161 of the body weight [10]. The musculoskeletal geometry defining the positions of proximal and distal muscle attachment points in the neutral position and the cross-sectional areas of the muscles is based on the work of Delp et al. [17]. Muscles attached over a large area are divided into separate units; hence, the model includes 27 effectively active muscles of the hip (Tab. 1). The muscle activity required to maintain equilibrium in a given body position is computed using the method of inverse dynamic optimization [18] proposed by Crowninshield et al. [19]. Each specific type of abduction exercise was modeled by rotation of the leg in the frontal plane of the body around the center of the femoral head (Fig. 1), while the pelvis was taken to be fixed in a laboratory coordinate system. The position of the leg during the abduction exercise was defined by the abduction angle (Fig. 2a). Supine abduction of the unsupported straight leg without touching the ground and supine abduction of the straight leg with 80% of the weight of the leg supported were analyzed separately. The supporting force of the ground was considered to act in the center of gravity of the leg. The distribution of the contact hip stress for a given position of the leg was computed using the computer program HIPSTRESS [10]-[12]. The radius of the acetabular surface was taken to be 25 mm, and the lateral inclination and anteversion of the acetabulum were taken to be 30 and 15 degrees, respectively.
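The HIPSTRESS family of models assumes a cosine-like contact stress distribution over the spherical articular surface. For the idealized special case in which the resultant force passes through the pole of a hemispherical contact, integrating p = p0·cos γ over the surface gives R = (2πr²/3)·p0, i.e. p0 = 3R/(2πr²). A sketch of this special-case check follows; the 1700 N force is an assumed illustrative value, not a result of the paper:

```python
import math

def peak_contact_stress(force_n, radius_m):
    """Peak stress p0 [Pa] for a cosine distribution p = p0*cos(gamma)
    over a hemispherical contact with the resultant force through the
    pole: R = integral of p*cos(gamma) dA = (2*pi*r^2/3)*p0, so
    p0 = 3R / (2*pi*r^2).
    """
    return 3.0 * force_n / (2.0 * math.pi * radius_m ** 2)

# Illustrative check (force value assumed): a ~1700 N resultant on a
# 25 mm acetabulum gives a peak stress of about 1.3 MPa, the order of
# magnitude reported below for unsupported supine abduction.
p0 = peak_contact_stress(1700.0, 0.025)
```

In the general case, where the force does not pass through the pole and the weight-bearing area is bounded, the distribution must be integrated numerically, which is what the HIPSTRESS program does.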
Fig. 1 Body position during standing abduction (a), side-lying abduction (b), supported supine abduction (c) and unsupported supine abduction (d).
III. RESULTS
The magnitudes of the hip joint reaction force R and the peak contact stress pmax during abduction exercises in different body positions are shown in Fig. 2a and Fig. 2b, respectively. The loading of the acetabulum is the lowest in supported supine abduction and the highest in unsupported supine abduction. The force R as well as the peak contact stress pmax increase with the angle of abduction during standing and decrease during side-lying. When the supine abduction is performed, the hip joint reaction force R and the peak contact hip stress pmax vary only a little.
Fig. 2 Magnitude of hip joint reaction force R (a) and the peak contact stress pmax (b) during abduction exercises
IV. DISCUSSION We have found that in the neutral leg position the hip joint reaction force is high for side-lying or unsupported supine body position and low for upright standing. This can
be explained by considering the equilibrium of the moments of the gravitational and muscular forces with respect to the center of rotation of the hip joint in different body positions. In standing and side-lying abduction the equilibrium is maintained by the activity of the abductors. In side-lying abduction a higher abductor force is required to compensate the weight of the lower leg than in standing abduction, because of the larger lever arm of the weight of the lower leg in the former case. As the angle of abduction increases in upright standing, the center of gravity of the lower leg moves laterally, which further increases the gravitational moment; hence the counteracting muscle activity, as well as the hip joint load, must increase. On the other hand, abduction of the lower leg in the side-lying exercise decreases the gravitational moment of the lower leg with respect to the hip, and the hip load decreases. In the unsupported supine abduction, however, the leg has a tendency to extend and hence the activity of flexors is required. The flexors that are required to maintain this posture have smaller moment arms and thus demand high flexor forces. Therefore the magnitude of the hip joint reaction force in the unsupported supine position is considerably higher than in the other body positions; ground support of the leg, however, can proportionally reduce its magnitude. The course of pmax follows the course of the hip joint reaction force for upright and side-lying abduction (Fig. 2b). In contrast, abduction of the leg does not considerably change the peak contact hip stress in supine abduction, and pmax remains almost constant throughout the abduction arc, both with the unsupported and the supported leg. The average loading of the hip joint is the lowest in 80% supported supine abduction and the highest in unsupported supine abduction.
Computed values of the hip joint reaction force and peak contact hip stress reported in our paper are of the same order of magnitude as those measured in vivo during non-weight-bearing exercises [7],[8]. The peak stress in the direct measurements was also located in the posterior-superior acetabular quadrant, as is the case in our study. Direct measurements of the peak contact stress in supine abduction were 2.8 MPa and 3.8 MPa in vivo [7],[8] versus 1.3 MPa in our study. The reports do not specifically mention the amount of vertical leg support in supine abduction, but considering the fast velocities it could be inferred that the abduction was unsupported. Contrary to our findings and to clinical guidelines, the only in vivo study that compared abduction in different body positions found quite a different rank order of the peak contact hip stress values, with 8.9 MPa in standing hip abduction, 5.6 MPa in side-lying hip abduction and 2.8 MPa in supine hip abduction [7]. It should be noted, however, that these in vivo measurements were performed with angular velocities
above 30°/s and therefore also include the dynamic component of loading. Furthermore, a change from a side-lying body position to an upright position considerably reduces the moment arm of the leg weight but does not substantially influence the moment arms of individual muscles. In static conditions, a reduced moment arm of the leg weight in the upright position reduces the calculated muscle forces and consequently lowers the hip load, as shown in Fig. 2. However, in dynamic motion, a smaller moment arm of the leg weight in the upright position would facilitate an initial acceleration of the leg that later requires higher muscle strength to stop the movement at maximal abduction. Comparison between dynamic measurements and static computations therefore indicates that at very slow motion the upright abduction causes lower contact hip stresses than side-lying abduction, but this may be reversed in maximal abduction at high angular velocity. One of the reasons for performing only high-speed measurements may have been the measurement error of approximately 0.2 MPa, which was not accurate enough for slow non-weight-bearing measurements with magnitudes below 1 MPa. When direct measurements of contact hip stress were compared with simultaneous hip stress estimations through kinematic measurements, it was found that direct measurements of the same activities yield considerably higher contact stress than inverse Newtonian analyses [20]. This effect has been attributed to co-contraction of muscles, which is especially apparent in relatively slow, controlled movements [20], and this may to some extent explain the discrepancy between our results and the results obtained by direct dynamic measurements.
Our results are in agreement with the clinical guidelines, as they indicate that upright abduction should be commenced first [6]. Supine abduction in the initial rehabilitation phases should be recommended with ground support (on the bed), without excessive vertical leg lifting. Our results complement the results of direct measurements of stress during exercises and the experience-based exercise protocols in elucidating the mechanical impacts of rehabilitation.
ACKNOWLEDGMENT The research is supported by the Czech Ministry of Education project No. MSM 6840770012 and by the Slovenian ARRS Projects No. P2-232, J3-619 and BI-CZ/07-08-006 and BI-S/05-07-002.
REFERENCES
1. Olson SA, Bay BK, Hamel A (1997) Biomechanics of the hip joint and effects of fracture of the acetabulum. Clin Orthop 339:92-104
2. Letournel E, Judet R (1993) Fracture of the acetabulum. Springer, New York
3. Tile M (1996) Fractures of the acetabulum. In: Schatzker J, Tile M (eds) The Rationale of Operative Fracture Care, 2nd ed. Springer, Berlin, pp 271-324
4. Olson SA, Bay BK, Chapman MW, Sharkey NA (1995) Biomechanical consequences of fracture and repair of the posterior wall of the acetabulum. J Bone Joint Surg (Am) 77:1184-1192
5. Goulet JA, Rouleau JP, Mason DJ, Goldstein SA (1994) Comminuted fractures of the posterior wall of the acetabulum. A biomechanical evaluation of fixation methods. J Bone Joint Surg (Am) 76:1457-1463
6. Maurer SF, Mutter B, Weise K, Belzl H (1997) Rehabilitation nach Hüftgelenkfrakturen. Orthopäde 6:368-374
7. Tackson SJ, Krebs DE, Harris BA (1997) Acetabular pressures during hip arthritis exercises. Arthritis Care Res 10:308-319
8. Givens-Heiss DL, Krebs DE, Riley PO et al (1992) In vivo acetabular contact pressures during rehabilitation, Part I: Acute phase. Phys Ther 72:691-699
9. Brand RA (2005) Joint contact stress: a reasonable surrogate for biological processes? Iowa Orthop J 25:82-94
10. Iglič A, Kralj-Iglič V, Daniel M, Maček-Lebar A (2005) Computer determination of contact stress distribution and the size of the weight-bearing area in the human hip joint. Comput Methods Biomech Biomed Engin 5:185-192
11. Ipavec M, Brand RA, Pedersen DR et al (1999) Mathematical modelling of stress in the hip during gait. J Biomech 32:1229-1235
12. Mavčič B, Pompe B, Daniel M et al (2002) Mathematical estimation of stress distribution in normal and dysplastic human hips. J Orthop Res 20:1025-1030
13. Brand RA, Iglič A, Kralj-Iglič V (2001) Contact stresses in the human hip: implications for disease and treatment. Hip Int 11:117-126
14. Genda E, Konishi N, Hasegawa Y, Miura T (1995) A computer simulation study of normal and abnormal hip joint contact pressure. Arch Orthop Trauma Surg 114:202-206
15. Legal H (1987) Introduction to the biomechanics of the hip. In: Tönnis D (ed) Congenital Dysplasia and Dislocation of the Hip. Springer-Verlag, Berlin, pp 26-57
16. Brand RA, Pedersen DR, Davy DT et al (1994) Comparison of hip force calculations and measurements in the same patient. J Arthroplasty 9:45-51
17. Delp SL, Loan P, Hoy MG et al (1990) An interactive graphics-based model of the lower extremity to study orthopaedic surgical procedures. IEEE Trans Biomed Eng 37:757-767
18. Tsirakos D, Baltzopoulos V, Bartlett R (1997) Inverse optimization: functional and physiological considerations related to the force sharing problem. Crit Rev Biomed Eng 25:371-407
19. Crowninshield RD, Brand RA (1981) A physiologically based criterion for muscle force prediction and locomotion. J Biomech 14:793-801
20. Park S, Krebs D, Mann R (1999) Hip muscle co-contraction: evidence from concurrent in vivo pressure measurement and force estimation. Gait Posture 10:211-222

Author: Dr. Matej Daniel
Institute: Laboratory of Biomechanics, Department of Mechanics, Biomechanics and Mechatronics, Faculty of Mechanical Engineering, Czech Technical University in Prague
Street: Technicka 4
City: Prague
Country: Czech Republic
Email:
[email protected]
Experimental verification of the calculated dose for Stereotactic Radiosurgery with specially designed white polystyrene phantom
B. Casar, A. Sarvari
Institute of Oncology Ljubljana/Department of Radiophysics, Ljubljana, Slovenia

Abstract— Accuracy of dose delivery in Stereotactic Radiosurgery is one of the most important components of this sophisticated radiotherapy treatment of benign and malignant intracranial diseases. In the present study, we carried out measurements with a small-volume cylindrical ionization chamber, PTW 31006 (PinPoint), together with a specially designed bullet-shaped white polystyrene phantom, in order to verify the dose calculated by the commercially available 3D treatment planning system BrainScan (BrainLab). Comparison of the doses was done in four simple simulated treatments, applying a non-coplanar circular arc technique with tertiary conical collimators on a Varian Clinac 2100 C/D linear accelerator with high-energy photon beams of 6 MV. We found systematic differences in all four cases, ranging from 2.4% to 3.9%; the measured doses were always higher than the calculated ones. Although the results of our study confirm the accuracy of the treatment planning dose calculations, as the differences lie within the 5% value recommended by the International Commission on Radiation Units and Measurements (ICRU), it is advisable to investigate further the origin of these, most probably systematic, errors. The use of a small-volume ionization chamber and a homemade polystyrene phantom for dosimetric verification has proved to be appropriate.

Keywords— stereotactic radiosurgery, conical collimators, linear accelerator, dosimetry
I. INTRODUCTION Stereotactic radiosurgery (SRS) is a special focal radiotherapy technique for the treatment of small malignant and benign intracranial and lately also extracranial lesions. Two main components, a dosimetrical and a geometrical one, are important in allowing a successful delivery of the prescribed dose to the preselected stereotactically localized lesion. The requirements for dosimetrical accuracy follow the specifications of the International Commission on Radiation Units and Measurements (ICRU) [1], where an overall accuracy in the dose delivery of ± 5% is recommended, which is similar to that in standard radiotherapy. The requirements for geometrical accuracy are more stringent than in standard radiotherapy because SRS is commonly applied in the treatment of small lesions in the proximity of vital organs and critical structures that are at risk. The positional accuracy of dose delivery should be within ± 1 mm.
The name stereotactic radiosurgery was coined by the Swedish neurosurgeon Leksell in the early 1950s, when he introduced single-session treatments of intracranial targets using a 200 kVp x-ray unit [2, 3]. Even though the number of focused beams was large, they were not penetrating enough to deliver a satisfactorily concentrated dose to the target volume, and these treatment modes were soon abandoned. In 1968, a prototype of a specially designed radiosurgical unit based on 60Co gamma rays was introduced into clinical practice [4]; this unit, too, was originally developed by Leksell. Due to the higher energy of 60Co gamma rays (E = 1.25 MeV), the beams were penetrating enough to deliver a satisfactorily concentrated dose to the target volume. A modified unit has been commercially available for a few decades under the name “Gamma knife”. In the 1980s, several radiosurgical techniques were developed, all having an isocentric linear accelerator as the source of radiation [5, 6, 7, 8, 9]. Other radiosurgical techniques (treatments with protons, light ions, etc.) were also developed in the last decades, but we will not describe them here. At the Institute of Oncology in Ljubljana, an SRS technique with a linear accelerator was introduced into clinical practice in 1999. The majority of the equipment for the Philips SL-75/5 linear accelerator, producing 5 MV photon beams, was designed and assembled by B. Casar and colleagues [10]. Due to specific circumstances, this treatment modality was soon given up, so that only one patient was treated, albeit very successfully. After the purchase of commercially available radiosurgical equipment, this technique regained its appreciation and was reintroduced. In this study, we limited our research to the verification of the accuracy of the delivered dose in a few simple treatment plans. II. MATERIALS AND METHODS A.
3D SRS treatment planning For the purpose of dose verification, we designed and built a special bullet-shaped white polystyrene phantom to simulate a real clinical situation. The phantom consists of a hemisphere with a diameter of 16 cm and a cylinder with a diameter of 16 cm and a height of 8 cm. Inside the hemisphere there is a cylindrical hollow, with a diameter of 2 cm, in which a rod insert for
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 887–890, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
a chosen ionization chamber fits exactly. In the inserted rod a cavity is drilled so that the chamber fits in exactly. For the dose verification we selected the waterproof PTW 31006 PinPoint chamber with an active volume of 0.015 cm3. The small chamber dimensions ensure that correct measurements are obtained even for small treatment volumes. The phantom was fixed with 4 carbon pins in the stereotactic head ring, which is part of our SRS system manufactured by the BrainLab company. Onto the head ring we attached a localization box, which was used for the determination of the local coordinate system using Z-shaped fiducial rods (Fig. 1). In order to acquire a set of 3D images of our phantom, the assembled set, together with the PinPoint chamber, was scanned on a Philips MX8000D spiral CT, and the images were then exported to the 3D treatment planning system BrainScan (BrainLab). The slice thickness was 3.2 mm and the slice increment 1.6 mm, which allowed good spatial resolution. Four simple simulated SRS treatment plans for our polystyrene phantom were made for the Varian 2100 C/D linear accelerator, which will be used clinically in later treatments. In all plans, the phantom was irradiated with 3 non-coplanar circular arcs. Circular conical collimators, which can easily be attached onto the head of the linear accelerator, were used as beam shaping devices. We used collimators with nominal field diameters of 15.0 mm, 17.5 mm, 20.0 mm and 22.5 mm; the field diameters are defined in the isocenter of the linear accelerator at the SAD distance of 100 cm from the source of radiation.
Fig. 1. SRS system together with the phantom fixed with pins in the head ring. The PinPoint chamber is inserted into the phantom. A tertiary circular conical collimator is attached onto the head of the linear accelerator.

Fig. 2. Dose distribution in the phantom, calculated with the 3D planning system BrainScan. The isocenter was put in the middle of the active volume of the PinPoint ionization chamber. The distribution of the dose is shown in 3 orthogonal planes through the isocenter.

We chose a simple spherical object as the target volume. The isocenter was put exactly in the middle of the active volume of the ionization chamber to minimize possible biases in the dose measurements. For one of the plans, the dose distribution in three orthogonal planes through the isocenter is presented in Fig. 2. The most important data of all treatment plans and the corresponding linear accelerator setup are given in Table 1.

Table 1 Data for four treatment plans using various conical collimators. Plans were calculated for 6 MV high-energy photon beams on a Varian Clinac 2100 C/D linear accelerator. Calculations were performed with the 3D treatment planning system BrainScan. Each plan consists of three arcs; arc values are given in the order arc 1 / arc 2 / arc 3.

Collimator diameter [mm]  Table angle [°]  Gantry start [°]  Gantry stop [°]  MU               Dose in isocenter [Gy]
15.0                      45 / 0 / 315     75 / 330 / 225    135 / 30 / 285   734 / 701 / 704  15.0
17.5                      45 / 0 / 315     75 / 330 / 225    135 / 30 / 285   718 / 682 / 686  15.0
20.0                      45 / 0 / 315     75 / 330 / 225    135 / 30 / 285   709 / 675 / 678  15.0
22.5                      45 / 0 / 315     75 / 330 / 225    135 / 30 / 285   691 / 662 / 665  15.0
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
B. Cross calibration of the PinPoint ionization chamber For dose (charge) measurements, a properly calibrated ionization chamber is needed. Since the calibration factor in terms of Dose to Water, ND,w,Qo,P, for the reference beam quality 60Co had not been determined for our PinPoint chamber, we had to obtain it by cross calibration against an ionization chamber with a known calibration factor. The cross calibration was performed on a 60Co treatment unit Theratron 780C (Nordion), using a calibrated PTW 30013 (#1669) Farmer-type ionization chamber. The charge was measured with a PTW Unidos electrometer. For each chamber, five measurements in a water phantom were made under the following conditions: SSD (source to surface distance) = 80 cm, radiation field size at the water surface = 10 x 10 cm2, gantry and collimator set to 0°, and an irradiation time of 1.00 minute. The central axes of the chambers were at a depth of 10 cm in the water phantom. During the measurements, the reference voltage on the chamber electrodes was +400 V for both chambers. Prior to the measurements, each chamber was left in water for 15 minutes for temperature equilibration, and the water temperature and air pressure were monitored with a digital thermometer and barometer. Both chambers were waterproof and vented through the connecting cables. The calibration factor ND,w,Qo,P for the PinPoint chamber at the reference beam quality Q0 of the 60Co beam was determined according to the IAEA TRS-398 dosimetry protocol [11] from the equation

ND,w,Qo,P = (MQ,F / MQ,P) · ND,w,Qo,F        (1)
where MQ,F is the average of five readings of collected charge for the reference Farmer chamber, and MQ,P is the average collected charge for PinPoint chamber, both readings corrected for temperature and pressure. ND,w,Qo,F is the calibration factor for Farmer chamber in the reference beam quality Q0 of 60Co beam.
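As a numerical sketch, the cross calibration of Eq. (1) can be reproduced from the two measurement sets reported in Table 2. The Farmer calibration factor ND,w,Qo,F used below is an assumed value of typical magnitude, not a value from the paper:

```python
# Numerical sketch of the cross calibration in Eq. (1), using the two
# measurement sets reported in Table 2. The Farmer calibration factor
# N_F below is an assumed value of typical magnitude, not from the paper.
N_F = 53.66  # assumed Farmer-chamber N_D,w,Qo,F [mGy/nC]

def pinpoint_factor(M_F, M_P):
    # Eq. (1): both chambers measure the same dose at the same point, so
    # N_P = (M_F / M_P) * N_F; readings already corrected for T and P
    return M_F / M_P * N_F

N1 = pinpoint_factor(14.683, 325.2)   # first cross calibration
N2 = pinpoint_factor(14.676, 323.9)   # second cross calibration
print(round(N1, 3), round(N2, 3), round((N1 + N2) / 2, 3))  # -> 2.423 2.431 2.427
```

With this assumed Farmer factor, the sketch reproduces the two factors and the 2.427 mGy/nC average reported in Table 2.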
C. Verification of the calculated dose The calculated dose in the isocenter was checked for the four treatment plans using the PinPoint chamber. The polystyrene phantom with the inserted PinPoint chamber, the head ring and the localization box were fixed onto the linear accelerator table. Using a system of 3 orthogonal lasers intersecting at a single point, we brought the chamber reference point, which is in the middle of the chamber's active volume, to the isocenter of the linear accelerator. The calculated data (irradiation times for each arc and other geometrical parameters) were transferred to the linear accelerator console, and the irradiation was performed according to the planned data (Fig. 3).

Fig. 3. Setup for one of the arc beams, with the table rotation and gantry start angle position.

For each arc, we measured the collected charge with the PTW Unidos electrometer and then calculated the absorbed dose Dw,Q(zref) at the reference point zref of the chamber according to the dosimetry protocol IAEA TRS-398 [11]:

Dw,Q(zref) = MQ · ND,w,Qo,P · kQ,Qo        (2)

where MQ is the sum of the electrometer readings for one session (3 arcs), corrected for temperature and pressure, ND,w,Qo,P is the cross-calibration factor obtained for the PinPoint chamber in the 60Co beam (reference beam quality Q0), and kQ,Qo is the chamber-specific factor correcting for the difference between the reference beam quality Q0 and the quality Q of the 6 MV high-energy photon beams used in our study. The factor kQ,Qo was calculated from the updated version of the IAEA TRS-398 protocol and was found to be 0.993. III. RESULTS Two independent cross calibrations were performed, and the calibration factors were calculated using Eq. (1). The results are shown in Table 2.

Table 2 Two independent measurement results for the determination of the calibration factor ND,W,Q0,P of the PinPoint chamber in terms of Dose to Water

Cross calibration   MQ,F [nC]   MQ,P [nC]   ND,W,Q0,P [mGy/nC]
1                   14.683      325.2       2.423
2                   14.676      323.9       2.431
Average                                     2.427
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
The average of the two calibration factors from the two independent cross calibrations was taken as the valid calibration factor for the PinPoint chamber: ND,w,Qo,P = 2.427 mGy/nC. The dose measurement results for all four treatment plans are presented in Table 3; the doses were calculated according to Eq. (2), with the electrometer readings corrected for the influence of temperature and pressure.
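As an illustration of Eq. (2), the session dose can be computed from the summed electrometer reading with the temperature-pressure correction of the TRS-398 protocol. The reading, temperature and pressure below are hypothetical; only ND,w,Qo,P = 2.427 mGy/nC and kQ,Qo = 0.993 are the values from the text:

```python
# Illustration of Eq. (2) with the TRS-398 temperature-pressure correction
# applied to the summed electrometer reading. The reading, temperature and
# pressure are hypothetical; N_D,w,Qo,P = 2.427 mGy/nC and kQ,Qo = 0.993
# are the values reported in the text.
def k_tp(T_c, P_kpa, T0_c=20.0, P0_kpa=101.325):
    # Air-density correction for a vented ionization chamber
    return (273.2 + T_c) / (273.2 + T0_c) * P0_kpa / P_kpa

def dose_gy(M_nc, T_c, P_kpa, N_mgy_per_nc=2.427, kQ=0.993):
    M_corr = M_nc * k_tp(T_c, P_kpa)            # corrected reading [nC]
    return M_corr * N_mgy_per_nc * kQ / 1000.0  # absorbed dose [Gy]

# Hypothetical session: 6500 nC summed over 3 arcs, at 22.0 deg C and 100.0 kPa
print(round(dose_gy(6500.0, 22.0, 100.0), 2))   # -> 15.98
```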
Table 3 Results of the comparison between the calculated and measured doses for treatment plans using four different conical collimators. The doses were calculated and measured in the isocenter of the linear accelerator. The last column gives the ratio between the measured and the calculated dose.

Collimator diameter [mm]   Dcalc [Gy]   Dmeas [Gy]   Dmeas/Dcalc
15.0                       15.0         15.36        1.024
17.5                       15.0         15.44        1.029
20.0                       15.0         15.59        1.039
22.5                       15.0         15.39        1.026

IV. DISCUSSION Comparison of the results showed that the measured doses were always higher than the calculated ones. For the selected conical collimators, the differences between the calculated and measured doses ranged from 2.4% to 3.9%. The average of our set of measurements was 15.45 Gy, giving an average difference of 3.0% between the measured and calculated doses, with a standard deviation of the mean of 0.10 Gy (coverage factor k = 2). Although these differences lie within the ICRU recommendation of 5%, the challenge remains to investigate their origin(s), especially because they all have the same sign, so that it is highly probable that they reveal a systematic error. There are many possible causes of the dose differences to be examined in further studies: an error in the cross calibration, errors in the treatment planning algorithm, errors in the import of the basic dosimetric data into the treatment planning system, an error in the path-length correction for the phantom material, and possibly a few more. V. CONCLUSION Although the measurements do not confirm the calculated doses within our expectations (2%), they confirm the treatment planning dose calculations within the tolerance limit recommended by the ICRU (5%). It was shown that the choice of the small-volume PinPoint detector for such measurements was correct, and the polystyrene phantom proved to be a simple and acceptable solution for quality assurance and quality control purposes in such dosimetric tests. Further investigation of the observed differences is needed. ACKNOWLEDGEMENT The present study was partly done in the framework of the research program “Development and Evaluation of New Approaches to Cancer Treatment” (P3-0003(D)), supported by the Slovenian Research Agency (ARRS).

REFERENCES
1. International Commission on Radiation Units and Measurements (ICRU) (1976) ICRU Report 24. Determination of Absorbed Dose in a Patient Irradiated by Beams of X or Gamma Rays in Radiotherapy Procedures
2. Leksell L. (1949) A stereotactic apparatus for intracerebral surgery. Acta Chir. Scand. 99:229-233
3. Leksell L. (1951) The stereotaxic method and radiosurgery of the brain. Acta Chir. Scand. 102:316-319
4. Leksell L. (1968) Cerebral radiosurgery I. Gamma thalamotomy in two cases of intractable pain. Acta Chir. Scand. 134:585-595
5. Betti OO, Derechinsky VE (1984) Hyperselective encephalic irradiation with a linear accelerator. Acta Neurochir. Suppl. 33:385-390
6. Colombo F, Benedetti A, Pozza F, et al. (1985) External stereotactic irradiation by linear accelerator. Neurosurgery 16:154-160
7. Hartmann GH, Schlegel W, Sturm V, et al. (1985) Cerebral radiation surgery using moving field irradiation at a linear accelerator facility. Int. J. Radiation Oncology Biol. Phys. 11:1185-1192
8. Houdek PV, Fayos JV, Van Buren JM, et al. (1985) Stereotactic radiotherapy technique for small intracranial lesions. Med. Phys. 12:469-479
9. Podgorsak EB, Olivier A, Pla M, et al. (1988) Dynamic stereotactic radiosurgery. Int. J. Radiation Oncology Biol. Phys. 14:115-126
10. Casar B. (1998) Tertiary collimator system for stereotactic radiosurgery with linear accelerator. Radiology and Oncology 32(1):125-128
11. International Atomic Energy Agency (IAEA) (2000) Absorbed Dose Determination in External Beam Radiotherapy: An International Code of Practice for Dosimetry Based on Standards of Absorbed Dose to Water. Technical Report Series no. 398, Vienna, IAEA

Author: Bozidar Casar
Institute: Institute of Oncology Ljubljana
Street: Zaloska 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Implantable brain microcooler for the closed-loop system of epileptic seizure prevention
I. Osorio1, G. Kochemasov2, V. Baranov2, V. Eroshenko2, T. Lyubynskaya2, N. Gopalsami3
1 Flint Hills Scientific, Lawrence, KS and University of Kansas Medical Center, Kansas City, KS, USA
2 BioFil (Biophysical Laboratory) and Russian Federal Nuclear Center-VNIIEF, Sarov, Russia
3 Argonne National Laboratory, Argonne, IL, USA
Abstract– A method of thermal suppression of the abnormal brain activity observed in epileptic patients during the preictal stage was considered for seizure blockage. The development of an implantable brain microcooler as part of a closed-loop epileptic seizure prevention system is reported. An array of 7 needle-like probes of diameter ~1 mm and length ~2 cm provides cooling of 1 cubic inch of brain tissue from ~37°C to ~16°C in ~30 sec. A convective heat-exchange method with a tube-in-tube design was adopted: two coaxial steel tubes form channels for the direct and reverse flows of precooled water. Theoretical studies and numerical modeling based on Pennes' equation were performed to investigate the process of brain tissue cooling, and experimental tests demonstrated good agreement with the calculations. A closed-cycle cooling system with a peristaltic pump and a thermoelectric cooling device is being prepared for animal tests. As an additional option, a single-probe microcooler with a probe length of ~5 cm was developed, fabricated, and tested for cooling of deep brain areas such as the hippocampus. Keywords– epilepsy, seizure blockage, closed-loop system, implant, brain microcooler.
I. INTRODUCTION Epilepsy is a neurophysiological disorder with severe consequences: repetitive seizures accompanied by disturbances of the brain's electrical activity, convulsions, uncontrolled changes in behavior, loss of consciousness and, at worst, death. Statistics for industrialized countries give 40 million people suffering from epilepsy in North America and Europe; the rate for underdeveloped countries is believed to be an order of magnitude higher. Most people suffering from epilepsy are of normal intellect and could be valuable members of society, but the permanent threat of seizure puts severe restrictions on their way of living and work. Only part of them can be returned to normal life by means of chemical therapy. As an alternative to drug treatment, brain stimulation systems are under development all over the world. The most advanced “closed-loop” technologies combine seizure prediction, stimulation and control units into an automated, autonomous seizure prevention system [1, 2]. The physical principles of epileptic seizure detection and stimulation may
vary, but the most widespread is the electrical one [3]. An alternative approach suggested in the literature is temperature control and brain cooling as a possible and effective way of epilepsy treatment [4]. Though the mechanism of seizure development is not well understood, some authors believe that in the preictal state neural electrical activity becomes more ordered and correlated compared to that of the normal state [4]. Brain cooling reduces synaptic electrical conductivity, thus decreasing the inter-neural coupling. It was shown in clinical studies that the local temperature in the vicinity of the epileptic region increases by ~1.5 K about 30 sec before the seizure onset [5]. The device described here is intended for thermal suppression of abnormal neural activity based on sensing local temperature changes in the brain. The cooling device will be integrated with an implantable SAW sensor for temperature monitoring [6] and a telemetry controlling system with signal processing, seizure prediction, decision-making, and triggering functions. The technical requirements for the brain microcooler development were to cool 1 cubic inch of brain tissue from ~37°C to ~16°C in ~30 sec. Animal tests have proven that the lower limit of 16°C is sufficient for the approach to work. On the other hand, the temperature of the implantable part of the microcooler must not be lower than 4°C, to avoid irreversible damage of brain tissue. II. THEORETICAL APPROACH TO MICROCOOLER DEVELOPMENT
An estimate of the thermal wave penetration depth as a function of time t is given by

Ld = √(χ · t)

where χ is the thermal diffusivity. Assuming χ = 0.0014 cm2/sec and t = 30 s, one obtains Ld ≈ 2 mm. This means that, in order to cool a sufficiently large brain area, a multiprobe construction with probe-to-probe distances of ~2·Ld must be applied.
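The penetration-depth estimate can be checked with a one-line calculation (values as in the text):

```python
import math

# One-line check of the thermal penetration-depth estimate (paper's values).
chi = 0.0014                 # thermal diffusivity of brain tissue [cm^2/s]
t = 30.0                     # cooling time [s]
Ld = math.sqrt(chi * t)      # thermal wave penetration depth [cm]
print(round(Ld * 10, 1))     # -> 2.0 (mm)
```

With probe-to-probe spacing of ~2·Ld ≈ 4 mm, this is consistent with the 4.5 mm center-to-center distance used in the 7-probe device described below.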
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 911–914, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
For numerical simulation of the heat transmission process in brain-like tissue, a suite of computer programs was created providing calculations in 1D, 2D and 3D approaches. The cooling process was described by Pennes' equation [7]:

c · ρ · ∂T/∂t = ∇·(λ · ∇T) + μ · (Ta − T) + ν
The physical parameters used in the calculations are given in Table 1.

Table 1. Physical parameters used in the calculations

Parameter                                        Value
Heat capacity                                    c = 3.6 J/g/K
Density                                          ρ = 1 g/cc
Thermal conductivity                             λ = 0.005 W/cm/K
Exchange rate with arterial blood thermal pool   μ = 0.029 W/cc/K
Arterial blood temperature                       Ta = 37°C
Metabolic heat production rate                   ν = 0.025 W/cc
Initial temperature of the tissue                T0 = 37°C
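A minimal 1D sketch of the cooling calculation, using Pennes' equation with the Table 1 parameters; the grid spacing, the probe-surface temperature and the boundary handling are illustrative assumptions, not the authors' 1D/2D/3D codes:

```python
# Sketch: explicit 1D finite-difference integration of Pennes' bioheat
# equation, c*rho*dT/dt = lam*d2T/dx2 + mu*(Ta - T) + nu, with the Table 1
# parameters. Grid, probe temperature and boundaries are assumptions.
c, rho = 3.6, 1.0   # heat capacity [J/g/K], density [g/cc]
lam = 0.005         # thermal conductivity [W/cm/K]
mu = 0.029          # exchange rate with arterial blood pool [W/cc/K]
nu = 0.025          # metabolic heat production rate [W/cc]
Ta = 37.0           # arterial blood temperature [deg C]
T0 = 37.0           # initial tissue temperature [deg C]
T_probe = 5.0       # assumed constant probe-surface temperature [deg C]

nx, dx, dt = 41, 0.05, 0.1   # 2 cm domain; dt < c*rho*dx**2/(2*lam) for stability

def simulate(t_end):
    T = [T0] * nx
    for _ in range(int(t_end / dt)):
        T[0] = T_probe                       # cooled probe surface (Dirichlet)
        Tn = T[:]
        for i in range(1, nx - 1):
            cond = lam * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2
            Tn[i] = T[i] + dt / (c * rho) * (cond + mu * (Ta - T[i]) + nu)
        Tn[-1] = Tn[-2]                      # adiabatic far boundary
        T = Tn
    return T

T30 = simulate(30.0)
# After 30 s the tissue ~2 mm from the probe is markedly cooled, while the
# far end of the 2 cm domain is still near 37 deg C, in line with the ~2 mm
# penetration-depth estimate above.
print(round(T30[4], 1), round(T30[-1], 1))
```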
To maintain a constant temperature on the probe surface, the heat received from the surrounding tissue must be continuously removed. For this purpose three possible operating principles were considered: passive heat removal using a probe material with high thermal conductivity, evaporative cooling based on the Joule-Thomson effect, and convective cooling by pumping precooled water through the probe. The third option was chosen as the simplest and most reliable. Two coaxial tubes provide channels for the direct (inner tube) and reverse (annular gap between the tubes) liquid flows. The following considerations were taken into account in the microcooler design:

• The thermal flux per probe and the acceptable heating of the cooling liquid determine the required mass flow: ṁ ≥ Pth / (c · ΔT). For Pth = 0.5 W, c = 4.2 J/g/K and ΔT = 3 K, the mass flow is ṁ ≥ 0.04 g/sec.
• The channel geometry must satisfy δ/d ≤ π · (λl / (cl · ṁ)) · L, where δ is the width of the annular channel, d is the probe diameter, λl is the thermal conductivity of the cooling liquid, cl its heat capacity, ṁ the liquid flow rate, and L the probe length. For d = 1 mm, λl = 0.0058 W/cm/K, cl = 4.2 J/g/K, ṁ = 0.04 g/sec, and L = 2 cm, one obtains the ratio δ/d ≤ 0.2.

III. 7-ELEMENT MICROCOOLER FABRICATION AND TESTS A 7-probe microcooler (Fig. 1) was fabricated and tested in the laboratory. Six probes are placed in the form of a regular hexagon around the central, 7th probe. The center-to-center distance between the probes is 4.5 mm, and the probe length is 2 cm. Each probe is made of a pair of concentric syringe needles of diameters 0.6 and 1.0 mm. The distal end of the external pipe is soldered shut. The internal pipe is 2 mm shorter than the external one. Each probe is fed with cooled water through the internal pipe from a common collector. The layout of the cooling experiments is presented in Fig. 2.
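The two design estimates above (the required mass flow and the bound on δ/d) can be reproduced numerically with the stated values:

```python
import math

# Reproduces the two flow-design estimates with the paper's values.
P_th = 0.5      # thermal flux per probe [W]
c_l = 4.2       # heat capacity of water [J/g/K]
dT = 3.0        # acceptable heating of the cooling liquid [K]
lam_l = 0.0058  # thermal conductivity of water [W/cm/K]
L = 2.0         # probe length [cm]

m_dot = P_th / (c_l * dT)                      # required mass flow [g/s]
ratio = math.pi * lam_l * L / (c_l * m_dot)    # upper bound on delta/d
print(round(m_dot, 2), round(ratio, 1))        # -> 0.04 0.2
```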
Fig. 1 Implantable 7-probe convective brain microcooler.

The precooling subsystem shown in Fig. 2, with a Peltier element and a peristaltic pump, is intended for the animal tests and is at present under development. In the laboratory tests the source of cooling was a perfusor with a syringe filled with ice water. The liquid mass consumption is 800 ml/h for the cooler as a whole, or 0.0317 ml/sec per probe. Agar jelly was used as a brain tissue surrogate in the laboratory experiments. The temporal behavior of the temperature was measured by thermocouples placed in the flowing water at the microcooler input and output, and in the agar jelly, approximately at the center of one of the regular triangles formed by the probes. For the future animal tests a thin insulated copper wire was soldered to each probe to provide brain electrical signal measurements.
Fig. 2 Layout of the cooling experiments. 1 – peristaltic pump; 2 – thermoelectric unit; 3, 9 and 10 – heat exchanger radiators; 4 – microcooler case; 5 – brain section under cooling; 6 – cooling probes; 7 – needle thermocouple; 8 – temperature indicator; 11 – power supply battery for the thermoelectric unit; 12 – power supply unit for the peristaltic pump.

IV. COMPARISON BETWEEN EXPERIMENTAL AND CALCULATION RESULTS

Direct modeling of the cooling process for the microcooler is extremely difficult, mainly because of the short residence time of the water in the probe. The diameter of the internal channel is 0.35 mm; the internal and external diameters of the annular channel are 0.6 and 0.75 mm, respectively. For the flow rates mentioned above, the estimated time an elementary volume of water spends inside the probe is 0.16 sec. To make the calculation of the heat exchange between the liquid flow and the tube walls stable, the integration step must be at least an order of magnitude smaller. Taking into account that the task in general is 3-dimensional, with a total number of nodes up to ~10^6, the computation time becomes unreasonably large. Therefore, a different approach was chosen. First, a 2D axially symmetric calculation was done for a single convective probe placed in a cylindrical block of surrounding matter with adiabatic boundary conditions at the external surface. The input water temperature was assumed constant, as in the experiment. The goal of this step was to obtain the temperature of the probe surface as a function of the longitudinal coordinate and time. These dependencies were then used in the 3D calculation as boundary conditions between the probes and the surrounding matter. The comparison of the experimental and calculated temporal temperature behavior is given in Fig. 3.

Fig. 3 Comparison of experimental results and numerical modeling for the 7-element convective cooler. Tin – water temperature at the cooler input; Texp and Tcalc – the temperatures obtained from experimental measurements and from numerical modeling at the same point between the probes.

The graph Tin (blue curves) shows the changes in the water temperature at the cooler input. The graph Texp (red curve) gives the temperature in the agar jelly between the probes at the point most distant from the nearest three probes. The graph Tcalc (green curve) shows the results of the numerical modeling at the same point as Texp. Generally, the agreement between the experimental and calculated curves is good. At the beginning of the measurements, some residual warm water in the system gave a slight rise in the input temperature as well as in the agar jelly temperature. Because of the heat exchange of the precooling system with the surrounding air, the input water temperature increased after 100 seconds of measurement, causing a difference of ~1.5°C between the experimental and calculated temperatures toward the end of the experiment.

V. LONG SINGLE-PROBE MICROCOOLER DEVELOPMENT
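The ~0.16 s residence-time estimate quoted in Section IV follows from the stated channel dimensions and the per-probe flow of 0.0317 ml/sec:

```python
import math

# Reproduces the ~0.16 s coolant residence time from the stated channel
# dimensions (0.35 mm inner channel; 0.6/0.75 mm annulus) and the
# per-probe flow of 0.0317 ml/s.
flow = 0.0317                                               # [cm^3/s]
L = 2.0                                                     # probe length [cm]
a_inner = math.pi * (0.035 / 2) ** 2                        # inner channel area [cm^2]
a_annulus = math.pi * ((0.075 / 2) ** 2 - (0.06 / 2) ** 2)  # annular gap area [cm^2]
volume = (a_inner + a_annulus) * L                          # liquid volume in the probe [cm^3]
print(round(volume / flow, 2))                              # -> 0.16
```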
In addition to the array probe, a 'long' single-probe microcooler was fabricated and tested in a similar way. It is intended as a prototype of a cooling device for deep brain areas such as the hippocampus. Apart from the number and length of the cooling probes, the design of the single-probe microcooler is similar to that of the 7-element one. The cooling probe is a tube in a tube, with external diameters of 0.85 and 1.6 mm and a length of 50 mm. The inner tube is made of a 0.45 mm ID quartz capillary, which helps decrease the heat flux through the sidewalls of the probe. A syringe needle with an internal diameter of 1.1 mm serves as the external tube. The capillary was glued with epoxy adhesive into the input collector, and the external tube was glued into the bottom washer. To decrease the cold loss, an additional insulating PVC cover was placed on the external tube. The cover is 10 mm shorter than the external pipe, so the most intensive cooling is provided on
the section of 10 mm from the probe tip. The tip is a passive thermally conductive cone of length ~10 mm.

VI. CONCLUSIONS

Based on the premise that local cooling can reduce synaptic electrical conductivity and in turn suppress abnormal brain activity, an implantable brain microcooler has been developed, fabricated and tested in the laboratory. It is designed to cool a volume of ~1 cubic inch of brain matter from ~37°C down to ~16°C in about 30 s. It is part of an epilepsy prediction and control system comprising a prediction sensor, the microcooler, and a telemetry unit that starts the cooler based on the sensor readings. A theoretical study and numerical modeling were performed to investigate the brain tissue cooling process and to arrive at a design solution for the microcooler. The experimental results for the microcooler in a brain surrogate are in good agreement with the model and demonstrated that the device meets the technical requirements. In addition, a single-probe microcooler with a probe length of ~5 cm was developed, fabricated and tested for cooling deep brain regions such as the hippocampus. Although the microcooler under development was meant to be part of a closed-loop epileptic seizure prevention system, it could also be used in an open-loop system as a less traumatic and risky alternative to drugs and electrical stimulation; the cooling regime in this case is simpler than that of the closed-loop autonomous system. Animal tests are expected to be started soon by the industrial partner of the project, Flint Hills Scientific.

ACKNOWLEDGMENT

This work was supported by funding from the IPP (Initiatives for Proliferation Prevention) Program of the U.S. Department of Energy under contract ANL-T2-214A.

REFERENCES
1. Osorio I., Frei M., Manly F., et al. (2001) An introduction to contingent (closed-loop) brain electrical stimulation for seizure blockage, to ultra-short-term clinical trials, and to multidimensional statistical analysis of therapeutic efficacy. J Clin Neurophysiol 18:533-544
2. Litt B., D'Alessandro M., Esteller R., et al. (2003) Translating seizure detection, prediction and brain stimulation into implantable devices for epilepsy. Proceedings of the 1st International IEEE EMBS Conference on Neural Engineering, Capri Island, Italy, 2003, pp 485-488
3. Osorio I., Frei M., Sunderam S., et al. (2005) Automated seizure abatement in humans using electrical stimulation. Ann Neurol 57:258-268
4. Yang X.-F., Duffy D., Morley R., Rothman S. (2002) Neocortical seizure termination by focal cooling: temperature dependence and automated seizure detection. Epilepsia 43(3):240-245
5. Sackellares C., Iasemidis L., Shiau D., et al. (2000) Epilepsy – when chaos fails. In: Chaos in the brain. World Scientific, Singapore
6. Dymond A., Crandall P. (1973) Intracerebral temperature changes in patients during spontaneous epileptic seizures. Brain Research 60:249-254
7. Gopalsami N., Osorio I., Kulikov S., et al. (2007) SAW microsensor brain implant for prediction and monitoring of seizures. Accepted for publication in the IEEE Sensors Journal
8. Pennes H. (1948) Analysis of tissue and arterial blood temperature in the resting human forearm. J. Appl. Physiol. 1:93-122

Address of the corresponding author:
Author: Lyubynskaya, Tatiana
Institute: BioFil, Russian Federal Nuclear Center-VNIIEF
Street: 37 Prospekt Mira
City: Sarov, Nizhny Novgorod reg.
Country: Russia
Email: [email protected]
In vivo dosimetry with diodes in radiotherapy patients treated with four field box technique A. Strojnik Institute of Oncology Ljubljana/Department of Radiophysics, Ljubljana, Slovenia Abstract— Two diodes have been calibrated as in vivo dosimeters for entrance and exit dose measurements in radiotherapy with 15 MV photon beams. Their response dependencies on dose, dose rate, focus skin distance, field size, gantry angle and patient thickness have been investigated. 1243 routine measurements have been performed in 302 rectal and prostate cancer patients irradiated with the four field box technique. Measurement statistics are presented. Keywords— Radiotherapy, In vivo dosimetry, Diodes
I. INTRODUCTION

Radiotherapy is a cancer treatment modality which exploits the medical benefits of ionizing radiation. Its outcome strongly depends on the accuracy of the absorbed radiation dose to the tumor and the surrounding healthy tissue. An important part of treatment quality control is in vivo dosimetry: small dosimeters attached to the patient's skin measure the absorbed dose at the beam's entrance into, or exit from, the patient's body. The measurements are subsequently compared to the values obtained by the radiotherapy planning system, and in case of an unacceptable difference immediate action is taken to detect and remove the source of error. This paper describes the calibration of two commercial semiconductor diodes as in vivo dosimeters and presents the results of one year of clinical routine.

II. MATERIALS AND METHODS

A properly calibrated semiconductor diode connected to a suitable electrometer can be used as a dosimeter: the barrier voltage across the depletion region of the p-n junction propels the charge carriers generated by the radiation, and the charge collected by the electrometer is proportional to the absorbed dose. To increase accuracy by measuring absorbed dose close to the depth dose maximum, commercial diodes are equipped with a build-up cap.
Fig. 1 A cross-section of a dosimetric diode: 1 – build-up cap; 2 – detector chip. The radius of the build-up cap is approximately 5 mm. The diameter of the active area is 2 mm. Connectors to the electrometer exit to the right of the picture.

A. Calibration

The EDP-20 p-type silicon diode produced by Wellhofer Scanditronix has a build-up cap equivalent in attenuation to 20 mm of water and is intended for in vivo dosimetry in 10 – 20 MV photon beams. Two diodes of this type have been calibrated against an ionization chamber at reference conditions in a 15 MV photon beam from a Varian 2100CD linear accelerator, following the guidelines in [1] and [2]. In reference conditions each diode has been taped to a slab of Plastic Water at a distance of 100 cm from the accelerator focus, in the center of a 10 cm x 10 cm open treatment field, with the gantry angle set to 0°. The ionization chamber has been irradiated with the same treatment parameters at the depth dose maximum. In addition to the calibration factor F, correction factors accounting for non-reference conditions (different focus surface distances – C_FSD, field sizes – C_FS, wedge filters – C_W, and exit dose measurement – C_EXIT) have been determined. During each set of measurements all parameters apart from the one in question have been kept at reference values. In the case of the exit dose correction factor, the gantry has been rotated to 180° and the focus surface distance to the near side of the slab has been set to 100 cm. Signal dependency on the gantry angle has been investigated up to 30°.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 891–894, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 Irradiation of a diode in reference (left) and non-reference conditions: 1 – linear accelerator focus (photon source); 2 – collimator jaws; 3 – diode; 4 – Plastic Water phantom; 5 – treatment couch. Field size (FS, FS’) is defined at the isocenter, 100 cm from the focus. Focus surface (skin) distance (FSD, FSD’) is measured from the linac focus to the phantom surface (patient’s skin). A wedged beam has a wedge filter inserted below collimator jaws (not shown in the picture). From the electrometer reading R the entrance dose measured by the diode has been calculated as
D = R · F · C_FSD · C_FS · C_W    (1)

whereas the exit dose has been calculated as

D = R · F · C_EXIT    (2)
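As a sketch of how equations 1 and 2 are applied (our illustration, not part of the paper; F and the reading R are in arbitrary units, and the correction factors are example values of the kind tabulated below):

```python
# Sketch of the dose computation in Eqs. (1) and (2).
# F is the diode calibration factor; the correction factors account for
# non-reference conditions (FSD, field size, wedge, exit geometry).

def entrance_dose(R, F, c_fsd=1.0, c_fs=1.0, c_w=1.0):
    """Entrance dose, Eq. (1): D = R * F * C_FSD * C_FS * C_W."""
    return R * F * c_fsd * c_fs * c_w

def exit_dose(R, F, c_exit):
    """Exit dose, Eq. (2): D = R * F * C_EXIT."""
    return R * F * c_exit

# Example: diode 1 at FSD 90 cm, 15 cm x 15 cm field, 15 deg wedge
D_in = entrance_dose(R=200.0, F=0.01, c_fsd=0.989, c_fs=1.002, c_w=1.007)
D_out = exit_dose(R=180.0, F=0.01, c_exit=1.10)
```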
Throughout the calibration and the in vivo measurements each diode has been connected to a dedicated channel on an emX Scanditronix electrometer. The electrometer has been connected to a computer running DPD12-pc software, also provided by Scanditronix Wellhofer. Dark current drift and offset of the assembly have been measured and accounted for. The diodes have been recalibrated periodically due to sensitivity degradation linked to radiation damage.

B. In vivo measurements

The two diodes have been used in routine measurements in rectal and prostate cancer patients treated with the four field box technique. With this technique beams strike the target from four directions, with gantry angles of 270°, 0°, 90° and 180°. Such a configuration allows a diode taped to the patient's skin on the 0° beam's axis to measure not only the 0° beam's entrance dose but also the 180° beam's exit dose. The same principle applies to the 90° and 270° beams. If a patient has an artificial hip, the beam passing through it is removed from the treatment plan and the anteroposterior beams are wedged to obtain homogeneous dose coverage over the target.

Routine: After a patient is accurately set up in the correct treatment position (either prone or supine) as established at the CT simulator, the diodes are taped to the patient's skin at the entrance points of the 0° and 90° beams. As the treatment starts with the accelerator gantry at 270° rotating clockwise, the first measurement is of the 270° beam's exit dose, followed by the 0° beam's entrance dose, then the 90° beam's entrance dose, concluding with the 180° beam's exit dose. Measurement readings are multiplied by the appropriate calibration and correction factors as in equations 1 and 2 and compared to the values calculated by the planning system. If the difference exceeds the tolerance level of 5% for entrance dose (as proposed by [2]) or 8% for exit dose, a thorough investigation of treatment parameters is performed together with a scrupulous review of the treatment plan; in vivo dosimetry is repeated at the next treatment session and focus skin distances are carefully measured to verify the correct placement of the dosimeters with respect to the accelerator's focus. If the problem persists, the treating radiation oncologist is consulted: if portal images of the treated area are satisfactory, the number of monitor units of the problematic treatment field is adapted. Another session of in vivo dosimetry is then required. Besides regular checks, diodes are recalibrated at the slightest indication of a systematic error. At the Institute of Oncology Ljubljana in vivo dosimetry is performed at the second treatment session for every patient (portal imaging at the first session) and after any alteration of treatment parameters.

III. RESULTS

A. Calibration

The diodes have been calibrated for clinical dose rates of approximately 3 Gy/min. The signal has remained constant (within 2‰) over the dose rate interval between 1 Gy/min and 6 Gy/min. Diode response linearity has been tested for clinical doses between 10 cGy and 10 Gy, with the results within 2‰.
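The decision rule of the routine described above can be sketched as a simple comparison; the 5%/8% tolerance levels are those quoted in the text, while the function and variable names are illustrative:

```python
def within_tolerance(measured, expected, kind):
    """Compare an in vivo reading with the planned dose.

    kind: 'entrance' (5% tolerance) or 'exit' (8% tolerance).
    Returns True if the relative difference is within tolerance.
    """
    tolerance = {"entrance": 0.05, "exit": 0.08}[kind]
    relative_difference = (measured - expected) / expected
    return abs(relative_difference) <= tolerance

# Example: a 3% entrance-dose deviation passes,
# while a 9% exit-dose deviation triggers an investigation.
ok = within_tolerance(2.06, 2.00, "entrance")   # True
flag = within_tolerance(2.18, 2.00, "exit")     # False -> investigate
```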
The calibration factors F could not be expressed in terms of Gy/As because the DPD12-pc software presents the signal from the electrometer in arbitrary dosimetric units instead of As. The manufacturer specifies the detector sensitivity to be around 30 nAs/Gy. The diodes have been irradiated in the geometric conditions which are most common in patient treatment. This includes focus skin distances from 80 cm to 115 cm and field sizes between 5 cm x 5 cm and 30 cm x 30 cm. The diodes have also been irradiated with wedged beams, the nominal inclinations of the hard wedges being 15°, 30° and 45°. The correction factors are presented in Tables 1, 2 and 3. Exit dose measurements have been performed with thicknesses of the Plastic Water phantom ranging from 20 cm to 35 cm, simulating different patient thicknesses. The exit
dose correction factor has been determined to be almost constant (within 1%): 1,10 for diode 1 and 1,12 for diode 2. The influence of gantry angles up to 30° has been below 1%. Since it is difficult to determine the angle between the beam axis and the patient's skin surface on which the diode is taped, and since with the four field box technique described in the previous section all beams are close to perpendicular, gantry angle correction factors have been omitted. Temperature dependence has not been investigated, as it is difficult if not impossible to monitor the temperature of the detector during patient treatment. The diode manufacturer states an increase of diode sensitivity of approximately 2,5‰ per °C; according to [1] the influence of temperature is negligible. It is estimated that during the time of use the diodes have absorbed approximately 300 Gy. Sensitivity degradation of about 1% has been observed.

Table 1 Focus skin distance correction factors

Focus Skin Distance (cm)   Diode 1   Diode 2
80                         0,962     0,965
85                         0,982     0,979
90                         0,989     0,988
95                         0,994     0,994
100                        1         1
105                        1,006     1,007
110                        1,011     1,012
115                        1,016     1,016

Table 2 Field size correction factors

Field Size (cm x cm)   Diode 1   Diode 2
5 x 5                  0,997     0,998
10 x 10                1         1
15 x 15                1,002     1,004
20 x 20                1,004     1,005
25 x 25                1,006     1,008
30 x 30                1,008     1,011

Table 3 Wedge correction factors

Wedge (°)   Diode 1   Diode 2
15          1,007     1,013
30          1,015     1,017
45          1,016     1,019
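For focus skin distances between the tabulated values, a piecewise-linear interpolation of Table 1 is a natural choice; the sketch below uses the diode 1 column (how DPD12-pc actually handles intermediate FSDs is not specified in the text):

```python
# Linear interpolation of the FSD correction factors of Table 1 (diode 1).
fsd_cm = [80, 85, 90, 95, 100, 105, 110, 115]
c_fsd_diode1 = [0.962, 0.982, 0.989, 0.994, 1.000, 1.006, 1.011, 1.016]

def interpolate_c_fsd(fsd, xs=fsd_cm, ys=c_fsd_diode1):
    """Piecewise-linear interpolation within the tabulated FSD range."""
    if not xs[0] <= fsd <= xs[-1]:
        raise ValueError("FSD outside calibrated range")
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= fsd <= x1:
            return y0 + (y1 - y0) * (fsd - x0) / (x1 - x0)

# Example: FSD of 97.5 cm lies midway between 0.994 and 1.000
c = interpolate_c_fsd(97.5)   # about 0.997
```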
B. In vivo measurements

From January 2006 to February 2007 in vivo dosimetry has been conducted in 326 treatment sessions. Measurement statistics are presented in Table 4. In 27 (9%) out of 302 patients in vivo measurements have exceeded the tolerances. In 6 (22%) of the 27, repeated measurements have also resulted beyond acceptable levels. In 1 of these 6 patients a closer inspection has revealed that a wrong CT image set had been assigned to the patient; a new therapy plan has later been created with the correct CT image set. In another of the 6 patients the source of error has proven to be an incomplete CT image set: due to the size of the patient, the outmost parts of the patient's hips had not been captured by the CT scanner. To rectify the problem, focus skin distances in the lateral fields have been measured and the number of monitor units has been adapted. Isocenter marks had been incorrectly drawn on the patient's skin in two cases: in both the error has been simultaneously discovered by in vivo dosimetry and electronic portal imaging, and the marks have been correctly redrawn on the simulator. In the remaining two of the six patients the source of error has not been discovered; it has only been assumed that the patients' bowels had not been emptied before radiotherapy as they had been prior to the CT scan. As the portal images have been evaluated by the radiation oncologist as acceptable, the number of monitor units has been suitably modified. After the corrections in vivo dosimetry has been repeated and has resulted within tolerance levels in all 6 cases. In some of the prostate cancer patients treated in supine position a probable reason for the initial exceeding of tolerances has been the patient's body hair obstructing the taping of the diode to the patient's skin. The problem has mostly been solved by moving the diode to a hairless spot of skin still within the treatment field. In some of the rectal cancer patients in prone position the solution to large deviations calculated from the initial measurements has been moving the dosimeter from a sloped to a nearby horizontal area, hence avoiding large angles between the 0° beam axis and the diode.

Table 4 Measurement statistics

                                                                 Diode 1    Diode 2
Number of entrance dose measurements                             317        326
 - exceeding tolerance at initial session                        17 (5%)    27 (8%)
 - exceeding tolerance at repeated session                       3 (1%)     5 (2%)
Average difference between measured and expected entrance dose   0,0%       1,0%
Standard deviation of entrance dose differences                  3,0%       2,7%
Number of exit dose measurements                                 295        305
 - exceeding tolerance at initial session                        24 (8%)    19 (6%)
 - exceeding tolerance at repeated session                       6 (2%)     6 (2%)
Average difference between measured and expected exit dose       -0,2%      -0,6%
Standard deviation of exit dose differences                      6,7%       4,6%

IV. DISCUSSION

So far in vivo dosimetry has not detected any treatment equipment malfunction. The source of the problems encountered during the one year routine has either been patient data (CT images) or patient set-up. A quantity which is very influential in dose delivery and at the same time influenced by patient set-up is the focus skin distance. Table 5 facilitates the discovery of the source of error in measurements with extreme deviations. The entrance dose tolerance level of 5% allows inaccuracies of roughly 2,5 cm in focus skin distance. Similarly, for exit dose a tolerance level of 8% allows errors of about 1,5 cm in patient thickness. As both discrepancies could easily be measured with an optical distance meter, a quick focus skin distance check could prove a valuable quality assurance procedure in departments without in vivo dosimetric equipment.

Table 5 Error detection with opposite treatment fields

Entrance dose error (beam 0°)   Exit dose error (beam 180°)   Probable source of error
< –5%                           > +8%                         Patient too thin or treatment couch too low
> +5%                           < –8%                         Patient too thick or treatment couch too high
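The link between the FSD tolerances and the dose tolerances quoted above follows from the inverse square law; a quick numerical check (our sketch, not part of the paper, assuming a nominal FSD of 100 cm):

```python
def entrance_dose_error_from_fsd(delta_cm, nominal_fsd_cm=100.0):
    """Relative entrance-dose error caused by an FSD error delta_cm,
    by the inverse square law: D ~ 1/FSD^2."""
    return (nominal_fsd_cm / (nominal_fsd_cm + delta_cm)) ** 2 - 1.0

# A 2.5 cm FSD error changes the entrance dose by about -4.8%,
# close to the 5% entrance-dose tolerance level quoted in the text.
err = entrance_dose_error_from_fsd(2.5)
```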
CONCLUSIONS Complementary to portal imaging in vivo dosimetry provides dosimetric information about the actual treatment delivery and represents another safety measure in the treatment process. In one year since its implementation it has revealed and prevented 6 cases of inaccurate treatment.
ACKNOWLEDGMENT The study was done partly within a national program Development and Evaluation of New Approaches to Cancer Treatment P3-0003(D) financially supported by the Slovenian Research Agency (ARRS).
REFERENCES

1. Van Dam J, Marinello G (1994) Methods for in vivo dosimetry in external radiotherapy. ESTRO Booklet No. 1, ISBN 90-804532-5
2. Huyskens DP, Bogaerts R, Verstraete J, Lööf M, Nyström H, Fiorino C, Broggi S, Jornet N, Ribas M, Thwaites DI (2001) Practical guidelines for the implementation of in vivo dosimetry with diodes in external radiotherapy with photon beams (entrance dose). ESTRO Booklet No. 5, ISBN 90-804532-3

Address for correspondence:
Andrej Strojnik
Department of Radiophysics
Institute of Oncology Ljubljana
Zaloska cesta 2
SI – 1000 Ljubljana
Slovenia
Phone: (+386) 1 5879 631
E-mail:
[email protected]
Interaction between charged membrane surfaces mediated by charged nanoparticles

J. Pavlic1,2, A. Iglic1, V. Kralj-Iglic3, K. Bohinc1,2

1 Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia
2 University College for Health Studies, Poljanska 26a, 1000 Ljubljana, Slovenia
3 Faculty of Medicine, Lipiceva 2, Ljubljana, Slovenia
Abstract— The interaction between charged membrane surfaces separated by a solution containing charged nanoparticles was studied experimentally and theoretically. A nonlocal theory for the nanoparticles was developed, in which the finite size of the nanoparticles and the spatial distribution of charge within a particle were taken into account. It was shown that for large enough membrane surface charge densities and large enough nanoparticle dimensions, the force between equally charged membranes may be attractive due to the spatially distributed charges within the nanoparticles.

Keywords— Nanoparticles, charge density, membrane surface
I. INTRODUCTION

In biology, there are many phenomena which motivate studies of the electrostatic interaction between charged macroions in electrolyte solution. The condensation of DNA can be induced by the presence of multivalent counterions [1, 2], and corresponds to the packing of DNA in viruses. The complexation of DNA with positively charged colloidal particles [3, 4] was observed, which corresponds to nucleosome core particles and the basic fiber of chromatin. Network formation in actin solutions [5] is the consequence of the attractive interactions between cytoskeletal filamentous actin molecules mediated by multivalent ions. The aggregation of rod-like M13 viruses is induced by divalent tunable diamine ions [6]. In this work we study the interaction between negatively charged membrane surfaces of giant phospholipid vesicles in a sugar solution containing multivalent rod-like ions. The negative charge of the phospholipid bilayers was generated by a certain proportion of the phospholipid cardiolipin, while the multivalent cations are represented by Spermidine and Spermine. Adhesion of phospholipid vesicles due to the presence of Spermidine or Spermine in the solution was observed. To describe the observed features, a theoretical model was constructed in which the phospholipid membranes are described as infinite flat surfaces bearing uniformly distributed charge, while Spermidine and Spermine are considered as rod-like counterions. Due to the specific shape of these molecules, which bear charge at their ends, the charge distribution within Spermidine and Spermine is represented by two effective charges e separated by a distance l. The system is described by the nonlocal theory of the electric double layer, where the shape and the orientational restrictions of the rod-like ions are considered.

II. MATERIALS AND METHODS

A. Spermidine and Spermine polyamine molecules

The polyamines Spermidine and Spermine (Fig. 1) were purchased from Sigma-Aldrich. Spermidine is trivalent, while Spermine has a valency of four. They are positively charged (the amino groups contribute the charge). Spermidine and Spermine were obtained in powder form and were dissolved in distilled water to a final concentration of 2 mg/ml.
Fig. 1 Schematic presentation of Spermidine and Spermine polyamines.
B. Giant phospholipid vesicles (GPV)

GPVs were prepared at room temperature (23°C) by the modified electroformation method [7]. The synthetic lipids cardiolipin (1,1'2,2'-tetraoleoyl cardiolipin), POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine), and cholesterol were purchased from Avanti Polar Lipids, Inc.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 903–906, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Appropriate volumes of POPC, cardiolipin and cholesterol, all dissolved in a 2:1 chloroform/methanol mixture, were combined in a glass jar and thoroughly mixed; POPC, cholesterol and cardiolipin were mixed in the proportion 2:2:1. A volume of 20 µl of the lipid mixture was applied to the platinum electrodes. The solvent was allowed to evaporate in a low vacuum for 2 hours. The coated electrodes were placed in the electroformation chamber, which was then filled with 3 ml of 0.2 M sucrose solution. An AC electric voltage with an amplitude of 5 V and a frequency of 10 Hz was applied to the electrodes for 2 hours, followed by 2.5 V and 5 Hz for 15 minutes, 2.5 V and 2.5 Hz for 15 minutes and finally 1 V and 1 Hz for 15 minutes. The content was rinsed out of the electroformation chamber with 5 ml of 0.2 M glucose and stored in plastic test tubes at 4°C. The vesicles were left to sediment under gravity for one day and were then used for a series of experiments.
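The electroformation voltage protocol above can be summarized as a schedule; a sketch (the settings and durations are those stated in the protocol, the representation is ours):

```python
# Electroformation AC voltage schedule:
# (amplitude in V, frequency in Hz, duration in minutes),
# as described in the protocol above.
schedule = [
    (5.0, 10.0, 120),
    (2.5, 5.0, 15),
    (2.5, 2.5, 15),
    (1.0, 1.0, 15),
]

# Total electroformation time: 165 minutes
total_minutes = sum(duration for _, _, duration in schedule)
```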
Fig. 2 The solution of giant phospholipid vesicles containing a 20% weight ratio of cardiolipin, a few minutes after the addition of Spermidine to the solution with vesicles.
C. Observation

Vesicles were observed with an inverted microscope Zeiss Axiovert 200 with phase contrast optics and recorded with a Sony XC-77CE video camera. The solution containing vesicles was placed into an observation chamber made from cover glasses and sealed with grease. The larger (bottom) cover glass was covered by two smaller cover glasses, each having a small semicircular part removed at one side. Covering the bottom glass with two such opposing cover glasses formed a circular hole in the middle of the observation chamber. In all experiments the solution of vesicles (45 µl) was placed in the observation chamber. The solution containing the substance under investigation (Spermidine or Spermine) was added into the circular opening in the middle of the observation chamber.

III. EXPERIMENT

The solution of GPVs contains a heterogeneous population of vesicles of different shapes and sizes. The vesicles are subject to thermal fluctuations of their shape. A few minutes after the addition of either Spermidine or Spermine to the solution with GPVs, the thermal fluctuations of the vesicles diminish while the vesicles adhere to each other and to the ground. Further away from the site of insertion of Spermidine or Spermine, the process of adhesion may take up to 30 min. Vesicles slowly approach each other, but once they come within a small distance they adhere to each other. The complexes formed by the adhesion of charged giant phospholipid vesicles after the addition of Spermidine and Spermine are shown in Fig. 2 and Fig. 3, respectively.
Fig. 3 A complex of liposomes formed by adhesion of giant phospholipid vesicles containing a 20% weight ratio of cardiolipin, a few minutes after the addition of Spermine to the solution with vesicles.

IV. MATHEMATICAL MODEL

We consider an aqueous electrolyte solution containing divalent rod-like ions. The solution is sandwiched between two equally charged planar surfaces with surface charge density σ. The two surfaces of area A are separated by a distance D. The description of the system is based on the non-local theory of the electric double layer, where the rod-like ions are characterized by positional and orientational degrees of freedom. The energy is therefore stored in the electrostatic field as well as in the translational and orientational entropy of the rod-like ions. The electrostatic free energy of the system is
F/AkT = (1/8πl_B) ∫₀^D Ψ′² dx + ∫₀^D [n(x) ln[v₀n(x)] − n(x)] dx
      + ∫₀^D n(x) ⟨p(l|x)[ln p(l|x) + U(x,l)]⟩ dx
      + ∫₀^D n(x)λ(x)[⟨p(l|x)⟩ − 1] dx + μ ∫₀^D [−2n(x) − 2σ/De] dx    (1)
where Ψ is the reduced electrostatic potential, μ is the reduced chemical potential, U(x,l) is the external reduced potential of the charged wall, n(x) is the local concentration of reference charges of the multivalent rod-like ions, p(l|x) is the conditional probability density describing the position of the second charge on the rod-like counterion if the first charge is located at x, l_B = e²/4πεkT is the Bjerrum length, ε is the dielectric constant of water, k is the Boltzmann constant, T is the absolute temperature, v₀ is the volume of the divalent ion, and ⟨...⟩ denotes averaging over all possible orientations. Two constraints are added to the free energy, describing the normalization condition for the probability density and the electro-neutrality of the system. The equilibrium state of the system is obtained by minimization of the free energy (1). The variational problem leads to an integro-differential equation which is solved numerically. As a result we obtain the consistently related equilibrium positional and orientational distribution functions for the counterions and the equilibrium free energy of the system. The dependence of the equilibrium free energy on the distance between the
Fig. 4 The free energy F/AkT [1/nm²] as a function of the distance D [nm] between two equally charged surfaces, for σ = 0.1 As/m² and σ = 0.033 As/m². The model parameter is l = 5 nm.
charged surfaces reveals the nature of the interaction between the surfaces. If the free energy increases with increasing distance between the surfaces, then the force between the surfaces is attractive. The equilibrium distance between the surfaces is obtained at the minimum of the dependence of the free energy on the distance D. The free energy of the system as a function of the distance between two negatively charged surfaces is shown in Fig. 4. For large enough σ the free energy first decreases with increasing distance D, reaches a minimum and then increases. For small σ the free energy monotonically decreases with increasing distance D. The minimum of the free energy is more pronounced for longer rod-like ions. The insets show a scheme of the most probable orientation of the rod-like ions at minimal free energy.

V. DISCUSSION AND CONCLUSIONS

The presented model provides a simplified analysis of the problem. In this paper we studied the interaction between negatively charged surfaces mediated by positively charged trivalent Spermidine and tetravalent Spermine. The experiments showed the adhesion of GPVs after the addition of Spermidine and Spermine. This means that even though the surfaces are negatively charged they are attracted to each other. In order to better understand the experimental effects we considered a theoretical model where Spermidine and Spermine are treated as rod-like ions [8] and the GPVs are described by two equal planar surfaces. The theory confirmed the attraction of negatively charged surfaces mediated by charged nanoparticles. The attraction between equally charged surfaces originates from correlations between the multivalent counterions, which are not considered in the mean-field Poisson-Boltzmann theory [9]. The Monte Carlo (MC) simulations of Guldbrand et al. [10] were the first to confirm the existence of attraction between equally charged surfaces immersed in a solution of divalent ions in the limit of high surface charge density, as originally predicted by Oosawa [11]. Further MC simulations showed that the attractive interaction between equally charged surfaces may arise for high surface charge density, low temperature, low relative permittivity and polyvalent counterions [12]. The anisotropic hypernetted chain approximation within the primitive electrolyte model for divalent ions was also used [13, 14], with the ions described as charged hard spheres immersed in a continuum dielectric medium. At moderate surface-surface distances and high surface charge density the attraction between the equally charged surfaces was found.
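As a quick numerical cross-check of the model's length scale (our addition, not from the paper), the Bjerrum length l_B = e²/4πεkT used above evaluates to roughly 0.7 nm in water at room temperature:

```python
import math

# Bjerrum length l_B = e^2 / (4 * pi * epsilon * k * T) in water at 298 K.
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
eps_r = 78.5               # relative permittivity of water (approximate)
k = 1.380649e-23           # Boltzmann constant, J/K
T = 298.0                  # absolute temperature, K

l_B = e**2 / (4 * math.pi * eps0 * eps_r * k * T)
# l_B is about 0.7 nm: the separation at which the electrostatic energy
# of two elementary charges in water equals the thermal energy kT.
```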
REFERENCES

1. Bloomfield V. A. (1996) Curr. Opin. Struct. Biol. 5: 334
2. Teif V. (2005) Biophys. J. 89(4): 2574-2587
3. Raedler J. O., Koltover I., Saldit T., Safinya C. R. (1997) Science 275: 810-814
4. Gelbart W. M., Bruinsma R. (2000) Physics Today 53: 38
5. Angelini T. E. (2003) Proc. Natl. Acad. Sci. USA 100: 8634
6. Butler J. C., Angelini T. (2003) Phys. Rev. Lett. 91: 028301
7. Angelova M. I., Soleau S., Meleard P., Faucon J. F., Bothorel P. (1992) Prog. Colloid Polym. Sci. 89: 127-131
8. Bohinc K., Iglič A., May S. (2004) Europhys. Lett. 68(4): 494-500
9. Carnie S., McLaughlin S. (1983) Biophys. J. 44: 325
10. Guldbrand L. (1984) J. Chem. Phys. 80: 2221
11. Oosawa F. (1968) Biopolymers 6: 1633
12. Svensson B., Joensson B. (1984) Chem. Phys. Lett. 108: 580
13. Kjellander R. (1996) Ber. Bunsenges. Phys. Chem. 100(6): 894-904
14. Kjellander R. (1988) J. Phys. France 49: 1009

Author: Janez Pavlič
Institute: University College for Health Studies
Street: Poljanska 26a
City: Ljubljana
Country: Slovenia
Email:
[email protected]
Laminar Axially Directed Blood Flow Promotes Blood Clot Dissolution: Mathematical Modeling Verified by MR Microscopy

J. Vidmar1, B. Grobelnik1, U. Mikac1, G. Tratar2, A. Blinc2 and I. Sersa1

1 Condensed Matter Physics Department, Jozef Stefan Institute, Ljubljana, Slovenia
2 Department of Vascular Diseases, University of Ljubljana Medical Centre, Slovenia
Abstract— Understanding the process of thrombolysis is key to the corresponding medical treatment. Thrombolysis of non-occlusive blood clots is significantly accelerated by axially directed blood plasma flow. When fast blood flow occurs, the increase of the dissolution rate is too large to be explained merely by better permeation of the thrombolytic agent into the clot and more efficient biochemical degradation: viscous forces caused by the shearing of blood play an essential role in addition to the known biochemical fibrinolytic reactions. We developed an analytical mathematical model based on the hypothesis that clot dissolution dynamics is proportional to the power of the blood plasma flow dissipating along the clot. The model assumes cylindrical non-occlusive blood clots with a centrally placed flow channel; the flow is assumed laminar at a constant rate at all times during dissolution. Effects of a sudden constriction on the flow and its impact on the dissolution rate are considered as well. The model of clot dissolution was verified experimentally by dynamic magnetic resonance (MR) microscopy in an in-vitro circulation system containing plasma with an MR imaging contrast agent and recombinant tissue-type plasminogen activator (rt-PA). Sequences of dynamically acquired 3D low resolution MR images of entire clots and 2D high resolution MR images of clots in an axial cross-section were used to evaluate the dissolution model by fitting it to the experimental data. The experimental data fitted the model well and confirmed our hypothesis.

Keywords— flow, thrombolysis, blood clots, MR microscopy
I. INTRODUCTION

Restoring vessel patency by dissolving blood clots is the main goal of thrombolytic therapy, which soared in the area of treating myocardial infarction during the last two decades [1, 2] but is now moving more into the realm of treating acute ischemic stroke [3]. In addition, thrombolytic treatment is valuable when dealing with hemodynamically significant pulmonary embolism [4, 5] and acute/subacute arterial thrombosis [6]. Biochemically, thrombolysis starts with activation of the proenzyme plasminogen into the active serine protease plasmin [7]. Several plasminogen activators, such as recombinant tissue type plasminogen activator (rt-PA) or its modified variants reteplase and tenecteplase, may be used in pharmacological doses to initiate thrombolysis [8]. As an effective and safer alternative to plasminogen activators, local infusion of plasmin into the clot has been successfully applied in experimental animals [9]. The final outcome of thrombolysis depends on the properties of the thrombolytic agent, the clot structure and the characteristics of molecular transport into the clot [10]. Thrombolysis at low-velocity axially directed blood flow has already been studied and mathematically modeled. Pleydell and coworkers [11] developed a mathematical model of post-canalization thrombolysis at low-velocity flow that takes into account the changing concentrations of the major components of the fibrinolytic system. However, relatively little is known about how high-velocity axially directed blood flow influences thrombolysis of non-occlusive clots. The results of Sakharov and Rijken [12] as well as our own results [13] show that high-velocity plasma flow significantly enhances the dissolution of blood clots when favorable biochemical conditions are present. We have already presented a model that takes into account the mechanical forces generated at the clot-plasma interface in addition to the biochemical conditions [14]. Our main assumption was that the viscous forces of blood flow, which are responsible for the surface erosion of the clot in the flow channel, act in parallel with the fibrinolytic system [14]. The present paper refines our initial model by taking into account the effects of a sudden blood vessel constriction at the site of non-occlusive thrombosis and evaluates its impact on the rate of clot dissolution. The model was verified experimentally by dynamic MR microscopy of non-occlusive blood clots dissolving in an in-vitro circulation system.

II. THEORY

A. Blood Velocity Model

The blood velocity profile in a normal vessel is, according to Poiseuille's law, parabolic.
After entering the flow channel of the clot, this profile progressively changes from an initially flat profile to a parabolic one as blood moves downstream in the channel (Fig. 1). In the transition region, which extends from the entrance to the depth known as the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 859–863, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
J. Vidmar, B. Grobelnik, U. Mikac, G. Tratar, A. Blinc and I. Sersa
entrance length [15], the blood adjacent to the wall is progressively retarded by shear forces at the vessel wall, while blood in the core region is accelerated to maintain the same flow rate. At distances from the entrance larger than the entrance length, the region with dominating viscous effects, also known as the boundary layer (yellow region in Fig. 1), covers the whole cross-sectional area of the flow channel and the flow profile is then fully developed. The boundary layer thickness δ increases with the square root of the distance z from the entrance point

$$\delta(z) = \begin{cases} R\,\sqrt{z/z_0}\,; & z < z_0 \\ R\,; & z \ge z_0 \end{cases} \tag{1}$$

Fig. 1 Blood velocity profile transformation after entering the flow channel of the clot.

The boundary layer extends over the whole channel cross-section (δ = R) at entrance distances z equal to or larger than the entrance length z0. The entrance length is proportional to the channel diameter d = 2R and the Reynolds number [15]

$$z_0 = 0.06\, d\, \mathrm{Re} = \frac{0.24\,\rho\,\Phi_V}{\pi\,\eta}. \tag{2}$$

A simple model for the velocity profile in the entrance region can be constructed knowing the boundary layer axial profile (1). The model assumes a flat velocity profile in the core region that continuously converts into a parabolic velocity profile within the boundary layer, ending with zero velocity at the channel walls. Combining these conditions with conservation of the flow rate and with the boundary layer axial profile (1) yields the following velocity profile as a function of the radial distance r and the entrance distance z

$$v(r,z) = \begin{cases} \dfrac{2\,v_0 R^2}{R^2 + \left(R-\delta(z)\right)^2}\,; & r < R-\delta(z) \\[2ex] \dfrac{2\,v_0 R^2 \left(R^2 - r^2\right)}{R^4 - \left(R-\delta(z)\right)^4}\,; & r \ge R-\delta(z) \end{cases} \tag{3}$$

B. Rate of Clot Dissolution

The clot begins to dissolve after a pharmacologic concentration of a thrombolytic is added to the flowing blood. The channel along the clot expands radially as thin layers of the clot are gradually removed. We assume that for every biochemical setting, the rate of clot dissolution is proportional to the power dissipated by the blood flowing along the clot. The work dW done by the flowing blood on a volume element of the clot in the flow channel in time dt is proportional to the shear velocity ∂v/∂r squared
$$dW = \lambda\, S\, \eta \left(\left.\frac{\partial v}{\partial r}\right|_{r=R}\!(z)\right)^{2} dt, \tag{4}$$
where η is the blood viscosity, λ is the thickness of the volume element and S is the surface area of the volume element that is exposed to the streaming blood in the flow channel. The shear velocity of the flowing blood in contact with the clot (at r=R) can be calculated from (3)
$$\left.\frac{\partial v}{\partial r}\right|_{r=R}\!(z) = -\,\frac{4\, v_0\, R^{3}}{R^{4} - \left(R - \delta(z)\right)^{4}}. \tag{5}$$
The work dW of the flowing blood in contact with the clot is used for removal of a thin layer of the clot of thickness dR. The work is proportional to the volume dV of the removed layer, dW = c dV = c S dR, where c is a proportionality constant incorporating the efficiency of the thrombolytic agent. As every thrombolytic agent needs time to start its enzymatic reaction, the constant c is initially very large and after time τ approaches its final value c∞ at a rate Δ. The activation of the thrombolytic agent can be described by a Fermi-like function

$$\frac{1}{c(t)} = \frac{1}{c_\infty}\,\frac{1}{1 + \exp\!\left((\tau - t)/\Delta\right)}. \tag{6}$$
Replacing the work and the thrombolytic agent efficiency constant in the equation dW = c S dR with the expressions in (4), (5) and (6) yields a differential equation for the clot dissolution rate with the following solution

$$R(z,t) = \begin{cases} R_\infty \left[ \left(\dfrac{R_0}{R_\infty}\right)^{7} + \dfrac{\Delta}{T_7}\, \dfrac{\ln\!\left(\dfrac{1+\exp((t-\tau)/\Delta)}{1+\exp(-\tau/\Delta)}\right)}{\left(1-\left(1-\sqrt{z/z_0}\right)^{4}\right)^{2}} \right]^{1/7}; & z < z_0 \\[3ex] R_\infty \left[ \left(\dfrac{R_0}{R_\infty}\right)^{7} + \dfrac{\Delta}{T_7}\, \ln\!\left(\dfrac{1+\exp((t-\tau)/\Delta)}{1+\exp(-\tau/\Delta)}\right) \right]^{1/7}; & z \ge z_0 \end{cases} \tag{7}$$
Here R0 is the initial radius of the flow channel at the beginning of the dissolution,
Laminar Axially Directed Blood Flow Promotes Blood Clot Dissolution: Mathematical Modeling Verified by MR Microscopy
R∞ is the radius of the normal blood vessel and T7 is a time constant equal to

$$\frac{1}{T_7} = \frac{16\,\lambda\,\eta\,\Phi_V^{2}}{\pi^{2}\, c_\infty\, R_\infty^{7}}. \tag{8}$$

According to the model in equation (7), the dissolution completes in a time approximately equal to the parameter T7. For convenience, we introduce a parameter reflecting the occlusion level, defined as the ratio between the cross-sectional area of the clot and the cross-sectional area of the normal, non-occluded vessel

$$x(z,t) = 1 - \left(\frac{R(z,t)}{R_\infty}\right)^{2}. \tag{9}$$
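As a numerical sketch of the model above, the snippet below evaluates Eqs. (1)–(3) for the entrance flow and Eqs. (6), (7) and (9) for the dissolution dynamics. The fluid properties and geometry correspond to the in-vitro experiments described in section III; the values of T7, τ and Δ are illustrative stand-ins (the paper does not quote the fitted values), and the entrance-region branch of Eq. (7) follows the reconstruction given above.

```python
import math

# --- Entrance flow model, Eqs. (1)-(3) ---

def entrance_length(rho, phi_v, eta):
    """z0 = 0.06 d Re = 0.24 rho Phi_V / (pi eta), Eq. (2)."""
    return 0.24 * rho * phi_v / (math.pi * eta)

def boundary_layer(z, z0, R):
    """Boundary layer thickness delta(z), Eq. (1)."""
    return R * math.sqrt(z / z0) if z < z0 else R

def velocity(r, z, z0, R, v0):
    """Developing velocity profile v(r, z), Eq. (3)."""
    d = boundary_layer(z, z0, R)
    if r < R - d:  # flat core region
        return 2 * v0 * R**2 / (R**2 + (R - d)**2)
    return 2 * v0 * R**2 * (R**2 - r**2) / (R**4 - (R - d)**4)

# --- Dissolution model, Eqs. (6), (7), (9) ---

def activation_integral(t, tau, delta):
    """c_inf times the integral of dt'/c(t') for the Fermi-like activation (6)."""
    return delta * math.log((1 + math.exp((t - tau) / delta))
                            / (1 + math.exp(-tau / delta)))

def channel_radius(z, t, R0, Rinf, z0, T7, tau, delta):
    """Flow-channel radius R(z, t), Eq. (7)."""
    g = activation_integral(t, tau, delta) / T7
    if z < z0:  # entrance region: thinner boundary layer, higher shear
        g /= (1 - (1 - math.sqrt(z / z0)) ** 4) ** 2
    return Rinf * ((R0 / Rinf) ** 7 + g) ** (1 / 7)

def occlusion(z, t, R0, Rinf, z0, T7, tau, delta):
    """Occlusion level x(z, t), Eq. (9)."""
    return 1 - (channel_radius(z, t, R0, Rinf, z0, T7, tau, delta) / Rinf) ** 2

# Plasma-like fluid and the high-velocity regime of the experiments:
eta = 1.8e-3                  # Pa s, 1.8x the viscosity of water
rho = 1035.0                  # kg/m^3, 3.5% denser than water
phi_v = 1.64e-6               # m^3/s, volume flow rate
R0, Rinf = 0.35e-3, 1.5e-3    # m: needle-channel and tube radii
v0 = phi_v / (math.pi * R0**2)          # mean channel velocity, ~4.3 m/s
z0 = entrance_length(rho, phi_v, eta)   # ~0.07 m
Re = z0 / (0.06 * 2 * R0)               # ~1700

T7, tau, delta = 1500.0, 600.0, 120.0   # s, hypothetical fit parameters
x0 = occlusion(0.015, 0.0, R0, Rinf, z0, T7, tau, delta)
print(v0, z0, Re, x0)   # x0 ~ 0.946, the reported initial occlusion level
```

In practice T7, τ and Δ would be obtained by fitting the measured occlusion curves, as described in section III; small differences between the computed z0 and Re and the quoted 70 mm and 1660 come from rounding of the fluid properties.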
III. MATERIALS AND METHODS

Clotting of blood collected from a healthy male volunteer was induced in vitro by adding calcium (50 μl of 2 mol/l CaCl2 per ml of blood) and thrombin at a final concentration of 1 NIH unit/ml of blood. Non-retracted clots (retraction was inhibited by the phosphodiesterase inhibitor UDCG 212) were formed in cylindrical glass tubes with an inner diameter of 3 mm and a length of 3 cm. The clots were pierced lengthways from the outside by a needle with a diameter of 0.7 mm to create a flow channel along the clot. The glass tube with the clot was then connected by a flexible hose to a pump generating a constant pressure of either 15000 Pa or 3000 Pa, which represent the mean arterial and venous pressures in humans. In each experiment the artificial circulation system was filled with approximately 0.5 l of blood plasma at room temperature. The viscosity of blood plasma is 1.8 times higher than that of water, whereas its density is about 3.5% higher than the density of water [15]. Two blood flow velocity regimes were tested: a high velocity regime corresponding to the shear forces generated in the arterial system and a low velocity regime corresponding to the shear forces in the venous system. In the high velocity regime the volume flow rate was 1.64 ml/s, the initial average blood velocity and the Reynolds number were 4.26 m/s and 1660, and the entrance length was 70 mm; in the low velocity regime the volume flow rate was 0.074 ml/s, the initial average blood velocity and the Reynolds number were 0.19 m/s and 75, and the entrance length was 3.1 mm. In both velocity regimes the initial occlusion level was x = 0.946. The artificial circulation system was set up in a horizontal-bore 2.35 T superconducting magnet (Oxford, UK) – a part of the
Fig. 2 MR images of blood clot dissolution (high and low velocity regimes; images at 0, 4, 12 and 16 min).
MRI system that also consisted of a TecMag NMR console and Bruker NMR probes and gradients. First, the glass tube with the clot was inserted into an RF probe in the centre of the MRI magnet. Then, hoses were connected to the glass tube with the clot and to the pump immersed in a container filled with plasma, and the circulation system was started. After that, dynamic MR imaging was started. For the first 10 minutes, clots were imaged without the thrombolytic agent in the circulation system to ensure that clot dissolution was not caused by mechanical erosion alone; then the thrombolytic agent recombinant tissue plasminogen activator rt-PA (Actilyse, Boehringer, Germany) was added to the plasma in a pharmacologic dose of 2 μg/ml, together with the MR imaging contrast agent Gd-DTPA (Magnevist, Berlex Lab., Germany) at 1 mmol/l, and dynamic imaging continued for another 40 minutes. The imaging method was the conventional spin-echo MRI technique with parameters TE/TR = 8/400 ms, an imaging field of view of 2 cm, an imaging matrix of 256 by 256 points and a slice thickness of 2 mm. Clots were imaged in a transversal slice positioned centrally to the clot, 15 mm downstream from the entrance point (Fig. 2). All images were analyzed with the ImageJ program, which was used for measuring the cross-sectional areas of the remaining clot as a function of time to obtain the occlusion level time dependence. These results were then divided into high and low velocity regime groups; in each group there were at least 4 clots. The experimental occlusion level data were analyzed with the mathematical model in (7). The model parameters T7, τ and Δ were extracted by finding the best fit of the model to the experimental occlusion level data. Fitting was done with the Origin program (OriginLab, Northampton, MA, USA).

IV. RESULTS AND DISCUSSION

Figure 3 shows the occlusion level time dependence, i.e., the dissolution curve, in the high and in the low velocity regime, obtained by analysis of the dynamic MR
image sequences as described earlier. In both velocity regimes, the dissolution model fits the experimental MRI data well. The difference in the dissolution rate between the high and the low velocity regime is apparent. The difference is mainly due to the different blood flow velocities in the channel, which result in a different supply of thrombolytic agent and different mechanical work done by the flowing blood on the superficial layer of the clot in the channel. The model also enables calculation of the flow channel profile at different times after the beginning of dissolution (Fig. 4). From the profile contours it can be clearly seen that dissolution is considerably faster at the entrance of the flow channel, due to the higher shear forces of the flowing blood, than further downstream.

Fig. 3 Fit of the dissolution model to the experimental data (occlusion level x [a.u.] versus time t [s]; high and low velocity regimes).

Fig. 4 Flow channel profile during clot dissolution. Profile contours are equidistant in time.

V. CONCLUSION

Mechanical forces due to blood shear velocity play, in addition to the enzymatic reactions of the thrombolytic agent, an essential role in the dissolution of non-occlusive blood clots. In fast flow, the increase in dissolution rate is too large to be explained solely by better permeation of the thrombolytic agent into the clot and more efficient biochemical degradation.

REFERENCES

1. The ISIS-2 investigators (1988) Randomized trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial infarction. J Am Coll Cardiol 12(6 Suppl A):3A-13A
2. The GUSTO investigators (1993) An international randomized trial comparing four thrombolytic strategies for acute myocardial infarction. N Engl J Med 329:673-682
3. Hill M D, Buchan A M; Canadian Alteplase for Stroke Effectiveness Study (CASES) Investigators (2005) Thrombolysis for acute ischemic stroke: results of the Canadian Alteplase for Stroke Effectiveness Study. CMAJ 172:1307-1312
4. Goldhaber S Z, Haire W D et al (1993) Alteplase versus heparin in acute pulmonary embolism: randomised trial assessing right-ventricular function and pulmonary perfusion. Lancet 341(8844):507-511
5. Konstantinides S, Geibel A, Heusel G, Heinrich F, Kasper W; Management Strategies and Prognosis of Pulmonary Embolism-3 Trial Investigators (2002) Heparin plus alteplase compared with heparin alone in patients with submassive pulmonary embolism. N Engl J Med 347:1143-1150
6. Ouriel K, Castaneda F et al (2004) Reteplase monotherapy and reteplase/abciximab combination therapy in peripheral arterial occlusive disease: results from the RELAX trial. J Vasc Interv Radiol 15:229-238
7. Collen D (1999) The plasminogen (fibrinolytic) system. Thromb Haemost 82:259-270
8. Spohr F, Arntz H R et al (2005) International multicentre trial protocol to assess the efficacy and safety of tenecteplase during cardiopulmonary resuscitation in patients with out-of-hospital cardiac arrest: the Thrombolysis in Cardiac Arrest (TROICA) Study. Eur J Clin Invest 35:315-323
9. Marder V J, Landskroner K et al (2001) Plasmin induces local thrombolysis without causing hemorrhage: a comparison with tissue plasminogen activator in the rabbit. Thromb Haemost 86:739-745
10. Blinc A, Francis C W (1996) Transport processes in fibrinolysis and fibrinolytic therapy. Thromb Haemost 76:481-491
11. Pleydell C P, David T et al (2002) A mathematical model of post-canalization thrombolysis. Phys Med Biol 47:209-224
12. Sakharov D V, Rijken D C (2000) The effect of flow on lysis of plasma clots in a plasma environment. Thromb Haemost 83:469-474
13. Tratar G, Blinc A et al (2004) Rapid tangential flow of plasma containing rt-PA promotes thrombolysis of non-occlusive whole blood clots in vitro. Thromb Haemost 91:487-496
14. Sersa I, Tratar G, Blinc A (2005) Blood clot dissolution dynamics simulation during thrombolytic therapy. J Chem Inf Model 45:1686-1690
15. Nichols W W, O'Rourke M F (2005) McDonald's Blood Flow in Arteries: Theoretical, Experimental and Clinical Principles, 5th edn. Hodder Arnold, London
Author: Igor Sersa
Institute: Jozef Stefan Institute
Street: Jamova 39
City: Ljubljana
Country: Slovenia
Email: [email protected]
Modulation of the beam intensity with wax filter compensators

D. Grabec and P. Strojan
Institute of Oncology Ljubljana/Department of Radiotherapy, Ljubljana, Slovenia

Abstract— Several approaches are possible to achieve a homogeneous dose distribution in the target volume. We discuss the possibility of field intensity modulation with wax filter compensators and compare the technique with other techniques. A case report of radiotherapy of the head and neck region with the use of a 2D wax filter compensator is presented. The 3D wax filter compensator technique is further discussed as a substitute for the step-and-shoot IMRT or the sliding-window IMRT technique. The advantages and disadvantages of wax filter compensators are put side by side. The treatment of medulloblastoma is outlined as the case where applying 3D wax filter compensators would bring the greatest benefit.

Keywords— wax filter compensators, radiotherapy, IMRT techniques, medulloblastoma
I. INTRODUCTION

One of the tasks in radiotherapy is to deliver the prescribed dose to the target volume. The dose should be as homogeneous as possible [1, 2, 3]; the acceptable inhomogeneity is ±5% of the prescribed dose. A homogeneous dose distribution can be achieved by increasing the number of properly shaped irradiating beams and by intensity modulation of the beams (IMRT). The beam can be shaped either with individual shielding blocks or with a multileaf collimator (MLC). Shaping with the MLC is achieved online, while individual blocks have to be manufactured in advance for every single beam. With an increasing number of beams from various directions, one must be very careful not to overlook the extra irradiation of tissue that would otherwise be spared. The beam intensity can be modulated uniformly simply with filtration wedges. Nonuniform modulation within the irradiation plane can also be achieved; one way is with a compensator filter [4, 5]. Like the individual shielding block, the compensator filter has to be manufactured in advance for every single beam that needs to be modulated. Another way to achieve nonuniform intensity modulation of the beam is to apply sequenced irradiation, where the whole irradiation field is composed of smaller parts of different shapes (subfields) [6, 7]. Subfields are composed with different MLC positions.
Fig. 1 The irradiation field is indicated with a red line. The MLC is closed to the target volume, indicated in violet. The modulation with the wedge (gray) was not sufficient, therefore the subfield indicated in orange was added. The position of the subfield MLC is indicated with a white line and the closed part is indicated with spots.

To achieve a homogeneous dose distribution at the Institute of Oncology Ljubljana, we usually use the "standard" beam distribution [1, 2, 3] and modulate the intensity within the beam with the application of subfields, as presented in Fig. 1.

II. WAX AS FILTER COMPENSATOR MATERIAL

Our choice for the filter compensator material was wax. Wax does not alter the beam quality much; besides, it has a low melting point (52°C) and is solid at room temperature. Wax filter compensators can be shaped by pouring liquid paraffin wax into negative drilled Styrofoam blocks, the same ones that are also used for fabrication of the individual shielding blocks [8]. The wax filter compensators ready for use are shown in Fig. 2. The maximal possible thickness of the compensators is the thickness of the Styrofoam blocks, i.e. 8 cm. The measured beam intensity reduction by an 8 cm wax filter is 30% for 5 MV (Philips SL 5), and 25% and 20% for 6 MV and 15 MV, respectively (Elekta Synergy platform).
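The measured transmissions above imply effective attenuation coefficients under a simple single-exponential model, I = I0 exp(−μt) — an assumption not stated in the paper, which ignores scatter and beam hardening. A minimal sketch, also estimating the wax thickness for the 85% transmission used in the case report below:

```python
import math

def effective_mu(transmission, thickness_cm):
    """Effective linear attenuation coefficient from a measured transmission,
    assuming I = I0 * exp(-mu * t)."""
    return -math.log(transmission) / thickness_cm

def thickness_for(transmission, mu):
    """Wax thickness (cm) giving the requested transmission under the same model."""
    return -math.log(transmission) / mu

# Measured transmissions of an 8 cm wax filter (from the text above).
measured = {"5 MV": 0.70, "6 MV": 0.75, "15 MV": 0.80}
for beam, T in measured.items():
    mu = effective_mu(T, 8.0)
    t85 = thickness_for(0.85, mu)   # thickness for a 15% intensity reduction
    print(f"{beam}: mu = {mu:.4f} 1/cm, 15% reduction needs {t85:.1f} cm of wax")
```

For all three beam qualities the thickness needed for a 15% reduction stays below the 8 cm limit set by the Styrofoam blocks.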
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 867–870, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 The wax filter compensators. The Styrofoam blocks hold both the wax compensators and the individual shielding blocks of Wood's alloy. The blocks are fixed on the positioning trays. A central hole was drilled in each foam block in order to see the center of the optical field.
The depth dose measurements of filtered and nonfiltered beams on the Elekta Synergy platform revealed slight beam hardening, observed as millimetric shifts of the absorbed dose maxima. The tissue phantom ratios at 10 and 20 cm depth of the filtered and nonfiltered beams were also compared (TPR 20/10), and this comparison likewise did not reveal significant beam hardening.
Fig. 3 The optimized dose distribution was calculated using three CT slices. The inhomogenieties inside the treated volume were 19 % of the prescribed dose which is not acceptable. The regions of different absorbed doses, expressed in the percentage of prescribed dose, are colored: 95 % ≤ green <100 %; 100 % ≤ red <105 %, 105 % ≤ blue <110 %, 110 % ≤ purple < 115 %. On the central slice (through isocenter), the absorbed dose varied from 95% to 100%, it exceeded 110% on the lower slice (4 cm below isocenter) and on the upper slice (1.5 cm above isocenter), the absorbed dose hardly reached 95 % of the prescribed dose. Prior to the irradiation of the patient, the dosimetry of the filtered beams was preformed. The preheated micro-rod thermoluminescence dosimeters LiF:Mg:Ti (TLD 100) were used [9]. TLDs were calibrated in 5 MV photon beam at the conditions of the treatment without compensators. Two TLDs were placed in the centre of the plastic water block (which
simulated the patient's head) and were irradiated with nonfiltered, only shielded, opposed lateral fields. According to the plan presented in Fig. 3, the response for the absorbed dose of 197 cGy in the center of the plastic block was used for the calibration. Next, the same plastic water block with calibrated TLDs was irradiated according to the treatment plan presented in Fig. 4, applying compensators. The measured dose corresponded to the planned dose, and the planned treatment was therefore considered appropriate for clinical use.
The delivered dose was also measured in vivo during the irradiation, as shown in Fig. 5. TL dosimeters were placed under a 1.5 cm thick jelly bolus in the central axis of each of the two opposed lateral fields. The measured doses corresponded well to the planned doses. The doses measured in the region of the compensation also did not differ from the planned doses by more than 2%.

IV. 3D WAX FILTER COMPENSATORS

In the former chapter we presented the 2D wax filter compensator. Considering the achieved effect, 2D filter compensators are equivalent to the technique we most usually utilize, i.e. applying subfields. An example of an irradiation field and its subfield is presented in Fig. 1. An even more homogeneous dose distribution than with 2D filter compensators can be achieved with 3D intensity modulation. At chosen fields the IMRT optimization can be applied. The results of the IMRT optimization are the intensity maps of the fields that lead to a more homogeneous dose distribution inside the treated volume. The calculated intensity maps can be delivered in three different ways:

• slide the MLC within the irradiating field (i.e. sliding window) [10, 11]
• compose the field with subfields (step and shoot) [7, 12]
• apply the 3D filter compensator [4, 5].
Each of the above mentioned techniques has its advantages and disadvantages. Before considering IMRT, we have to understand the advantages and disadvantages of every technique in order to choose the most suitable one.
Fig. 4 The absorbed dose distribution as calculated applying filter compensators. The compensators are introduced as blocks of 85% transmission. The planned dose inhomogeneities are reduced. At the central slice the dose is almost homogeneous (100% of the prescribed dose); at the lower slice the maximum dose hardly reaches 105%; and it ranges from 93% to 100% of the prescribed dose on the upper slice.
Fig. 5 In vivo dosimetry was performed with the preheated and calibrated TLDs. The TLDs were covered with the 1.5 cm thick jelly and fixed on the patient. The delivered doses measured in the centers of the lateral opposed fields were 208 cGy on the patient's left side and 197 cGy on the right side (104% and 98.5% of the planned dose, respectively). The measured dose in the center of the compensated region was 190 cGy. Without the compensation, the dose in this region would be up to 220 cGy.
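The internal consistency of the numbers in the caption can be checked with a few lines of arithmetic; note that the planned central dose of 200 cGy is inferred from the quoted percentages rather than stated explicitly in the paper:

```python
planned = 200.0                # cGy, inferred: 208 cGy corresponds to 104%
left, right = 208.0, 197.0     # cGy, measured on the lateral opposed fields
print(left / planned, right / planned)    # 1.04 and 0.985, as quoted

# The compensated-region reading (190 cGy measured vs. up to 220 cGy expected
# without compensation) is consistent with the ~85% transmission of the wax
# filter assumed in the plan of Fig. 4.
transmission = 190.0 / 220.0
print(round(transmission, 3))  # 0.864
```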
The sliding window technique is easy to perform, but the results may not always be what was actually calculated [10, 11]. First, very precise additional measurements of small fields should be performed [13]. The big question here is how much we can rely on the dosimetry of a field that is smaller than 2 cm in one direction [12, 13]. Next, a very precise dose rate should be maintained throughout the whole irradiation time, and each leaf movement should be very exact and precise in both space and time [10, 11]. Once all these strict conditions are fulfilled, they should also be regularly checked and maintained. Each IMRT plan should be checked in advance and all the intensity maps should match the planned ones. Even a single displacement of a leaf movement (in space or time) can lead to catastrophic results [10, 11]. Also fully automated, but somewhat safer, at least regarding the time frame of the MLC movement, is the step-and-shoot technique [12]. The precise measurements should still be performed and the question about small-field dosimetry remains open [13], but the condition on a constant dose rate is not as strict as when applying the sliding window technique [12]. The multileaf movement should still be spatially very exact, but the condition is less strict regarding the time frame of a single leaf movement. A problem, however, is that the treatment time increases considerably when applying the step-and-shoot technique. The irradiation time of a modulated field is easily four times longer than that of a non-modulated field [7]. Sometimes the increase in irradiation time is too large to be feasible when irradiating patients. We can also choose the possibility of 3D compensators that are manufactured for each field according to the calculated intensity maps of the fields [5, 8].
When applying the 3D compensator filter technique, the questionable dosimetry of small fields does not play an important role, as the whole field is irradiated at once [5]. The second problem taken care of is the irradiation time of the modulated fields: using the filter compensator, the irradiation time remains practically the same as that of a non-modulated field [5]. The big problem is the time-consuming compensator manufacturing. A negative has to be drilled for every compensator and the wax poured in [8]. Since the thermal expansion coefficient of wax is large, the wax should not be poured into the negative at once but in several steps. Because of its large latent heat it also takes long for every layer of wax to solidify, and the large heat capacity of wax makes it take even longer. The manufacture of a wax filter compensator is thus a time-consuming task, and besides the machinery it also requires a worker. Although it seemed promising at first, the technique is not widely used because of the lengthy and expensive manufacturing.
Even though the manufacturing process of wax filter compensators is lengthy, we could still consider cases where the benefit outweighs the costs. The cases to consider are rare ones, if possible without many fields to be compensated. One such case may be the treatment of medulloblastoma.
ACKNOWLEDGMENT

This work was supported by the Ministry of Science and Technology of the Republic of Slovenia, Grant No. Z3-61440302-04/3.04.
REFERENCES

1. Dobbs J, Barrett A, Ash D (1999) Practical Radiotherapy Planning. Arnold, London
2. Perez C A, Brady L W, Halperin E C, Schmidt-Ullrich R K, eds (2004) Principles and Practice of Radiation Oncology. Lippincott Williams & Wilkins, Philadelphia, USA
3. Van Dyk J, ed (1999) The Modern Technology of Radiation Oncology. Madison, Wisconsin
4. Boyer A L (1982) Compensating filters for high energy x-rays. Med Phys 9:430
5. Chang S X, Cullip T J, Deschesne K M et al (2004) Compensators: an alternative IMRT delivery technique. J Appl Clin Med Phys 5:15–36
6. Galvin J M, Chen X-C, Smith R M (1993) Combining multileaf fields to modulate fluence distributions. Int J Radiat Oncol Biol Phys 27:697-705
7. Chang S X, Cullip T J, Deschesne K M (2000) Intensity modulation delivery techniques: "step & shoot" MLC auto-sequence versus the use of a modulator. Med Phys 27:948–959
8. PAR Scientific A/S (2002) IMRT calculations: solid compensators/beam modifiers vs MLC-IMRT technique. http://www.parscientific.com/MlcDiscussion.html
9. Umek B (1986) Hitra priprava termoluminiscentnih dozimetrov iz litijevega fluorida za klinično uporabo [Rapid preparation of lithium fluoride thermoluminescent dosimeters for clinical use]. Master's thesis, University of Ljubljana
10. Xia P, Chuang C F, Verhey L J (2002) Communication and sampling rate limitations in IMRT delivery with a dynamic multileaf collimator system. Med Phys 29:412-423
11. Siebers J V, Keall P J, Kim J O, Mohan R (2002) A method for photon beam Monte Carlo multileaf collimator particle transport. Phys Med Biol 47:3225–3250
12. Bayouth J E, Morrill S M (2003) MLC dosimetric characteristics for small field and IMRT applications. Med Phys 30:2545-2552
13. Laub W U, Wong T (2003) The volume effect of detectors in the dosimetry of small fields used in IMRT. Med Phys 30:341-347
Corresponding author:

Author: Dasa Grabec
Institute: Institute of Oncology Ljubljana
Street: Zaloska 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Monte Carlo Radiotherapy Simulator: Applications and Feasibility Studies

K. Bliznakova, D. Soimu, Z. Bliznakov and N. Pallikarakis
Department of Medical Physics, University of Patras, Patras, Greece

Abstract— This paper presents the application of an in-house developed Monte Carlo Radiotherapy Simulator (MCRTS) to carry out complex radiotherapy investigations: Image Guided Radiotherapy (IGRT) with Cone-Beam Computed Tomography (CBCT) implementation and protection of an irregular volume during tumor treatment. For the purposes of these simulation studies, the MCRTS code has been additionally improved to include graphical features that facilitate tumor delineation. A PC cluster with 24 dual-core processor nodes, running the Linux operating system, was built to accelerate the calculations. Investigations of using megavoltage CBCT for tissue localization show that this technique could be used primarily prior to patient treatment for alignment purposes, using bony structures or air cavities. The feasibility study of shielding an irregular volume based on custom shielding filters designed with MCRTS demonstrates promising results in terms of the level of protection.

Keywords— Radiotherapy, Monte Carlo, MVCBCT, irregular volume shielding, healthy tissue protection
I. INTRODUCTION

Modern high-precision radiotherapy is nowadays characterized by the flexibility to treat very complex situations, mainly as a result of recent advancements in all parts of the "technology chain of radiotherapy", including target definition, 3D treatment planning and delivery, and dynamic treatment verification. This paper reports on such a flexible software system, used to simulate general and specific, simple and complex radiotherapy applications. This software system, called the Monte Carlo Radiotherapy Simulator (MCRTS), which was previously developed [1], is being continuously improved in order to handle complex and non-traditional treatment situations. Presently, it demonstrates flexibility in irradiation geometry design, optimization of particle transport, the possibility of obtaining megavoltage images, and speedup of calculations. Besides, it offers flexible, user-friendly tools to carry out specific investigations that involve novel treatment techniques, new shielding materials and megavoltage imaging applications [2, 3]. More specifically, the current work reports on the feasibility of using MCRTS in two fields of radiotherapy: Image Guided Radiotherapy (IGRT) with Cone-Beam Computed Tomography (CBCT) and protection of an irregular volume during tumor treatment.
II. MATERIALS AND METHODS For the purposes of these simulations, (a) the MCRTS code has been additionally improved with some graphical features that facilitate the tumor delineating; (b) a 24 computer cluster is built to accelerate the calculations; (c) feasibility studies of using megavoltage CBCT (MVCBCT) for tissue localization and shielding an irregular volume studies are conducted for simple and patient specific phantoms. A. MCRTS new graphical module The MCRTS is developed for the purposes of verifying traditional and non-traditional radiotherapy applications [1, 2]. Shortly described, the MCRTS application, written under C++, allows the user to: (a) design a patient phantom (or any phantom); (b) set up the geometry configuration; (c) define the characteristics of the incident beam; (d) control particle transport simulation by imposing limitations; (e) control and set up the outputs of the simulation: dose matrix, images, particle interaction history; (f) calculate protector characteristics or to set a complex shielding assembly. In order to allow the physicists to select precisely the volume under protection and therefore to create a complex shielding protector, a new graphical module has been designed and integrated into the MCRTS (screenshot shown in figure 1). The filter is designed based on patient data. A set of CT slices appear in a sequence on the graphical module. Slice-
Fig. 1 Graphical user interface of the MCRTS
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 928–931, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Monte Carlo Radiotherapy Simulator: Applications and Feasibility Studies
Slice by slice, the physicist delineates the area to be protected. The delineated region is stored in a shielding filter matrix, and the protection material is assigned from a number of available modeled shielding materials.

B. BITUnit PC cluster distributed system

The BITUnit Linux cluster [4] has been upgraded to 24 units (figure 2). Each unit consists of a motherboard with a built-in graphics controller and a 1 Gbps network card, 1 GB of DDR2 RAM, and a dual-core Intel Pentium 4 processor at 3.4 GHz. The system is also equipped with a flat LCD monitor, keyboard and mouse, which can be connected to each of the node PCs via three 8-port KVM switches. Two 16-port Gigabit Ethernet switches connect the PCs to the existing laboratory network. One of the computers (the cluster server) boots from a hard drive and exports the operating system to the remaining nodes. The MCRTS application code was modified to run in console mode.
Fig. 2 BITUnit PC cluster
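Because photon histories are statistically independent, a Monte Carlo run can be split into jobs distributed over the cluster nodes and the partial results merged afterwards. The sketch below illustrates the idea; it is our own minimal example, not the actual MCRTS code (the function names and the flat dose representation are assumptions).

```python
# Illustrative sketch (not the actual MCRTS code): a run of N photon
# histories is divided into jobs for the 24 cluster nodes; the per-job
# dose matrices are summed at the end, which is valid because the
# histories are statistically independent.

def split_histories(total_histories, n_jobs):
    """Divide total_histories as evenly as possible among n_jobs."""
    base, extra = divmod(total_histories, n_jobs)
    return [base + (1 if i < extra else 0) for i in range(n_jobs)]

def merge_dose(partial_doses):
    """Sum per-job dose matrices (here, flat lists of voxel doses)."""
    merged = [0.0] * len(partial_doses[0])
    for dose in partial_doses:
        for i, d in enumerate(dose):
            merged[i] += d
    return merged

jobs = split_histories(5 * 10**7, 24)   # e.g. the 5x10^7-history run
```

Each job would also need its own random seed so that the nodes generate disjoint history streams.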
C. Feasibility studies

Two radiotherapy applications of the MCRTS are demonstrated.

C1. Reproducing image-guided radiotherapy (IGRT): the case of IGRT with MVCBCT. Megavoltage images can be used to verify the patient setup and to assess target and organ motion. This involves comparing a portal image acquired during a treatment fraction with a reference image generated before the start of the treatment course. A more advanced technique is low-exposure MVCBCT.

MVCBCT using simple objects. The goal of this experiment is to study the feasibility of using MVCBCT for patient alignment before treatment. Images are simulated using electronic portal imaging devices modeled as parallelepipeds that absorb all arriving particles. A simple phantom composed of cylindrical objects with assigned human tissue characteristics is shown in figure 3. Two simulations with cone-beam photons of diagnostic and therapeutic incident energies are carried out. The diagnostic energy is simulated with a 50 keV photon beam (approximating the mean energy of the x-ray spectrum produced by a 100 kVp radiotherapy simulator), while the therapeutic one corresponds to 2 MeV (approximating the mean energy of a 6 MV photon beam). A total of 181 images, acquired over the range 0°–360°, each consisting of 10 runs of 100 photons directed at each detector pixel, are simulated. The detector size is 200×200 mm2. The source-to-isocenter and source-to-detector distances are 100 cm and 130 cm, respectively. The “low” and “high” 2D projections are then synthesized into two 3D volumes using an in-house developed CBCT reconstruction algorithm [5], and the three-dimensional volumes are compared by means of their central reconstructed slices.

MVCBCT using patient data. An MVCBCT simulation study is carried out using a phantom composed of patient data. The geometry and MVCBCT acquisition protocol are the same as described above. The patient phantom is composed of
Fig. 3 Phantom composed of cylinders of different densities
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
K. Bliznakova, D. Soimu, Z. Bliznakov and N. Pallikarakis
Fig. 4 Reconstructed filter matrix based on the CBCT imaging protocol

240 CT images, taken from the National Library of Medicine’s Visible Human Project®, each slice 250 pixels in each direction. The pixel size is 1 mm. 181 megavoltage images are simulated over the whole gantry rotation range, obtained with 200 photons directed at each detector pixel. Dose issues are not considered in this particular study. The detector size is 300×300 pixels. The irradiated volume is reconstructed using the in-house developed algorithms [5].

C2. Reproducing non-conventional techniques: the case of design and implementation of an irregular protector to shield the spinal cord in the thorax region. A non-traditional approach to modeling the protection of the spinal cord during treatment of a cancer in the thorax region is demonstrated. The application is summarized as follows:

Shielding filter matrix design. The complex filter is designed with the new module shown in figure 1. In the demonstrated case, the spinal cord is subject to protection, with lead used as the protection material. The center of the constructed shielding filter matrix is placed at a distance corresponding to 0.4 of the source-to-isocenter distance; the filter matrix voxel size is therefore 0.4 mm.

Verification of the filter. This is achieved by acquiring 181 diagnostic projections over the complete gantry range 0°–360°. The filter matrix is reconstructed using the CBCT reconstruction algorithm [5]; the reconstructed volume is shown in figure 4. This filter matrix is used to protect the spinal cord when a lung cancer treatment is carried out.
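The quoted numbers are mutually consistent: 181 projections covering a full rotation correspond to 2° steps, and a 1 mm pixel defined at the isocenter scales down to 0.4 mm at a plane placed at 0.4 of the source-to-isocenter distance (similar triangles). A quick check, treating the quoted distances as assumed exact values (variable names are ours, not from the MCRTS code):

```python
# Consistency check of the acquisition and filter geometry (illustrative).
n_projections = 181
angle_step_deg = 360.0 / (n_projections - 1)   # projections span 0°-360°

SAD_mm = 1000.0          # source-to-isocenter distance (100 cm)
pixel_at_iso_mm = 1.0    # CT pixel size, defined at the isocenter plane
filter_fraction = 0.4    # filter centre placed at 0.4 * SAD

# By similar triangles, a length at the isocenter plane scales by the
# ratio of distances from the source when projected to the filter plane.
filter_voxel_mm = pixel_at_iso_mm * (filter_fraction * SAD_mm) / SAD_mm

print(angle_step_deg, filter_voxel_mm)   # 2.0 0.4
```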
Verification using a water phantom. Figure 5 shows an application of the MCRTS to rotational therapy with the designed filter matrix. During rotational therapy, the filter matrix maintains the same position relative to the organ at risk, similarly to the simple case of cylinder protectors demonstrated recently [2]. The human body is approximated by a water phantom with a diameter of 200 mm. Irradiation of the phantom is simulated at 36 positions of the gantry head over the full gantry rotation range, i.e. from 0° to 360° in discrete steps of 10°. A 6 MV photon beam is used to obtain the simulated dose distributions, which are computed from 5×10^7 photon histories. The beam is collimated to a 10×10 cm2 field size, defined at the isocenter. The dose matrix voxel size is 2×2×2 mm3.

III. RESULTS

Reproducing image-guided radiotherapy (IGRT): the case of IGRT with MVCBCT. Figure 6 shows a comparison between reconstructed slices from the diagnostic and radiotherapy image sets.

MVCBCT using patient data. The image shown in figure 7 is the central reconstructed slice of the head volume, obtained under megavoltage conditions.
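The rotational-therapy setup described above can be sketched as a loop over gantry angles, with the filter centre kept on the source-to-isocenter axis so that it always shadows the organ at risk. This is an illustrative reconstruction under assumed coordinates, not the MCRTS implementation:

```python
import math

# Illustrative sketch of the rotational-therapy geometry: 36 gantry
# positions in 10° steps, with the shielding filter centre kept on the
# source-to-isocenter axis at 0.4 of the SAD from the source.
# Names and the 2D coordinate frame are our assumptions.
SAD = 1000.0            # source-to-isocenter distance, mm
FILTER_FRACTION = 0.4   # filter centre at 0.4 * SAD from the source

def beam_geometry(n_fields=36):
    """(gantry angle in degrees, source position, filter centre) per field."""
    fields = []
    for k in range(n_fields):
        theta = math.radians(k * 360.0 / n_fields)
        src = (SAD * math.cos(theta), SAD * math.sin(theta))
        # point at 0.4*SAD from the source, towards the isocenter (origin)
        filt = ((1.0 - FILTER_FRACTION) * src[0],
                (1.0 - FILTER_FRACTION) * src[1])
        fields.append((math.degrees(theta), src, filt))
    return fields

fields = beam_geometry()
```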
Fig. 6 Simulated images obtained with MCRTS through the simple phantom for the diagnostic (a) and radiotherapy (b) beams
Fig. 5 Simulated rotational therapy using custom designed filter
Fig. 7 Reconstructed central slice from a set of MVCBCT images
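For display, the reconstructed intensity images are linearly rescaled to 8-bit gray values. A minimal plain-Python sketch of such a mapping (our own illustration, not the MCRTS routine):

```python
# Display mapping sketch: rescale a flat list of intensities linearly
# to 8-bit gray values (0-255) for presentation.

def to_8bit(values):
    lo, hi = min(values), max(values)
    if hi == lo:                      # flat image: map to mid-gray
        return [128 for _ in values]
    scale = 255.0 / (hi - lo)
    return [int(round((v - lo) * scale)) for v in values]

print(to_8bit([0.0, 0.5, 1.0]))   # [0, 128, 255]
```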
The intensity images are mapped to 8-bit grayscale display values to provide convenient presentation and good observability.

Reproducing non-conventional techniques: the case of design and implementation of an irregular protector. Verification in a water phantom. Isodose curves taken at different planes are shown in figure 8. The maximal protection is estimated at 40% relative to the fully irradiated volume.

Fig. 8 Isodose curves extracted from different planes

The computation time needed for the simulations decreased drastically when the MCRTS code was implemented on the distributed system. For example, one run composed of 100 photons per detector pixel with a 6 MV incident beam (the therapeutic IGRT case) takes approximately 3 days on a single computer under Windows, while on the distributed system it executes in less than 2 hours.

IV. DISCUSSION

The comparison between the slices in figure 6 demonstrates the ability of the MVCBCT technique to visualize bony structures and air cavities. The images show sufficient contrast for these two cases and could therefore be used to verify the patient position and treatment plan before treatment. This is also confirmed by the image in figure 7. Its poor quality is due to the small number of simulated photon histories, as well as the limited slice and detector resolution. The megavoltage image exhibits blurring; nevertheless, the eye sockets are detectable.

The isodose curves in figure 8 show that the rotational technique with a custom filter can be successfully applied to protect an irregular volume. The isodose values in the spinal cord region show considerable protection compared to the neighboring tissues, which receive full irradiation.

The experiments involving tomographic phantoms, filter matrices and linac spectra are very time-consuming. Implementing the MCRTS code under Linux significantly accelerates the Monte Carlo computations. Running MCRTS in a pure console mode, as in Linux, allows much better utilization of the processor’s computational power. The PC configuration with a dual-core processor turned out to be a successful choice for building the Linux cluster, since one or two instances can execute at the same time. Splitting the Monte Carlo experiment into a number of jobs running on a larger number of available work units is much more efficient and saves time.

V. CONCLUSIONS

This work presents two radiotherapy feasibility studies carried out with the MCRTS application. The first study, involving IGRT, demonstrates the potential benefit of using MVCBCT for patient alignment. In the second study, the simplicity of the design of the filtering matrix used to protect irregular volumes during treatment, and the promising level of protection obtained, are factors that encourage its further development and real implementation.

ACKNOWLEDGMENT

The authors thank the European Social Fund (ESF), Operational Program of Educational and Vocational Training II (EPEAEK II), and particularly the program PYTHAGORAS I, for funding this work.

REFERENCES
1. Bliznakova K, Kolitsi Z, Pallikarakis N (2004) A Monte Carlo based software tool for radiotherapy investigations. Nucl. Instr. Meth. B 222:445-461
2. Ivanova T, Bliznakova K, Pallikarakis N (2006) Simulation studies of field shaping in rotational radiation therapy. Med. Phys. 33(11):4289-4298
3. Messaris G, Kolitsi Z, Badea C, Pallikarakis N (1999) Three-dimensional localization based on projectional and tomographic image correlation: an application for digital tomosynthesis. Med. Eng. & Phys. 21:101-109
4. Bliznakova K, Buliev I, Bliznakov Z (2006) Monte Carlo radiotherapy simulator implemented on a distributed system. In: Proceedings of the 5th European Symposium on Biomedical Engineering, Patras, Greece, 2006
5. Soimu D, Pallikarakis N (2004) Circular isocentric cone-beam trajectories for 3D image reconstructions using FDK algorithm. In: Proceedings of MEDICON 2004, Naples, Italy, 2004

Author: Kristina Bliznakova
Institute: University of Patras, Department of Medical Physics, School of Health Sciences
City: Rio – Patras, 26500
Country: Greece
Email: [email protected]
Optical biopsy system for breast cancer diagnostics

S.A. Belkov1, G.G. Kochemasov1, S.M. Kulikov1, V.N. Novikov1, U. Kasthuri2, L.B. Da Silva2

1 BioFil Ltd (Biophysical Laboratory) and Russian Federal Nuclear Center – VNIIEF, Sarov, Nizhny Novgorod reg., Russia
2 BioTelligent Inc., Livermore, CA, USA
Abstract– An optical biopsy system for early breast cancer diagnostics is reported. Its specific features and its calibration and data preprocessing methods are described.

Keywords– Breast cancer, spectrometry of scattered radiation.
I. INTRODUCTION
Breast cancer is the most widespread female oncological disease; more than 200,000 women are diagnosed in the US annually. Discovered at an early stage, breast cancer can be successfully treated by a combination of surgery, chemotherapy and radiation. Today x-ray mammography is the leading diagnostic method, and about 48 million mammograms are performed annually. Unfortunately, the high sensitivity of mammography produces a large number of false diagnoses, which require an additional invasive diagnostic procedure for tissue sampling. The most frequent one is core biopsy, in which a thick needle penetrates the breast and a pathologist then analyzes the removed tissue and gives the diagnosis. In the US alone, more than 1.5 million breast core biopsies are performed annually, yet more than 80% of the tumors prove benign. This means that more than 1.2 million patients undergo unnecessary surgery and anxiety. Although less traumatic alternative procedures such as fine-needle aspiration are now used, their lower accuracy significantly restricts their diagnostic ability. Clinical trials have shown [1, 2] that the characteristics of optical scattering and absorption are sensitive to tissue type and state. In [3, 4] the application of a contact probe for cancer tissue diagnostics was demonstrated; that probe had one emitting and one collecting optical fiber for measuring the spectrum of light scattered from biological tissues in the range 350–700 nm. This paper reports an improved optical biopsy system, clinical trials of which were conducted at the Regional Oncology Center of Nizhny Novgorod, Russia.
II. OPTICAL PROBE AND DIAGNOSTIC SYSTEM
The flow diagram of the diagnostic system is presented in Fig. 1. The invasive probe, developed by BioTelligent Inc., is a miniature needle containing optical fibers. One of them delivers wide-band optical radiation from the light source, a xenon lamp whose spectral characteristic is given in Fig. 2. The probe is inserted into the breast and advanced to the suspicious area. The optical radiation is scattered and absorbed by the tissues that the needle passes through during the diagnostic procedure and is collected by three fibers placed at different distances from the source (several hundred microns). Radiation from the source and the scattered radiation are transported through the fibers to the measuring system, a block of four S2000 spectrometers produced by Ocean Optics Inc. Spectra are recorded at a frequency of 100–120 Hz, providing a measurement every 100 microns at a needle insertion speed of 1 cm/s. The principle of operation of the miniature S2000 spectrometers is as follows: light entering through the optical fiber input falls onto a mirror, is reflected to a diffraction grating and then reaches the detector, a CCD line of 512 elements, which allows spectral registration in the range from 200 to 1100 nm. Through the Ocean Optics A/D converter the signals are recorded on the computer’s hard disk. Absolute wavelength calibration of the spectrometers is performed with a portable mercury-argon lamp with an optical-fiber output (Ocean Optics HG-1), which has a discrete radiation spectrum in the range 253–922 nm. This configuration of the probe and diagnostic system allows quantitative characterization of optical scattering and absorption in various kinds of breast tissue. A general view of the diagnostic system and the optical probe is given in Fig. 3.
To control the depth of needle penetration and to determine the mechanical parameters of the biological tissues, the optical probe was equipped with a position sensor and a force sensor.
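The quoted spatial sampling follows directly from the acquisition rate and the insertion speed; a quick check, treating the quoted figures as exact values:

```python
# Spatial sampling implied by the acquisition parameters: at a needle
# insertion speed of 1 cm/s and a spectrum rate of 100 Hz (the lower end
# of the quoted 100-120 Hz range), consecutive spectra are separated by
# 100 microns of tissue.
speed_um_per_s = 10_000.0   # 1 cm/s expressed in microns per second
rate_hz = 100.0             # spectra recorded per second

spacing_um = speed_um_per_s / rate_hz
print(spacing_um)           # 100.0 microns between measurements
```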
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 907–910, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Flow diagram of the diagnostic system (source, splitters, handle with needle, spectrometers, storage device): 1 – xenon lamp; 2 – matching unit; 3 – attenuator; D1, D2 – optical-fiber splitters; S – white light source channel; R – reference channel for source spectrum measurement and calibration of the system; C1, C2, C3 – scattered radiation measuring channels
Fig. 2 Spectral characteristic of the xenon lamp (intensity vs. wavelength, 400–800 nm)
Fig. 3 General view of the diagnostic system (left) and the optical probe (a handle with a force sensor and a disposable needle)

III. CALIBRATION OF THE DIAGNOSTIC SYSTEM AND DATA PREPROCESSING
To obtain reliable information about the optical properties of biological tissues, it is necessary to exclude the spectral distortions of the measured signal that occur as it passes through the system’s optical path and across the optical contact between the needle and the handle. Two calibration methods were used. The xenon lamp spectral intensity was measured continuously during data acquisition, both at calibration and during the optical biopsy procedure in the patient. Part of the source radiation after the splitter D2 is fed to the input of one of the spectrometers; another part is fed to the output of the reference channel (R) and is used for calibrating the system’s optical path. In the first calibration method, to determine the spectral properties of the optical path of each registration channel, radiation from the reference channel output is sequentially fed to the inputs of the registration channels (C1), (C2), (C3) using a special optical fiber with an attenuator. The spectral calibration characteristic of each registration channel is calculated by normalizing the reference channel spectral data by the data obtained during calibration in that registration channel. These calibration dependencies are characteristic of the given diagnostic system and are used when preprocessing the spectrometer signals measured during the optical biopsy procedure.
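The per-wavelength normalization of the first method can be sketched as follows; the function names and data layout are our assumptions, not the system's actual software:

```python
# Sketch of the first calibration method: the channel calibration
# characteristic is the per-wavelength ratio of the reference-channel
# spectrum to the spectrum measured in that registration channel during
# calibration; measured spectra are then corrected with it.

def calibration_curve(ref_spectrum, channel_spectrum):
    """k(lambda) = S_ref(lambda) / S_channel(lambda), element-wise."""
    return [r / c for r, c in zip(ref_spectrum, channel_spectrum)]

def correct(measured, k):
    """Apply the calibration characteristic to a measured spectrum."""
    return [m * ki for m, ki in zip(measured, k)]

k = calibration_curve([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
print(k)                              # [2.0, 2.0, 2.0] - a 'gray' path
print(correct([1.0, 1.0, 1.0], k))    # [2.0, 2.0, 2.0]
```

A constant k across wavelengths corresponds to the "gray" attenuator discussed below; a wavelength-dependent k would tilt the corrected spectra.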
The second calibration method is used to determine the optical properties of the needle and of the optical contact between the needle and the handle. This calibration is performed directly after the clinical procedure: the needle is cleaned of biological tissue remains and inserted into a light-scattering substance. A water suspension (1 or 10%) of polystyrene spheres of calibrated size (1 μm in diameter) serves as the calibration substance. This procedure allows normalization and unification of the data received with different needles. In principle, this calibration method accounts for the spectral characteristics of the path, which may change over time for various reasons, in each optical biopsy procedure. Analysis of the clinical data revealed imperfections in both methods. In the first case the spectral parameters of the attenuator, which is used in the reference channel to reduce the light flux and adjust it to the spectrometer sensitivity, are indefinite. In our case a cylinder blackened inside, with a diaphragm, served as the attenuator; the studies conducted demonstrated that proper adjustment allowed the realization of an almost ‘gray’ attenuator (i.e. a reduction factor independent of wavelength). In the second case the radiation is scattered in a turbid medium and only a small part of it reaches the receiving channels; that is, the medium itself serves as an attenuator, but its spectral properties are indefinite. The receiving channels are located at different distances from the light source, so the spectral characteristics of the scattered light may vary significantly between them. Fig. 4 presents the calibration curves for the first (solid lines) and second (dashed lines)
Fig. 4 Calibration curves for the two calibration methods (k, arb. un., vs. λ, nm, for channels C1, C2, C3)
calibration methods. The spectral behavior of the calibration curves differs considerably, and applying one or the other calibration method gives a different slope in the spectral curves of optical scattering obtained when processing the optical biopsy data. To avoid the disadvantages mentioned above, a special calibration cell was developed. It is a quartz cell filled with immersion oil whose refractive index is close to that of quartz. The probe with the needle is lowered along special guides that keep the needle perpendicular to the cell bottom. Radiation from the source passes through the needle into the cell and is partly reflected from the far face of the cell bottom. There is no reflection from the near face because the refractive indices of the immersion oil and quartz are equal. The reflection coefficient of the far face does not depend on wavelength and is about 4%. The reflected radiation reaches the receiving channels and is measured by the spectrometers. Additional attenuation of the reflected radiation in the receiving channels, necessary to match the spectrometer sensitivity, is provided by varying the distance between the needle tip and the cell bottom. The calibration curves obtained using the calibration cell are given in Fig. 5. At present, additional studies of the calibration cell are being carried out and a new protocol for clinical trials of the optical biopsy system is being developed.
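The quoted ~4% reflection is consistent with the normal-incidence Fresnel reflectance at a quartz-air interface; a quick check with an assumed quartz refractive index of 1.46:

```python
# Normal-incidence Fresnel reflectance R = ((n1 - n2) / (n1 + n2))^2.
# With n(quartz) ~ 1.46 (assumed) the quartz-air face of the cell bottom
# reflects ~3.5%, i.e. "about 4%" as quoted, while the index-matched
# oil-quartz face reflects nothing.

def fresnel_r(n1, n2):
    """Reflectance at normal incidence between media of indices n1, n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

r_quartz_air = fresnel_r(1.46, 1.0)
r_oil_quartz = fresnel_r(1.46, 1.46)   # index-matched: no reflection
print(round(r_quartz_air, 3))          # 0.035
```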
Fig. 5 Calibration curves obtained using a calibration cell.
ACKNOWLEDGMENT

This work was partly supported by the IPP (Initiatives for Proliferation Prevention) Program of the U.S. Department of Energy under contract LLNL-T2-0242-RU and by Project #3075p of the International Science and Technology Center.
REFERENCES

1. Mourant J.R., Boyer J., Hielscher A., Bigio I.J. (1996) Influence of the scattering phase function on light transport measurements in turbid media performed with small source-detector separations. Opt. Lett. 21:546
2. Perelman L.T., Backman V., Wallace M., et al. (1998) Observation of periodic fine structure in reflectance from biological tissue: A new technique for measuring nuclear size distribution. Phys. Rev. Lett. 80:627
3. Bigio I.J., Bown S.G., Briggs G., et al. (2000) Diagnosis of breast cancer using elastic-scattering spectroscopy: preliminary clinical trials. J. Biomed. Optics 5:221
4. Zonios G., Perelman L.T., Backman V., et al. (1999) Diffuse reflectance spectroscopy of human adenomatous colon polyps in vivo. Appl. Optics 38:6628
Address of the corresponding author:
Author: S.A. Belkov
Institute: BioFil Ltd and Russian Federal Nuclear Center – VNIIEF
Street: Prospect Mira 37
City: Sarov, Nizhegorodsky reg.
Country: Russia
Email: [email protected]
Problems faced after the transition from a film to a DDR Radiology Department

S.P. Spyrou1, I. Gerogiannis2, A.P. Stefanoyiannis3, S. Skannavis4, A. Kalaitzis4, P.A. Kaplanis2

1 Electrical Engineering Department, Higher Technical Institute, 2152 Nicosia, Cyprus
2 Medical Physics Department, Nicosia General Hospital, 2029 Strovolos, Nicosia, Cyprus
3 Second Department of Radiology, School of Medicine, University of Athens, Athens 12462, Greece
4 Radiology Department, Nicosia General Hospital, 2029 Strovolos, Nicosia, Cyprus
Abstract— A study was undertaken to identify the problems arising from the transition from a film-based to a Direct Digital Radiology (DDR) department, as well as the advantages and disadvantages of this transition. A questionnaire was given to 32 Radiologists and Radiographers who had worked with conventional systems in the past and are now working with the new systems. The answers revealed a contradiction between the Radiologists and the Radiographers regarding the time needed with the new systems to perform an examination and reach a diagnosis, but both groups of professionals felt more secure working with DDR systems. However, most Radiographers stated that they were not happy with the training, whereas all Radiologists were. On most of the other questions, Radiologists and Radiographers agreed that the advantages of DDR systems over analog systems considerably outnumber the disadvantages, the main disadvantages being the more frequent problems that DDR systems face and the fact that the smooth running of the radiology department depends on technological effectiveness.

Keywords— radiology, digital, analog, imaging, training.
I. INTRODUCTION

In the summer of 2006 the Nicosia General Hospital moved from its old premises, established in 1934, to a new campus. The radiology department at the old hospital consisted of a number of Siemens radiology systems, in particular the well-known Gigantos 1012 and 712 radiography models and the Klinograph 4 fluoroscopy system, installed in 1986. At the new campus the digital radiology systems were manufactured by MECALL srl of Italy, in particular the Eidos 3100 model for radiography and the Superix 180N for fluoroscopy. The personnel remained the same and consisted of qualified Radiologists trained mainly in Greece, Germany and elsewhere, whereas the Radiographers had followed a three-year course in Greece. Their knowledge of computers varied, as did their age and experience. The objective of the study was to assess the personnel’s opinion of the new digital systems and to rectify any problems that might be present for various reasons.
II. MATERIALS AND METHODS

A. Questionnaire

A questionnaire was given to Radiologists and Radiographers of the Nicosia General Hospital, Cyprus. In total 32 questionnaires were handed out, but answers were received from only 25 members of the staff. They were asked to evaluate several topics: the time required to perform and diagnose an examination, the difficulty of performing and diagnosing an examination, the degree of confidence in using such systems, the degree of image quality, whether they were satisfied with the training on the new systems, the frequency of failure of the new Direct Digital Radiology (DDR) systems in comparison with conventional analog (AN) systems, and finally the users’ opinion of the DDR systems before and after use. The questionnaires combined multiple-choice questions (only one answer per question) with free text for recording the advantages and disadvantages of DDR systems, the greatest problem respondents had to deal with, and anything else they wanted to add. The questions are shown in Tables 1 and 2. Similar questionnaires received from Medical Physicists and Biomedical Engineers are currently being evaluated and will be presented at a later stage.

B. Sample

The sample examined consisted of answers from 4 Radiologists and 21 Radiographers, although 3 more Radiologists and 4 Radiographers had initially shown an interest in the survey.

III. RESULTS

The results received from the 25 members of the staff (4 radiologists and 21 radiographers) are presented below. Tables 3–5 summarize the answers received in the free text; in brackets, the number of similar answers is indicated. Figure 1 shows the opinion of the personnel regarding DDR systems before any hands-on experience and after working with these systems for the last 9 months.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 879–882, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Table 1 Questions for Radiologists

Q1 How would you rate the time needed to conclude a diagnosis with a DDR system as compared with an AN system?
Q2 How would you rate the degree of difficulty in reaching a diagnosis with a DDR system as compared with an AN system?
Q3 How would you rate the degree of confidence once a diagnosis has been reached with a DDR system as compared with an AN system?
Q4 How would you rate the quality of an image taken from a DDR system as compared with an AN system?
Q5 Do you feel satisfied with the training provided on DDR systems?
Q6 What was your opinion about DDR systems before the move to the new hospital?
Q7 What is your opinion about DDR systems now?
Q8 Do you discuss cases taken from DDR systems with your colleagues more frequently than cases taken from AN systems?
Q9 (free text) Record the biggest problem you faced working with DDR systems.
Q10 (free text) Record some advantages and disadvantages arising from the usage of DDR systems.

Table 2 Questions for Radiographers

Q1 How would you rate the time needed to conclude an examination with a DDR system as compared with an AN system?
Q2 How would you rate the degree of difficulty in performing an examination with a DDR system as compared with an AN system?
Q3 How would you rate the degree of confidence performing an examination with a DDR system as compared with an AN system?
Q4 How would you rate the quality of an image taken from a DDR system as compared with an AN system?
Q5 Do you feel satisfied with the training provided on DDR systems?
Q6 What was your opinion about DDR systems before the move to the new hospital?
Q7 What is your opinion about DDR systems now?
Q8 Do you believe that DDR systems will face more problems than AN systems?
Q9 (free text) Record the biggest problem you faced working with DDR systems.
Q10 (free text) Record some advantages and disadvantages arising from the usage of DDR systems.

Table 3 Advantages

Chosen by radiographers:
1. Avoidance of repeating the exams (11)
2. Better image quality (10)
3. No dark rooms (5)
4. No use of films and chemicals (3)
5. Post-processing of the image (3)
6. Saved images and better filing in PACS (3)
7. Efficiency of an examination (2)
8. Telematics

Chosen by radiologists:
1. Better image quality (4)
2. Post-processing of the image (4)
3. Ability of simultaneous study at different places (2)

Table 4 Disadvantages

Chosen by radiographers:
1. Time-consuming examinations (15)
2. Frequent damages of the machines (4)
3. Easily blocked (2)
4. Not convenient for a multi-injured person (2)

Chosen by radiologists:
1. More sensitive – blocked easily (3)
2. Technology dependent (2)

Table 5 The major defect

Chosen by radiographers:
1. Time-consuming examinations (7)
2. Frequent damages of the machines (7)
3. Easily blocked (2)
4. Problems with the PCs (2)
5. Training (2)
6. Time till gets patient's

Chosen by radiologists:
1. More sensitive – blocked easily (3)
(Bar chart: number of answers per rating – Bad, Average, Good, Very Good, Excellent – before and after hands-on experience, for Radiographers and Radiologists.)
Fig. 1 Opinion of the personnel concerning DDR systems, before and after hands-on experience

IV. DISCUSSION AND CONCLUSIONS

An interesting contradiction was revealed regarding the time needed to perform an examination with DDR systems and to reach a diagnostic conclusion. All four Radiologists stated that the time needed to reach a diagnostic conclusion was now considerably reduced, whereas all Radiographers stated that more time was now needed to perform an examination. The increase in time was attributed to the time required to type in all patient details and other data, and to store and send these to the PACS station. The degree of difficulty was stated by all to be more or less the same. The reduction or even elimination of examination retakes, and hence of unnecessary dose to the patient, was the contributing factor that made all personnel feel more secure and confident of a positive examination outcome. Both groups believed that image quality increased considerably with DDR systems. Regarding the training provided, there was again an interesting contradiction: the Radiologists were satisfied whereas the Radiographers were not. The opinion of the personnel regarding DDR systems turned out to be a topic for fruitful discussion, as presented in Figure 1. Before using such machines, all the technologists held an opinion from “good” to “excellent”, with most choosing “excellent” (10 persons); furthermore, 2 Radiologists had a “good” and another 2 a “very good” opinion. After using the machines, the opinion of the technologists shifted from “excellent” to “very good” and “good”; moreover, one person formed a negative
view. We believe that insufficient training contributed significantly to this change, as did the more time-consuming examinations. By contrast, the doctors formed a better opinion (3 answered "very good" and 1 "excellent"). This change is due to the better image quality and the post-processing capability, which helps them in diagnosis. From the answers to question 8, as far as the Radiologists are concerned, half of them answered that they now ask more frequently for a colleague's opinion and the rest exactly the opposite! The Radiographers, on the other hand, answered that they face more problems with the operation of the DDR units, mainly due to the lack of adequate training but also due to the frequent need to reset the PCs. As expected, the main advantages of DDR systems outlined by the personnel were that there is no need for dark-rooms or film storage areas, the ability to study some cases simultaneously at different places, the better efficiency in finding images, and the capability for future use of telematics. It should be emphasized, though, that the Radiologists consider the technological dependence of the whole system a major problem: any failure will disrupt their work. In conclusion, there is an improvement in the work because of better-quality images, much more in line with the necessary guidelines [1], and more diagnostic information (with the help of image processing), using PC monitors instead of films and conventional viewing walls; but there is still a need for appropriate training of the staff to achieve optimization. Also, since the Medical Physics Department currently has the implementation of the necessary Diagnostic Reference Levels [2] in the pipeline, the usage of DDR systems will help enormously in doing so, and it will certainly be easier than the method used today [3]. Furthermore, future developments will make it possible to view images from the web server of the PACS over the Internet.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
ACKNOWLEDGMENT

The authors would like to thank the personnel of the Radiology Department of Nicosia General Hospital, Nicosia, Cyprus, for the time and effort they spent in answering the questionnaires so willingly.

REFERENCES

1. Radiation Protection 118. Referral guidelines for imaging. ISBN 92-828-9454-1, EC, 2001.
2. Guidance on the establishment and use of Diagnostic Reference Levels for Medical X-ray Examinations. Institute of Physics and Engineering in Medicine, Report 88, ISBN 1 903613 20 5, 2004.
3. Christofides S, Kaplanis PA, Sakkas D. "Implementation of DRLs in the Republic of Cyprus". International Conference of Medical Physics, Nuremberg, Germany, September 14-17, 2005.

Author: Spyros P Spyrou
Institute: Higher Technical Institute
Street: P O Box 20423
City: Nicosia, CY-2152
Country: Cyprus
Email: [email protected]
Recovery of 0.1 Hz microvascular skin blood flow in dysautonomic diabetic (type 2) neuropathy by using Frequency Rhythmic Electrical Modulation System (FREMS)

M. Bevilacqua1, M. Barrella1, R. Toscano1, A. Evangelisti2
1 L. Sacco Hospital, Department of Endocrinology, Milan, Italy
2 Department of Systems and Computer Science, University of Florence, Firenze, Italy
Abstract— Synchronized oscillation of smooth muscle cell tension in arterioles is the main control system of microvascular skin blood flow. An important autogenic vasomotion activity is recognized in the 0.1 Hz oscillations observed with power spectrum analysis of laser Doppler flowmetry. Severe dysautonomia in diabetic neuropathy is correlated with the loss of 0.1 Hz vasomotion activity, and hence with impaired blood microcirculation. FREMS (Aptiva, Lorenz Biotech Spa, Italy) is a novel transcutaneous electrotherapy characterized by sequences of high-voltage, short-duration electrical stimuli which vary in both frequency and duration. We evaluated the changes in laser Doppler flow (Periflux System 5000, Perimed) in the volar part of the forearm before, during and after FREMS stimulation. Normal controls (n = 10, 6 female, age range 21-39 years) demonstrated significant 0.1 Hz vasomotion power spectra in the basal condition, associated with large oscillations of adrenergic cutaneous sweat activity (CED Model 2502 Skin Conductance unit) sampled from the hand; type 2 diabetics with severe dysautonomic impairment (n = 10, 5 female, age range 63-75 years) displayed a near-total absence (n = 4) or an important decrease (n = 6) of 0.1 Hz vasomotion power spectra. During and until a few minutes after FREMS application, in both the control and diabetic groups we observed a significant (p < 0.001) increase of the 0.1 Hz vasomotion power spectra, although in the diabetic group adrenergic cutaneous sweat activity remained suppressed. These vasomotion power spectral variations were always related to the variations of blood flow velocity. Synchronization of smooth muscle cell activity is thought to initiate in the presence of endothelium and is related to repetitive transitory cGMP-mediated release and reuptake of calcium from the sarcoplasmic reticulum of the smooth muscle cells.
We suggest that FREMS is able to synchronize smooth muscle cell activity, inducing and increasing 0.1 Hz vasomotion activity, independently of the autonomic nervous system.

Keywords— Dysautonomic neuropathy, Vasomotion, Transcutaneous Nerve Electrical Stimulation, Autonomic nervous system
I. INTRODUCTION

It is well known that smooth muscles dilate and contract rhythmically, with the aim of delivering oxygen to the tissues surrounding capillary beds. Cyclic changes of blood flow through the skin can be evaluated non-invasively in human beings: the rhythmicity of these oscillations is considered to be a fundamental part of tissue perfusion. Increases of skin flow are associated with decreased blood oxygen levels, implying an increase in tissue oxygenation. Vasomotion can thus control oxygen consumption, since the rate of vasomotion activity can change oxygen consumption by 2-8 fold, depending on the fraction of open micro-vessels. Recently the control of vasomotion has been partially elucidated [1]. The initiation of vasomotion and its control are thought to be mediated, at least partially, by calcium fluxes in endothelial cells, which in turn are related to a nitric oxide mediated release of cGMP which activates a new type of channel closely linked to the changes of calcium fluxes in and out of the sarcoplasmic reticulum of the smooth muscle cells. In fact, synchronization could also be observed in de-endothelialized cells if a cGMP agonist (8-bromo-cGMP) was used [2]. The current model is therefore that vasomotion is regulated, in terms of rhythmicity and synchronization, by nitric oxide released by endothelial cells, which in turn increases cGMP, which prompts a depolarization in smooth muscle cells. After depolarization the smooth muscle cells remain entrained to generate vasomotion. When a sufficient number of cells become activated at the same moment, the current will overcome the current sink and depolarize all cells coupled via gap junctions. Impairment of skin vasomotion can be observed in various clinical and experimental conditions (including chronic venous insufficiency, diabetic polyneuropathy, etc.). In diabetes there is an impairment of cutaneous vasomotion characterized by a decreased amplitude of the waves in basal conditions and after elicitation of shear stress (suprasystolic occlusion) [3-5]; it is typically characterized by a reduction of the low-frequency (0.1-0.01 Hz) microcirculatory fluctuations [6] as compared with the findings well characterized in normal controls.
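Power in the vasomotion band can be quantified by integrating a periodogram of the laser Doppler trace around 0.1 Hz. A minimal sketch with entirely synthetic flux traces (the sampling rate, amplitudes and noise level are illustrative assumptions, not values from the study) shows how an attenuated 0.1 Hz component appears as reduced band power:

```python
import numpy as np

def band_power(x, fs, f_lo=0.05, f_hi=0.15):
    """Integrate the periodogram of the detrended signal over [f_lo, f_hi] Hz
    (the vasomotion band around 0.1 Hz)."""
    x = np.asarray(x, float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    sel = (f >= f_lo) & (f <= f_hi)
    return float(psd[sel].sum() * (f[1] - f[0]))

# Synthetic laser Doppler flux traces; fs, amplitudes and noise are made up
fs = 16.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
healthy  = 10 + 3.0 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.5, t.size)
diabetic = 10 + 0.3 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.5, t.size)

print(band_power(healthy, fs) > band_power(diabetic, fs))  # True
```

The same comparison on real traces would use the recorded sampling rate of the flowmeter in place of the assumed `fs`.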
Sympathetic discharge to the vessels is also important for the initiation of vasomotion, perhaps as mediated by alpha-1 adrenergic receptors: in diabetics the impaired vasomotion is thought to be related to the impaired sympathetic discharge and is considered an ominous sign. Very few attempts have been made to ameliorate vasomotion. Recently, chronic intermittent electrical stimulation was shown to eliminate endothelial dysfunction
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 932–935, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
of pre-capillary arterioles in ischemic rat ankle flexor muscles, with an amelioration of nitric oxide production, reversal of the vasoconstrictor response to acetylcholine and restoration of the depressed vasodilation to bradykinin [7]. Pulsed electrical stimulation (TENS) has previously been shown to modulate coronary blood flow when applied to the back of patients, but no effect was found on forearm vasomotion [8]. We recently devised a new method of transcutaneous electrical stimulation which, differently from traditional TENS, is characterized by sequences of specific voltage-controlled electrical impulses (regulated at the perception threshold), which vary in both frequency (F) and width (W), and which is defined as the Frequency Rhythmic Electrical Modulation System (FREMS™). In this work we investigate the variations of the power spectrum of laser Doppler flowmetry of skin blood flow induced by FREMS applied to the volar skin surface of the forearm, with particular reference to the power in the 0.1 Hz vasomotion spectral range.

II. METHODS
A. Subjects

Two groups of subjects were recruited, one consisting of healthy volunteers, the other of polyneuropathic type 2 diabetic patients in whom sensory conduction from the sural nerve could be evoked. Group size, age and gender distribution were as follows:
• group A consisted of 10 healthy volunteers, age range 21 to 39 years (mean = 30.4, SD = 5.46), 6 women and 4 men;
• group B initially consisted of 14 diabetic patients with symptoms suggesting polyneuropathy and dysautonomia (orthostatic hypotension, disturbances of intestinal peristalsis, hypoesthesia of the lower limbs and a chronic symmetric painful syndrome). However, in four subjects we were unable to evoke sensory conduction from the sural nerve; this left the group with 10 patients, aged 63 to 75 years (mean = 68.3, SD = 3.36), 5 women and 5 men, whose data could be analysed.

To characterize each group according to quantitative physiopathological parameters, both groups underwent the following tests:
1. Electroneurography of the lower limbs, using a Medelec™ Synergy N-EP EMG/EP Monitoring System 2-channel apparatus (Oxford Instruments Medical BP546, France), measuring common peroneal, posterior tibial and sural nerve conduction on both sides. For each subject, this parameter was calculated as the mean of the six conduction velocities.
2. Autonomic cardiovascular reflexes: Valsalva ratio and maximal heart rate increment during the hand-grip test (%). The ECG signal was recorded through two electrodes, one in the precordial region (IV intercostal space, in V3), the other on the palm of the left hand. The recording system consisted of a signal preamplifier (CED 1902 isolated pre-amplifier) connected to a multi-channel polygraph (CED 1401plus) and controlled by the CED Signal 1.9 acquisition software (Cambridge Electronic Design Limited, Science Park, England).
3. Evaluation of peripheral arteriolar elasticity with an HDI/PulseWave™ CR-2000 (Hypertension Diagnostics Inc., Minnesota, USA), which non-invasively applies a pulse contour technique providing an indirect quantitative estimate of arteriolar elasticity (SAEI, in milliliters per mm Hg × 100) (Collins, V.R.; Finkelstein, S.M. and Cohn, J.N.).
In this way the two groups were characterized (Table 1):
• group A, healthy subjects: young age, no systemic diseases, normal peripheral nerve conduction measures, normal cardiovascular autonomic reflexes, high values of small-arteriole elasticity.
• group B, patients: advanced age, type 2 diabetes mellitus, moderate dysautonomia as shown by the values of the cardiovascular autonomic reflexes, symmetric distal polyneuropathy as shown by peripheral nerve conduction measures, significant reduction of the elastic compliance of peripheral arterioles.
Table 1

                 Age [y]      Valsalva   Handgrip   Capillary elasticity   Nerve Conduction Velocity [m/s]
                              Ratio      Test       [ml/mmHg×100]          MCV    SCV
group A (n=10)   35.4±8.83    >1         >30%       1258.7±233.66          51     58
group B (n=10)   66.4±9.7     <1         <30%       1807.87±381.98         39     42
B. FREMS stimulation

Over the past 5 years, at the institute of endocrinology of the Sacco Hospital in Milan, a new transcutaneous electrostimulation technology, called FREMS (Frequency Rhythmic Electrical Modulation System), has been developed. Unlike other applications such as TENS, it is characterized by quasi-rectangular negative electrical impulses with a maximal voltage defined according to the perception threshold of the stimulus (up to 300 V). The impulses have a variable time-width (W) between 10 and 40 µs, and the frequency (F) of the impulses generated (number of single impulses per second) varies from 1 to 1000 Hz. These two parameters are modulated in a pre-set format in order to supply a stimulation sequence. A stimulation sequence is composed of different phases, each defined as a period of time (T) in which only one parameter, W or F, varies. The features of the system allow the following actions:
1. high-voltage stimulation is able to activate cutaneous nervous fibers without reaching the pain threshold, because of the shortness of the impulses. Moreover, it is possible to stimulate for several minutes without inducing thermo-electrical or galvanization effects in the tissue, also because of the use of non-ionized electrodes. A stimulation protocol lasts about 30 min.
2. inducing a 'firing' stimulation: the impulse sequences are able to trigger activating mechanisms and/or modulate biological functions in the tissues according to a specific correspondence between stimulation frequency and functional event.
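The phase structure described above, with only one of W or F varying at a time within the stated ranges, can be sketched as a parameter schedule. This is a hypothetical illustration; the actual FREMS sequences are not specified in the text:

```python
# Hypothetical sketch of a FREMS-like parameter schedule: each phase varies
# only ONE of the two parameters (pulse width W or frequency F), as described.
def frems_schedule(phases):
    """Expand (duration_s, (W0, W1) in us, (F0, F1) in Hz) phases into
    per-second (W, F) settings. Within a phase the varying parameter is
    swept linearly; the other stays constant."""
    timeline = []
    for duration, (w0, w1), (f0, f1) in phases:
        assert w0 == w1 or f0 == f1, "only one parameter may vary per phase"
        for s in range(duration):
            frac = s / max(duration - 1, 1)
            w = w0 + (w1 - w0) * frac
            f = f0 + (f1 - f0) * frac
            assert 10 <= w <= 40 and 1 <= f <= 1000  # ranges stated in the text
            timeline.append((w, f))
    return timeline

# Example: sweep F from 1 to 19 Hz at fixed W, then sweep W at fixed F
tl = frems_schedule([(10, (20, 20), (1, 19)), (5, (10, 40), (50, 50))])
print(len(tl))        # 15 settings
print(tl[0], tl[9])   # (20.0, 1.0) ... (20.0, 19.0)
```

The linear sweep and one-second granularity are arbitrary choices for the sketch; only the parameter ranges and the one-parameter-per-phase rule come from the description above.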
C. Experimental setting

All subjects underwent a polygraphic examination to estimate the microcirculatory effects of the FREMS stimulation sequence. This study was approved by the local Ethical Committee. Subjects were 24 volunteers (10 healthy, 14 diabetic patients) who agreed to participate in the study after receiving careful information on the purpose of the research and the procedures involved. They all gave informed consent. During the 24 hours preceding the session, subjects abstained from alcohol, smoking, coffee, tea, or other drugs, except oral hypoglycemics or insulin. Each subject lay on a comfortable bed, in a silent environment, at constant temperature and isolated from external noise and stimulation. We recorded from the volar skin surface of the upper right limb, through polygraphy, the following parameters in the time domain:
1. Palmar skin conductance
2. Blood flow at the dorsal forearm surface
3. Skin temperature at the dorsal forearm surface

We then applied the FREMS stimulation electrodes between the laser-Doppler flow probes. Two sequences were applied, separated by a pause. Once stabilization of the observed measures was achieved, we recorded the following parameters continuously, as shown in Figure 1 (from top to bottom):
• Recording 4, related to CC skin conductance, obtained through a CED 2502 skin conductance module. As CC reflects the degree of palmar sweating, which is in turn an index of systemic catecholaminergic activation, we took this recording to represent the sympathetic activity acting on target tissues (sympathetic outflow). The anatomical basis of this assumption is that the adrenergic innervation of the upper limb is entirely provided by fibers originating in the inferior cervical ganglion, and that adrenergic fibers supplying sweat glands are limited to the palm.
• Recording 3, related to blood flow velocity in the stimulated tissue, obtained through a Periflux System 5000 (Perimed). Recorded velocity is expressed in arbitrary units.
• Recording 2, the instantaneous temperature of the stimulated tissue, also measured with the Periflux System 5000 (Perimed).
• Recording 1, showing the administration of the FREMS stimulation sequence.
III. RESULTS

In all normal subjects we found a very similar behavior of flow velocity as measured with laser Doppler flowmetry. In detail, our findings show an increase of flow velocity in association with FREMS sequences in which F varies in the range 1 to 19 Hz. Flow velocity was maintained during FREMS sequences in which F and W varied reciprocally, and decreased during the FREMS sequences in which F was in the range 19-40 Hz. No association of the variation of flow velocity with variation of temperature was observed in either the normal or the diabetic group. The mean increase of blood flow velocity was 35% in normal subjects and 25% in the diabetic group. In addition, the normal group showed a simultaneous increase of CC, even though the absolute value was subject-dependent, whilst we could not observe any increase in the diabetic group.
Figure 1. The two FREMS stimulation sequences are marked on the polygraphic recordings.
The increase of power in the 0.1 Hz spectrum was observed to be proportional to the flow speed in both groups (normal and diabetic). The range over which the two parameters remained proportional in the 0.1 Hz spectrum was, however, shorter in the diabetic group.
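The reported proportionality between 0.1 Hz spectral power and flow speed can be checked on a recording by correlating windowed band amplitude with windowed mean flow. A synthetic sketch (the window length, sampling rate and the slowly modulated test signal are all assumptions for illustration, not study data):

```python
import numpy as np

def windowed(x, n):
    """Split x into consecutive non-overlapping windows of n samples."""
    k = len(x) // n
    return x[:k * n].reshape(k, n)

def band_amplitude(win, fs, f0=0.1, tol=0.03):
    """FFT amplitude near f0 Hz for each window (a crude band-power proxy)."""
    f = np.fft.rfftfreq(win.shape[1], 1 / fs)
    spec = np.abs(np.fft.rfft(win - win.mean(axis=1, keepdims=True), axis=1))
    sel = (f > f0 - tol) & (f < f0 + tol)
    return spec[:, sel].sum(axis=1)

fs = 8.0
t = np.arange(0, 1200, 1 / fs)
drive = 1 + 0.5 * np.sin(2 * np.pi * t / 1200)    # slowly varying "stimulation level"
flow = drive * (5 + np.sin(2 * np.pi * 0.1 * t))  # flow with 0.1 Hz vasomotion

w = windowed(flow, int(fs * 100))                 # 100 s windows
r = np.corrcoef(band_amplitude(w, fs), w.mean(axis=1))[0, 1]
print(r > 0.9)  # positive proportionality, as reported
```

In this toy signal both the mean flow and the 0.1 Hz amplitude scale with the same slow drive, so the correlation is close to 1.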
IV. CONCLUSIONS

Our results indicate that FREMS sequences can induce reproducible vasomotion activity. Blood flow velocity increased both in normal subjects and in diabetic patients after the application of FREMS sequences, even though in diabetic patients the autonomic nervous system is impaired. This suggests that FREMS can directly activate the smooth muscle cells of the microcirculation. Analyzing the ratio between the power spectrum at 0.1 Hz and blood flowmetry during FREMS stimulation can contribute to a more in-depth understanding of the role of the autonomic nervous system in vascular activity.

REFERENCES

1. Peng H, Matchkov V, Ivarsen A, Aalkjaer C, Nilsson H. Hypothesis for the initiation of vasomotion. Circ Res 2001;88(8):810-5.
2. Rahman A, Matchkov V, Nilsson H, Aalkjaer C. Effects of cGMP on coordination of vascular smooth muscle cells of rat mesenteric small arteries. J Vasc Res 2005;42(4):301-11.
3. Meyer MF, Rose CJ, Hulsmann JO, Schatz H, Pfohl M. Impairment of cutaneous arteriolar 0.1 Hz vasomotion in diabetes. Exp Clin Endocrinol Diabetes 2003;111(2):104-10.
4. Lefrandt JD, Bosma E, Oomen PH, et al. Sympathetic mediated vasomotion and skin capillary permeability in diabetic patients with peripheral neuropathy. Diabetologia 2003;46(1):40-7.
5. Bernardi L, Rossi M, Leuzzi S, et al. Reduction of 0.1 Hz microcirculatory fluctuations as evidence of sympathetic dysfunction in insulin-dependent diabetes. Cardiovasc Res 1997;34(1):185-91.
6. Stansberry KB, Shapiro SA, Hill MA, McNitt PM, Meyer MD, Vinik AI. Impaired peripheral vasomotion in diabetes. Diabetes Care 1996;19(7):715-21.
7. Kelsall CJ, Brown MD, Kent J, Kloehn M, Hudlicka O. Arteriolar endothelial dysfunction is restored in ischaemic muscles by chronic electrical stimulation. J Vasc Res 2004;41(3):241-51.
8. Jessurun GA, Tio RA, De Jongste MJ, Hautvast RW, Den Heijer P, Crijns HJ. Coronary blood flow dynamics during transcutaneous electrical nerve stimulation for stable angina pectoris associated with severe narrowing of one major coronary artery. Am J Cardiol 1998;82(8):921-6.
Author: Massimo Barrella
Institute: L. Sacco Hospital, Department of Endocrinology, Milan, Italy
Street: Via Grassi Giovanni Battista, 74, 20157
City: Milano
Country: Italy
Email: [email protected]
Scattered radiation spectrum analysis for the breast cancer diagnostics

S.A. Belkov1, G.G. Kochemasov1, N.V. Maslov1, S.V. Bondarenko1, N.M. Shakhova2, I.Yu. Pavlycheva3, A. Rubenchik4, U. Kasthuri5, L.B. Da Silva5
1 BioFil (Biophysical Laboratory) and Russian Federal Nuclear Center-VNIIEF, Sarov, Russia
2 Institute of Applied Physics of Russian Academy of Science, Nizhny Novgorod, Russia
3 Regional Oncology Center, Nizhny Novgorod, Russia
4 LLNL, Livermore, CA, USA
5 BioTelligent Inc., Livermore, CA, USA
Abstract– Data analysis of the optical scattering spectra obtained in the clinical trials of the optical biopsy system is presented. The major types of spectra characterizing malignant and benign tumors were revealed.

Keywords– breast cancer, spectrometry of scattered radiation
I. INTRODUCTION

Clinical trials of the optical biopsy system reported in [1] were conducted in the Regional Oncology Center of Nizhny Novgorod (Russia). Over one year, more than 150 patients with breast tumors were investigated using this system. To implement the clinical studies, a test protocol was developed and approved by the Ethical Committee of the Nizhny Novgorod State Medical Academy of the Ministry of Health of the Russian Federation on scientific studies with human participation. In accordance with the protocol, the probe was inserted into the breast twice: first into healthy tissue and second into the tumor. The speed of penetration was controlled and did not exceed 1 mm/sec. Radiation from a xenon lamp was delivered into the breast through an optical fiber placed inside the probe's needle. The radiation scattered from the breast tissue was collected by three other fibers placed in the same needle. The spectrum of the scattered radiation was measured by means of spectrometers with a diffraction grating. Spectral information was recorded on a hard disk for further processing and analysis. A recording frequency of 100-120 Hz allowed spectral measurements in the range 200-1000 nm every 100 μm along the path of the optical probe needle. The receiving fibers were located at different distances from the light source, varying in the range 100-500 μm, providing a spatial distribution of the scattered radiation field. The surgeon's comments during the procedure were recorded together with a video recording. The moments of the needle entering and leaving the tumor were of great importance for further analysis. They were found using the
surgeon's comments. After the procedure was completed, data processing was performed to obtain the scattering coefficients of the breast tissue, defined as the ratio between the spectral power of each registration channel and the spectral power of the source, which was measured in a special reference registration channel. As the intensity of the source emission at wavelengths below 430 nm and above 710 nm is low, the spectral coefficients of scattered light were strongly noisy in these spectral ranges and were therefore excluded from the analysis. After the optical biopsy was completed, a standard fine biopsy procedure followed by cytological analysis of tumor cell samples was conducted. In the cases of surgery, the tumor tissues were subjected to histological investigation. The final diagnosis, malignant or benign, was established on the basis of the results of these two last procedures. The goal of the first stage of the analysis was thus to find general optical characteristics of scattered radiation in different types of tissue, revealing the major peculiarities in the spectral scattering coefficients of malignant tumors and their distinctions from benign tumors and healthy tissue. A relatively small data sample was used at that stage: about 10 patient datasets with a malignant diagnosis and 10 with a benign one. In the second stage, an algorithm for automatic detection of malignant spectra in the data flow was developed. Using this algorithm, the datasets of all patients were processed and analyzed, and diagnoses were obtained. The automatic diagnoses were compared with those given by the physicians. As a result, the sensitivity and specificity of the optical biopsy diagnostic method were found to be 96% and 80%, respectively.

II. PRIMARY ANALYSIS OF THE SPECTRAL SCATTERING DATA IN MALIGNANT AND BENIGN TUMORS
The main problem in interpreting the scattered optical signal is the high level of noise, which has several sources. First, there are changes in the properties of the optical contact between the needle and the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 856–858, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
handle, caused by accidental tensions occurring while the needle moved through the breast tissue. Additionally, unavoidable non-uniformity of the needle movement may cause random variations of the optical properties of the tissue in the vicinity of the receiving fibers. All this led to chaotic changes in the absolute value of the scattered radiation intensity collected by the receiving channels and made signal decoding and interpretation rather difficult. The experimental signal presented in Fig. 1 illustrates this well.
Fig. 2 The experimental signal after the averaging procedure.
Fig. 1 An example of the experimental signal.

A number of methods, including various averaging procedures, were applied to reduce the influence of the noise component. The same signal as in Fig. 1, but after averaging, is presented in Fig. 2. The resulting picture makes it possible to distinguish tissues of different structure. At the same time, the normalized spectral distributions also demonstrated considerable noise, which deformed the shape of the spectral curves. The main source of this noise is the measuring equipment itself, owing to the short spectrum acquisition time (about 7 ms). After signal processing the noise level becomes lower due to averaging over the uniform tissue areas. In this way we managed to reduce the influence of the noise component and reveal the major distinctions in the scattering spectra of malignant and benign tumors. Fig. 3 shows examples of the scattering spectra averaged over the time of probe presence in the tumor. Based on this procedure, spectrum classification was performed. Scattering spectrum templates for malignant and benign tissue were built and used in the algorithms of automatic detection.
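As a minimal stand-in for the averaging procedures mentioned (the paper does not specify which ones were used), a centered moving average already shows the noise-suppression effect on a noisy synthetic trace:

```python
import numpy as np

def moving_average(x, win):
    """Centered moving average: a simple sketch of averaging that suppresses
    the fast equipment noise from the short (~7 ms) spectrum acquisitions."""
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

# Synthetic trace: a slow structure buried in strong measurement noise
rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 3 * np.pi, 500))
noisy = clean + rng.normal(0, 0.5, clean.size)
smooth = moving_average(noisy, 25)

# Averaging cuts the RMS error with respect to the clean signal considerably
rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(rms(noisy - clean) > rms(smooth - clean))  # True
```

The window length (25 samples here) is an illustrative choice; in practice it trades noise suppression against the spatial resolution of the tissue boundaries.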
Fig. 3 Examples of the scattering spectra averaged over the time the needle spent in malignant (mal.) and benign (ben.) tumors.
III. AUTOMATIC PROCEDURE OF OPTICAL BIOPSY DATA ANALYSIS
In creating the automatic detection algorithms, the most important issue was the preliminary filtering of the information recorded on the hard disk during the diagnostic procedure. For the primary filtering, the indications of the position sensor were used. Further analysis of the scattering spectra was conducted by comparing the filtered data with the previously developed templates. Two methods were used to quantify how close the current spectrum is to a given template.
In the first method, the whole spectral range is divided into three zones and a weighted-average Euclidean distance between the current spectrum and the template is calculated. The minimal distance determined the template type to which the analyzed spectrum was most similar. In the second method, the normalized overlapping integral (scalar product) between the analyzed spectrum and the template was calculated, with a value of the overlapping integral > 0.95 taken as the criterion of closeness. These algorithms were applied to the optical biopsy diagnostics and the results were compared with the diagnoses made by the physicians. The first method demonstrated almost 100% sensitivity, but the false positive rate was quite high and resulted in a specificity of 70%. The sensitivity of the second method was lower (about 86%), but the specificity was somewhat higher: 84%. A combination of the two methods allowed the specificity to be increased to 80% at a sensitivity of 96%. The obtained results are not final. In particular, the specificity may be increased by iterative preprocessing of the spectra and the use of additional classification criteria.
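The two closeness measures can be sketched as follows. The zone boundaries, weights and template shapes below are invented for illustration; only the three-zone weighted Euclidean distance and the > 0.95 overlap criterion come from the text:

```python
import numpy as np

def euclid_classify(spectrum, templates, zone_weights, zones):
    """Weighted-average Euclidean distance over three spectral zones;
    returns the label of the closest template."""
    def dist(a, b):
        total = 0.0
        for (lo, hi), w in zip(zones, zone_weights):
            total += w * np.linalg.norm(a[lo:hi] - b[lo:hi])
        return total / sum(zone_weights)
    return min(templates, key=lambda name: dist(spectrum, templates[name]))

def overlap(spectrum, template):
    """Normalized overlapping integral: scalar product of unit-norm spectra."""
    a = spectrum / np.linalg.norm(spectrum)
    b = template / np.linalg.norm(template)
    return float(a @ b)

# Toy templates and probe spectrum on a 90-point wavelength grid (all made up)
x = np.linspace(0, 1, 90)
templates = {"malignant": 1.0 + 0.8 * x, "benign": 1.0 - 0.5 * x}
zones = [(0, 30), (30, 60), (60, 90)]
probe = 1.01 + 0.75 * x                      # nearly the malignant shape

label = euclid_classify(probe, templates, [1.0, 1.0, 1.0], zones)
print(label)                                          # malignant
print(overlap(probe, templates["malignant"]) > 0.95)  # meets the criterion
```

Combining the two tests (distance for sensitivity, overlap for specificity), as the authors did, is then a matter of requiring both to agree before declaring a spectrum malignant.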
ACKNOWLEDGMENT This work was partly supported by funding from IPP (Initiatives for Proliferation Prevention) Program of U.S. Department of Energy under the contract LLNL-T2-0242RU and the Project #3075p of International Science and Technology Center.
REFERENCES

1. Belkov S.A., Kochemasov G.G., Kulikov S.M., et al. (2007) Optical biopsy system for breast cancer diagnostics. Report at this conference.

Address of the corresponding author:
Author: G.G. Kochemasov
Institute: BioFil Ltd and Russian Federal Nuclear Center - VNIIEF
Street: Prospect Mira 37
City: Sarov, Nizhegorodsky reg.
Country: Russia
Email: [email protected]
Snoring and CT Imaging

I. Fajdiga1, A. Koren2 and L. Dolenc3
1 University Department for Otorhinolaryngology and Cervicofacial Surgery, Ljubljana, Slovenia
2 Clinical Institute for Radiology, University Medical Centre, Ljubljana, Slovenia
3 Institute of Clinical Neurophysiology, Ljubljana, Slovenia
Abstract—Study objectives: To identify upper-airway changes in snoring using CT scanning and to clarify the snoring mechanism. Participants: Forty patients classified into non-snoring (14), moderately loud (13) and loud snoring (13) groups. Methods: From CT images, measurements of spaces and structures at different pharyngeal levels were taken, impaired nasal breathing was noted and the pharyngeal narrowing (the ratio between the area at the hard palate level and the narrowest one) was calculated. Results: In snoring persons the pharyngeal narrowing (p = 0.0015) was greater and also proportional to the loudness of snoring (p = 0.0016), and the soft palate with uvula (p = 0.0173) was longer in comparison to non-snoring persons. Impaired nasal breathing was related (p = 0.029) to the loud snoring group only. Conclusions: Snoring is associated with greater pharyngeal narrowing, which indicates that the Bernoulli principle plays a major role in snoring. The key structure is the soft palate: it defines the constriction and is sucked into vibration by the negative pressure that develops at that site. The constriction also presents an obstruction to breathing. The soft palate should therefore be the target for causal treatment of snoring. Other obstacles in the upper airway could not be confirmed as important for the development of snoring, although they may increase its loudness and potentially play a role in obstructive sleep apneas.

Keywords— CT, snoring, soft palate, obstructive sleep apnea
I. INTRODUCTION

Snoring, obstructive sleep apnea (OSA), and upper airway resistance syndrome (UARS) are sleep-related breathing disorders (SBD) associated with an increase of upper airway resistance. The resistance is a consequence of partial (snoring, UARS) or complete (OSA) upper airway obstruction. Since snoring is a well-defined disturbance, one would expect it to be associated with well-defined anatomical changes. However, the typical differences between non-snoring and snoring persons still do not seem to be completely recognized. Faber et al. [1] summarized a list of studies describing the imaging techniques available for determining the level of obstructive predominance. They concluded that, in spite of the variety of changes described, no reference standard exists for the determination of the predominant obstructive level during obstructive events. They also suggest
that further studies are necessary to improve and validate existing methods and to develop new techniques. Such research would improve our understanding of the pathophysiology of OSA and snoring and assist in selecting the correct treatment option for different patients. The intention of this study was to discover the anatomical differences between snoring and non-snoring individuals using CT imaging, to clarify the mechanism of snoring and to identify the key structures associated with it.

II. MATERIALS AND METHODS
Forty patients undergoing CT carotid angiography were included in the study. Their mean age was 61.8 years (SD 15.3 years) and twenty-four of the patients were male. The participants answered a questionnaire about their snoring loudness, nose breathing, age, weight, height and any previous procedures in the upper airway region. They estimated the loudness of their snoring on an analogue scale from 0 (no snoring) to 5 (loudest snoring possible); according to these answers, fourteen patients were assigned to the non-snoring group, thirteen to the moderately loud snoring group and thirteen to the loud snoring group. Their CT images were analyzed using dedicated computer software (SQ Throat©; Sekvenca d.o.o.; Ljubljana, Slovenia) that enabled the measurement of pharyngeal areas at the hard palate level, at the narrowest area (at the palatal level), and at the level just above the epiglottis (behind the root of the tongue). We also measured the anterior-posterior (AP) and transversal distances at the levels mentioned, as well as the thickness and length of the soft palate and uvula and their position (angle) against the hard palate. Evidence of impaired nasal passages was also noted. The pharyngeal narrowing was determined as the ratio between the area at the hard palate level and the narrowest area. A body mass index (BMI) was also calculated for each participant.

III. DISCUSSION
Snoring is a breathing disorder, and the physical laws of aerodynamics should be engaged for its understanding.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 864–866, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Breathing can be defined as the streaming of air driven through the (upper) respiratory airway. The force that drives the air is the alternating negative and positive pressure produced by respiratory lung movements. There are two theories which can help in understanding snoring. The first is the 'obstacle' theory of snoring [2]. It states that an obstacle (a narrowing) in the upper airway increases the negative inspiratory pressure, which in turn retracts the structures of the pharynx and makes them vibrate in the stream of air to produce the well-known sound of snoring. This explanation is supported by the Mueller test, which allows us to see and quantify the retraction [3]. The second theory is based on Bernoulli's principle [4, 5] and assumes that negative pressure is created at the narrow parts of the upper airway, which sucks the pharyngeal structures inward and generates snoring by their vibrations. In our study we wanted to check for the presence of clinically recognized reasons for snoring, such as obstacles in the nose, a narrow pharynx (in obese persons), backward displacement of the soft palate, enlarged tonsils, and a voluminous root of the tongue [6], which would present the obstacles of the obstacle theory or the narrowing of Bernoulli's explanation of snoring. The direct comparison of pharyngeal and nasal structures by their areas, circumferences, lengths, widths and distances did not show any statistically significant differences between snoring and non-snoring persons. The only exception was the length of the soft palate with uvula, which was significantly longer in the snoring groups. Impaired nasal breathing did not prove to be related to snoring when the non-snoring group was compared to all snoring persons; however, when the non-snoring persons were compared to the loud-snoring group only, the relation became significant. These results could confirm neither the obstacle nor the Bernoulli explanation of the snoring mechanism.

However, the narrowing can also be expressed as the ratio between the largest and the narrowest area in the upper airway. In this way the determination of airway size reduction is not influenced by individual differences in absolute size, which are a consequence of gender, constitution and nutrition. The real degree of narrowing is thus exposed, which made the comparison of the snoring and non-snoring groups more reliable and interesting. The degree of pharyngeal narrowing was defined as the quotient between the area at the hard palate level and the smallest area in the pharynx. Comparison by this ratio gave highly significant results, showing that a greater inspiratory narrowing is characteristic of snoring persons. Moreover, the narrowing was proportional to the loudness of snoring (Fig. 1). The average narrowing in non-snoring persons was 3.59 (SD 1.50) (the narrowest area was 3.59 times smaller than the nasal area), 4.71 (SD 1.05) in the moderately loud group, and 8.60 (SD 6.53) in the loud-snoring group. The average narrowing for both snoring groups was 6.65 (SD 4.99).

Fig. 1 Pharyngeal narrowing ratio in the non-, moderately loud and loud-snoring groups (group means 3.59, 4.7 and 8.6; Kruskal-Wallis H = 12.8458, equivalent to chi-square; p = 0.0016)

The mean cross-section of the pharyngeal space was smallest at the level of the soft palate, while the upper retronasal area was larger than the one behind the root of the tongue in all participants. These measurements show that the pharynx is shaped like a Venturi tube, which best demonstrates Bernoulli's principle. Increased inspiratory pharyngeal narrowing is therefore quite an obvious snoring characteristic. It is not easily seen on two-dimensional images but must be calculated from the two cross-sections. This supports the Bernoulli-principle theory of snoring and also shows that the identification of snoring characteristics depends on understanding the snoring mechanism. For treatment purposes, surgery in particular, it is important to recognize the structure responsible for snoring. Our study has shown that it is the soft palate or, to be precise, its lower half. The narrowest pharyngeal cross-section is always confined to the soft palate, even in cases of enlarged tonsils or an enlarged root of the tongue. At the same time, the increased narrowing in snoring persons should be seen as the obstacle which, according to the obstacle theory, increases the negative inspiratory pressure and retracts all pharyngeal structures, which in turn increases the Bernoulli forces. From our results and the evidence presented we believe that the pressure responsible for snoring is the sum of the Bernoulli-principle pressure and the negative inspiratory pressure, since both are present during inspiration. The sum reaches a peak at the narrowing and, when it is sufficient, the soft palate (an unstable structure) is retracted until complete closure. At this moment the air streaming stops, the Bernoulli-principle negative pressure drops, and the soft palate is returned to its starting position by its tonicity. The air stream and the Bernoulli-principle pressure are then restored and the whole cycle repeats, thus producing soft-palate vibrations and the typical snoring sound. However, if the negative inspiratory pressure is elevated by another obstacle, it retracts all the pharyngeal walls and amplifies the snoring loudness indirectly by increasing the pharyngeal narrowing, and/or possibly causes a steady closure that might be important in OSA. This may also explain our finding that snoring is louder in persons with impaired nasal breathing.

Snoring and CT Imaging
I. Fajdiga, A. Koren and L. Dolenc
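The role the narrowing ratio plays in the Bernoulli explanation can be made concrete with a small sketch. Only the mean narrowing ratios (3.59 for non-snoring, 8.60 for loud snoring) come from the study; the air density, inspiratory flow and reference cross-section below are illustrative assumptions:

```python
# Bernoulli-principle suction at the pharyngeal constriction as a function of
# the narrowing ratio r = A_palate / A_narrowest reported in the study.
# Assumed (illustrative) values: air density 1.2 kg/m3, quiet inspiratory
# flow 0.5 L/s, cross-section 3 cm2 at the hard palate level.
RHO_AIR = 1.2        # kg/m3
FLOW = 0.5e-3        # m3/s
A_PALATE = 3.0e-4    # m2

def suction_pa(narrowing_ratio):
    """Pressure drop (Pa) at the constriction relative to the wide section."""
    v1 = FLOW / A_PALATE                      # velocity at the hard palate level
    v2 = v1 * narrowing_ratio                 # continuity: A1*v1 = A2*v2
    return 0.5 * RHO_AIR * (v2 ** 2 - v1 ** 2)  # Bernoulli along a streamline

# Mean narrowing ratios reported: 3.59 (non-snoring) vs 8.60 (loud snoring)
print(suction_pa(3.59), suction_pa(8.60))
```

Because the suction grows with the square of the ratio, the loud-snoring mean yields several times the suction of the non-snoring mean, consistent with the proportionality between narrowing and loudness reported above.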
IV. CONCLUSIONS
The study showed that snoring is associated with typical changes in the upper airway and that they can be presented by CT scanning. The key structure responsible for snoring is the soft palate, which is significantly longer in snoring persons and should therefore be the target for the causal treatment of snoring.

REFERENCES
1. Faber CE, Grymer L (2003) Available techniques for objective assessment of upper airway narrowing in snoring and sleep apnea. Sleep Breath 2:77–86
2. Rappai M, Collop N, Kemp S et al. (2003) The nose and sleep-disordered breathing: what we know and what we do not know. Chest 124(6):2309–2323
3. Terris DJ, Hanasono MM, Liu YC (2000) Reliability of the Muller maneuver and its association with sleep-disordered breathing. Laryngoscope 110(11):1819–1823
4. Scharf MB, Cohen AP (1998) Diagnostic and treatment implications of nasal obstruction in snoring and obstructive sleep apnea. Ann Allergy Asthma Immunol 81(4):279–290
5. Suratt PM, Dee P, Atkinson RL et al. (1983) Fluoroscopic and computed tomographic features of the pharyngeal airway in obstructive sleep apnea. Am Rev Respir Dis 127(4):487–492
6. Nishimura T, Suzuki K (2003) Anatomy of oral respiration: morphology of the oral cavity and pharynx. Acta Otolaryngol (Suppl) 550:25–28

Address of the corresponding author:
Author: Igor Fajdiga, MD, PhD
Institute: University Department for Otorhinolaryngology and Cervicofacial Surgery
Street: Zaloska 2
City: 1000 Ljubljana
Country: Slovenia
Email: [email protected]
Standard versus 3D optimized MRI-based planning for uterine cervix cancer brachyradiotherapy – The Ljubljana experience
R. Hudej, P. Petric, J. Burger
Institute of Oncology Ljubljana, Department of Radiotherapy, Ljubljana, Slovenia
Abstract— The purpose of the present study was to evaluate the introduction of 3D MRI-based treatment planning for uterine cervix cancer at the Institute of Oncology Ljubljana. The evaluation was based on a comparison of standard versus optimized plans. D90 and V100 for the high-risk clinical target volume (HR-CTV) and D2cc for the bladder, rectum and sigmoid colon were compared. The results showed that by optimization it was possible to increase D90 and V100 for the HR-CTV. At the same time, D2cc was reduced for all organs at risk except for the rectum in the large tumor group, where D2cc was increased but still remained below the institutional dose restrictions. The study confirmed that 3D MRI-based treatment planning is a considerable improvement of the planning process over standard planning.
Keywords— Cervix cancer, brachyradiotherapy, magnetic resonance.
I. INTRODUCTION
Brachytherapy (BT) in combination with external beam radiotherapy (EBRT) and chemotherapy plays an essential role in the radical treatment of locally advanced carcinoma of the uterine cervix [1]. Historically, treatment planning was based on orthogonal radiograph imaging and point dose estimations. Geometrical points were defined around the applicator, typically point A for the target volume and the ICRU points for bladder and rectum dose estimation [2]. The standard plan was prepared by prescribing the dose to point A. The dose calculated at the defined points served as an estimate of the maximum dose in the relevant areas. Since the position of the points did not depend on the patient anatomy, the standard plan carried a risk of insufficient target coverage and/or excessive dose to organs at risk (OAR). A solution to these uncertainties was introduced by 3D image-based treatment planning. While computed tomography (CT)-based 3D conformal treatment planning has become widely adopted for EBRT of gynaecological malignancies, sectional imaging for BT has been implemented only recently and in a limited number of institutions. Using individual optimization of the dwell times and positions of 192Ir high- and pulse-dose-rate stepping sources, this approach enables dose escalation to the target volume without exceeding the tolerance limits of the OAR [3, 4].
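The point-A prescription just described can be sketched numerically. The inverse-square "kernel" and geometry below are deliberate oversimplifications of a real 192Ir dose calculation (the TG-43 formalism), used only to show that, with equal dwell times, a single renormalization factor sets the point-A dose to the prescription:

```python
# Toy sketch of a standard plan: all dwell times equal, then one global
# renormalization so the dose at point A equals the prescribed dose.
# Coordinates, dwell positions and the prescription are hypothetical.

def dose_at(point, dwells, dwell_time):
    """Dose at `point` from equal dwell times, with a 1/r^2 toy kernel."""
    total = 0.0
    for (x, y, z) in dwells:
        r2 = (point[0] - x) ** 2 + (point[1] - y) ** 2 + (point[2] - z) ** 2
        total += dwell_time / r2          # arbitrary dose units
    return total

tandem = [(0.0, 0.0, z) for z in (1.0, 1.5, 2.0, 2.5)]  # dwell positions, cm
point_a = (2.0, 0.0, 2.0)                               # 2 cm lateral, hypothetical
prescribed = 7.0                                        # Gy, hypothetical

unit_dose = dose_at(point_a, tandem, dwell_time=1.0)
scale = prescribed / unit_dose                          # renormalized dwell time
assert abs(dose_at(point_a, tandem, scale) - prescribed) < 1e-9
```

Because the dose is linear in the common dwell time, this single scaling is all a standard plan needs; the anatomy-dependent risks mentioned above arise precisely because nothing else is adjusted.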
As far as the method of sectional imaging for BT treatment planning is concerned, magnetic resonance imaging (MRI) appears superior to CT due to its high-resolution multiplanar capability and superior soft-tissue depiction. It provides a basis for accurate delineation of the target and critical anatomic structures, offering a chance to improve dose conformity and to better estimate dose-volume relations [3]. The aim of this study was to retrospectively compare the differences between 13 clinically used optimized plans and the corresponding standard plans. Each pair of corresponding plans was prepared on the same patient anatomy, enabling a consistent plan comparison. This type of study is a suitable method to evaluate the advantages of modern MR-based BT treatment planning.

II. MATERIALS AND METHODS
The whole process of brachytherapy application was performed following standard departmental procedures. Nine patients with 13 applications were included in the study.

A. Applicator implantation
The type of applicator used for a particular patient was chosen by the physician immediately before the application, with the intent to maximally cover the target volume with the simplest possible applicator setup. Two different applicator types were used, depending on the tumor extent as evaluated by MRI at the time of diagnosis and by clinical examination at the time of BT. In the case of a small tumor, or an initially large tumor with a good response to EBRT, a standard intracavitary Stockholm applicator comprising a ring and an intrauterine tandem was used. In the case of a large or topographically unfavourable tumor with significant parametrial infiltration at the time of BT, a modified Stockholm-type applicator with needles implanted through the ring template was used. All parts of the applicators were MRI-compatible.

B. Magnetic resonance imaging
Immediately after the implantation, MRI with a Siemens Magnetom Avanto 1.5T scanner was performed on each patient. The
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 875–878, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
imaging sequence was a T2-weighted turbo spin echo in the paracoronal, parasagittal and paratransverse planes (i.e. parallel and orthogonal to the ring). The in-plane voxel dimensions were 0.8 mm x 0.8 mm and the slice thickness was 3.9 mm. After acquisition, the MR images were imported into the BrachyVision planning system (Varian Medical Systems, Inc.).

C. Contouring
The delineations were performed by an experienced radiation oncologist. The HR-CTV, rectum, bladder and sigmoid colon were contoured, respecting the GYN GEC-ESTRO Working Group recommendations for cervix cancer brachyradiotherapy [5].
D. Applicator reconstruction
Applicator reconstruction was performed directly on the MR images, with the applicators imported from a library. Each applicator was aligned with the image using translations and rotations. Using this method, the reconstruction error was reduced to a minimum.

E. Treatment planning
Standard and optimized treatment plans were created for each case. The standard plan was a simulation of the treatment plan that would have been created in the absence of MR images, with only orthogonal radiographs of the pelvis. The standard plan was based on standard loading of active positions inside the applicator. In the ring, four dwell positions on the left and four dwell positions on the right side were activated. In the intrauterine tandem, all dwell positions were activated. The dwell position separation was 5 mm. The dwell times of all activated positions in the applicator were set equal, and the whole setup was normalized so that the absorbed dose at point A was equal to the prescribed dose.
The starting point for plan optimization was always the standard plan. The main objective of the optimization procedure was to cover the HR-CTV with the prescribed dose as much as possible without exceeding the dose restrictions for the OAR. At our department, the following DVH parameters are evaluated during the process of plan optimization [6]:
• D90 – minimum dose that covers 90% of the HR-CTV.
• V100 – percentage of the HR-CTV that is encompassed by the prescribed isodose.
• D2cc – minimum dose that is absorbed in the most irradiated 2 cm3 of the individual OAR.
To calculate the total doses of EBRT+BT, the linear-quadratic model [7] was applied (reference dose per fraction 2 Gy, reference dose rate 0.5 Gy/h, α/β=10 Gy for the HR-CTV and α/β=3 Gy for the OAR). For the HR-CTV, we aimed to achieve a V100 of 90%-95% and a total D90 above the prescribed dose. For the OAR, we attempted to respect the following dose constraints: total D2cc < 70 Gy for the bladder and total D2cc < 65 Gy for the rectum and sigmoid colon.
The plan optimization procedure was performed in consecutive steps; with each step the plan was optimized as much as possible before going to the next step:
• The first step was adjusting the active dwell positions in the ring to align them better with the HR-CTV position.
• The second step was changing the dwell times of the active dwell positions in the whole applicator, in order to make the prescribed isodose more conformal to the HR-CTV and to decrease the dose to the OAR where necessary.
• The third step involved activating additional dwell positions in the ring close to areas where the target volume was insufficiently covered. Dwell times in these positions were only between 10% and 20% of the standard dwell times, since the positions were usually near the rectum or bladder, and increasing the dwell times in these positions inevitably increased the dose to these two organs.
• The fourth step was performed in cases where needles had been implanted through the ring template. The maximum dwell time of each needle active position was in the range of 10%-20% of the standard tandem dwell time. This restriction ensured that the isodose volume of 200% of the prescribed dose around the needles remained small enough not to cause unnecessary tissue damage around the needles.

III. RESULTS
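The DVH parameters listed above and the linear-quadratic conversion can be sketched as follows. The voxel dose lists and the voxel volume are hypothetical; a planning system evaluates these quantities on the full 3D dose grid:

```python
# Sketch of the DVH parameters evaluated during optimization (D90, V100, D2cc)
# and of the linear-quadratic EQD2 conversion used for the total EBRT+BT dose.

def d90(ctv_doses_gy):
    """Minimum dose (Gy) covering the best-irradiated 90% of the HR-CTV."""
    hot_first = sorted(ctv_doses_gy, reverse=True)
    n = max(1, round(0.9 * len(hot_first)))   # voxels making up 90% of volume
    return hot_first[n - 1]

def v100(ctv_doses_gy, prescribed_gy):
    """Percentage of the HR-CTV encompassed by the prescribed isodose."""
    covered = sum(1 for d in ctv_doses_gy if d >= prescribed_gy)
    return 100.0 * covered / len(ctv_doses_gy)

def d2cc(oar_doses_gy, voxel_cc):
    """Minimum dose (Gy) in the most irradiated 2 cm3 of an organ at risk."""
    k = max(1, round(2.0 / voxel_cc))         # voxels making up 2 cm3
    return sorted(oar_doses_gy, reverse=True)[k - 1]

def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Equieffective dose in 2 Gy fractions (linear-quadratic model)."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)
```

For example, with α/β = 3 Gy for an OAR, a single 7 Gy BT fraction converts to eqd2(7, 7, 3) = 14 Gy, which is then summed with the EBRT contribution and checked against the D2cc restrictions quoted above.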
The dose absorbed in the HR-CTV and the coverage of the HR-CTV by the prescribed dose are described by the D90 and V100 parameters, respectively. The average values of both parameters for the standard and optimized plans are presented in Figure 1 and Figure 2. The average D2cc parameters for the bladder, sigmoid colon and rectum are presented in Figures 3-5. The results are grouped according to tumor size:
• Large tumor: tumor size 40-50 cm3, 3 applications
• Small tumor: tumor size 25-35 cm3, 9 applications
Fig. 1 Average D90 as percentage of prescribed dose for large and small tumors.
Fig. 2 Average V100 for large and small tumors.
Fig. 3 Average bladder D2cc as percentage of dose restriction for large and small tumors.
Fig. 4 Average sigmoid colon D2cc as percentage of dose restriction for large and small tumors.
IV. DISCUSSION
The average D90 shows a clear distinction between the large and small tumor groups. The average standard-plan D90 for large tumors was smaller than the prescribed dose. The reason is that large tumors usually extend to the distal parametria, which cannot be sufficiently covered with a standard application and standard plan. By using additional needles and performing the optimization, it was possible to increase the average D90 above the prescribed dose. The V100 parameter for the HR-CTV was also increased, from an unacceptably low 80% to a reasonably good 93% coverage.
The opposite situation can be observed within the small tumor group. The excessive dose to the HR-CTV with the standard plan shows that the average HR-CTV size in this group was considerably smaller than the volume treated with the prescribed dose. The optimization resulted in a reduction of the D90 while still remaining above the prescribed dose. Although D90 was reduced, it was possible to slightly increase the V100 parameter, indicating that the treated volume was more conformally adjusted to the HR-CTV.
The optimization process resulted in a considerable reduction of the dose to all three OAR, regardless of the tumor group (Figures 3, 4, 5). The only exception was the rectum in the large tumor group, where the D2cc parameter was increased. However, the average D2cc did not exceed the dose restrictions. Results for the sigmoid colon D2cc show that even though it was possible to reduce the dose to the sigmoid colon with the optimized plan, D2cc still exceeded the dose restriction by almost 40%. A thorough investigation of the individual plans showed that in 5 out of 13 applications the location of the sigmoid was extremely unfavourable. In all 5 cases, a large portion of the colon was present very close to the applicator, and in these cases it was not possible to avoid violation of the restriction.

Fig. 5 Average rectum D2cc as percentage of dose restriction for large and small tumors.

V. CONCLUSIONS
This study shows that at the Institute of Oncology Ljubljana the introduction of 3D MRI-based treatment planning resulted in a significant improvement in DVH parameter values for BT of locally advanced carcinoma of the uterine cervix. The new method enables an increase of tumor dose and tumor coverage. At the same time, OAR dose restrictions can be respected.

REFERENCES
1. Poetter R, Dimopoulos J, Bachtiary B et al. (2006) 3D conformal HDR brachy- and external beam therapy plus simultaneous Cisplatin for high-risk cervical cancer: clinical experience with 3 year follow-up. Radiother Oncol 79:80–86
2. International Commission on Radiological Units (1985) Dose and Volume Specifications for Reporting Intracavitary Therapy in Gynaecology, Report No. 38. ICRU Publications, Washington DC
3. Wachter-Gerstner N, Wachter S, Reinstadler E et al. (2003) The impact of sectional imaging on dose escalation in endocavitary HDR-brachytherapy of cervical cancer: results of a prospective comparative trial. Radiother Oncol 68:51–59
4. Kirisits C, Poetter R, Lang S et al. (2005) Dose and volume parameters for MRI based treatment planning in intracavitary brachytherapy of cervix cancer. Int J Radiat Oncol Biol Phys 62:901–911
5. Haie-Meder C, Poetter R, Van Limbergen E et al. (2005) Recommendations from Gynaecological (GYN) GEC-ESTRO Working Group (I): Concepts and terms in 3D image based 3D treatment planning in cervix cancer brachytherapy with emphasis on MRI assessment of GTV and CTV. Radiother Oncol 74:235–245
6. Poetter R, Haie-Meder C, Van Limbergen E et al. (2006) Recommendations from Gynaecological (GYN) GEC-ESTRO Working Group (II): Concepts and terms in 3D image based 3D treatment planning in cervix cancer brachytherapy – 3D dose volume parameters and aspects of 3D image-based anatomy, radiation physics, radiobiology. Radiother Oncol 78:67–77
7. Steel GG (2002) Basic Clinical Radiobiology, 3rd edition. Hodder Arnold, London

Address of the corresponding author:
Author: Robert Hudej
Institute: Institute of Oncology Ljubljana
Street: Zaloska 2
City: 1000 Ljubljana
Country: Slovenia
Email: [email protected]
Studies on the attenuating properties of various materials used for protection in radiotherapy and their effect on the dose distribution in rotational therapy
T. Ivanova1, G. Malatara2, K. Bliznakova1, D. Kardamakis3 and N. Pallikarakis1
1 Department of Medical Physics/BIT Unit, School of Medicine, University of Patras, Rio-Patras, 26500, Greece
2 Medical Physics Department, University Hospital of Patras, Rio-Patras, 26500, Greece
3 Department of Radiotherapy, School of Medicine, University of Patras, Rio-Patras, 26500, Greece
Abstract— Protection of vital organs within a radiation field is one of the major concepts in radiotherapy. Measurements of beam attenuation by materials commonly used for protection in radiotherapy, such as lead, cerrobend and brass, as well as by three new radiation-shielding materials, polymers filled with tungsten powder, were carried out. The results of the measurements were compared with the results of simulations of the same experimental setup to verify some aspects of the in-house developed Monte Carlo Radiotherapy Simulator. The latter was also used for simulation studies of the effect of cylindrical protectors on the dose distributions in rotational radiotherapy. The experimental and simulated data were found to be in good agreement, with an RMS error not higher than 2.1%. The metal-polymer composites can rival lead in the protection of vital organs if the density provided is high. The results of the studies in rotational therapy have shown that a protector of high density and larger diameter offers more extensive protection of the organ at risk. In the case of a cylindrical protector, however, an increase in diameter leads to an increase of the attenuated field width, which is not always desirable; the solution could be to use a larger-diameter protector placed closer to the phantom. The use of a non-hazardous material, such as a tungsten-powder-filled polymer, is preferable, and a combination of two materials with the outer layer made of the denser material may improve the protector performance.
Keywords— attenuation measurements, Monte Carlo simulation, rotational radiotherapy, dose distribution

I. INTRODUCTION
The effect on the dose of attenuators inserted in the beam was studied. The beam attenuation measured for a number of materials, both new on the market and traditionally used in radiation therapy for the protection of healthy organs, and the corresponding effective linear attenuation coefficients were compared with the results of simulations. The purpose was twofold: (1) to study the attenuation properties of the materials and (2) to verify some aspects of the in-house developed Monte Carlo Radiotherapy Simulator (MCRTS) [1]. Moreover, more complicated simulation studies in rotational therapy were performed. Sometimes adequate radiation doses are difficult to attain because of the tumor's
proximity to the Organ At Risk (OAR), as in head and neck tumors with regional metastases, where the lymph nodes lie close to the spinal cord. A cylindrical attenuator inserted in the treatment beam can be used effectively to protect the spinal cord in rotational therapy, while a uniform dose in the Planning Target Volume (PTV) can be achieved by additionally using beam shapers [2, 3].

II. MATERIALS AND METHODS

A. Measurements of beam attenuation by different materials
The attenuators used in the experiments consisted of lead, cerrobend and brass, as well as of three polymer-metal composites: Ecomass® compound 1700TU96 (Ecomass Technologies), Gravi-Tech™ GRV-AS-110-W (PolyOne Corporation) and Technon®/Poly (Tungsten Heavy Powder, Inc.). The composite materials are based on different polymers, nylon 12, acrylonitrile butadiene styrene (ABS) and a polyurethane (PU) based resin, respectively, filled with tungsten powder to achieve high-specific-gravity compounds. The properties of the materials are shown in Table 1. The Ecomass samples were received from the manufacturer in the form of molded plaques. The Gravi-Tech samples were reshaped from hollow cylinders into planar plaques by warming with a hot-air gun and immediate pressing. The Technon/Poly plaques were produced manually from tungsten Technon® powder and the corresponding polymer, provided by the manufacturer. The tungsten powder was added to the polymer in half amounts while mixing, in a ratio of 96:4. For enhanced physical properties the plaques were placed at 66 °C for 4 hours for post-curing. The primary beam transmission through the materials was measured using an ionization chamber (PTW M23332, 0.3 cm3) connected to an electrometer (PTW DI4). The chamber was placed in a PMMA phantom (ρ=1.18 g/cm3) at a depth of 5 cm. The source-to-chamber distance was 100 cm, and the source-to-shadow-tray distance was 67 cm.
Measurements were carried out for 6 MV x-ray beams (Elekta SLi Plus, University Hospital of Patras, Greece).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 923–927, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The absorbers, large enough to cover the entire field, were placed on a tray and the beam transmission was measured while changing the thickness of the absorbers. The field size was 5 × 5 cm2 at 100 cm from the source for the measurements with lead, brass and cerrobend, and 4 × 4 cm2 for the polymer-metal composite materials. The effective attenuation coefficients were derived from the x-ray attenuation curves in the materials under the assumption that the attenuation curves are simply exponential.

Table 1 Materials used in the beam attenuation measurements

Material       Density, g/cm3   Z      Composition, (wt. %)
Lead           11.35            79     Pb (100)
Cerrobend      9.63             75.3   Sn:Bi:Pb:Cd (13.3:50:26.7:10)
Brass          8.90             29.4   Cu:Zn (66:34)
Ecomass        10.73            72.8   W:nylon 12‡ (97:3)
Gravi-Tech     8.15             72.2   W:ABS‡ (96:4)
Technon/Poly   8.17             72.1   W:[PU resin + PU curing agent]‡ (96:4)

‡Chemical formulas: nylon 12 (C12H23ON); ABS (C15H17N); [PU resin + PU curing agent] was modeled as consisting of 70% methylene bis(4-phenylisocyanate) (C15H10N2O2) and 30% polyurethane (C27H36N2O10).
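The derivation of an effective attenuation coefficient under the simple-exponential assumption I/I0 = exp(-μx) amounts to a least-squares fit of -ln(I/I0) against absorber thickness, which can be sketched as follows (the thickness/transmission pairs are synthetic, not the measured data):

```python
# Effective linear attenuation coefficient from a transmission curve,
# assuming simple exponential attenuation I/I0 = exp(-mu*x).
import math

def effective_mu(thickness_cm, transmission):
    """Least-squares slope of -ln(I/I0) versus thickness, in cm^-1."""
    y = [-math.log(t) for t in transmission]
    n = len(thickness_cm)
    mx = sum(thickness_cm) / n
    my = sum(y) / n
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(thickness_cm, y))
    sxx = sum((x - mx) ** 2 for x in thickness_cm)
    return sxy / sxx

# A synthetic curve generated with mu = 0.53 cm^-1 (about the value measured
# for lead) is recovered by the fit:
xs = [0.5, 1.0, 2.0, 3.0, 4.0]
ts = [math.exp(-0.53 * x) for x in xs]
mu_fit = effective_mu(xs, ts)
```

With real data the points scatter around the fitted line, and the quoted RMS error measures that deviation.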
B. Simulation studies for MCRTS verification
The experimental setup and the properties of the attenuating materials were reproduced, and computations of the doses under the different thicknesses of attenuators were performed using the MCRTS. Some difficulties were encountered in modeling the Technon/Poly material due to the lack of information about the chemical components of the PU curing agent, which constitutes 2% of the Technon/Poly; thus, some deviations from the measured data are expected. The effective linear attenuation coefficients for the simulated data were derived from the corresponding transmission curves. The number of photon histories was not less than 6.3×10^8, which was enough to keep the dose variance below one percent.
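The statistical character of such a simulation can be illustrated with a toy sketch of primary-photon transmission, far simpler than the MCRTS itself (mono-energetic, no scatter or build-up; the coefficient, thickness and history count below are illustrative, far below the 6.3×10^8 histories used in the study):

```python
# Toy Monte Carlo estimate of primary-photon transmission through an absorber:
# each history samples a free path from the exponential interaction law and
# the photon is counted as transmitted if the path exceeds the thickness.
import random

def transmitted_fraction(mu_cm, thickness_cm, histories, seed=1):
    """Fraction of primaries whose sampled free path exceeds the thickness."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(histories)
                 if rng.expovariate(mu_cm) > thickness_cm)
    return passed / histories

est = transmitted_fraction(0.53, 1.0, 100_000)
# est converges to the analytic transmission exp(-0.53) as the number of
# histories grows; the statistical variance shrinks in proportion to
# 1/histories, which is why the study needed so many histories.
```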
Table 2 Materials used in simulation studies with rotational geometry

Material                          Density, g/cm3   Z      Composition (wt. %)
Gold                              19.32            79     Au (100)
Tungsten                          19.30            74     W (100)
Lead                              11.35            82     Pb (100)
Tungsten powder filled nylon 6    12.00            72.7   W : C6H11ON (97 : 3)
Lipowitz's metal                  10.00            75     Sn : Bi : Pb : Cd (13.3 : 50 : 26.7 : 10)

C. Simulation studies of rotational irradiation

3D dose distributions in homogeneous water phantoms for rotational radiotherapy with cylindrical protectors of different parameters were calculated using the MCRTS. The physical properties of the attenuating materials used are summarized in Table 2. The diameter of the cylindrical water phantom was 15 cm. Protectors were positioned at the center of the field at different distances z from the isocenter, with diameters d ranging from 1.0 to 2.0 cm, and were rotated synchronously with the gantry. Dose distributions were obtained using photon fan beams with a source-to-surface distance (SSD) of 100 cm and a field size of 10 × 10 cm2 defined at the SSD.

III. RESULTS

D. Verification studies

A comparison of measured and simulated attenuation curves of the tested materials is presented in Fig. 1. The exponential curves fitted to the measured data are also shown. The values of the effective linear attenuation coefficients derived from the simulated and measured attenuation curves are summarized in Table 3. The RMS error shows the deviation of the simulated data from the exponential curve fitted to the measured data.

Table 3 Effective linear attenuation coefficients derived from the measured and simulated attenuation data in different materials for a 6 MV beam

Material       RMS error, %   Measured μ1, cm-1   Simulated μ2, cm-1   ε(μ)†, %
Lead           1.1            0.527±0.026         0.545±0.007          3.4
Cerrobend      1.3            0.431±0.009         0.442±0.009          2.6
Brass          1.0            0.349±0.008         0.361±0.003          3.4
Ecomass        1.2            0.516±0.022         0.513±0.034          0.6
Gravi-Tech     0.8            0.380±0.011         0.384±0.016          1.1
Technon/Poly   2.1            0.365±0.007         0.382±0.013          4.7

† ε(μ) = [(μ1 − μ2)/μ1] · 100
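The footnote formula can be checked directly against the tabulated coefficients; a quick sketch using the measured (μ1) and simulated (μ2) values from Table 3:

```python
def eps(mu1, mu2):
    """Relative deviation of simulated from measured mu, per the Table 3 footnote."""
    return (mu1 - mu2) / mu1 * 100.0

# Measured (mu1) and simulated (mu2) coefficients from Table 3, cm^-1
table3 = {
    "Lead":         (0.527, 0.545),
    "Cerrobend":    (0.431, 0.442),
    "Brass":        (0.349, 0.361),
    "Ecomass":      (0.516, 0.513),
    "Gravi-Tech":   (0.380, 0.384),
    "Technon/Poly": (0.365, 0.382),
}
for name, (m1, m2) in table3.items():
    # |eps| reproduces the tabulated eps(mu) column: 3.4, 2.6, 3.4, 0.6, 1.1, 4.7
    print(f"{name}: {abs(eps(m1, m2)):.1f} %")
```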
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Studies on the attenuating properties of various materials used for protection in radiotherapy

[Fig. 1 panels plot transmission I/Io (%) versus attenuator thickness (mm), measured and simulated points with exponential fits to the measured data: panel (a) lead, cerrobend and brass, field size 5 × 5 cm2; panel (b) Ecomass, Gravi-Tech and Technon/Poly, field size 4 × 4 cm2.]
Fig. 1. Comparison of the simulated and measured transmission data of primary photons of a 6 MV beam through (a) lead, cerrobend and brass and (b) metal-polymer composites.

E. Simulation studies

Figures 2(a) to 2(d) represent dose profiles along the phantom diameter for a field size of 10 × 10 cm2 for different protector materials, diameters and distances. Dose values were normalized to the dose of the open beam at the isocenter. The insets reproduce the configurations of the protectors for the presented dose distributions. Figure 2(a) shows dose distributions for gold protectors placed at z = 32 cm while the protector diameter is varied. Dose profiles for gold protectors of 1.46 cm in diameter, placed at different distances from the isocenter, are shown in Fig. 2(b). Figure 2(c) illustrates dose profiles for protectors made of different materials with the same diameter d = 1.60 cm, located at the same distance from the isocenter. Finally, in Fig. 2(d) dose distributions for gold protectors with the same attenuated field width dc at the isocenter are plotted.

IV. DISCUSSION

The comparison of simulated and measured attenuation curves gave entirely satisfactory results. The experimental and simulated data were found to be in good agreement, with an RMS error not higher than 2.1%. The maximal
discrepancy between the effective linear attenuation coefficients derived from the measured and simulated attenuation curves was 4.7%. This can be attributed to insufficient information about the chemical constituents of the curing agent of the Technon/Poly material, for which the highest error was observed. Lead, as expected, provides superior protection to cerrobend and brass. Among the studied composite materials, Ecomass showed the best attenuation properties; its effective linear attenuation coefficient is very close to that of lead. The other two materials of this group showed inferior protection properties because their densities were significantly lower than expected; the reason may lie in the processing procedures that these materials underwent. However, if a high density is provided (up to 12 g/cm3 is achievable), metal-polymer composites can rival lead in the protection of vital organs. The second part of our studies showed that dose distributions resulting from rotational irradiation with cylindrical protectors inserted in the megavoltage beam depend in a complicated way on all attenuator parameters. For the same protector material and z-coordinate, the dose at the center of the OAR decreases as the protector diameter (and therefore the attenuated field width) increases (see Fig. 2(a)), because the primary radiation and the scatter contribution from the phantom to the central part of the OAR are reduced. For the same protector material and diameter, the dose at the center of the OAR decreases if the z-coordinate of the protector increases, as illustrated in Fig. 2(b). Scatter from the attenuator decreases, as does the scatter contribution from the phantom, because of the larger attenuated field width at the isocenter. The effect of scattered photons at the center of the OAR is less pronounced, because they are absorbed before reaching the center of the attenuated field.
If the diameter and the z-coordinate are the same, the use of a denser protector material leads to a more pronounced decrease of the primary radiation, as shown in Fig. 2(c). Tungsten and gold provide comparable protection that is better than that of all the other materials. The Compton effect, which is proportional to the density of the material and independent of the atomic number Z, is the predominant mode of interaction for 6 MV photon beams. For gold, increased production of photoelectrons and electron pairs contributes to the absorption in addition to the Compton electrons, owing to its higher atomic number compared with tungsten. Dose distributions obtained with lead and tungsten-powder-filled nylon 6 protectors are similar as well: although lead has a slightly lower density, it has a higher atomic number. Cerrobend cannot rival the above-discussed materials in terms of protection. Finally, as shown in Fig. 2(d), for the same material and attenuated field width, the dose at the center of the OAR decreases if the protector's diameter increases, because of the increase in the primary beam attenuation.
T. Ivanova, G. Malatara, K. Bliznakova, D. Kardamakis and N. Pallikarakis

[Fig. 2 panels plot dose (%) versus distance (cm) across the phantom: (a) gold, z = 32 cm, d = 1.0, 1.6 and 2.0 cm; (b) gold, d = 1.46 cm, z = 38, 32 and 25 cm; (c) d = 1.6 cm (dc = 2.35 cm), z = 32 cm, materials Lipowitz's metal, lead, tungsten powder filled nylon 6, tungsten and gold; (d) gold, dc = 2.35 cm, with d = 1.46, 1.60 and 1.76 cm at z = 38, 32 and 25 cm, respectively.]
Fig. 2. Transverse dose profiles at the center of the cylindrical phantom for a 6 MV beam for different protector materials, diameters and distances from the isocenter. One parameter varies while the others are kept constant: (a) material and z = const, d varies; (b) material and d = const, z varies; (c) d and z = const, material varies; (d) material and dc = const, d and z vary.

Thus, concerning the optimal protectors for rotational radiation therapy, it was observed that gold and tungsten have the best beam-attenuating properties. Nevertheless, gold and tungsten are expensive and hard to fabricate. Lead, which is commonly used and provides adequate protection, is toxic and hazardous to handle. Polymer-metal composites can be a good choice as lead alternatives: they are not hazardous, are easy to handle and have a relatively high density (up to 12 g/cm3). Another solution could be a lead cylinder covered with a thin layer of a higher-density material, such as gold.

V. CONCLUSIONS

To achieve more extensive protection of the OAR, a protector of high density and larger diameter is preferable.
In the case of a cylindrical protector, however, an increase in diameter leads to an increase of the projected shadow, which is not always desirable. A solution can be to use a larger-diameter protector placed closer to the phantom.
ACKNOWLEDGEMENTS

The authors are grateful to Ecomass Technologies and PolyOne Corporation, and personally to R. Durkee and J. Chirico, for the donation of the sample materials. We would also like to acknowledge the European Social Fund (ESF) / Operational Program of Educational and Vocational Training II (EPEAEK II), and more specifically the Program PYTHAGORAS, for funding this work.
REFERENCES

1. Bliznakova K, Kolitsi Z, Pallikarakis N (2004) A Monte Carlo based radiotherapy simulator. Nucl Instr Methods B 222:445-461
2. Proimos BS (1966) Beam shapers oriented by gravity in rotational therapy. Radiology 87:928-932
3. Ivanova T, Bliznakova K, Pallikarakis N (2006) Simulation studies of field shaping in rotational radiation therapy. Med Phys 33:4289-4298
Address of the corresponding author:
Author: T. Ivanova
Institute: University of Patras, School of Medicine, Department of Medical Physics
City: Rio-Patras, 26500
Country: Greece
Email: [email protected]
The Cavitational Potential of a Single-leaflet Virtual MHV: A Multi-Physics and Multiscale Modelling Approach D. Rafiroiu1, V. Díaz-Zuccarini2, D.R. Hose2, P.V. Lawford2, A.J. Narracott2, R.V. Ciupa1 1
Technical University of Cluj-Napoca, Electrical Engineering Dept./Biomedical Engineering Center, Cluj-Napoca, Romania 2 University of Sheffield, Academic Unit of Medical Physics, Royal Hallamshire Hospital, Sheffield, UK
Abstract— A newly developed lumped parameter model of left ventricle contraction and a computational fluid dynamics (CFD) representation of the fluid structure interaction (FSI) of a single-leaflet mechanical heart valve are coupled to investigate the cavitation potential of a single-leaflet prosthetic heart valve. The left ventricle model gives a "more-realistic" representation of the cardiac muscle contraction, from the level of the contractile proteins up to the hemodynamics of the whole ventricle. A commercial finite volume CFD code (ANSYS-CFX) is coupled to the lumped parameter ventricle model through the inlet and outlet boundary conditions. Cavitation potential is evaluated from the negative pressure gradients occurring on the surface of the occluder and from vortex formation adjacent to its atrial aspect. Keywords— Cavitation, mechanical heart valve, left ventricle, lumped parameter model, fluid structure interaction.
I. INTRODUCTION Cavitation associated with closure of mechanical heart valves (MHV) has been linked to adverse events such as blood damage and, in extreme cases, to catastrophic failure. For this reason information is sought on cavitation potential of these devices [1]. It is believed that valve closure mechanics are a key factor [2]. Numerical modeling of heart valve dynamics is increasingly used as a tool for studying complex valve hydrodynamics. In the course of the design process, valves are computer designed and virtual prototypes are available at an early stage. There is a significant body of work describing the use of computational fluid dynamics (CFD) in this context [3-5]. Early simulations were limited to steady flow but advances in computational algorithms, have made it possible to take into account the continuous fluid-structure interaction (FSI) between the flow and the valve leaflets. Numerical simulations of 3D fluid structure interaction during valve closure have already been obtained using prescribed pressure boundary conditions [6]. However, physiologically representative models of the intricate biological processes that occur in the human body, especially in the heart, require a multi-scale approach. Models of the left ventricle (LV) contraction, taking into account both the mechanisms of contraction and hemodynamics of the LV, have become recently available [7]. Such
models can provide appropriate boundary conditions for detailed CFD models of local geometry. They bring great improvements in terms of the understanding of the interaction between the physics and the physiology. Specifically, in this case, the model of the LV contraction can aid investigation and understanding of the way that various physiological parameters affect the cavitation potential of a valve prototype. Although results are promising, this is an initial study and a formal parametric study will be conducted later. II. METHODS A. CFD model of the mitral valve For the purpose of the current study, an idealized CAD model of a single-disc MHV, consisting of a flat leaflet (the occluder) which moves inside a mounting ring (the housing) was built to represent the geometry of the valve (Fig. 1). The thickness of the disc-shaped occluder is h=1.5 mm and its diameter is d=22 mm. In the fully closed position, the gap between the mounting ring and the occluder is g=0.5 mm. The movement of the occluder is considered to be purely rotational, acting around an eccentric axis situated
Fig. 1 The idealized CAD model of a single-disc MHV
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 895–898, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
D. Rafiroiu, V. Díaz-Zuccarini, D.R. Hose, P.V. Lawford, A.J. Narracott, R.V. Ciupa
at 3.6 mm from its centroid (OX axis). The occluder travels through an arc of 70 degrees from the fully open to the fully closed position. Considering the valve to be mounted in the mitral position, the CAD model was completed with two cylindrical chambers representing the atrium and ventricle (Fig. 2). The atrial chamber, positioned in the positive OZ direction, is 66 mm in length, whilst the ventricular chamber, positioned in the negative OZ direction, is 22 mm in length. Only half of the valve was considered in the 3D model, to save computational resources. Blood is considered an incompressible Newtonian fluid with density ρ = 1100 kg/m3 and dynamic viscosity μ = 0.004 kg/m·s. The unsteady flow field inside the valve is described by the 3D equations of continuity and momentum, with the boundary conditions indicated in Fig. 2.
ρ ∂u/∂t + ρ (u · ∇)u = −∇p + μ Δu ;  ∇ · u = 0   (1)
The principle used to model the fluid-structure interaction is outlined in Fig. 3. The occluder rotates under the combined effects of the hydrodynamic, buoyancy and gravitational forces acting on it. At every time step, the drag, fQz, and lift, fQy, forces exerted by the flowing fluid on an arbitrary point Q of the leaflet's surface are reduced to the centroid, G. The total contributions of drag, FGz, and lift, FGy, are added to the difference between the gravitational and buoyancy forces (G − A). The condition for the conservation of angular momentum is imposed, resulting in the dynamic equation that describes the motion.
Fig. 3 The fluid-structure interaction and rotation of the leaflet

dθ = ω_old · dt + (M / (2J)) · dt²   (2)
In equation (2), dθ represents the leaflet's angular step, dt is the time step, ω_old is the angular velocity at the beginning of the current time step and M is the total moment acting on the leaflet from the external forces:

M = rG [ FGz sin θ + (FGy − G + A) cos θ ]   (3)
The total moment of inertia is J = 5×10⁻⁸ kg·m². It is generally accepted that the most important mechanisms initiating cavitation in MHVs are negative pressure gradients and vortex formation at the surface of the occluder [8]. As there is speculation that vortices are likely to occur after the initial impact between the occluder and the valve housing, leaflet rebound was also modeled. When, within the current time step, the new angular position drops below zero, a virtual spring with constant C = 10 N·m/degree is suddenly considered to be stretched between the fully open and the current position. The elastic force developed by the spring is added to the gravitational force, thus reversing the leaflet's rotation.
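Equations (2)-(3), together with the rebound spring, define a simple explicit time-stepping loop. The sketch below uses assumed constant loads (in the paper the forces FGz and FGy come from the CFD solution at every step, and the spring acts as a force; here the placeholder loads and the spring applied directly as a restoring moment are illustrative simplifications):

```python
import math

J = 5e-8      # total moment of inertia, kg*m^2 (value from the paper)
R_G = 3.6e-3  # distance of the rotation axis from the centroid, m
C = 10.0      # virtual rebound spring constant, N*m/degree

def total_moment(theta, f_gz, f_gy, g_minus_a):
    """Total external moment about the rotation axis, eq. (3)."""
    m = R_G * (f_gz * math.sin(theta) + (f_gy - g_minus_a) * math.cos(theta))
    if theta < 0.0:  # rebound: virtual restoring spring reverses the rotation
        m += C * (-math.degrees(theta))
    return m

def step(theta, omega, dt, f_gz, f_gy, g_minus_a):
    """Advance angle and angular velocity by one time step, eq. (2)."""
    m = total_moment(theta, f_gz, f_gy, g_minus_a)
    theta_new = theta + omega * dt + m / (2.0 * J) * dt * dt
    omega_new = omega + m / J * dt
    return theta_new, omega_new

# Closing sweep from the fully open position (70 deg) under a constant
# placeholder closing load; a real run would update the loads from CFD.
theta, omega = math.radians(70.0), 0.0
for _ in range(2000):
    theta, omega = step(theta, omega, dt=5e-5, f_gz=0.0, f_gy=-0.05, g_minus_a=0.0)
    if theta <= 0.0:  # leaflet has reached the housing
        break
print(theta <= 0.0)  # True: the constant closing moment shuts the valve
```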
B. The left ventricle model
Fig. 2 The CAD model and the boundary conditions
A complex boundary condition is used to represent the behavior of the left ventricle (LV). Figure 4 shows the sublevels of organisation of the ventricle that were taken into account to give a “more physiologically realistic” model of the LV. The connection between the LV model and the valve model is also illustrated in Figure 4. A constant pressure source is connected to the ventricle via the mitral valve
Fig. 5 Ventricular pressure (solid line) and volume (dotted line) vs. time
Fig. 4 Representation of the LV model and its different scales
(input model). The blood fills the LV (output model) and ejects a volume of blood into the arterial network via the aortic valve. Contraction in the cardiac muscle is described at a number of levels starting from the level of the contractile proteins (actin and myosin), following a hierarchical path, from the microscopic level up to the tissue (muscle level) and to the organ (LV) level, to finally reach the haemodynamic part of the LV and its arterial load. Complete details of the formulation of the model may be found in [7], [9].
Fig. 6 Ventricular pressure rate and instantaneous angular position of the valve, during the effective closure (ω≠0)
III. RESULTS

The following sequence of images describes the system response. Figure 5 illustrates the ventricular pressure and volume variation during one cardiac cycle, at a heart rate of 60 beats/min. The ventricle fills with blood, reaching a pressure of approximately 15 mmHg. Contraction causes a sharp rise in LV pressure and starts to close the mitral valve. Valve closure is strongly dependent on the rate of ventricular pressure increase and on the moment of inertia of the occluder. Fig. 6 illustrates both the rate of increase of ventricular pressure and the position history of the valve; the latter also highlights the bouncing motion of the occluder. The total duration of closure and rebound is 50 ms.
To investigate the cavitation potential of the valve, we examined the end of the closure period, searching for negative pressure values and evidence of vortex formation on the surface of the leaflet. More or less negative pressure values alternate on the surface of the leaflet, depending on the direction of movement. Fig. 7 clearly demonstrates the presence of both negative gradients and vortex flow during the backward (rebound) movement. During forward movement, pressures are less negative (Fig. 8). These observations are consistent with experimental findings [8] which, in the case of non-overlapping single-disc mechanical heart valves (the Medtronic-Hall valve, for example), show that fluid vaporization is localized on the atrial side of the valve, towards its upper (larger) gap.
Fig. 7 Pressure fields and vortex formations at the surface of the occluder and the housing, during rebound (at t = 0.0428 s)

Fig. 8 Pressure fields at the surface of the occluder and the housing, during forward movement (at t = 0.0457 s)

IV. CONCLUSIONS

This paper presents an initial step in the assessment of the cavitation risk for a candidate prototype valve. ANSYS-CFX proved to be a valuable tool in accomplishing this task, particularly when coupled to a user-defined numerical tool providing realistic boundary conditions. The methodology described may also be applicable to other bioengineering contexts, such as blood clotting or graft design. In the context of valve design, there is room for further refinements to the model, for example sourcing and implementing suitable interphase mass-transfer models. Other improvements, such as automatic remeshing, would be extremely useful to accelerate the solution process.

ACKNOWLEDGMENT

This work was partially funded by the Romanian University Research Council and the European Commission (Marie Curie Intra-European Fellowship: Project CCARES) and supported throughout the simulation phase by the ANSYS Europe division. Special thanks are extended to Dr. Ian Jones and Dr. Justin Penrose, from the ANSYS Europe division, for their valuable assistance.

REFERENCES

1. Brennen CE (2003) Cavitation in biological and bioengineering contexts. Fifth International Symposium on Cavitation, Osaka, Japan
2. Ashworth D, Behan A (2002) An Investigation of the Closure Mechanics of Mechanical Heart Valve Prostheses. BSc Dual Honours Thesis, University of Sheffield
3. Chandran KB, Dexter EU, Aluri S, Richenbacher WE (1998) Negative Pressure Transients with Mechanical Heart-Valve Closure: Correlation between In Vitro and In Vivo Results. Ann Biomed Eng 26(4)
4. Chandran KB, Dexter EU, Aluri S, Richenbacher WE (1998) Flow in Prosthetic Heart Valves: State-of-the-Art and Future Directions. Ann Biomed Eng 26(4)
5. Cheng R, Lai Y-G, Chandran KB (2004) Three-Dimensional Fluid-Structure Interaction Simulation of Bileaflet Mechanical Heart Valve Flow Dynamics. Ann Biomed Eng 32(11):1471-1483
6. Hose DR, Narracott AJ, Penrose JMT, Baguley D, Jones IP, Lawford PV (2006) Fundamental Mechanics of Aortic Valve Closure. J Biomech 39:958-967
7. Díaz-Zuccarini V, LèFevre J (2006) An Energetically Coherent Lumped Parameter Model of the Left Ventricle Specially Developed for Educational Purposes. Comput Biol Med, in press
8. Avrahami I, Rosenfeld M, Einav S, Eichler M, Reul H (2000) Can Vortices in the Flow Across Mechanical Heart Valves Contribute to Cavitation? Med Biol Eng Comput 28:93-97
9. LeFevre J, LeFevre L, Couteiro B (1999) A bond graph model of chemo-mechanical transduction in the mammalian left ventricle. Simul Practice and Theory 7(5-6):531-552

Address of the corresponding author:
Author: Dan Rafiroiu
Institute: Cluj-Napoca Technical University
Street: 15 C. Daicoviciu
City: Cluj-Napoca
Country: Romania
Email: [email protected].
Time-Frequency behaviour of the a-wave of the human electroretinogram R. Barraco, L. Bellomonte and M. Brai Dipartimento di Fisica e Tecnologie Relative, Università di Palermo, Sez. CNISM-CNR, Italia
Abstract— The electroretinogram is the record of the electrical response of the retina to a light stimulus. Its two main components are the a-wave and the b-wave; the former is related to the early photoreceptoral activity. The aim of this paper is to acquire useful information about the time-frequency features of the human a-wave by means of wavelet analysis, which represents a proper approach for dealing with non-stationary signals. We have used the Mexican hat as the mother wavelet. The analysis, carried out for four representative values of the luminance, comprises the frequency dependence of the wavelet variance and the skeleton. The results indicate a predominance of low frequency components; their time distribution depends on the luminance, whereas that of the high frequency components is little affected by it. Keywords— a-wave, ERG, wavelet analysis, Mexican hat, wavelet variance, skeleton.
I. INTRODUCTION

The electroretinogram (ERG) is the record of the light-induced retinal response. It is a composite signal consisting of a sequence of time-delayed potentials (early response, a-, b-, c-wave, late potentials, etc.) originating in different retinal layers [1-2]. The a-wave is the ERG component analysed here; it lasts about 30 ms (Figure 1).
The a-wave appears in the ERG as a small negative potential characterised by two dips, denoted a1 and a2. They are attributed to the contributions of the photoreceptoral activities ascribed to the cones and the rods, respectively [1-3]. The dips shift toward greater times, and their relative weight changes, as the luminance is reduced. The purpose of the present paper is to analyse the temporal features of the frequency content of the human a-wave. Like most biological signals it is non-stationary, so its frequency components have time-dependent features. A simple Fourier analysis is not appropriate, since it contains only time-averaged information; transients and the time location of specific features are obscured. This limitation can be bypassed by a more sophisticated approach based on wavelet analysis (WA) [4-6], which produces a time-frequency decomposition of the signal, separating its individual frequency components. This signal-processing method operates as a mathematical microscope, able to reveal the features of the inhomogeneous processes underlying the early stages of the photoreceptoral response; it allows us to evaluate the values and times of occurrence of the frequency components present in the retinal response. The Mexican hat (MhW) is used in the present investigation as the mother wavelet.

[Fig. 1 plots the retinal response (μV) versus time (ms) for F = 1.5, 0.5, 0 and -1.]

Fig. 1 Temporal behaviour of the a-wave recorded under different luminance conditions. F indicates the value of log(I/I0); I0 is the standard luminance (1.7 cd s m-2). The signals are truncated at 31.25 ms.

II. MATERIALS AND METHODS

A. ERG acquisition
A sample of 10 normal ERGs was selected, belonging to subjects not affected by ocular diseases, with visual acuity of 20/20 (range ±2) and negligible differences between the two eyes. They were recorded by means of Henke's corneal electrodes following routine methods, in accordance with the standards for clinical electroretinography [7]. The indifferent electrodes were frontal, with the ground electrode on the forehead. Electrode impedance was kept under 5 kΩ. Stimuli were stroboscopic white flashes with a duration smaller than 50 μs, presented in a Ganzfeld integrating sphere of 40 cm diameter. The pupils of the subjects were dilated (pupil diameter ≥ 7 mm) with N-ethyl-(α-picolyl) tropicamide 1%, and the cornea was anaesthetised using oxibuprocaine hydrochlorate 4%. The subjects were then dark-adapted for 30 minutes. The standard luminance, denoted I0, was 1.7 cd s m-2; it was varied in steps of 0.5 log units. In order to simplify the labels in the figures, we have indicated the luminance I in terms of log(I/I0). The filter and amplifier requirements were in line with the ISCEV standards [8]. Data were sampled at a frequency of 4096 Hz and stored as ASCII files for subsequent retrieval and analysis. Each clinical exam consisted of 11 traces (covering a time interval of 500 ms with sampling time δt = 0.244140625 ms). Each trace was obtained by averaging at least 3 responses. The traces in Figure 1 refer to the subject whose ERG was the closest to the average.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 919–922, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
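The sampling figures quoted above are mutually consistent: at 4096 Hz the sampling time is 1/4096 s, and a 128-sample analysis window (used below for the a-wave) spans exactly 31.25 ms. A one-line check:

```python
fs = 4096.0              # sampling frequency, Hz
dt_ms = 1000.0 / fs      # sampling time in ms
n_samples = 128          # samples per analysed a-wave segment
print(dt_ms, n_samples * dt_ms)  # 0.244140625 31.25
```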
R. Barraco, L. Bellomonte and M. Brai
B. Main features of the a-wave

The a-wave reflects the functional integrity of the retinal photoreceptors. It contains two contributions, one ascribed to the cones (a1), the other to the rods (a2). Cones and rods behave differently [9-10]. Rods (about 120 million) are sensitive to dim light and predominate over the cones (about 6-7 million), which are sensitive to bright light. In conditions of high luminance a1 is predominant with respect to a2. At intermediate luminance a1 decreases, a2 increases and, apparently, the two dips tend to coalesce into a single minimum. At low luminance a2 tends to predominate. This behaviour can be interpreted as a switch from a cone-based response to a rod-based response as the luminance is reduced. Also relevant is the translation of the response toward higher times. These facts are evident in Figure 1, which displays the a-wave recorded under different luminance values. The characteristic features of the a-wave, such as implicit time and amplitude, depend on eye adaptation and on the brightness and frequency of the light stimulus. A suitable variation of these parameters makes it possible to separate the rod and cone contributions. The retinal photoreceptors have different colour sensitivity, threshold and recovery time. The time interval of interest in the present work lasts 31.25 ms after the stimulus and contains 128 values. This choice represents a compromise, as the cut-off time of the a-wave is not exactly measurable, since its final segment partially overlaps with the onset of the successive b-wave. Figure 1 shows four a-wave traces representative of the main features of their luminance dependence; the values of log(I/I0) are 1.5, 0.5, 0.0 and -1.0.

C. Main features of wavelet analysis

The WA has recently acquired an important role in signal processing for its ability to detect relevant signal features in a time-frequency diagram, through a proper use of suitably localised, time-limited functions. This method, in fact, employs windows of different size and time location for eliciting the various frequency components. Two types of wavelet transform are currently used: the discrete wavelet transform and the continuous wavelet transform (CWT). The first fits in with standard signal filtering and decoding methodologies, but exhibits coarse time-frequency resolution. The second allows higher resolution in the time-frequency plane, which is necessary for an accurate identification and partitioning of the signal's characteristic features. For these reasons the second type is more adequate for our aim. Given a mother wavelet ψ, the CWT of a generic signal x(t) is defined according to

WT(τ, σ) = (1/√σ) ∫_{−∞}^{+∞} x(t) ψ*((t − τ)/σ) dt   (1)
In usual implementations, the CWT is discretized and computed over a discrete σ-τ grid, where σ and τ are the scale and the translation, respectively. Since in many practical applications the signal of interest is time limited, the CWT is implemented by the following expression:

WT(τ, σ) = (1/√σ) Σ_{n=0}^{N−1} x(t_n) ψ*((t_n − τ)/σ) δt   (1a)
where x(t_n) is the nth value of the sampled signal (the a-wave in our context), ψ*[(t_n − τ)/σ] is the complex conjugate of the mother wavelet, N is the number of samples (128) constituting the signal, and δt is the sampling time (i.e. the time resolution of the signal). The MhW is defined according to:

MhW(t, τ, σ) = (1/√(2π)) [1 − ((t − τ)/σ)²] exp(−((t − τ)/σ)²/2)   (2)
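Equations (1a) and (2), together with the wavelet variance of equation (3) below, translate directly into a few lines of code. A minimal sketch on a toy signal (a single Gaussian dip loosely mimicking an a-wave; the helper names and the 1/√σ normalization are assumptions of this illustration, not the authors' code):

```python
import math

def mhw(t, tau, sigma):
    """Mexican hat mother wavelet, eq. (2)."""
    u = (t - tau) / sigma
    return (1.0 / math.sqrt(2.0 * math.pi)) * (1.0 - u * u) * math.exp(-0.5 * u * u)

def cwt(x, dt, tau, sigma):
    """Discretized CWT, eq. (1a); the MhW is real, so its conjugate is itself."""
    return sum(x[n] * mhw(n * dt, tau, sigma) for n in range(len(x))) * dt / math.sqrt(sigma)

def variance(x, dt, sigma, taus):
    """Wavelet variance, eq. (3), approximated on a discrete tau grid."""
    return sum(abs(cwt(x, dt, tau, sigma)) ** 2 for tau in taus) * (taus[1] - taus[0])

# Toy signal: 128 samples, one smooth negative deflection centred at 15 ms
dt = 0.244140625  # ms, the sampling time used for the recordings
x = [-math.exp(-(((n * dt) - 15.0) / 3.0) ** 2) for n in range(128)]
taus = [k * dt for k in range(128)]

e_fine = variance(x, dt, 2.0, taus)    # small scale = high frequency
e_coarse = variance(x, dt, 8.0, taus)  # large scale = low frequency
print(e_coarse > e_fine)  # True: a slow deflection stores its energy at coarse scales
```

For the real a-wave the variance would be evaluated over a grid of scales and converted to frequency with the wavelet-dependent factor, as done in the paper.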
It has the shape of the second derivative of a Gaussian. It is characterised by poor time localization but good frequency localization; the latter is a consequence of its spectral distribution, which exhibits a single peak.

III. RESULTS

The way of presenting the results in WA is important, because it can simplify their reading and interpretation. We accordingly present the results in terms of the wavelet variance and the patterns of local extremum lines (the skeleton). The wavelet variance, defined according to

E(σ) = ∫_{−∞}^{+∞} |WT(τ, σ)|² dτ   (3)
reflects the relative contributions from different scales to the total energy and reveals the energy distribution over the scales. It is convenient to use the frequency f instead of the scale σ. The conversion factor between σ (ms) and f (Hz) depends on the mother wavelet being used; in the present case it is

f_MhW = 1030.55 / σ   (4)

Figure 2 shows the wavelet variance relative to the traces plotted in Figure 1. The maximum identifies the dominant frequency, which in the present case occurs at 31.25 Hz. There is a drastic reduction of the higher frequency components, as indicated by the narrowness of the curve: the variance falls to 1/2 and to 1/10 of its peak value when the frequency increases to 50 Hz and to 103 Hz, respectively. Neither the peak location nor the width depends on the luminance.

[Fig. 2 plots the wavelet variance versus frequency (0-350 Hz) for F = 1.5, 0.5, 0.0 and -1.0.]

Fig. 2. Frequency dependence of the wavelet variance, relative to the traces of Figure 1

Figure 3 reports the skeleton, which displays the pattern of the local extremum lines obtained by evaluating the extreme values (maxima and minima) of equation (1a). The lines corresponding to the minima are the most meaningful, since they occur in the central region of the temporal range; they allow us to locate the time of occurrence of each frequency component. A reduction of luminance determines a shift toward higher times, in agreement with the behaviour of the temporal signal. No appreciable changes occur in the high frequency components, in the sense that their frequency distribution does not depend on time, whereas the lower ones are appreciably affected, as commented on in the next section. The lines corresponding to the maxima of the coefficients are significant only in the case of high luminance; at low luminance they lie close to the temporal cut-off. In the range of significance, the high frequency behaviour is similar to that observed for the minima, whereas the low frequency behaviour is different, in the sense that the times of occurrence are shorter.

[Fig. 3 plots the local extremum lines in the frequency (Hz) versus time (ms) plane, maxima and minima for each luminance.]

Fig. 3. Patterns of local extremum (maxima and minima) lines relative to the traces of Figure 1

IV. DISCUSSION

The results for the variance and the skeleton prove that the MhW is appropriate for analysing the a-wave. Figure 2 indicates that the frequency range covered by the a-wave is very limited and concerns the low frequencies. The information deducible from Figure 3 clarifies the nature of the process under analysis. The skeleton, giving the time evolution of the frequency components of the non-stationary signal, provides a global vision of its characteristic features. The principal result concerns the time-frequency behaviour of the a-wave and its dependence on the luminance. To explain this fact, we must consider that the response of rods and cones is conditioned by many factors, such as adaptation and the frequency and intensity of the light stimulus. In conditions of intense light the response is cone based; if the intensity of the illumination is reduced, the contribution switches toward that of the rods. According to the results reported here, both responses have a similar high frequency content, but their low frequency composition is different. In fact, in conditions of high luminous intensity the higher frequencies tend to set up earlier than the lower ones, whereas in conditions of low luminance there is an inversion of tendency: the low frequency components set up earlier than the higher ones. This fact, noticeable as an inversion of curvature in the skeleton lines, evidences another differentiation characterising the two types of photoreceptors: the predominant low frequency response of the cones is different from that of the rods.
Figure 1. The variance of the weakest signal is multiplied by 10.
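The wavelet-variance computation of equations (3) and (4) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the test signal, scale grid and 1/s wavelet normalisation are assumptions, and the scale-to-frequency constant 1030.55 of equation (4) is specific to the authors' mother-wavelet convention.

```python
import numpy as np

def mexican_hat(u):
    # second derivative of a Gaussian (Mexican hat), unnormalised
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

def cwt_mh(signal, scales, dt):
    # WT(tau, sigma): direct-convolution CWT with 1/s normalisation,
    # under which the variance of a pulse peaks at a finite scale
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    W = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        psi = mexican_hat(t / s) / s
        W[i] = np.convolve(signal, psi[::-1], mode="same") * dt
    return W

def wavelet_variance(W, dt):
    # eq. (3): E(sigma) = integral over tau of |WT(tau, sigma)|^2
    return np.sum(W**2, axis=1) * dt

# illustrative a-wave-like negative transient, sampled at 1 kHz (dt = 1 ms)
dt = 1.0e-3
t = np.arange(0.0, 0.1, dt)
sig = -np.exp(-((t - 0.02) / 0.008) ** 2)     # trough near 20 ms
scales = np.linspace(1.0e-3, 40.0e-3, 80)     # sigma, in seconds
E = wavelet_variance(cwt_mh(sig, scales, dt), dt)
sigma_peak = scales[np.argmax(E)]             # dominant scale of the transient
```

With the paper's convention, the dominant frequency would then follow from equation (4) as f = 1030.55/σ_peak for σ_peak expressed in ms.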
R. Barraco, L. Bellomonte and M. Brai
V. CONCLUSION

Wavelet analysis is a powerful and promising method for studying the time and frequency characteristics of non-stationary signals of various natures. The results reported here validate the usefulness of this method in processing a biomedical signal. In this context, WA can be considered an effective investigation tool for highlighting some hidden features of the human a-wave. The possibility of describing the time-frequency behaviour of the photoreceptors allows us to acquire more information about the role and weight of rods and cones in the early stages of phototransduction. Further information can be deduced by analysing ERGs recorded in different conditions, such as manipulating the adaptation level, the background illumination, the flash intensity, the frequency of the light and the rate of stimulation. Under suitable conditions, rod and cone activities can be separated; we plan to consider this aspect as a next step. The use of wavelet analysis in the field of ocular electrophysiology is relatively new; therefore some aspects, such as the choice of an appropriate mother wavelet and the values of the scale parameters and delay times, need further investigation in order to enhance the performance of this signal processing method. In particular, it is hoped that some controversial aspects concerning the typology of the processes underlying the early retinal response may be clarified using the information deduced from the wavelet analysis. In conclusion, the results reported here indicate that cones and rods have different time-frequency characteristics. This information adds to the already known fact that they have different luminance sensitivities.
REFERENCES

1. Granit R (1933) The components of the retinal action potential in mammals and their relation to the discharge in the optic nerve. Journal of Physiology (London) 77:207-239
2. Tomita T (1950) Studies on the intraretinal action potential. I. Relation between the localization of micropipette in the retina and the shape of the intraretinal action potential. Japanese Journal of Physiology 1:110-117
3. Lamb TD, Pugh EN (1992) A quantitative account of the activation steps involved in phototransduction in amphibian photoreceptors. Journal of Physiology 719-739
4. Astaf'eva NM (1996) Wavelet analysis: basic theory and some applications. Physics-Uspekhi 39:1085-110
5. Rioul O, Vetterli M (1991) Wavelets and signal processing. IEEE SP Magazine 14-38
6. Torrence C, Compo GP (1998) A practical guide to wavelet analysis. Bulletin of the American Meteorological Society 79(1):61-78
7. Marmor MF et al. (1989) International Standardization Committee. Standard for clinical electroretinography. Arch Ophthalmol 107:816-819
8. Marmor MF, Holder GE, Seeliger MW, Yamamoto S (2004 update) Standard for clinical electroretinography. Documenta Ophthalmologica 108:107-114
9. Burns ME, Lamb TD (2003) Visual transduction by rod and cone photoreceptors. In: Chalupa LM, Werner JS (eds) The Visual Neurosciences, pp 215-233
10. Hood D, Birch DG (1990) The relationship between models of receptor activity and the a-wave of the human ERG. Clin Vision Sci 5(3):293-297

Author: Leonardo Bellomonte
Institute: Dipartimento di Fisica e Tecnologie Relative, Università di Palermo
Street: Viale delle Scienze, Edificio 18
City: Palermo
Country: Italy
Email:
[email protected]
Verification of planned relative dose distribution for irradiation treatment technique using half-beams in the area of field abutment

R. Hudej
Institute of Oncology Ljubljana, Ljubljana, Slovenia

Abstract— The aim of the study was to assure proper treatment when using pairs of opposed abutting half-beams. Different beam arrangements were studied with the treatment planning system. The delivered dose in the region of beam abutment was measured with film dosimetry and compared to the planned dose. A noticeable dose difference was found in an area of a few millimetres around the abutment of the half-beams. The dose difference, resulting from inaccurate positioning of the Y jaws, was up to 22%. The displacement of the jaws was evaluated and the position of the jaw was corrected.

Keywords— film dosimetry, half-beams, dose verification.
I. INTRODUCTION

High energy photon beams produced by linear accelerators are used in the treatment of neck carcinoma. At present, the predominant practice is the three-dimensional (3-D) conformal technique. The disadvantage of this technique is that it cannot irradiate the target volumes around the spinal cord without also irradiating the spinal cord, which is thus exposed to the same dose. In 2006, a 3-D conformal parotid gland-sparing irradiation technique for bilateral neck treatment (ConPas) [1] was introduced at the Institute of Oncology Ljubljana. In the teletherapy plan, a large AP field and two pairs of opposed abutting wedged oblique half-beams were used. The half-beams were created by closing one of the length jaws to the centre of the field. The advantage of such a plan is adequate coverage of a U-shaped target volume while sparing the spinal cord. The geometric accuracy of the linear accelerator is 1 mm for conventional conformal plans. However, the use of opposed half-beams requires very accurate length jaw positioning. Inaccuracies in the length jaw positioning can result in an over- or under-dosage in the area where one half-beam abuts upon the other. The purpose of our study was to determine the difference between the planned and the delivered dose in this area and to assure proper treatment of patients. Several quality control (QC) plans with different half-beam setups were studied. The dose distribution of the delivered dose was measured with relative film dosimetry and compared to
the dose distribution of the planned dose. The positional displacement of each length jaw was evaluated by simulating jaw displacements in the treatment planning system.

II. MATERIALS AND METHODS

For the purposes of our study, a Plastic Water™ phantom (cream coloured) in the form of slabs of size 30 × 30 cm² was used. The thicknesses of the slabs varied from 1 to 4 cm. The electron density of Plastic Water™ is almost equivalent to the electron density of water and of human soft tissue. The conditions inside the patient's body were therefore reproduced to an acceptable level. The phantom was scanned with a Philips MX 8000 D spiral computed tomography (CT) scanner with a slice thickness of 3.2 mm and an increment of 3.2 mm. Equal scanning parameters are used also in clinical scanning of the head and neck region of patients. The scan was performed twice, once with the slabs lying parallel to the transverse plane and once with the slabs lying parallel to the coronal plane. This allowed the relative film dosimetry to be done in two orthogonal planes. The 3-D image data were transferred to the 3-D treatment planning system (TPS) XiO version 4.3.3, manufactured by the Computerised Medical Systems (CMS) company. Two groups of QC plans were created. The isocentric planning technique and the superposition dose calculation algorithm were used for all plans.
Fig. 1 Schematic representation of beam arrangement for the first group of QC plans with two half-beams with the same gantry angle (left) and the second group of QC plans (right). The horizontal line marks the isocentric plane.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 883–886, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
In the first group, the QC plans were made by using the CT images of the phantom slabs parallel to the coronal plane. The isocenter of the linear accelerator was placed in the plane at a depth of 2 cm. Two equally weighted abutting half-beams with the same isocenter were used. The gantry angle and collimator angle were set to 0 degrees for all beams. The first beam had the length jaw Y1 closed to the center of the field, while the second beam had the length jaw Y2 closed to the center (Figure 1, left). Together, they formed a symmetric square field. Another plan was made in which the width jaws X1 and X2 were used to create half-beams in a similar way. Detailed beam data for both plans are given in Table 1. The names of the plans correspond to the name of the jaw pair that was used to create the half-beams.

In the second group of QC plans, the same set of CT images was used as in the first group. The isocenter of the linear accelerator was placed in the centre of a 10 cm high phantom. Two equally weighted opposed abutting half-beams with the same isocenter were used. Both half-beams had the same jaw closed to the center, each beam irradiating the opposite part of the phantom (Figure 1, right). Four such QC plans were made, one for each collimator jaw. Detailed beam data for these plans are given in Table 2. The names of the plans correspond to the name of the jaw that was used to create the half-beams. The monitor units for all beams in both groups of plans were chosen to deliver a dose appropriate for the relative film dosimetry to the isocentric plane.

Table 1  Beam data for the first group of QC plans

Plan  Beam  X1     X2     Y1     Y2
Y     1     10 cm  10 cm   0 cm   5 cm
Y     2     10 cm  10 cm   5 cm   0 cm
X     1      0 cm   5 cm  10 cm  10 cm
X     2      5 cm   0 cm  10 cm  10 cm

Table 2  Beam data for the second group of QC plans

Plan  Beam  Gantry  Coll.  X1     X2     Y1     Y2
Y2    1       0°     90°   10 cm  10 cm   5 cm   0 cm
Y2    2     180°     90°   10 cm  10 cm   5 cm   0 cm
Y1    1       0°     90°   10 cm  10 cm   0 cm   5 cm
Y1    2     180°     90°   10 cm  10 cm   0 cm   5 cm
X2    1       0°      0°    5 cm   0 cm  10 cm  10 cm
X2    2     180°      0°    5 cm   0 cm  10 cm  10 cm
X1    1       0°      0°    0 cm   5 cm  10 cm  10 cm
X1    2     180°      0°    0 cm   5 cm  10 cm  10 cm

The final QC plan was a technique-specific QC plan. It had the same beam arrangement as the treatment plans for bilateral treatment of U-shaped targets in the head and neck area. Two pairs of opposed oblique wedged half-beams and a large AP field (Figure 2) were used. The phantom slabs were positioned parallel to the transverse plane and the isocenter of the linear accelerator was set in the center of the scanned phantom. The weights of the beams were chosen to achieve a dose homogeneity between –5% and +7% [2, 3]. The beam data of all QC plans were exported to the Impac Multi Access record-and-verify software.

Kodak X-OMAT V diagnostic films were placed between the slabs of Plastic Water™, either on the transverse or the coronal plane. A Varian 2100 C/D linear accelerator producing high-energy photon beams of 6 MV with a dose rate of 300 MU/min was used to irradiate the phantom according to the planned data. The films were developed with a Kodak X-OMAT M43 processor. The matching of the planned and irradiated dose distributions was first checked visually. The visual check revealed a relevant discrepancy between the planned and the delivered dose in the area where one half-beam abuts upon the other. The films were then scanned with a CMS DynaScan laser densitometer using DynaScan software version 2.01. The iris aperture of the laser was 0.25 mm, which was satisfactory for our study. The background was defined on an area of the film that was not irradiated and located at least 4 cm away from the irradiated area. Several one-dimensional scans were made for each irradiated film, perpendicular to the abutment of the two half-fields. All scans for each film were centred and averaged. Only a region of a few centimetres around the centre was studied. The planned and scanned dose distributions were exported to a Microsoft Excel spreadsheet and a quantitative comparison was made. The average under- or over-dosage at the centre and its standard deviation (1SD) was estimated.
The width of the under- or over-dosage was measured at the half of the maximum dose difference as the full width at the half maximum/minimum (FWHM).
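The scan analysis just described (average the centred scans, take the percent difference from the plan, read off the extremum and its FWHM) can be sketched as follows. The profile shapes and grid below are synthetic stand-ins, not the measured film data.

```python
import numpy as np

def junction_stats(scans, planned, x):
    # average several centred 1-D film scans, express the deviation from the
    # planned profile in percent, and return the extremum together with its
    # full width at half maximum/minimum (FWHM)
    diff = (np.mean(scans, axis=0) - planned) / planned * 100.0
    i0 = np.argmax(np.abs(diff))               # extremum at the abutment
    peak = diff[i0]
    inside = np.abs(diff) >= abs(peak) / 2.0   # region above half the extremum
    fwhm = x[inside][-1] - x[inside][0]
    return peak, fwhm

# synthetic example: a 22% Gaussian-shaped under-dosage dip at the centre
x = np.linspace(-10.0, 10.0, 2001)             # mm
planned = np.ones_like(x)
dip = 1.0 - 0.22 * np.exp(-x**2 / (2.0 * 1.5**2))
scans = np.stack([dip, dip, dip])              # three identical 'scans'
peak, fwhm = junction_stats(scans, planned, x)
```

For this synthetic dip the routine returns a −22% extremum with an FWHM of about 3.5 mm (the Gaussian FWHM, 2.35 × 1.5 mm), the same kind of numbers reported in Table 3.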
Fig. 2 Schematic representation of the beam arrangement of the technique-specific QC plan. The horizontal line marks the isocentric plane. The dose distribution was measured 5 cm above the isocentre (dashed line).
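A toy model of the abutment makes the sensitivity to jaw position concrete: if each half-beam edge is modelled as an erf-shaped penumbra, a jaw displacement of about a millimetre already produces a junction dose error of the order measured in this study. The penumbra width (sigma = 3 mm) is an assumed value for illustration, not a measured machine parameter.

```python
import numpy as np
from math import erf as _erf

erf = np.vectorize(_erf)

def half_beam(x, edge, side, sigma=3.0):
    # idealised half-beam lateral profile: erf-shaped penumbra at the
    # closed-jaw edge; side=+1 irradiates x > edge, side=-1 irradiates x < edge
    return 0.5 * (1.0 + side * erf((x - edge) / sigma))

x = np.linspace(-20.0, 20.0, 801)                          # mm across the junction
matched = half_beam(x, 0.0, -1) + half_beam(x, 0.0, +1)    # perfect match: sums to 1
gap = 1.0                                                  # second jaw 1 mm short (mm)
mismatched = half_beam(x, 0.0, -1) + half_beam(x, gap, +1)
underdose = 100.0 * (1.0 - mismatched.min())               # % deficit at the junction
```

In this sketch a 1 mm gap yields roughly an 18-19% central under-dosage, the same order of magnitude as the largest discrepancies measured with the films.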
III. RESULTS

The visual check of the irradiated films revealed unexpected dose characteristics in the region of a few millimetres around the centre of the irradiated area for all QC plans. The results from the relative film dosimetry are given in Table 3. Graphical comparisons between the planned and the delivered relative dose distributions for the plans Y, Y1 and Y2 are shown in Figure 3.

Fig. 3 Comparison between the planned (dashed line) and the delivered relative dose distribution (solid line) for the plans Y (top), Y1 (middle) and Y2 (bottom). For the plans Y and Y2, the delivered relative dose distribution after the Y2 jaw calibration is also given (dotted line). The dose distributions are normalised to the planned dose at the centre.

Table 3  Mean value of the maximum dose difference between the planned and the delivered dose distributions, with the corresponding 1SD and FWHM

Plan  Dose difference  1SD  FWHM
Y     −13%             1%   3 mm
Y1     +7%             3%   5 mm
Y2    −22%             2%   3 mm
X       0%             /    /
X1    −25%             7%   3 mm
X2    −10%             7%   2 mm

IV. DISCUSSION

The results clearly showed that the Y jaws were not positioned accurately enough. The result of QC plan Y showed that the Y jaws were too close to each other, so that there was a gap in the dose at the centre, but it told nothing about the accuracy of each separate jaw. The QC plan Y1 showed that the jaw Y1 was slightly misaligned, but still acceptably so. The QC plan Y2, however, showed an unacceptable discrepancy between the planned and the measured dose. It was decided that a calibration of the Y2 jaw was needed. Simulating the jaw displacements with the treatment planning system suggested that an approximately 1 mm shift of the Y2 jaw away from the centre of the field was needed to produce the desired result. The calibration was done by a certified Varian technician for the Y2 jaw only. After the calibration, the relative film dosimetry was repeated for the QC plans Y and Y2. This time, the dose distribution for QC plan Y2 was consistent with the planned dose distribution. There were still some discrepancies between the planned and the delivered dose distributions for QC plan Y, but these were attributed to the slight misalignment of the Y1 jaw. The treatment-specific QC plan was made after the jaw calibration. The delivered dose distribution was consistent with the planned dose distribution within the 3% error, which is an accepted standard for TPS accuracy. The differences in the dose distributions are shown in Figure 4. The results from the QC plan X showed that the jaws were in the correct position with regard to each other. The results from the QC plans X1 and X2 imply that each of the X jaws is shifted towards the centre of the beam, which is in contradiction with the conclusion from plan X. The reasons for this contradiction are yet to be studied. Fortunately, the X jaw pair is not used to create half-beams in the treatment of patients.
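The magnitude of the required jaw shift can also be cross-checked with a back-of-the-envelope calculation: in an erf-penumbra picture of the junction, a gap d between the jaw edges leaves a central deficit of erf(d/(2σ)), which can be inverted numerically. This is an illustrative alternative to the TPS simulation actually used, and σ = 3 mm is an assumed penumbra width, not a measured one.

```python
from math import erf

def gap_from_deficit(deficit, sigma=3.0):
    # solve erf(d / (2*sigma)) = deficit for the junction gap d (mm)
    # by bisection on the monotonically increasing erf
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if erf(mid / (2.0 * sigma)) < deficit:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = gap_from_deficit(0.22)   # invert the measured 22% central deficit
```

Under these assumptions the 22% deficit corresponds to a gap of roughly 1.2 mm, consistent with the approximately 1 mm jaw shift suggested by the planning-system simulation.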
Fig. 4 Comparison between the planned (dashed line) and the delivered relative dose distribution (solid line) for the treatment-specific QC plan. The dose distributions are normalised to the planned dose at the centre.

It was also noticed that the width of the dose error was not constant across the field for the QC plans in the second group. Possible reasons could be an error in the collimator position or some angular displacement of the jaws. This is also a subject for further study.

V. CONCLUSION

The introduction of new treatment techniques, such as ConPas, requires thorough testing before they are used in clinical practice. QA/QC dosimetry checks are needed. Relative film dosimetry proved to be a reliable tool for detecting dose distribution errors, especially those of small dimensions.

ACKNOWLEDGMENT

The study was done partly within the national programme Development and Evaluation of New Approaches to Cancer Treatment, P3-0003(D), financially supported by the Slovenian Research Agency (ARRS).

REFERENCES

1. Wiggenraad R, Mast M, van Santvort J, Hoogendoorn M, Struikmans H (2005) ConPas: a 3-D conformal parotid gland-sparing irradiation technique for bilateral neck treatment as an alternative to IMRT. Strahlenther Onkol 181:673-682
2. International Commission on Radiation Units and Measurements (ICRU) (1993) Prescribing, Recording and Reporting Photon Beam Therapy. ICRU Report 50. ISBN 0-913394-48-3
3. International Commission on Radiation Units and Measurements (ICRU) (1999) Prescribing, Recording and Reporting Photon Beam Therapy (Supplement to ICRU Report 50). ICRU Report 62. ISBN 0-913394-61-0

Address for correspondence:
Rihard Hudej
Department of Radiophysics
Institute of Oncology Ljubljana
Zaloska cesta 2
SI-1000 Ljubljana
Slovenia
Phone: (+386) 1 5879 631
E-mail:
[email protected]
Wavelet-based quantitative evaluation of a digital density equalization technique in mammography

A.P. Stefanoyiannis¹, I. Gerogiannis², E. Efstathopoulos¹, S. Christofides², P.A. Kaplanis², A. Gouliamos¹
¹ Second Department of Radiology, School of Medicine, University of Athens, Athens 12462, Greece
² Medical Physics Department, Nicosia General Hospital, Nicosia 2029, Cyprus
Abstract— In this study, a quantitative evaluation of a proposed digital density equalization technique in mammography was carried out. The evaluation was performed on a set of 90 mammograms, based on wavelet-generated measurable parameters of image quality, such as contrast, noise and contrast-to-noise ratio (CNR). These parameters were estimated for the dense mammary gland and the breast periphery, for both the initial and the corresponding corrected mammograms. The equalization character of the technique was also examined. A statistically significant (p<0.05) or highly significant (p<0.0005) increase was observed in breast periphery and mammary gland contrast, noise and CNR values. The proposed technique was found to result in density equalization, since the decrease in the equalization index is statistically highly significant (p<0.0005).

Keywords— equalization, wavelet, evaluation, digital mammography.
I. INTRODUCTION Mammography is currently the most sensitive diagnostic tool available for the early detection of nonpalpable breast cancer, which is one of the most common cancers among women. The detection of the earliest mammographic indicators associated with breast cancer can be difficult in clinical practice, due to their small size and subtle contrast, thus the use of high contrast, reduced dynamic range screen-film combinations is essential. The reduced screen-film dynamic range results in under- and over-exposed film regions, corresponding to dense mammary gland and breast periphery, respectively. These regions are poorly visualized, thus limiting the diagnostic performance of the technique. Several equalization techniques have been proposed to overcome the under- and over-exposure problems. These techniques can be categorized as either exposure or density equalization techniques. Exposure equalization techniques aim at reducing the exposure dynamic range [1,2], whereas density equalization techniques are computer-based and equalize the density at the breast periphery to the density at the mammary gland or to a value that is considered appropriate to ensure good visualization [3-6]. Equalization techniques improve visualization at dense breast and/or breast
periphery; however, few of them have been clinically or quantitatively evaluated. In a previous study, a digital density equalization technique, dealing with both the dense breast and the breast periphery problem, was designed and developed. The technique was based on a layer model of the mammographic image, representing breast shape and composition, in addition to the film characteristic curve. In an effort to assess the performance of the proposed technique in improving the visualization of both the dense breast and the breast periphery, a pilot clinical evaluation was carried out. Comparative evaluation between the initial and corrected images was performed on the basis of nine anatomical features, three of which were related to the mammary gland and six to the breast periphery. A statistically highly significant improvement (p<0.0001) was obtained in the visualization of both the dense breast and the breast periphery for the corrected images. In this study, a quantitative evaluation of our digital density equalization technique was carried out, supplementing the already performed pilot clinical evaluation. The evaluation was performed on a set of 90 mammograms, based on wavelet-generated measurable parameters of image quality, such as contrast, noise and contrast-to-noise ratio (CNR). These parameters were estimated for the dense mammary gland and the breast periphery separately, for both the initial and the corresponding corrected mammograms. Subsequently, conclusions concerning visualization improvement in the dense mammary gland and the breast periphery were reached through statistical analysis of these parameters. The equalization character of the technique was also examined.

II. MATERIALS AND METHODS

A. Measurable parameters' estimation

In order to derive the contrast indicator, the nonsubsampled biorthogonal discrete wavelet transform (NB DWT) has been used.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 899–902, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

The magnitudes of the wavelet coefficients in the 2nd scale, corresponding to a certain ROI sampling the breast periphery or the mammary gland region, are used for contrast estimation. Specifically, the wavelet coefficient magnitudes are averaged to determine the contrast indicator of either the breast periphery or the mammary gland. The NB DWT has also been exploited for noise indicator estimation. In particular, the 1st scale of the NB DWT has been utilized, as it is the scale most contaminated by noise. The noise indicator in both the breast periphery and mammary gland regions in the initial and corresponding corrected images is estimated as the mean power of the magnitude wavelet coefficients ranging from zero up to a threshold, called the reference noise power. The reference noise power is estimated as the mean power of the 1st-scale magnitude wavelet coefficients in the signal-free background of the mammogram.

Application of an image processing technique may enhance image contrast while increasing noise at the same time. In this case, visualization may be degraded, despite the contrast enhancement, if the noise increase is significant. Therefore, a contrast-to-noise ratio (CNR) can provide a better indicator of visualization improvement. In producing high quality images, our goal is to achieve the highest possible CNR. In the framework of this study, the CNR indicator is derived as the ratio of the contrast indicator over the noise indicator.

The density equalization character of the technique has also been quantitatively evaluated, based on the breast region histogram. Application of a density equalization technique is expected to lead to a narrower breast region histogram of the corrected image, as compared with the histogram of the corresponding initial image. The standard deviation of the image histogram was selected to measure the equalization effect of the technique.

B. Statistical evaluation

90 mammograms, originating from the University Hospital of Athens "Attikon" and Nicosia General Hospital, were digitized and used to statistically evaluate the proposed density equalization technique.
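The indicator definitions above can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: a simple undecimated (à trous) detail decomposition replaces the nonsubsampled biorthogonal DWT actually used, and the test image and ROI/background masks are invented for demonstration.

```python
import numpy as np

def atrous_details(img, levels=2):
    # undecimated detail coefficients: at each level, subtract a dilated
    # 5-point smoothing of the running approximation (a trous scheme)
    c = img.astype(float)
    out = []
    for level in range(1, levels + 1):
        step = 2 ** (level - 1)
        smooth = (np.roll(c, step, 0) + np.roll(c, -step, 0)
                  + np.roll(c, step, 1) + np.roll(c, -step, 1) + c) / 5.0
        out.append(c - smooth)
        c = smooth
    return out

def indicators(img, roi, bg):
    d1, d2 = atrous_details(img, levels=2)
    contrast = np.mean(np.abs(d2[roi]))        # mean 2nd-scale magnitude in the ROI
    ref = np.mean(d1[bg] ** 2)                 # reference noise power (background)
    p = d1[roi] ** 2
    noise = np.mean(p[p <= ref]) if np.any(p <= ref) else 0.0
    cnr = contrast / noise if noise > 0 else np.inf
    return contrast, noise, cnr

def equalization_index(img, region):
    # EI: width (standard deviation) of the grey-level histogram of the region
    return float(np.std(img[region]))

# illustrative image: textured 'gland' half above a weakly noisy 'background'
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, (64, 64))
img[:32] += rng.normal(0.0, 1.0, (32, 64))
roi = np.zeros((64, 64), bool); roi[4:28, 4:60] = True
bg = np.zeros((64, 64), bool); bg[36:60, 4:60] = True
ci, ni, cnr = indicators(img, roi, bg)
```

The thresholding of the ROI coefficient powers at the background reference follows the verbal definition in the text; a real implementation would use the paper's biorthogonal filter bank instead of the Haar-style smoothing used here.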
These mammograms corresponded to dense breast and specifically to the subcategories of uniform dense mammary gland and dense mammary gland with partial fatty replacement. For each digitized mammogram, the breast periphery and mammary gland regions were manually extracted by an experienced qualified radiologist specialized in mammography. Subsequently, the measurable parameters were estimated for initial and corresponding corrected mammograms, separately for the breast periphery and mammary gland regions. Afterwards, these parameters were statistically analyzed, independently for breast periphery and mammary gland, to test whether application of the proposed technique resulted in statistically significant changes in the values of the measurable parameters. In order to select the appropriate statistical
test for each parameter, it was examined whether the population from which each parameter's data were drawn was normally distributed, through calculation of the skewness. In case the normal-population condition was met, the paired-sample t-test was utilized, whereas in case of a non-normal population the Wilcoxon paired-sample test was preferred. The latter was not utilized in all cases, since its power in detecting differences is only 3/π (i.e. about 95%) of that of the paired-sample t-test when the data come from a normal population.

C. Implementation details

The mammographic images utilized in this study were acquired with the use of medium screens and films (Kodak MIN-R, Kodak, California, USA) and digitized by a Lumiscan 75 (Lumisys Inc, Sunnyvale, California, USA) at 12-bit pixel depth, a spatial resolution of 100 μm and a 2000 × 2500-pixel image matrix. The technique was developed in the C language. The wavelet transform routines were taken from the Wave2 source code and utilized in a mammographic image visualization tool [7].

III. RESULTS

As a preliminary step, the sample skewness was calculated for each measurable parameter for both the breast periphery and the mammary gland. The results of these calculations, along with information concerning the statistical test finally selected, are summarized in Table 1. The results of the statistical tests per measurable parameter are tabulated in Table 2, which also includes the corresponding levels of significance. Figure 1 provides the average values of each parameter for both the breast periphery and mammary gland regions, before and after application of the proposed density equalization technique. Concerning contrast measurements, it can be noticed that application of the
Table 1  Selection of the appropriate statistical test

Region  Parameter  Skewness  Statistical test selected
BP      CI         0.17      t-test
BP      NI         0.88      Wilcoxon
BP      CNR        0.25      t-test
MG      CI         0.55      Wilcoxon
MG      NI         0.99      Wilcoxon
MG      CNR        0.01      t-test
—       EI         0.81      Wilcoxon

Abbreviations: CI: contrast indicator, NI: noise indicator, CNR: contrast-to-noise ratio, EI: equalization index, BP: breast periphery, MG: mammary gland
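The skewness-based selection rule of Table 1 can be sketched as follows. The 0.5 cut-off on |skewness| is an assumption that happens to reproduce the table's choices; it is not a threshold stated by the authors.

```python
def skewness(xs):
    # sample skewness: third central moment over the cubed standard deviation
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def choose_paired_test(diffs, cutoff=0.5):
    # near-symmetric (approximately normal) differences -> paired t-test;
    # otherwise fall back to the Wilcoxon paired-sample (signed-rank) test
    return "t-test" if abs(skewness(diffs)) < cutoff else "Wilcoxon"
```

With this rule, the tabulated skewness values 0.17, 0.25 and 0.01 select the t-test, while 0.55, 0.88, 0.99 and 0.81 select the Wilcoxon test, matching the table.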
Table 2  Results of statistical evaluation

Region  Parameter  Statistical evaluation result
BP      CI         ↑ SHS
BP      NI         ↑ SS
BP      CNR        ↑ SHS
MG      CI         ↑ SHS
MG      NI         ↑ SHS
MG      CNR        ↑ SHS
—       EI         ↓ SHS

Abbreviations: BP: breast periphery, MG: mammary gland, CI: contrast indicator, NI: noise indicator, CNR: contrast-to-noise ratio, EI: equalization index, SS: statistically significant (p<0.05), SHS: statistically highly significant (p<0.0005)
proposed technique results in statistically highly significant contrast improvement (p<0.0005) for both breast periphery and mammary gland. Noise was also increased; this increase was found to be statistically significant in the breast periphery (p<0.05) and statistically highly significant in the mammary gland (p<0.0005). A statistically highly significant improvement (p<0.0005) was observed in breast periphery and mammary gland CNR values as well. The proposed technique was found to result in density equalization, since the decrease in the equalization index is statistically highly significant (p<0.0005). Figure 2 provides the percentage improvement of each measurable parameter for both breast periphery and mammary gland. It can be noticed that contrast is increased by approximately 60% and 25%, whereas noise is also increased by about 2% and 5% at breast periphery and mammary gland, respectively. Additionally, CNR is improved by 45% and 20%, respectively. The equalization index is decreased by 35%.

Fig. 1 Average values of the measurable parameters for both breast periphery (bp) and mammary gland (mg) regions, before and after application of the proposed density equalization technique. (a) average contrast indicator (CI) values, (b) average noise indicator (NI) values, (c) average contrast-to-noise ratio (CNR) values, (d) average equalization index (EI) values.

Fig. 2 Percentage difference of each measurable parameter for both breast periphery (bp) and mammary gland (mg) regions, after application of the proposed density equalization technique. Abbreviations: CI: contrast indicator, NI: noise indicator, CNR: contrast-to-noise ratio, EI: equalization index.

A.P. Stefanoyiannis, I. Gerogiannis, E. Efstathopoulos, S. Christofides, P.A. Kaplanis, A. Gouliamos
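The percentage differences quoted above follow directly from the before/after averages. A minimal sketch of the computation (the numbers below are illustrative placeholders, not the study's measured values):

```python
def percent_difference(initial, corrected):
    """Percentage change of an image-quality parameter after correction."""
    return 100.0 * (corrected - initial) / initial

# Illustrative placeholder values for one parameter in one region:
# an indicator rising from 100 to 160 corresponds to a +60% change.
print(percent_difference(100.0, 160.0))   # contrast-like increase
print(percent_difference(200.0, 130.0))   # equalization-index-like decrease
```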
IV. DISCUSSION - CONCLUSIONS

In this study, quantitative evaluation of a proposed digital density equalization technique was carried out. The evaluation was based on measurable parameters of image quality, such as contrast, noise and contrast-to-noise ratio (CNR). These parameters were estimated for dense mammary gland and breast periphery, for both initial and corresponding corrected mammograms. The equalization character of the technique was also examined.

Application of the proposed technique results in considerable breast region contrast improvement (approximately 25-60%). This is in agreement with the theoretical basis of the technique: the corrected images are expected to be characterized by improved contrast, due to the shifting of grey level values to the linear, high-contrast part of the film-digitizer system characteristic curve. Unfortunately, breast region noise is also increased, by approximately 2-5%. This is attributed to the fact that the technique cannot discriminate whether specific grey level differences correspond to signal or noise, and enhances all such differences.

Contrast and noise increase are competing factors in improving breast region visualization. The question is therefore which parameter change turns out to be more important, which necessitates the examination of CNR. Breast region CNR is increased, indicating that breast region visualization is improved.

Visualization improvement of the breast periphery can be attributed to another reason as well. The human visual system has poor brightness discrimination (the Weber ratio is large) at large degrees of background blackening. The degree of blackening increases in mammogram regions corresponding to the breast periphery. Application of the proposed density equalization technique, apart from enhancing contrast, also reduces the breast periphery average degree of blackening. Consequently, the human visual system response to a given contrast is improved.

The present study revealed that further work is required concerning the improvement of the proposed technique. Although application of the technique improves breast region visualization, this improvement is diminished by the increase in noise level. Therefore, the technique must be appropriately extended to discriminate between signal and noise, so that a de-noising scheme can also be incorporated. In this way, the increase in noise level can be better controlled. The technique could also be applied, after necessary adjustments, to mammographic images produced by a digital mammography system [8].
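Since contrast and noise move in opposite directions of merit, CNR is the deciding quantity. A generic sketch of a CNR estimate between a region of interest and its background; the definitions here are common textbook choices, not necessarily the authors' exact estimators:

```python
from statistics import mean, stdev

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute mean grey-level difference
    between ROI and background, divided by the background noise
    (sample standard deviation)."""
    return abs(mean(roi) - mean(background)) / stdev(background)

# Illustrative grey-level samples from two small regions.
roi = [120, 122, 119, 121, 120]
bg = [100, 102, 98, 101, 99]
print(round(cnr(roi, bg), 2))
```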
ACKNOWLEDGEMENT

The authors would like to thank the staff of the Department of Medical Physics, School of Medicine, University of Patras for their contribution to this work.
REFERENCES
1. Panayiotakis G, Skiadopoulos S, Solomou E et al. (1998) Evaluation of an anatomical filter based exposure equalization technique in mammography. Br J Radiol 71: 1049-1057
2. Wong J, Xu T, Husain A et al. (2004) Effect of area x-ray beam equalization on image quality and dose in digital mammography. Phys Med Biol 49(16): 3539-3557
3. Byng JW, Critten JP, Yaffe MJ (1997) Thickness-equalization processing for mammographic images. Radiology 203: 564-568
4. Snoeren PR, Karssemeijer N (2004) Thickness correction of mammographic images by means of a global parameter model of the compressed breast. IEEE Trans Med Imaging 23(7): 799-806
5. Stefanoyiannis AP, Costaridou L, Sakellaropoulos P et al. (2000) A digital density equalization technique to improve visualization of breast periphery in mammography. Br J Radiol 73: 410-420
6. Stefanoyiannis AP, Costaridou L, Skiadopoulos S et al. (2003) A digital density equalisation technique improving visualisation of dense mammary gland and breast periphery in mammography. Eur J Radiol 45: 139-149
7. Sakellaropoulos P, Costaridou L, Panayiotakis G (1999) An image visualization tool in mammography. Med Inform 24: 53-73
8. Hailey D (2006) Digital mammography: an update. Issues Emerg Health Technol 91: 1-4

Address of the corresponding author:
Author: Antonis P. Stefanoyiannis
Institute: 2nd Dept of Radiology, School of Medicine, University of Athens
Street: 1, Rimini Street, Haidari
City: Athens
Country: Greece
Email: [email protected]
Complementary evaluation tool for clinical instrument in post-stroke rehabilitation

I. Cikajlo, M. Rudolf, N. Goljar and Z. Matjacic

Institute for rehabilitation, Republic of Slovenia, Linhartova 51, Ljubljana, Slovenia

Abstract— The aim of the research is to bring an objective evaluation tool under development into post-stroke rehabilitation clinical practice. An apparatus enabling perturbations and postural response assessment in eight directions in the transversal plane was used to collect data in 7 neurologically intact subjects and 10 stroke patients before and after rehabilitation. The center of pressure was monitored during the perturbation, and the ratio between the anterior/posterior and medial/lateral peaks for each perturbation direction provided objective and applicable outcomes that, in addition, correlated with the Berg Balance Scale clinical instrument. The preliminary results have shown identifiable postural response strategies in selected directions of the transverse plane and an objective evaluation of the rehabilitation progress in post-stroke patients.

Keywords— stroke, postural response, center of pressure, Berg balance scale
I. INTRODUCTION

In recent years the modernization of medical and rehabilitation centers has brought evidence-based treatment into practice. The evidence-based principle, requiring objective evaluation of functional capabilities before and after an intervention, is becoming increasingly important, as it facilitates optimization of interventions as well as of the outcome of the rehabilitation process in each individual patient. In post-stroke patients, efficient balance and postural control are among the most important functional abilities required for more complex functional tasks. Nowadays the assessment and evaluation of postural control and balance abilities is still principally done with renowned clinical tests performed by healthcare professionals, physiotherapists. Among the well recognized and widely used is the Berg balance scale (BBS) [1], which provides good within- and between-rater agreement and can be used as a reliable predictor of potential fallers. However, this subjective test cannot provide insight into the particular mechanisms of postural control, which can be determined by studying kinematics and kinetics during postural responses elicited in various manners. Among these are moving [2] and rotating [3][4] platforms with different strategies and perturbation techniques, or devices that provide perturbations in the transverse plane to elicit postural responses [5]. Applying such devices requires a well-conceived strategy and an effective postural response evaluation tool.

An effective evaluation tool providing objective information about postural responses and balance abilities uses ground reaction forces and/or muscle electromyographic information. Besides the effectiveness of the clinical methods, a rapid and easy-to-perform procedure is the desire of every healthcare professional. This paper presents a proposed postural response evaluation tool intended to complement the existing and well established clinical instrument BBS in post-stroke rehabilitation. The main focus is on the center of pressure (CoP) analysis and the amplitude ratio between the anterior-posterior and medial-lateral directions before and after rehabilitation. A correlation with the BBS was also expected.

II. METHODS AND SUBJECTS
A. Equipment and protocol

The apparatus [5] consists of a steel base construction placed on four wheels; the standing frame is made of aluminum and fixed to the base with a passive controllable spring defining the stiffness of the two-degrees-of-freedom (2 DOF) standing frame. On top of the standing frame a wooden table with a safety belt for holding the subject at the level of the pelvis was mounted. Four battery-powered electro-motors (Iskra Avtoelektrika d.d., Sempeter, Slovenia) were used to generate postural perturbations in eight major directions (forward, forward-right, right, back-right, back, back-left, left and forward-left). Subjects stood with each foot on a separate force plate (AMTI OR6-5, AMTI Inc., Watertown, MA, USA) assessing 6-DOF data (3 forces, 3 moments, filtered within the AMTI amplifier, A/D sampling frequency 100 Hz). The electro-motors delivered a constant torque of 3 Nm during the selected duration of the perturbation. The generated pulses elicited a perturbation in one of the four principal directions (Forward - FW, Right - RT, Left - LT and Backward - BW) or in one of the four combinations of the principal directions (Forward/Right - FR, Backward/Right - BR, Forward/Left - FL, Backward/Left - BL). A perturbation in a combined principal direction was realized by the simultaneous action of the two electro-motors corresponding to the two principal directions. Subjects were instructed to
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 936–939, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
stand comfortably with the feet in parallel, each on a separate force plate. The subjects were further instructed to stand still prior to the perturbation and to try to attain the same posture when recovering from the perturbation. The perturbation direction and the commencement of the perturbation (1 s, user-configurable, after the operator pressed the button) were chosen randomly without notifying the subject. The total assessment time was set to 6 s due to the longer perturbation response recovery time. Each subject took part in 32 trials, 4 for each direction. In post-stroke subjects, data were collected before and after the rehabilitation treatment in a 4-week period.

For each perturbation trial, a set of 6-DOF data (forces and moments in the anterior-posterior direction (AP), medial-lateral direction (ML) and vertical axis) for each foot was recorded using the two force plates. From the transformed data a common CoP was calculated [6]. The analysis of the CoP in the time domain, applying a developed computer algorithm [5], yielded quantitative results for the postural response latency and the AP and ML amplitudes. This paper focuses on the AP/ML peak ratio analysis.

Fig. 1 CoP during perturbations in the transversal plane (panels: left/forward, forward, right/forward, left, right, left/back, back, right/back; axes: ML [m] vs. AP [m]).
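Under a common force-plate convention, the CoP under one plate follows from the vertical force and the horizontal moments, and the common CoP is the vertical-load-weighted combination of the two plate CoPs. A sketch of this standard computation; the sign convention, the neglect of horizontal-force terms, and all numbers are assumptions for illustration, not the authors' exact transformation:

```python
def plate_cop(Fz, Mx, My):
    """CoP under one plate (x = ML, y = AP), from the vertical force and
    the moments about the plate origin; horizontal-force and sensor-offset
    terms are neglected in this simplified form."""
    return -My / Fz, Mx / Fz

def common_cop(left, right):
    """Common CoP from two plates; each argument is (Fz, Mx, My, x_offset),
    where x_offset locates the plate origin on the common ML axis."""
    total_fz = left[0] + right[0]
    x = sum((plate_cop(Fz, Mx, My)[0] + x_off) * Fz
            for Fz, Mx, My, x_off in (left, right)) / total_fz
    y = sum(plate_cop(Fz, Mx, My)[1] * Fz
            for Fz, Mx, My, x_off in (left, right)) / total_fz
    return x, y

# Illustrative symmetric stance: equal load, plates offset +/-0.1 m in ML.
ml, ap = common_cop((400.0, 8.0, -4.0, -0.1), (400.0, 8.0, -4.0, 0.1))
print(ml, ap)
```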
B. Subjects

Ten post-stroke subjects and 7 neurologically intact volunteers participated in the development and preliminary evaluation of the tool. The volunteers had no musculoskeletal impairment or any disease that would affect balance capabilities. The post-stroke subjects (right side hemiplegia) fulfilled the participation criterion, which was the ability to stand and balance using the BalanceTrainer™. The methodology was approved by the local ethics committee and the subjects gave informed consent.

III. RESULTS

The common CoP and the CoP under each foot for each perturbation direction in the transverse plane for neurologically intact subjects are presented in Figure 1. Such a presentation was suitable for on-line surveillance of postural responses: by observing the size of the CoP loop for a certain perturbation direction, a quick report on the monitored postural responses could be issued, indicating whether the tested subject responded appropriately. A small CoP loop area corresponds to an appropriate AP/ML amplitude ratio; for the FW and BW directions a high ratio was desired, whereas for LT and RT the ratio needed to be minimal, as is characteristic of neurologically intact individuals.

Fig. 2 Ratio AP/ML (with standard deviation) demonstrates the improvement (toward the neurologically intact subjects' normative) of postural responses for each perturbation direction (legend: before rehabilitation, after 4 weeks, intact normative).

Figure 2 demonstrates the decrease/increase of the AP/ML ratio for each perturbation direction before and after the rehabilitation treatment. For the participating group of stroke patients the aforementioned ratio decreased for the LT, RT, BL, BR and FL directions, for the FR direction no significant change was noticed, while the changes in the FW and BW directions were significant toward the anterior (posterior) direction. The ratio changes were also compared with the results obtained in neurologically intact volunteers and revealed the
Fig. 3 Load ratio (affected/unaffected side) before and after the rehabilitation treatment clearly demonstrates the rehabilitation progress and emphasizes difficulties for each perturbation direction (e.g. RT).

Fig. 4 Correlation (Pearson's coefficient) between the AP/ML ratio for a single perturbation direction and the BBS is rather important for complementary use of the clinical tools. The figure presents the rehabilitation progress for a single perturbation direction (BW).
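The AP/ML peak ratio behind Figures 2-4 can be sketched as the ratio of peak CoP excursions from the pre-perturbation baseline (a minimal illustration; the authors' algorithm [5] additionally extracts the response latency):

```python
def ap_ml_peak_ratio(ap, ml, n_baseline=10):
    """Peak AP excursion over peak ML excursion, each measured relative
    to the mean of the pre-perturbation baseline samples."""
    ap0 = sum(ap[:n_baseline]) / n_baseline
    ml0 = sum(ml[:n_baseline]) / n_baseline
    ap_peak = max(abs(v - ap0) for v in ap)
    ml_peak = max(abs(v - ml0) for v in ml)
    return ap_peak / ml_peak

# Illustrative CoP traces [m]: a forward perturbation yields a high ratio.
ap = [0.0] * 10 + [0.02, 0.06, 0.09, 0.05, 0.01]
ml = [0.0] * 10 + [0.01, 0.02, 0.03, 0.02, 0.00]
print(ap_ml_peak_ratio(ap, ml))
```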
rehabilitation progress of the post-stroke group for each direction in the transverse plane (Figure 2 and Table 1). Figure 3 presents how the load ratio in the group of stroke patients changed after the 4-week rehabilitation period. The post-stroke subjects were able to put significantly more load on their affected extremity after the treatment than in the acute phase. The highest ratio increase (the ideal affected/unaffected load ratio would be 1) occurred in the FR (from 0.69±0.26 to 0.81±0.24) and RT (from 0.60±0.23 to 0.71±0.19) directions, as a result of the successful rehabilitation progress of the participating patients suffering from right side hemiplegia, and was in line with our expectations. In the FW, FL and LT directions the load ratio change was significantly lower. Furthermore, the correlation of the outcome with the widely used clinical tool BBS was examined. Figure 4 and Table 1 offer a graphical and numerical presentation of the CoP peak value
AP/ML ratio vs. BBS correlation coefficient (Pearson's coefficient). According to Figure 4, the AP/ML ratio was in a linear relationship with the BBS and, as stated in Table 1, for the BW direction it was also highly correlated with the BBS (0.868). In contrast, the Pearson coefficient was very low for the BR and FR directions (< 0.2). The BBS was assessed before entering the rehabilitation program (on average 23±15) and 4 weeks later, when the BBS (39±11) revealed significant progress in balance capabilities (neurologically intact subjects have a BBS of 56), considering that the BBS increased for all participating subjects.

Table 1 Results for post-stroke subjects. Assessment trials 1/2 = before/after rehabilitation.

Direction                                   FL         FW         FR         RT         BR         BW         BL         LT
AP/ML ratio (1 / 2)                    1.77/1.14  3.59/4.43  1.25/1.28  0.45/0.38  0.82/0.71  1.81/2.68  0.97/0.64  0.41/0.28
Load ratio, affected/unaffected (1/2)  0.68/0.74  0.69/0.73  0.69/0.81  0.60/0.71  0.74/0.83  0.64/0.74  0.65/0.74  0.69/0.72
BBS (1 / 2)                              23/39      23/39      23/39      23/39      23/39      23/39      23/39      23/39
Pearson coefficient ABS(AP/ML) vs BBS    0.541      0.229      0.138      0.345      0.110      0.868      0.287      0.550
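The per-direction coefficients above are Pearson's correlation between the per-subject AP/ML ratios and BBS scores. For reference, the coefficient itself in a self-contained form (the paired samples below are hypothetical, not the study's data):

```python
from math import sqrt

def pearson(x, y):
    """Pearson's correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-subject pairs (AP/ML ratio, BBS score):
ratios = [1.2, 1.8, 2.1, 2.6, 3.0]
bbs = [20, 28, 33, 41, 47]
print(round(pearson(ratios, bbs), 3))
```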
IV. CONCLUSIONS

The rehabilitation progress of a group of post-stroke patients was evaluated to test the potential effectiveness of the proposed clinical tool under development. The results obtained in right side hemiplegics demonstrated the clinical applicability of CoP assessment. The load ratio between the affected and unaffected side and the AP/ML peak ratio for each perturbation direction have proven to be effective indicators for postural response evaluation, especially as the assessment procedure for each subject took less time than assessing the BBS. The right side hemiplegics put significantly less load on their affected extremity in the acute phase, but the load ratio improved during the rehabilitation, which is evident from figures 2 and 3 (FR, RT). The AP/ML ratio indicated, for each perturbation direction separately, that the postural responses generally improved after the treatment when compared with the data assessed in neurologically intact subjects. The methodology was compliant and partially in correlation with the BBS, in spite of the higher standard deviation in places.

Objective evaluation methods in rehabilitation medicine have long been an issue for rehabilitation engineering. Dedicated space, expensive electronic equipment and time-consuming setting of parameters hinder the application of complex devices in the clinical environment, especially considering that a simple questionnaire may fulfill the needs of clinical evaluation of posture and balance [7]. However, such subjective tests cannot provide enough information to study and explain the kinematics and kinetics of postural responses; therefore an effective complementary tool is needed that would enable healthcare professionals to assess data in a very short time and would provide comprehensive information.
The outcomes of the preliminary test of the proposed objective postural response evaluation tool, providing information for eight directions in the transverse plane, indicate that in combination with the existing clinical instrument BBS a much more objective and reliable evaluation of the postural responses and balance capabilities of individuals entering the rehabilitation program can be achieved.
ACKNOWLEDGMENT

The authors express their gratitude to the volunteers who participated in this study and wish to acknowledge the financial support of the Slovenian Research Agency.
REFERENCES
1. Berg K, Wood-Dauphinee S, Williams J (1995) The balance scale: Reliability assessment with elderly residents and patients with acute stroke. Scand J Rehab Med 27: 27-36
2. Ghulyan V, Paolino M, Lopez C, Dumitrescu M, Lacour M (2005) A new translational platform for evaluating aging or pathology-related postural disorders. Acta Oto-Laryngologica 125: 607-617. DOI 10.1080/00016480510026908
3. Nashner LM (1983) Analysis of movement control in man using the movable platform. In: Desmedt J (ed) Advances in Neurology: Motor Control Mechanism in Health and Disease. Raven Press, New York, NY
4. NeuroCom International (2003) Welcome to a World on Balance. NeuroCom International, Inc. Available at: http://www.onbalance.com
5. Cikajlo I, Matjacic Z (2007) A novel approach in objective assessment of functional postural responses during fall-free perturbed standing in clinical environment. Technol & Health Care, in press
6. Winter D (1979) Biomechanics of Human Movement. John Wiley & Sons, New York
7. Hansson G, Balogh I, Byström JU et al (2001) Questionnaire versus direct technical measurements in assessing postures and movements of the head, upper back, arms and hands. Scand J Work Environ Health 27: 30-40

Address of the corresponding author:
Author: Dr. Imre Cikajlo
Institute: Institute for rehabilitation, Republic of Slovenia
Street: Linhartova 51
City: Ljubljana
Country: Slovenia
Email: [email protected]
Electrically Elicited Stapedius Muscle Reflex in Cochlear Implant System fitting

A. Wasowski 1,2, T. Palko 2, A. Lorens 1, A. Walkowiak 1, A. Obrycka 1, H. Skarzynski 1

1 Institute of Physiology and Pathology of Hearing, Warsaw, Poland
2 Warsaw University of Technology, Faculty of Mechatronics, Warsaw, Poland
Abstract–– The aim of this study was to assess the possibility of using the electrically elicited stapedius muscle reflex (ESR) for estimation of the most comfortable loudness level (MCL), one of the most important electrical stimulation parameters in cochlear implant system fitting. The material of this study consisted of 48 adult patients, sampled from the group of MedEl Combi 40+ and MedEl Pulsar users implanted in the Institute of Physiology and Pathology of Hearing. Their cochlear implant systems were fitted according to the results of psychophysical tests: loudness scaling and the electrical amplitude growth function. ESR measurements were performed, and ESR thresholds and MCL values were compared. A good correlation was observed after 12 months of cochlear implant system use. The results indicate that the ESR can be included in the cochlear implant system fitting procedure as an objective measurement for prediction of optimal MCL values.

Keywords–– cochlear implant, objective methods, stapedius muscle reflex
I. INTRODUCTION

The use of a cochlear implant system is the treatment method of choice for patients with profound sensorineural hearing loss. This kind of hearing loss is characterized by damage to the hearing cells in the inner ear, which are responsible for transforming incoming sound waves into electrical pulses. During surgery, the electrode array of the cochlear implant system is inserted into the cochlea, where it allows direct electrical stimulation of the auditory nerve and in this way imitates the function of the damaged hearing cells [1]. The transformation of sound waves into an electrical pulse pattern takes place in the speech processor, the external part of the system, worn behind the ear. Information and power for electrical stimulation are sent to the internal part through the skin using a radio frequency link (Fig. 1).
The speech coding strategy, responsible for changing the incoming sounds into an electrical pulse pattern, is described by many parameters. These parameters have to be optimally set for each patient individually to obtain the maximum possible hearing benefit, in terms of hearing incoming sounds and speech understanding capability. One of the very important parameters is the most comfortable loudness level (MCL). It is the current value, set separately for each electrode, which elicits a loud but still comfortable hearing sensation. The MCL is the upper limit of the electrical stimulation dynamic range. To determine the optimal values of this parameter, it is crucial to obtain a reliable, quantitative assessment of electrically elicited hearing sensations. This information is usually obtained by means of psychophysical tests, in which patients have to assess specific parameters of the acoustic or electric stimulus. However, it is often difficult to obtain reliable information using such tests, especially in the case of children lacking auditory experience and communication skills, or in adults when a long duration of deafness has caused changes in auditory perception. Such patients are not able to assess the hearing sensation, and the process of establishing optimal values of the electrical stimulation parameters is longer and more difficult [2]. Therefore, there is a need to introduce objective electrophysiological measurements to obtain reliable, quantitative information about the patient's auditory pathway stimulated electrically by the cochlear implant. One of these methods is measurement of the electrically elicited stapedius muscle reflex [3].

II. AIM

The aim of this study was to assess the possibility of using the postoperative electrically elicited stapedius muscle reflex for estimation of the most comfortable loudness level (MCL).

III. MATERIAL
Fig. 1 Cochlear Implant System: 1 – microphone, 2 – speech processor, 3-4 – coil with cable, 5 – electrode array, 6 – auditory nerve
The material of this study consists of 48 postlingually deafened patients, implanted in the Institute of Physiology and Pathology of Hearing. All patients are users of the MedEl Combi 40+ or MedEl Pulsar system, for a time ranging from 3 to 19 months. The patients' age at the moment of implantation ranged from 23 to 52 years, with a mean of 42 years.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 940–942, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
IV. METHOD

A. Fitting Procedure

All patients were fitted according to the fitting procedure based on psychophysical measurements. For estimation of the optimal MCL values, repetitive loudness scaling and electrical amplitude growth function measurements were used. Both methods require the patient to assess the loudness of the presented stimuli, using the Würzburg 50-step scale. In loudness scaling the stimulus is a narrowband noise of specific middle frequency and amplitude, and in the amplitude growth function the stimulus is an electrical pulse burst of 500 ms duration presented on a specific electrode. The MCL values were estimated for each electrode separately, based on the combined information obtained from acoustic and electric loudness scaling.

B. ESR Measurement

Measurement of the acoustically elicited stapedius muscle reflex (SR) is one of the most important objective procedures used in clinical practice for the diagnosis of hearing disorders. In a healthy ear, a change in the acoustic admittance of the middle ear, resulting from the contraction of the stapedius muscle, is observed for stimulus intensities above 80-90 dB. This is the intensity level that corresponds to a loud, but still comfortable, sound level. The measurement is performed using an electroacoustic immittance analyzer with built-in automatic procedures both for monitoring the impedance of the middle ear and for presentation of the acoustic stimulus. In the electrically elicited stapedius muscle reflex (ESR), a change in acoustic admittance is observed in the nonimplanted ear during presentation of the stimulus to the implanted ear. Currently available acoustic impedance meters do not support automatic procedures for this kind of measurement; therefore a new setup and manual measurement procedures had to be created. The setup consists of an electroacoustic immittance analyzer (Madsen Zodiac 901) and a diagnostic interface box (MedEl DIB) with appropriate software (CI Studio 2.02) for controlling the electrical stimulus.
The first stage of the ESR measurement procedure consisted of conventional tympanometry. Patients with an abnormal tympanogram in the nonimplanted ear, which is used for the next stages of the measurement, were excluded from the study. At the second stage, the patient's speech processor was connected to the diagnostic interface box. The immittance analyzer was set to "reflex decay mode", which allowed continuous monitoring of the acoustic admittance in the nonimplanted ear. An electrical stimulus of
500 ms duration, a burst of biphasic pulses, was delivered to the selected electrode. A measurement cycle is triggered manually from the computer keyboard, and the stimulation begins at the MCL level estimated earlier during the fitting procedure. If a clear downward deflection of the prestimulus baseline (a decrement of admittance) is observed on the analyser's screen, the electrically evoked stapedius reflex is detected (Fig. 2). The level of the stimulus is consequently decreased in 1 dB steps until no change of admittance is observed. If the reflex cannot be seen at the initial stimulation level, the level is increased in 1 dB steps until a clear trace of the reflex becomes evident, and then the level is decreased by 1 dB at a time until the trace deflection disappears again. In each measurement, three up and three down series are applied, and the ESR threshold is evaluated as the average of the three lowest stimulus levels, taken from the three descending series, that produce a detectable deflection of the baseline (a measurable increment of acoustic impedance).

For the study, ESR measurements were performed for 3 electrodes: one apical, responsible for low frequencies, one basal, responsible for high frequencies, and one middle, responsible for middle frequencies.

V. RESULTS

During the ESR measurement, 9 out of 48 patients reported loudness discomfort before reaching the stimulus level necessary for eliciting the stapedius muscle reflex. The ESR measurement was repeated during subsequent visits, and in 6 of these 9 cases a result was obtained. For analysis, the results were divided into 3 groups, based on the patient's experience with the cochlear implant system at the moment of the ESR measurement. The correlation coefficients between the ESR thresholds and the MCL values obtained by psychophysical methods, computed separately for each group and for each electrode, are shown in Fig. 3.
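The up/down bracketing described above can be simulated in a few lines. The `reflex_visible` predicate below is a hypothetical stand-in for the operator's reading of the analyzer trace; the threshold is the average of the lowest reflex-positive levels over the descending series:

```python
def descending_series(start_level, reflex_visible, step=1):
    """One series: if no reflex at the start level, ascend in 1 dB steps
    until a trace appears; then descend in 1 dB steps and return the
    lowest level at which the reflex is still visible."""
    level = start_level
    while not reflex_visible(level):
        level += step
    while reflex_visible(level - step):
        level -= step
    return level

def esr_threshold(start_level, reflex_visible, n_series=3):
    """ESR threshold: average of the lowest reflex-positive levels
    taken from several descending series."""
    lows = [descending_series(start_level, reflex_visible)
            for _ in range(n_series)]
    return sum(lows) / len(lows)

# Hypothetical detector: reflex visible at or above level 34 (arbitrary units).
print(esr_threshold(40, lambda level: level >= 34))
```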
Fig. 2 A diagram showing a) ESR during ascending approach and b) ESR during descending approach
Fig. 3 Correlation coefficients between ESR thresholds and psychophysically estimated MCL values as a function of experience with the cochlear implant system, for each electrode separately.

The correlation after 3 months of using the cochlear implant system is relatively poor for all stimulation places. Over time the correlation rises, and for patients measured after 12 months or more of using the cochlear implant system it reaches values from 0.87 for the apical electrode to 0.92 for the basal electrode. During the first months, the correlation on the basal electrode, responsible for high frequencies, is worse than on the middle and apical electrodes.

VI. DISCUSSION

The authors believe that both effects, the rise of the correlation over time and the worse correlation on the basal electrode, responsible for high frequencies, are connected with one phenomenon. Implanted patients usually have a history of hearing problems lasting several years before implantation. After the implantation, when the cochlear implant system starts to work, their auditory pathway is not used to stimulation of any kind, and their ability to assess auditory sensations during the first days and months is very poor. This results in improper assessment of stimulus properties, and the estimation of MCL values based on psychophysical methods is not reliable. Over time, as adaptation of the auditory cortex takes place and patients get used to electrical stimulation, their assessment of auditory sensations becomes more reliable and more stable [4]. This effect usually takes longer and is more visible for high frequencies. It is believed that after 1 year of using the cochlear implant system, stable and optimal MCL values are obtained. On the other hand, ESR thresholds, being an objective measure, are stable in time: the same current level is needed for eliciting the stapedius muscle reflex both 1 and 12 months after cochlear implantation [5].

Based on these observations, the authors believe that the poor correlation in the first months is caused mainly by improper subjective assessment of the MCL value. After a few months this assessment is much better, and the correlation between MCL values and ESR thresholds is then also better.

VII. CONCLUSIONS

The electrically elicited stapedius muscle reflex is possible and easy to measure using the proposed procedure with a standard acoustic immittance analyzer. The results obtained from the study show that it can be used in the fitting of cochlear implant systems. The method does not depend on the patient's ability to assess auditory sensation properties, and in this way can give reliable information even during the first months after cochlear implantation. The ESR thresholds can be used as estimators of optimal, stable MCL values. Still, precautions have to be taken to avoid the possibility of excessive stimulation, as during the first months the patient is not used to high intensities of electrical stimuli. These good results are the reason that this method was included in the cochlear implant fitting procedure used in the Institute of Physiology and Pathology of Hearing.
REFERENCES
1. Niparko JK (2000) Cochlear Implants. Principles & Practices. Lippincott Williams & Wilkins
2. Lorens A, Sliwa L, Walkowiak A (1999) Principle of speech processor fitting in the programme of rehabilitation of children after cochlear implantation. New Medicine 3:33-35
3. Stephan K, Welzl-Muller K (2000) Post-operative stapedius reflex tests with simultaneous loudness scaling in patients supplied with cochlear implants. Audiology 39(1):13-18
4. Wasowski A, Lorens A, Piotrowska A, Skarżyński H (2006) Factors affecting speech perception in cochlear implanted adults. 9th International Conference on Cochlear Implants Proc, Vienna, 2006
5. Lorens A, Wasowski A, Walkowiak A, Piotrowska A, Skarzynski H (2004) Stability of electrically elicited stapedius reflex threshold in implanted children over time. International Congress Series, vol. 1273, 84-86
Author: Arkadiusz Wasowski
Institute: Institute of Physiology and Pathology of Hearing
Street: Zgrupowania AK "Kampinos" 1
City: Warsaw
Country: Poland
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Evaluation of biofeedback of abdominal muscles during exercise in COPD
M. Tomsic
Institut Jozef Stefan, Jamova 39, SI-1000 Ljubljana, Slovenia
Abstract— A biofeedback method to alter the breathing pattern of patients with COPD during physical activity was evaluated. Respiratory muscle EMG was reliably detected during exercise. Relaxation of the abdominal muscles in the expiratory phase was tested. There was no significant difference between the activation of the different abdominal muscles. In conclusion, the present study showed that the EMG technique is reproducible and sensitive enough to assess changes in respiratory abdominal muscle activity and breathing patterns in patients with COPD.
Keywords— Biofeedback, Pulmonary rehabilitation, Respiratory muscles, Breathing retraining, COPD.
I. INTRODUCTION
Depending on the severity, patients with Chronic Obstructive Pulmonary Disease (COPD) are often unable to perform activities of daily living, which leads to increasing physical inactivity, physical deconditioning, and reduced quality of life. Pulmonary rehabilitation is typically undertaken to improve the ability of patients to undertake physical activities and to increase daily walking distance. Numerous exercise protocols have been investigated for COPD, and several hypotheses have been advanced to explain why physical activity and exercise are often limited in COPD [1]. The most obvious symptom of COPD, shortness of breath, could be a factor in exercise limitation; there have also been reports that leg effort is a factor. One hypothesis is that excessive expiratory pressures produced by increased abdominal muscle respiratory activity during expiration may lead to decreased venous return, which in turn would contribute to exercise limitation. If this hypothesis were true, then seeking a method to reduce or eliminate abdominal muscle activity during exercise would be warranted. Biofeedback has often been used to help individuals decrease unwanted muscle activity [2,3]. The purpose of this study was to reduce or eliminate abdominal muscle contractions (i.e., abdominal EMG activity) during the expiratory phase of the breathing cycle by means of biofeedback. This study provides normative reference protocols and results against which future data from patients with COPD can be tested and compared.
II. METHODS
A. Subjects
Seven male COPD patients were informed about the study and gave informed consent to participate. COPD was defined according to the American Thoracic Society criteria [4]. The patients' characteristics are given in Table 1. Routine pulmonary function tests were performed for each subject, and baseline pulmonary function data are given for all study subjects. The study was approved by the National Medical Ethics Committee of the Republic of Slovenia.
Table 1 Baseline characteristics of subjects (COPD patients, means ± SD)
n             7
Age (years)   65 ± 6
Height (m)    1.69 ± 0.08
Weight (kg)   71 ± 6
VC (dm3)      4.1 ± 0.9
FEV1 (dm3)    1.2 ± 0.2
B. Experimental protocol
The incremental exercise protocol was performed on a cycle ergometer (Corival Cycle Ergometer, Medical Graphics Corporation). After 1 minute of measurement during quiet breathing, followed by 1 minute of unloaded pedaling, the work rate was increased by 10 W every minute to the limit of tolerance, while patients maintained a pedaling frequency of 60 rpm. The recording was continued for 2 minutes during recovery (1 minute of unloaded pedaling after stopping, then 1 minute of rest). After each minute of exercise, pedaling was stopped to record 5 to 7 breaths free of the motion and muscle contraction artifacts caused by skeletal movement during exercise. Airflow, ECG, noninvasive EMG, and thoracoabdominal movement measurements were performed during the whole exercise test.
C. Data acquisition
A pneumotachograph was used to measure and record airflow at the mouth. The analog flow signal was digitized by a BIOPAC data acquisition system (MP100 System,
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 961–964, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Biopac Systems Inc.). A TSD107B pneumotach transducer was used. Volume was computed from the flow signal. Airflow was recorded continuously throughout each session at a sampling rate of 100 samples per second. An ECG100C electrocardiogram amplifier was used to record the electrical activity generated by the heart; connected to the MP100 System, it collected ECG data from a three-lead montage at a sampling rate of 1000 samples per second.
Noninvasive electromyography (EMG) of the abdominal muscles [5] was realized with the TeleMyo 2400T EMG telemetric system (Noraxon USA Inc.). The TeleMyo system sends 4-channel real-time EMG and other analog signals up to 100 m by wireless transmission. The processed analog EMG signals were also digitized at a sampling rate of 1000 samples per second. Because heart activity substantially interferes with EMG signals measured at the trunk, the electrical activity of the heart was removed from the respiratory muscle EMG by gating out the segments of the signal containing the ventricular (QRS) complex.
Overall changes in the depth of breathing were additionally measured by means of the Respiratory Effort Transducer (TSD201, Biopac Systems Inc.). The belt transducer measures the changes in thoracic or abdominal circumference that occur as the subject breathes. Thoracoabdominal movement was quantified by two belts placed at the level of the nipples and umbilicus, respectively. The analog signals from the belts were digitized by the MP100 System. The purpose of these measurements was only to document the breathing patterns used by the subject, e.g., normal synchronous, asynchronous, or paradoxical [6].
D. Electrodes
Disposable surface EMG electrodes were placed over the abdominal muscles.
Electrode placement for this study was determined based on the anatomy of the four distinct abdominal muscles, each of which has been shown to have varying degrees of activity in patients with COPD that appear to differ in onset and level from otherwise healthy subjects. The abdominal muscle most likely to be active in expiration in COPD is the transversus, followed by the obliques. A study of appropriate surface electrode placements, based on determination of the muscle fiber orientations of the obliquus externus abdominis, obliquus internus abdominis and rectus abdominis, suggested that recording of activity from the transversus (the deepest of the abdominal muscles) is always a distinct possibility with surface electrodes [7]. It was deemed beyond the scope of the present study to distinguish between the gradations of activity in these four abdominal muscles, as this would have required the use of concentric needle or fine wire electrodes and placement with ultrasonic verification [8]. The hypothesis of the present study concerns any activity of any of the abdominal muscles; thus, surface EMG was deemed an acceptable method to satisfy both the scientific requirements of the study and avoid introducing unnecessary additional risk to the human subjects. Future research questions might require additional clarification of the role of each abdominal muscle.
After the skin was shaved, cleaned, and dried, electrodes were placed over the muscles to be investigated. Figure 1 shows the EMG electrode placement and the telemetry transmitter.
Fig. 1 EMG electrodes placement
Two electrodes were placed 5 cm apart longitudinally over the rectus abdominis muscle at the level of the umbilicus, 3 cm from the midline. Two electrodes were placed 5 cm apart over the obliquus externus abdominis muscle, midway between the rib cage and the pelvis (iliac crest).
E. Biofeedback
A commercially available biofeedback system, the EMG Retrainer (Chattanooga Group, Inc.), was used. The biofeedback system and its surface EMG electrode placement are shown in Figure 2. It measures and quantifies muscle activity, with the result clearly shown on a display. This dual-channel unit is designed to continuously monitor muscle contraction. The system used disposable surface EMG electrodes, which were placed over the abdominal muscles unilaterally at the same sites as during the incremental exercise measurements. The biofeedback system provided both auditory and visual feedback to the subjects.
Fig. 2 EMG biofeedback device, retrainer
Fig. 3 EMG of rectus abdominis muscle during incremental exercise test
When activated muscles triggered the audio-visual signal, the subjects tried to voluntarily relax the abdominal muscles in the expiratory phase of the breathing cycle.
F. Data Analysis and Statistics
The full-wave rectified and smoothed (low-pass filtered) EMG was computed from the raw EMG signal and used as a measure of muscle activation. Plots of all direct and derived variables over time were produced and manually inspected for data quality control. The flow signal was used to detect the beginning of each expiration. The data were calculated during the exercise interrupts, in time intervals with no cycling. The EMG data of five successive breaths were averaged for each interrupt and for each patient. EMG values were normalized: the ratio of the expiratory EMG activity at each minute of exercise to that at maximal load was calculated. The key outcome of this study was the reduction in abdominal EMG. Plots of EMG activity over time were prepared for the experimental conditions; in preparing these plots, the EMG was averaged across all subjects at each point in time. The plots show the mean EMG level with 95% confidence intervals, which facilitated a statistically based graphical interpretation of the reduction in abdominal EMG actually achieved.
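The EMG processing just described (full-wave rectification, low-pass smoothing, and normalization to the value at maximal load) can be sketched as below. This is an illustrative reading rather than the authors' code: the moving-average window stands in for the unspecified low-pass filter, and the function names and parameters are assumptions.

```python
import numpy as np

def process_emg(raw_emg, fs=1000, smooth_ms=100):
    """Full-wave rectify a raw EMG trace and smooth it with a
    moving average (a stand-in for the paper's low-pass filter)."""
    rectified = np.abs(np.asarray(raw_emg, dtype=float))
    win = max(1, int(fs * smooth_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def normalize_to_max_load(per_minute_emg):
    """Express mean expiratory EMG at each minute of exercise as a
    ratio of the value at maximal load (taken here as the last
    exercise minute)."""
    per_minute_emg = np.asarray(per_minute_emg, dtype=float)
    return per_minute_emg / per_minute_emg[-1]
```

With this normalization, the value at maximal load is 1 by construction, matching the normalized plots in Figures 3 and 4.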
Fig. 4 EMG of obliquus and transversus muscles during incremental exercise test
III. RESULTS
All subjects exhibited detectable abdominal EMG activity that was synchronized with the breathing pattern at rest. The EMG signal increased clearly with increasing load. Artifacts due to exercise and postural movements were successfully avoided during the interrupts. Figures 3 and 4 show normalized values of the recorded EMG signals during exercise. Application of the biofeedback EMG Retrainer was successful when it was used during the time intervals with no pedaling. A few breathing cycles of teaching during an exercise interrupt were enough for breathing retraining and relaxation of the abdominal muscles in the expiratory phase.
IV. DISCUSSION
There was a relatively constant EMG at rest, consistent with existing knowledge and an expected result. In the biofeedback condition, when biofeedback was made available from the very beginning of the trial, the EMG gradually declined. These findings are a preliminary validation of our hypotheses and suggest further studies with COPD patients.
The existence and evidence of abdominal muscle activity in patients with COPD, and its hypothesized role in reducing venous return, is the cornerstone of the rationale for the present study. Early studies using surface electrodes to detect abdominal EMG in patients with COPD had mixed results. One of the most carefully controlled studies assessed patterns of abdominal muscle activity in 40 stable COPD patients, recording EMG with concentric needle electrodes from the rectus abdominis, external oblique, and transversus abdominis muscles. While breathing at rest, 28 of the 40 patients with COPD had abdominal muscle contraction during expiration. The most important observation was that the transversus muscle was most frequently seen to be active [8].
A variety of approaches, including biofeedback, have been tried to help patients improve their breathing patterns or learn new ones. Madden [9] studied patients who had pain after abdominal surgery; the hypothesis was that abdominal muscle activity contributed to postoperative pain. EMG feedback was provided from the abdominal muscles, using a tone that decreased in frequency with decreasing EMG. Results showed that mean EMG activity decreased by approximately 50% due to the biofeedback. A breathing retraining program in which patients with COPD learn to control their pattern of breathing under the stress of performing different modes of exercise at increasing intensity and duration may markedly decrease dyspnea and improve gas exchange [10].
V. CONCLUSIONS
The surface EMG technique was shown to be reproducible and sensitive enough to assess changes in respiratory abdominal muscle activity and breathing patterns in patients with COPD. Its noninvasive and nonintrusive character makes this technique useful in assessing respiratory activity and breathing patterns during different rehabilitation programs.
ACKNOWLEDGMENT This work was supported by the European Community CARED FP5 project and the Slovenian Research Agency. The cooperation of the Golnik hospital is gratefully acknowledged.
REFERENCES
1. Aliverti A, Macklem PT (2001) How and why exercise is impaired in COPD. Respiration 68:229-239
2. Ritz T, Leupoldt A, Dahme B (2006) Evaluation of a respiratory muscle biofeedback procedure - effects on heart rate and dyspnea. Appl Psychophysiol Biofeedback 31:253-261
3. Esteve F et al. (1996) The effects of breathing pattern training on ventilatory function in patients with COPD. Biofeedback Self Regul 21:311-321
4. ATS Committee (1995) Standards for the Diagnosis and Care of Patients with Chronic Obstructive Pulmonary Disease. Am J Respir Crit Care Med 152:S77-S120
5. ATS/ERS Committee (2002) ATS/ERS Statement on Respiratory Muscle Testing: Electrophysiologic Techniques for the Assessment of Respiratory Muscle Function
6. Tobin MJ (1991) Respiratory Monitoring. Churchill Livingstone, New York
7. Ng JK, Kippers V, Richardson CA (1998) Muscle fibre orientation of abdominal muscles and suggested surface EMG electrode positions. Electromyogr Clin Neurophysiol 38:51-58
8. Ninane V et al. (1992) Abdominal muscle use during breathing in patients with chronic airflow obstruction. Am Rev Resp Dis 146:16-21
9. Madden LC et al. (1978) The effect of EMG biofeedback on postoperative pain following abdominal surgery. Anaesth Intensive Care 6:333-336
10. Collins EG et al. (2001) Breathing pattern retraining and exercise in persons with chronic obstructive pulmonary disease. AACN Clinical Issues 12:202-209
Author: Martin Tomsic
Institute: Institut Jozef Stefan
Street: Jamova 39
City: Ljubljana
Country: Slovenia
Email: [email protected]
Experimental Evaluation of Training Device for Upper Extremities Sensory-Motor Ability Augmentation
J. Perdan1, R. Kamnik1, P. Obreza2, T. Bajd1 and M. Munih1
1 Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
2 Institute for Rehabilitation, Republic of Slovenia, Ljubljana, Slovenia
Abstract—The aim of our research was to develop and evaluate an FES system for augmenting the sensory-motor abilities of the hand. The system trains the finger flexors and finger extensors through a force tracking task in isometric conditions. It allows full voluntary control of hand opening and closing, while FES is added to facilitate the voluntary contribution of the patient. The FES is closed-loop controlled according to the difference between the desired and actual force. Actual forces are acquired by a specially designed, adjustable measurement device instrumented with two force sensors. Visual feedback about tracking performance is provided to the patient. The system was evaluated in an experimental training study with two incomplete tetraplegic patients. In addition to their therapeutic treatment, the patients trained with the FES system for approximately 45 minutes per day over a period of 4 weeks. After the training period, the results show that both patients strengthened their finger flexor and extensor muscles and reduced their tracking error. The results thus indicate improved grip force control.
Keywords— Force tracking task, closed-loop FES, hand opening, hand closing, isometric conditions.
I. INTRODUCTION
Grasping and manipulating objects requires versatile control of grip forces. Neuromuscular disease, stroke, or an injury to the central nervous system (CNS) can result in loss of sensory and motor functions in the upper extremities, which consequently reduces hand functionality. Because of this impairment, patients have trouble grasping and manipulating objects, or are incapable of doing so [1]. Patients with spastic finger flexors after stroke or incomplete spinal cord injury usually preserve control over finger flexion; however, because of spasticity and weakness in the finger extensor muscles, they commonly have difficulties with voluntary opening of the hand [2]. As a result, they are normally able to hold an object, but are incapable of grasping or releasing an already grasped object.
Assessment of hand function is important for therapists to evaluate the condition of a lesion or to monitor the progress of therapy. Different methods for evaluating hand function are known in clinical practice [3]. Those methods consist of
pick and place and/or volitional range of motion tests. The performance is either evaluated by the therapist according to a numerical scale or by measuring the time needed to complete the tasks. Time is the simplest parameter to measure, but it is not an accurate description of hand function [4], so evaluation results depend primarily on the therapist's objectivity. Furthermore, these tests lack information about grip strength. In clinical practice, dynamometers are used for measuring maximal voluntary grip force, but no information about submaximal force control can be obtained this way.
For measurement of sensory-motor control capability, tracking tasks are suitable [4]. In tracking, the subject has to track the target as closely as possible by voluntarily controlling the position of (or force applied to) some sensor. During the task, visual feedback on the exercise performance is provided to the patient. Besides evaluation, tracking systems can also be used as a training tool: it has been shown that force tracking training improves the accuracy of grip force control and increases grip strength [5].
In addition to conventional therapy, functional electrical stimulation (FES) can be used during the rehabilitation period. FES uses a train of electrical pulses to artificially excite muscle contraction. Stimulation can be applied by means of surface or implanted electrodes; surface FES is suitable for use during rehabilitation because it is practical to use and noninvasive. The majority of FES systems used in clinical practice are feed-forward controlled (open-loop) by the therapist or patient, so specific, fixed tasks can be achieved with pre-programmed stimulation patterns. On an experimental level, closed-loop controlled FES systems have been developed and tested. Closed-loop FES systems have demonstrated better input-output linearity, repeatable system response, and better disturbance rejection.
Feedback sensors providing information about the force and/or joint position are required in such a configuration.
The aim of our research was to develop and evaluate a system for training hand closing and opening that combines FES and a force tracking task. The system is designed for training of the finger extensors and finger flexors. It comprises a visual feedback display providing information about the reference and actual force, the hand force measuring device, and closed-loop controlled FES facilitating the voluntary activity of the patient. The system was evaluated in an experimental training study.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 950–953, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Two patients were trained in addition to therapeutic treatment over a period of 4 weeks.
The hardware and the software of the system for hand sensory-motor capability augmentation are presented in the next section, together with the methodology of the training protocol. In Results, the training outcomes are outlined. The results are discussed and conclusions are presented in the final section.
II. METHODS
A. Training system
In Fig. 1, a conceptual scheme of the training system for upper extremity sensory-motor augmentation is presented. The system is aimed at training the finger flexors and extensors by accomplishing a force tracking task. The core of the system is a personal computer (PC), which is used for reference force generation, actual grip force acquisition, visual presentation of the reference and grip force, and stimulation control. The software application for controlling the system was developed in the C++ programming language.
During training, the patient has to track the desired force, represented by the target signal, as closely as possible by adjusting his grip strength. The system is designed to allow full voluntary control of hand opening or closing in isometric conditions, while FES is added to facilitate the voluntary contribution of the patient. The force reference signal is composed of four periods of a sinusoidal signal with a superimposed DC component; between the periods, rests with a duration of 15 s are inserted. The amplitude of the reference signal ranges from zero to a maximum positive value for training of hand closing, and from zero to a maximum negative value for training of hand opening. During the tracking task, the reference and actual grip force are displayed on the monitor screen, providing visual feedback to the patient. The difference between the momentary values of both forces serves as an input to the proportional-integral (PI) controller.
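The closed-loop control law described here (a force error driving a PI controller whose output, compensated by an inverse recruitment approximation, sets the stimulation pulse width) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the gains, the identity placeholder for the inverse recruitment map, and the class name are assumptions.

```python
class ClosedLoopFES:
    """Sketch of the closed-loop stimulation law: force error ->
    PI controller -> inverse recruitment map -> pulse-width command."""

    def __init__(self, kp, ki, dt=1 / 33, pw_max_us=500.0):
        # dt defaults to the 33 Hz update rate at which parameters
        # are streamed to the stimulator over RS-232.
        self.kp, self.ki, self.dt = kp, ki, dt
        self.pw_max_us = pw_max_us
        self.integral = 0.0

    def inverse_recruitment(self, u):
        # Placeholder for the inverse of the identified recruitment
        # curve; identity here, i.e., an already-linearized muscle.
        return u

    def step(self, f_ref, f_act):
        """One control update: returns the pulse width in microseconds,
        clipped to the 0..500 us range used in the study."""
        err = f_ref - f_act
        self.integral += err * self.dt
        u = self.kp * err + self.ki * self.integral
        pw = self.inverse_recruitment(u)
        return min(max(pw, 0.0), self.pw_max_us)
```

The pulse amplitude stays constant within a session, so only the width is modulated, as in the system described above.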
The inverse approximation of the muscle recruitment curve is added at the final stage of the controller to cancel the nonlinearity of the muscle response. The output of the controller is directly proportional to the width of the stimulation pulse, while the pulse amplitude is kept constant throughout a single training session. The stimulation parameters are sent to the stimulator via an RS-232 serial connection at a frequency of 33 Hz, determining the width of each pulse. Two-channel stimulation was utilized, stimulating the finger flexors and finger extensors independently.
Fig. 1 Conceptual scheme of the training system for upper extremity sensory-motor augmentation
The device for measuring isometric hand force is built of aluminium strut elements. Two JR3 force/torque sensors (50M31A-I25; JR3, Inc., Woodland, USA) and a forearm support are mounted on the mechanical construction. The left sensor measures the thumb force; the thumb is fixated to the sensor by means of a finger support and Velcro strap. The right sensor measures the force of the other four fingers. The finger fixation is made of two parallel aluminium profiles, which fully constrain finger motion in both directions. All finger supports are padded with neoprene to prevent unpleasant interaction. Finger fixation in such an arrangement enables acquisition of isometric forces of hand opening and closing. To ensure proper position and to prevent the arm and wrist from moving during training, the forearm is fixated to the arm support by Velcro straps. Use of the strut profiles enables arbitrary positioning of the sensors and forearm support; in this way, the measuring setup can be adjusted to each individual, assessing either the right or the left hand. A PCI board is used for data acquisition from the sensors. The data are sampled at a frequency of 100 Hz and then filtered in real time using an on-board integrated filter with a cut-off frequency of 31.25 Hz and a delay of approximately 32 ms.
B. Participants
Two incomplete tetraplegic patients participated in the evaluation of the system for upper extremity sensory-motor augmentation. Patient AD was 28 years old, almost 4 years after injury, with a spinal cord injury at the C5/C6 level.
He had strong but spastic finger flexors, causing him difficulties with hand opening. Patient AS was 15 years old, with incomplete tetraplegia at the C3/C4 level, 8 months after injury. He preserved considerable voluntary control over the finger flexor and extensor muscles. Both participants were trained with the system in addition to therapeutic treatment at the Institute for Rehabilitation, Republic of Slovenia. Each patient signed a consent form confirming that he had been acquainted with the training protocol and that he voluntarily participated in the study.
C. Training protocol
With the system for upper extremity sensory-motor augmentation, an experimental training was accomplished in
a duration of 4 weeks. Five training sessions were completed each week, one session per working day. In both patients the dominant right hand was treated. The training was supervised by an experienced physiotherapist. During training, the patient was seated at a desk in front of a computer screen (see Fig. 2). The physiotherapist positioned the surface electrodes on his right forearm in such a way that maximal flexion/extension of the index, middle, ring and little fingers was obtained without flexion/extension of the wrist. After the placement of electrodes, the fingers and forearm were fixated onto the force sensors and forearm support, respectively. The maximal amplitude of the stimulation pulse was determined by manually increasing the pulse amplitude up to the level where the patient felt the stimulation as uncomfortable; in this test, the stimulation pulse width was set to the maximum value of 500 μs. The pulse amplitude determined in this test was then used throughout the training session.
For tuning the PI controller, i.e., setting the gains KP and KI, a model of the muscle in isometric conditions was built. The muscle was represented by a Hammerstein model, which consists of a static recruitment nonlinearity and a linear discrete-time transfer function. To identify the recruitment curve of the isometric muscle response, linearly increasing stimulation was applied to the muscles, lasting 1.5 s and increasing the pulse width in a range from 0 to 500 μs in increments of 10 μs. Five identification trials were accomplished and the results averaged to obtain an approximation of the muscle recruitment curve, which represents the nonlinear relationship between pulse width and finger force. In the next step of muscle response identification, the response to a pseudo-random binary signal (PRBS) was measured for
identification of the linear discrete-time transfer function. In the identification procedure, the inverse recruitment curve was used to approximately cancel the recruitment nonlinearity; the measured finger force responses and the stimulation values were used as identification data. For PI controller tuning, a discrete model was built in the Matlab-Simulink simulation environment. The simulation model comprised the model of the closed-loop FES system and the model of the muscle. The parameters KP and KI were defined with an optimization procedure minimizing the tracking error; the Simulink Response Optimization toolbox was used for this purpose.
After successful adaptation of the system to an individual, the maximal force that the patient was able to achieve voluntarily was acquired. On the basis of the measured value, the maximal amplitude of the reference force signal was set to 50% of the maximal voluntary force. The training consisted of three repetitions of two tracking tasks: Task A and Task B. In Task A, the patient had to accomplish the tracking task using only his voluntary activity, while in Task B, FES was added to facilitate the patient's voluntary effort. The training consisted of repetitions of Task A, followed by Task B. During tracking, the reference and actual force signals and the stimulation output were sampled at a frequency of 100 Hz and saved in a text file. The training procedure was carried out for training in isometric hand closing and opening. For assessing the tracking performance, the relative root mean square error (rrmse) was calculated for each tracking task.
III. RESULTS
In Fig. 3, the maximal voluntary forces of hand opening and closing measured prior to the training are presented: the upper graph shows the maximal voluntary forces in the hand opening test throughout the training days, and the lower graph the maximal voluntary hand force in hand closing. In Fig.
4, an example of the tracking performance in Task A and Task B in hand closing is presented. As is evident, the patient had trouble maintaining the grip force for a longer time: he could not track the longer periods of the sinusoidal signal, but was able to follow the last, quickest period of the reference force signal. The results of tracking in Task B, facilitated by the FES, are presented in Fig. 4(b). As can be seen, the FES considerably improved the force tracking performance of the patient. The stimulation intensity during Task B is shown in Fig. 4(c).
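The rrmse figure of merit used above can be computed as sketched below. The paper does not spell out its exact formula, so normalization by the RMS of the reference signal is an assumption; with it, rrmse = 0 means perfect tracking and rrmse = 1 corresponds to producing no force at all.

```python
import math

def rrmse(reference, actual):
    """Relative root-mean-square error between the reference and the
    actual force trace, normalized by the RMS of the reference
    (one plausible reading of the paper's rrmse)."""
    if len(reference) != len(actual):
        raise ValueError("traces must have equal length")
    num = sum((r - a) ** 2 for r, a in zip(reference, actual))
    den = sum(r ** 2 for r in reference)
    return math.sqrt(num / den)
```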
Fig. 2 Patient during tracking task
Fig. 3 Maximal voluntary hand force achieved prior to the training for both patients: (a) hand opening, (b) hand closing
in hand closing quickly improved in the first days of training but later remained at a constant level. Patient AD had a steady and good tracking performance, comparable to a healthy person: his average rrmse in hand opening tasks was less than 0.35 and in hand closing tasks less than 0.4, except for two successive days where the rrmse was higher than 1. He usually performed better without the help of the FES. Patient AS had larger tracking errors at the beginning of training (rrmse > 2), but improved his tracking and more than halved the tracking error (rrmse < 1). The addition of FES significantly improved his tracking performance, resulting in a lower tracking error.
The first evaluation results of training with the system for upper extremity sensory-motor augmentation are encouraging in terms of muscle strengthening and reduction of the force tracking error, which suggests improved grip force control.
(a) Tracking result of Task A, rrmse = 2.46 80
ACKNOWLEDGMENT
60 F [N]
40 20 0 -20
0
20
40
60
80 t [s]
100
120
140
160
(b) Tracking result of Task B, rrmse = 0.51 80
The authors acknowledge the Republic of Slovenia Ministry of Education, Science and Sport grant Motion Analysis and Synthesis in Human and Machine (P2-0228C). The authors would also like to thank both patients for participating in experimental training.
60 F [N]
40
REFERENCES
20 0 -20
1. 0
20
40
60
80 100 120 140 t [s] (c) Stimulation intensity (pulse width) during Task B
160
pw [ms]
30
2.
20
3.
10
0
0
20
40
60
80 t [s]
100
120
140
160
Fig. 4 Tracking of patient AD (a) Task A, (b) Task B, (c) output of the controller during Task B
4. 5.
Popovic MR, Thrasher TA, Zivanovic V at al. (2005) Neuroprosthesis for retraining reaching and grasping functions in severe hemiplegic patients. Neuromodulation 8:58–72 Hines AE, Crago PE, Billian C. (1995) Hand opening by electrical stimulation in patients with spastic hemiplegia. IEEE Trans Rehabil Eng 3:193–205 McPhee SD. (1987) Functional hand evaluations: a review. Am J Occup Ther 41:158–163 Kurillo G, Zupan A, Bajd T. (2004) Force tracking system for the assessment of grip force control in patients with neuromuscular diseases. Clin Biomech 19:1014–10221 Kurillo G, Gregoric M, Goljar N et al. (2005) Grip force tracking system for assessment and rehabilitation of hand function. Technol health care 13:137–149
IV. DISCUSSION AND CONCLUSION As it is evident from results, both patients gained strength in finger flexor and extensor muscles. Patient AD achieved steady improvement of maximal voluntary hand force in closing and opening throughout the training. Patient AS also achieved steady improvement of maximal voluntary force in hand opening, while maximal voluntary force
Author: Jernej Perdan
Institute: Faculty of Electrical Engineering
Street: Tržaška 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
New Experimental Results in Assessing and Rehabilitating the Upper Limb Function by Means of the Grip Force Tracking Method

M.S. Poboroniuc1, R. Kamnik2, S. Ciprian1, Gh. Livint1, D. Lucache1 and T. Bajd2

1 Technical University of Iasi/Faculty of Electrical Engineering, Iasi, Romania
2 University of Ljubljana/Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— The aim of this paper is to present new experimental results obtained using a tracking system for the assessment and training of grip force control in patients with neuromuscular diseases. In conjunction with the Jebsen-Taylor hand test, the Grip Force Tracking System proved to be a valuable tool for assessing hand dexterity and quantifying the hand rehabilitation process in stroke patients.

Keywords— grip strength, stroke, hand rehabilitation.
I. INTRODUCTION

Stroke is a leading cause of serious, long-term disability. The World Health Organization estimates that in 2001 there were over 20.5 million strokes worldwide, 5.5 million of which were fatal [1]. Many stroke survivors have problems with thinking as well as physical disabilities. It has been estimated that 33% of stroke survivors need help caring for themselves, 20% need help walking, 70% cannot return to their previous jobs and 51% are unable to return to any type of work after stroke [2], [3]. According to the National Stroke Association, 10% of stroke survivors recover almost completely, 25% recover with minor impairments, 40% experience moderate to severe impairments that require special care, 10% require care in a nursing home or other long-term facility, 15% die shortly after the stroke, and approximately 14% of stroke survivors experience a second stroke in the first year following a stroke [4]. 28% of people who suffer a stroke are under age 65, and a well-established rehabilitative process may bring more of them back to work. These data motivate the scientific and medical community to find the best rehabilitation methods to treat and assess stroke patients during the post-stroke recovery process. Stroke affects different people in different ways, depending on the type of stroke, the area of the brain affected and the extent of the brain injury. Paralysis with weakness on one side of the body is a common after-effect. Within physical therapy, the rehabilitation process aims to help the patient regain the ability to walk and the mobility of the affected upper limb. Functional Electrical Stimulation (FES) has been proven to be an efficient method to improve walking in hemiplegia
[5]. Measures such as walking speed and the physiological cost index (PCI) over 10 m have proven efficient in assessing the functional improvements of the affected lower limb, both with and without use of the Odstock Dropped Foot Stimulator (ODFS). Assessing hand functionality and the overall functionality of the upper limb is more difficult. Some tests assessing range of motion and sensation, strength and dexterity have been proposed. The Jebsen-Taylor test has proven to be an objective test of hand functions commonly used in activities of daily living [6]. The test items include a range of fine motor, weighted and non-weighted hand function activities: (1) writing (copying) a 24-letter sentence, (2) turning over 3 x 5” cards, (3) picking up small common objects such as a paper clip, bottle cap and coin, (4) simulated feeding using a teaspoon and five kidney beans, (5) stacking checkers, (6) picking up large light objects (an empty tin can) and (7) picking up large heavy objects (a 1 lb full tin can). Once the stroke patient regains some movement of the shoulder and elbow, hand dexterity and grip force have to be assessed. The O’Connor Finger Dexterity Test requires hand placement of 3 pins per hole. The test is designed as an eye-hand coordination test similar to the Minnesota Manual Dexterity Test [7]. Grip strength measurements predominantly focus on the assessment of the maximal voluntary grip force, but it is also important to assess the ability to control grip strength at the sub-maximal forces employed during grasping and manipulation of different objects. The Grip Force Tracking System (GFTS) has been developed as an assessment tool to evaluate the effects of physical therapy and to train the stroke patient’s grip force [8]. The GFTS involves biofeedback training methods and consists of two grip-measuring devices of different shapes (a cylinder and a thin plate) which connect to a personal computer through an interface box.
During the tracking task the person applies grip force according to visual feedback on the target signal, minimizing the difference between the target and the actual response. In this paper we present experimental results obtained during the assessment and training of incomplete spinal cord injured (SCI) and stroke (CVA) patients performing hand exercises within the Rehabilitation Hospital of Iasi, Romania. Both the Jebsen-Taylor test and the Grip Force Tracking System based test were performed. The aim of our study was to identify ways to increase the rate of recovery of hand functionality, and to determine whether some of the enlisted categories of patients cannot benefit from using the GFTS.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 954–957, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

II. MATERIAL AND METHODS

A. Grip Force Tracking System

The Grip Force Tracking System (GFTS) consists of two grip-measuring devices of different shapes (a cylinder and a thin plate). The cylindrical device allows assessment of grip forces up to 300 N with an accuracy of 0.02% over the entire measuring range. The second device is made up of two metal parts which form a thin plate at the front end and is used to assess or train the finger pinch task. It can measure forces up to 360 N with an accuracy of 0.1%. The output from the two grip-measuring devices is sampled through the interface box, which consists of an amplifier with a supply voltage stabilizer and an integrated 12-bit A/D converter. The interface box connects to the parallel port of a personal computer, which is used for data acquisition and visual feedback. The tracking task, as part of the biofeedback training, involves the patient in tracking an on-screen target by applying the appropriate grip force to the grip-measuring device. The computer screen shows a blue ring that modifies its vertical position in accordance with a target signal. The voluntarily applied grip force is represented by a red spot which moves upwards when force is applied to the measuring object and returns to its initial position when the grip is released. The aim of the tracking task is to continuously track the position of the blue ring by dynamically adapting the grip force.
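The visual feedback loop described above can be sketched in a few lines of Python. This is purely illustrative, not the authors' software: the screen height, the full-scale display force, and the sinusoidal target parameters are all assumed values chosen for the example.

```python
import math

# Assumed display constants (illustrative, not from the GFTS):
SCREEN_HEIGHT_PX = 480   # vertical resolution of the feedback window
FORCE_RANGE_N = 100.0    # grip force mapped onto the full screen height

def force_to_y(force_n):
    """Map a grip force [N] to a vertical screen coordinate (0 = bottom)."""
    clamped = min(max(force_n, 0.0), FORCE_RANGE_N)
    return round(clamped / FORCE_RANGE_N * SCREEN_HEIGHT_PX)

def target_force(t, offset=40.0, amplitude=20.0, period=20.0):
    """Sinusoidal target force [N] for the sinus tracking task (assumed shape)."""
    return offset + amplitude * math.sin(2 * math.pi * t / period)

# On each display frame the blue ring is drawn at force_to_y(target_force(t))
# and the red spot at force_to_y(measured_force); the patient's task is to
# keep the spot inside the ring.
```

The clamping step reflects the fact that the display has a finite range even though the measuring devices read up to 300 N or 360 N.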
Fig. 1 The SCI patient performing the 7th Jebsen-Taylor hand test task

Table 1 CVA patients

Patient  Age [years]  Gender  Time post stroke [months]  Affected side of the body
P1       58           Male    6                          Right
P2       60           Male    4                          Right
P3       65           Male    9                          Left
P4       41           Female  12                         Left
B. Participants

One incomplete SCI patient (injury level C5–C6, 2 years post injury, 45 years old, female) and four CVA patients (see Table 1) participated in the evaluation study of the Grip Force Tracking System. The SCI patient is able to sustain a standing position between parallel bars for a short time and regularly performs arm exercises with 5 kg dumbbells while lying on a bed. In this case the GFTS has been used as an assessment device to assess grip and pinch forces. The Jebsen-Taylor hand test provided data about the functionality of the upper limbs over a longer period (see Figure 1). Prior to the investigation, all subjects gave informed consent. All the investigations were carried out at the Rehabilitation Hospital of Iasi (Neurology Clinic) under the supervision of physicians and kinetotherapists.

C. Data analysis
Each Jebsen-Taylor hand test task is timed to a maximum of 80 seconds. Five objects have to be moved in each test task (e.g. five playing cards for the 2nd test item). The performance of the GFTS sinus task was assessed by calculating the relative root mean square error (RRMSE) between the target force and the measured output force over the trial time [9]. The tracking error was normalized by the maximal value of the target signal to allow comparison among results obtained with different grips and patients. A lower tracking error suggests better activation control of the corresponding muscles and improved hand functionality.
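The RRMSE defined above — an RMS tracking error normalized by the peak of the target signal — can be written out as a short sketch. This is an illustration of the stated formula, not the authors' analysis code:

```python
import math

def rrmse(target, measured):
    """Relative root-mean-square tracking error.

    Root-mean-square difference between the target and the measured
    force, normalized by the maximal value of the target signal so
    that results are comparable across grips and patients.
    """
    if not target or len(target) != len(measured):
        raise ValueError("signals must be non-empty and of equal length")
    mse = sum((t - m) ** 2 for t, m in zip(target, measured)) / len(target)
    return math.sqrt(mse) / max(target)

# Perfect tracking gives 0.0; a flat zero response to a positive
# target gives RMS(target) / max(target).
```

With this normalization an RRMSE above 1 (as seen for some patients) means the tracking error is, on average, larger than the peak of the target itself.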
III. RESULTS

The SCI patient started the usual kinetotherapy treatment three months after the trauma. Only five months after the beginning of the treatment was the patient able to coordinate the main movements of the upper limbs. At that point testing of arm functionality could begin, and some of the Jebsen-Taylor hand test items provided consistent data. An FES-based rehabilitation treatment of the upper limbs, started 12 months after the spinal cord trauma, proved beneficial. The SCI patient was able to perform all the Jebsen-Taylor hand test tasks, and over a three-month period the time to perform each task decreased by 10–15%. At that stage a training device such as the GFTS appeared to bring additional benefits, improving hand grip and finger movements. The patient reported better dexterity when typing on a computer keyboard.

The main difficulty with the treatment of CVA patients is the limited time (three weeks) for which they may be hospitalized during their rehabilitation period. They must return home and continue with ambulatory treatment, and can be reassessed only after another 4–5 weeks, during a new hospitalization period. The GFTS has therefore proven more effective as an assessment tool than as a training tool. All the CVA patients were assessed during hospitalization, while undergoing the usual kinetotherapy treatment. The discussion takes into account the first assessment (beginning of hospitalization) and the last assessment (end of hospitalization). Patient P1 (Figure 2), who performed the grip force sinus tracking test (lateral grip), showed an RRMSE of 2.15 with a mean maximal force of 42.5 N (Figure 3).

Fig. 2 The P1 patient being assessed with the GFTS

Fig. 3 Tracking results of the P1 CVA patient (right hand, lateral grip, RRMSE = 2.15). Force [N] vs. time [s].

Fig. 4 Tracking results of the P2 CVA patient (right hand, lateral grip, RRMSE = 0.71). Force [N] vs. time [s].

It is important to observe that the patient had difficulty releasing the grip, which made it hard for him to reach the minimum peaks of the sinus. Treatment of this patient did not show relevant improvements in grip force or RRMSE after the three-week rehabilitation period. For the same task, patient P3 achieved an RRMSE of 0.85 with a mean maximal force of 28.5 N, and patient P4 an RRMSE of 1.5 with a mean maximal force of 26.3 N. For all three of these patients we have
concluded that another assessment is needed after more than three weeks in order to obtain conclusive results. We intend to assess them all once they return for a new hospitalized rehabilitation treatment. It is interesting to remark that patient P2 achieved notable results over the three-week period: the maximal force was in the range 68–70 N, and the RRMSE decreased from 1.2 to 0.85. The results suggest that treatment starting only 4 months after stroke is more beneficial and that faster recovery can be achieved.

IV. CONCLUSIONS

In summary, the results of our tests in stroke patients show that grip force control is affected by the disease to a degree that varies from person to person. The Grip Force Tracking System proved to be a valuable tool for assessing CVA patients and will be further used as a training tool. One observation from the physiotherapists is that the GFTS would not be appropriate for patients who exhibit severe spasticity in the upper limb; such patients usually exert high grip forces but are unable to release the grip. The physiotherapists also suggested that it would be interesting to use the GFTS in the therapy of patients recovering from a peripheral nerve lesion. The biofeedback associated with the performance of the tracking task can further assist the overall rehabilitation process by providing the patient with feedback on their progress.

ACKNOWLEDGMENT

The authors gratefully acknowledge the financial support of the Romanian Ministry of Education and Research and the Republic of Slovenia Ministry of Education, Science and Sport. The work has been supported within the frame of the Slovene-Romanian Bilateral Scientific and Technological Cooperation Project "Standing-up motion augmentation in paraplegia by means of FES and robot technology" and the Romanian grant CEEX24-I03/2005.

REFERENCES

1. The Internet Stroke Center at http://www.strokecenter.org
2. Black-Schaffer RM, Osber JS (1990) Return to work after stroke: development of a predictive model. Arch Phys Med Rehab 71:285–290
3. Stroke Awareness at http://www.strokeawareness.org
4. Stroke: Conventional Treatments for Stroke at http://www.holisticonline.com
5. Taylor PN, Burridge JH, Dunkerley et al. (1999) Patients' perceptions of the Odstock Dropped Foot Stimulator (ODFS). Clinical Rehabilitation 13:439–446
6. Jebsen RH, Taylor N, Trieschmann RB et al. (1969) An objective and standardised test of hand function. Arch of Physical Medicine and Rehabilitation 50(6):311–319
7. Dexterity tests: hand eye coordination tests, at http://www.rehaboutlet.com
8. Kurillo G, Zupan A, Bajd T (2004) Force tracking system for the assessment of grip force control in patients with neuromuscular diseases. Clinical Biomechanics 19:1014–1021
9. Jones RD (2000) Measurement of sensory-motor control performance capacities: tracking tasks. In: Bronzino JD (Ed.), The Biomedical Engineering Handbook, 2nd ed, vol. II. CRC Press, Boca Raton

Author: Marian Poboroniuc
Institute: Technical University of Iasi
Street: 53 B-dul D. Mangeron
City: Iasi
Country: Romania
Email: [email protected]
The “IRIS Home” A. Zupan, R. Cugelj, F. Hocevar Institute for rehabilitation, Republic of Slovenia, Linhartova 51, Ljubljana, Slovenia Abstract— The article presents the IRIS Home. IRIS is an acronym for Independent Residing enabled by Intelligent Solutions. It is planned as a demonstration apartment located at the Institute for rehabilitation in Ljubljana. It will be fitted with the latest equipment, technical aids and rehabilitation technology. The aim of the IRIS home is demonstration, testing and application of contemporary technological solutions that compensate for the most diverse kinds of disabilities and thereby improve the quality of life of persons with disabilities and assure their optimal occupational, educational and social integration in society. Keywords— IRIS Home, persons with disabilities, rehabilitation technology
I. INTRODUCTION

Through diverse rehabilitation programs we seek to ameliorate or eliminate the patient's disability and handicap. We use medical, psychological, social and occupational rehabilitation methods. Our ultimate goal is to enable the patient to achieve optimal participation in his/her social life and occupation. Technical aids and rehabilitation technology are essential means for improving the life situation of severely impaired individuals; by these means we can compensate significantly for these patients' disabilities. This latter aspect of rehabilitation gains increasing importance with advances in rehabilitation techniques and technology based upon overall developments in the sciences from which they derive. IRIS is an acronym for Independent Residing enabled by Intelligent Solutions. The IRIS Home is planned as a demonstration apartment of approximately 90 m² to be located at the Institute for rehabilitation in Ljubljana. It will be fitted with the latest equipment, technical aids and technology that compensate for various forms of disability. This apartment will be designed to enable persons with diverse disabilities, along with the elderly, to attain maximum functional independence. The apartment will be equipped in a way that facilitates control over living space. It will be possible, for example, to activate powered windows, doors, blinds and heating system controls through different remote controls and systems that can be
activated by touch, voice commands and eye-pupil movement. The apartment will also be equipped with the latest communication technology adapted to diverse kinds of disability that allows the user to communicate outside the apartment in order to study, work and access entertainment.

II. METHODS

A. Project objectives

• Facilitate public access in Slovenia to the demonstration of contemporary technology that assists persons with diverse disabilities, as well as the elderly.
• Provide persons with disabilities and the elderly an opportunity in the demonstration apartment to try out and select technical solutions for their respective disabilities, enabling maximum functional independence in a home environment.
• Advise patients and the elderly, along with their caregivers, on the most rational and economic adaptation of their current living quarters with regard to their particular needs.
• Provide equipment manufacturers and service providers in the field of rehabilitation technology an opportunity to promote and test their solutions for various types of disability in the integrated environment of the demonstration apartment.
• Create possibilities for research and development in the fields of e-accessibility and e-inclusion in Slovenia.
• Facilitate activities for the promotion and application of a policy of e-accessibility in Slovenia.

B. Immediate goals

• Facilitate greater independence among all groups of users.
• Reduce the cost of home care (health care, nursing and other forms of assistance).
• Improve the safety of the user.
• Reduce the need for relocation to retirement homes and other suitable institutions.
• Create modular solutions that can be applied in diverse user environments (private living quarters, social institutions, retirement homes, etc.).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 958–960, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
C. Users

• Persons with different disabilities (with physical/mobility impairments, blind or sight impaired, with hearing impairments), as well as the elderly.
• Professional organizations, which will use the demonstration facility for training and for planning diverse activities for the users: persons with different disabilities and the elderly.
• The general public, who can become familiar with the requirements of patients and the elderly and with the technical solutions to their needs.
• Students of medicine, social services and technology, who through training in this facility familiarize themselves with the needs and problems of diverse kinds of disability and with solutions to these problems.
• Designers of similar facilities, especially architects, interior designers and equippers responsible for the technical documentation of new housing and the adaptation of existing housing to meet the needs and demands of IRIS users.

III. RESULTS AND DISCUSSION
From the beginning this project has been planned and led to facilitate a “full life” for its users. It is our aim, through broad promotion of this demonstration apartment among the general public, to provide individuals and their families the possibility to see and experience technological solutions to the problems of persons with different disabilities and the elderly and to find solutions enabling greater quality of life among these persons. The IRIS Home will be integrated into regular rehabilitation programs for the most severely impaired patients. In this way we will be able to determine what practices and technological equipment can provide optimal solutions for independent and quality living in one’s home environment. Occupational therapists and technicians employed in the IRIS Home will demonstrate the use of various technical solutions and inform potential users (and their families) about the most cost-effective and readily available solutions to their individually specific needs. The professionals mentioned above will also assist in the recommendation and adaptation of technology (installed in the IRIS Home) for use in the home environment of individual patients. In this way we will transfer rehabilitation from an institutionalized setting to patients’ home environments. By using visiting health personnel to conduct rehabilitation programs in
the technologically well-equipped home environment of the individual patient we will attain a more complete and better adapted program of care. This approach represents a major advance in the way we conceive and practice rehabilitation. Rehabilitation will also be transferred from institutions to home environments using “telemedicine” and “telerehabilitation” programs. By promoting and using the most recent communication technologies it will be possible to monitor remotely the health status of patients and thereby assure greater security for the patient and reduce the costs of supervision, control and care of patients. It will also be possible to administer remotely certain elements of rehabilitation programs, most importantly the counseling and teaching of patients. This will reduce transportation, visitation and care costs. The results of different studies regarding the use of smart technology for persons with different disabilities and the elderly showed that this kind of treatment improves the quality of the patient's life, enables more independent living in the home environment, and is cost-beneficial and cost-effective (1–9).

IV. CONCLUSION

The IRIS Home represents a significant advance in Slovene rehabilitation medicine whereby we introduce a new field of activity – demonstration, testing and application of contemporary technological solutions that compensate for the most diverse kinds of disabilities and thereby improve the quality of life of persons with disabilities and assure their optimal occupational, educational and social integration in society. Rehabilitation will become centered in individual patients’ home environments but nevertheless incorporate effective connection and communication with those outside institutions upon which the patient depends.
The work of the IRIS Home involves new expenditures: the employment of new professionals, maintenance of the demonstration apartment and its equipment, and the renewal and continual improvement of the same. Furthermore, it will be necessary to secure financing and support for technical aids and technological solutions for those citizens who require them to attain social equality and a better quality of life in their home environment. In spite of the initial high costs of this technology, in the long run this investment will significantly reduce public expenditures for the social and medical care of these persons.
REFERENCES

1. Magnusson L, Hanson E, Borg M (2004) A literature review study of information and communication technology as a support for frail older people living at home and their family carers. Technol Disabil 16:223–235
2. Panek P et al. (2001) Smart home applications for disabled persons – experiences and perspectives. In: Tang P, Venables T (2000) Smart homes and telecare for independent living
3. Andrich R et al. (2006) The DAT Project: A Smart Home Environment for People with Disabilities. Proceedings: Computers Helping People with Special Needs, 10th International Conference, ICCHP 2006, Linz, Austria, July 11–13, 2006
4. http://www.hi.se/, http://www.hi.se/global/pdf/2002/02323.pdf
5. http://www.housing21.co.uk/pdf/pdf/Solutions%20autumn%202006.pdf
6. http://www.telecareaware.com/2007/26/01/smart-living-for-people-with-dementia-in-bristol-uk/
7. http://www.tiresias.org/cost219ter/inclusive_future/(14).pdf
8. http://www.sentha.tu-berlin.de
9. http://www.dh.gov.uk/PublicationsAndStatistics/Publications/PublicationsPolicyAndGuidance/PublicationsPolicyAndGuidanceArticle/fs/en?CONTENT_ID=4081593&chk=eE9iLz

Author: Dr. Anton Zupan
Institute: Institute for rehabilitation, Republic of Slovenia
Street: Linhartova 51
City: Ljubljana
Country: Slovenia
Email: [email protected]
Use of rapid prototyping technology in comprehensive rehabilitation of a patient with congenital facial deformity or partial finger or hand amputation

T. Maver1, H. Burger1, N. Ihan Hren2, A. Zuzek3, L. Butolin4, and J. Weingartner5

1 Institute for Rehabilitation, Centre for orthotics and prosthetics, Ljubljana, Slovenia
2 University Medical Centre, Department of Maxillofacial and Oral Surgery, Ljubljana, Slovenia
3 IB-PROCADD d.o.o., Dunajska 106, Ljubljana, Slovenia
4 TECOS Slovenian tool and die development centre, Celje, Slovenia
5 RTCZ Rapid prototyping and rapid tooling center, Hrastnik, Slovenia
Abstract—Our experience shows that patients wish to replace the lost part of their body with a prosthesis (epithesis) that is a mirror image of the relevant healthy part of the body. Four years ago we linked up with other institutions, companies and the University of Ljubljana in order to search for new, more advanced technological possibilities to bring the form of epitheses closer to the form of a healthy hand or part of a face. The healthy and the impaired part of the body were scanned. A digital virtual model was made using a computer programme. 3D printing technology, DMLS (Direct Metal Laser Sintering) and SLS (Selective Laser Sintering) technology were used to build the first model or mould for manufacturing a silicone epithesis. Through our development project we have established a method for the high-resolution digitising of body parts and a technology to produce a prototype model and mould allowing the fine reproduction of skin details. By using high-resolution CAD-CAM technology, the highest-quality prosthetic design can be achieved even when the prosthetist lacks artistic skills.

Keywords— Epitheses, Prostheses, Digitising, Rapid prototyping
I. INTRODUCTION

At the Institute for Rehabilitation of the Republic of Slovenia we have been manufacturing and applying epitheses since 1993, using silicone technology. Nowadays, this technology is based on manual shaping, with which we strive to restore the patient’s aesthetic appearance. Our experience shows that patients wish to replace the lost part of their body with a prosthesis that is a mirror image of the relevant healthy part of the body. Four years ago we linked up with other institutions and the University of Ljubljana in order to search for new, more advanced technological possibilities to bring the form of epitheses closer to the form of a healthy hand or part of a face. Thus we started to develop an appropriate high-resolution CAD-CAM system.
II. MATERIALS AND METHODS

The development project covers three areas:
• a scanning system;
• positive model construction technology; and
• tool construction technology.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 943–946, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Scanning system

During the development phase, three laser and optical scanners were tested in making a digitalised 3D model of a hand and stump: the freescanner CAPOD CAD-CAM system, the Zscanner 700 and the 3D optical scanner ATOS II 400. First, the healthy part of the body was scanned. Later, a plaster model of the impaired part of the body, which had previously been corrected, was scanned.

Positive model construction technology

A digital virtual model was made using a computer programme. The healthy part of the body was processed and a mirror picture of the digitalised model was thereby obtained. This virtual digitalised model was then gradually adjusted to the model of the impaired part of the body. The digitalised picture of the model was transferred into the STL database.

Mould construction technology

3D printing technology, SLS (Selective Laser Sintering) and DMLS (Direct Metal Laser Sintering) technology were used to build the first model or mould. 3D printing technology was used to make a prototype model of the auricular epithesis. Further, DMLS technology was used to make a tool for manufacturing a silicone finger epithesis. At the last trial, SLS technology was used to produce a tool for manufacturing a silicone finger epithesis.
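The mirror-image modelling step can be illustrated with a short Python sketch (not the project's CAD software; the triangle representation and function names are ours for illustration). Mirroring a triangulated surface across the x = 0 plane negates one coordinate of every vertex and flips the vertex winding so that surface normals still point outwards, before the mesh is written back to STL.

```python
def mirror_triangle(tri, axis=0):
    """Mirror one triangle across the plane where the given axis is zero.

    Negating a coordinate reverses the triangle's orientation, so the
    vertex order is flipped to keep surface normals pointing outwards.
    """
    mirrored = []
    for v in tri:
        w = list(v)
        w[axis] = -w[axis]
        mirrored.append(tuple(w))
    # flip winding: (v0, v1, v2) -> (v0, v2, v1)
    return (mirrored[0], mirrored[2], mirrored[1])

def mirror_mesh(triangles, axis=0):
    """Mirror every triangle of a triangulated (STL-style) surface model."""
    return [mirror_triangle(t, axis) for t in triangles]
```

Mirroring twice restores the original triangle, which is a convenient sanity check when adapting the mirrored healthy-side model to the impaired side.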
Use of rapid prototyping technology in comprehensive rehabilitation
III. RESULTS With the assistance of experts from the companies participating in this project, we tested and identified the devices and technological procedures that enable the manufacturing of epitheses. The best scanning results were achieved with the ATOS II optical scanner. When scanning directly on the body, there were some problems due to slight movements of the body. For this reason, plaster models of the healthy as well as the impaired parts of the body were additionally scanned.
The picture of the virtual positive model shows all the skin details, including fingerprints. With this, the first part of the development project was completed. This virtual model is used to make a prototype model of an epithesis or a mould in the STL database. The program allows the adaptation of the digital model of the healthy part of the body to the digital model of the stump or the impaired part of the face. The highest visibility of skin details in the mould was achieved by the DMLS (Direct Metal Laser Sintering) technology, with 0.04 mm accuracy. In the testing of the SLS (Selective Laser Sintering) and 3D print technologies, the accuracy was 0.1 mm. When inspecting the moulds, the most accurate surface was found to be that produced by the DMLS technology. Silicone was poured into the moulds and, after vulcanization, the quality of the test prostheses was found to depend on the visibility of the skin prints. The highest quality of the mould surface was achieved by the DMLS technology and the lowest by the 3D print technology, which produced a rougher surface of the prosthesis test model despite satisfactory visibility of the skin prints. The SLS technology was selected for mould manufacturing due to its lower cost; the visibility of skin prints achieved by SLS was not essentially lower than that achieved by the DMLS technology. This mould can be filled directly with silicone material.
IV. DISCUSSION During the development phase, CAD-CAM technology processes were defined that enable the production of silicone prostheses after partial hand amputation which in their form mirror the patient's healthy hand. Most centers manufacturing silicone hand prostheses nowadays use manual modeling procedures [1, 2]. The quality of such prostheses depends on the artistic skills of the prosthetist. By using CAD-CAM high resolution technology, the highest-quality prosthetic design can be achieved even when the prosthetist lacks artistic skills. Such technology has already been used in the design and making of epitheses [3]. The same procedure is mentioned by Didrick [4], the author of an article on the manufacturing of finger prostheses.
V. CONCLUSIONS
The final appearance of the prosthesis depends greatly on its shape. Our experience in using CAD-CAM high resolution technology has shown that it enables computer-based manufacturing of prostheses which in their form mirror the healthy hand.
By using CAD-CAM high resolution technology, the highest-quality prosthetic design can be achieved even when the prosthetist lacks artistic skills.
REFERENCES
1. O'Farrell DA, Montella BJ, Bahor JL et al. (1996) Long-term follow-up of 50 Duke silicone prosthetic fingers. J Hand Surg [Br] 21B(5):696-700
2. Pilley MJ, Quinton DN (1999) Digital prostheses for single finger amputations. J Hand Surg [Br] 24(5):539-541
3. Sykes LM, Parrott AM, Owen CP et al. (2004) Application of rapid prototyping technology in maxillofacial prosthetics. Int J Prosthodont 17(4):454-459
4. Didrick D at http://www.oandp.com/edge/issues/articles/200511_06.asp

Author: Tomaz Maver
Institute: Institute for Rehabilitation
Street: Linhartova 51
City: Ljubljana
Country: Slovenia
Email: [email protected]
Using computer vision in a rehabilitation method of a human hand
J. Katrasnik1, M. Veber1 and P. Peer2
1 Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
2 Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
Abstract— We developed this program for the purpose of a rehabilitation method that requires a patient to move an object around with his hand. Using a black and white firewire camera the program determines the position and orientation of a black rectangle on a white plane. The user must enter the length and width of the rectangle before the start. With this information the position is determined even if a part of the rectangle is obscured by a user’s hand. The program works in real-time (15 to 20 frames per second). Keywords— computer vision
I. INTRODUCTION One of the major goals of rehabilitation is to make quantitative and qualitative improvements in daily activities in order to improve the quality of independent living. When parts of the brain have been impaired by trauma, incomplete spinal cord injury or stroke, the functions performed by those parts of the brain must be relearned. Relearning is aided by rehabilitation. Relearning is fastest when rehabilitation starts early and when the patient performs task-oriented exercises [3]. By using virtual reality in rehabilitation, task-oriented exercises become more motivating and engaging than formal repetitive therapy. Another positive aspect of virtual reality is that it is programmable, which means that tasks can be adapted to the patient: when the patient advances, tasks can be made more difficult. Our motivation was to develop a cheap method for measuring the position and orientation of an object and to use that information in virtual reality exercises. Position and orientation can be measured with commercial products such as OPTOTRAK. The main drawback of such products is their high price; OPTOTRAK, for example, costs approximately $150,000. If we could develop a system that used only a black and white FireWire camera and a PC, this rehabilitation method would be much more accessible to patients, who could then do rehabilitation at home. That would reduce the resources needed for rehabilitation and increase the time a patient spends in rehabilitation. Our goal was to determine whether the position and orientation of a black rectangle with known dimensions, lying on a white plane, could be accurately resolved with a computer vision system in real time. The system must be able to determine the
position of the object even if it is partly obscured by the user's hand. II. TOOLS AND METHODS A. Tools used In developing this system we used some computer vision algorithms already implemented in OpenCV [1], an open source computer vision library for C++. We also used this library for capturing images from the camera, displaying images on the screen and saving images to disk. For writing, compiling and debugging the program we used Microsoft Visual Studio 2005. For capturing the scene we used a black and white FireWire camera with a resolution of 640×480 pixels. The computer used for processing was a PC running Microsoft Windows 2000. The rectangular object was made of wood and painted black. B. Image processing We captured the image from the camera using OpenCV [1] functions. On the captured image, which can be seen in Figure 1, we used the Canny edge detection algorithm, which is implemented in OpenCV [1]. In order to find the
Fig. 1 Captured image
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 947–949, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
rectangle in the picture, we first needed to detect straight lines. We did this with the function cvHoughLines2 with the CV_HOUGH_PROBABILISTIC parameter, which returns a sequence of line segments. This function is implemented in OpenCV [1]. The detected line segments, drawn in different colors, can be seen in Figure 3. Figure 2 shows the output of the Canny algorithm. The Hough transform, the Canny edge detection algorithm and their effects are described in [2].

Fig. 2 The result of the Canny edge detection algorithm

Fig. 3 Straight lines detected with cvHoughLines2

C. Finding the rectangle
When we had the data on all of the line segments in the picture, we needed to analyze it in order to find the line segments that form a rectangle. We calculated the angles that the line segments form with the x axis and then compared these angles with one another. If the difference between two angles was 90 ± 2 degrees, we calculated the following parameters:
• the shortest distance between the ends of the line segments
• the length of both line segments
• the angle between the line segments
• the orientation of the angle that the line segments form
• the position of the vertex of this angle
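The angle computation and the right-angle test can be sketched as follows (plain Python for illustration, not the authors' C++/OpenCV code; the 2-degree tolerance follows the text):

```python
import math
from itertools import combinations

def segment_angle(seg):
    """Angle (degrees, in [0, 180)) that a segment ((x1, y1), (x2, y2))
    forms with the x axis."""
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def right_angle_pairs(segments, tol=2.0):
    """Return index pairs of segments whose angles differ by 90 +/- tol degrees."""
    angles = [segment_angle(s) for s in segments]
    pairs = []
    for i, j in combinations(range(len(segments)), 2):
        diff = abs(angles[i] - angles[j])
        diff = min(diff, 180.0 - diff)   # undirected angles, modulo 180 degrees
        if abs(diff - 90.0) <= tol:
            pairs.append((i, j))
    return pairs

segs = [((0, 0), (10, 0)),    # horizontal
        ((0, 0), (0, 6)),     # vertical: forms a right angle with segment 0
        ((0, 0), (5, 5))]     # 45 degrees
print(right_angle_pairs(segs))   # -> [(0, 1)]
```

For each such pair the real program then records the gap distance, segment lengths, and the position and orientation of the candidate corner, as listed above.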
We saved these parameters in a structure representing a right angle. If the orientations and the vertices of multiple angles were very close together, we averaged these right angles. We averaged all the parameters except the lengths of the two line segments, for which we kept the longest values. Angles whose shortest distance between the ends of the line segments was longer than half of the longest line segment could not be part of a rectangle and were therefore eliminated. If two angles lie on the same line and their orientations are correct, they form a side of a rectangle. We therefore checked each ray of each angle to see whether a vertex of another angle lies on it. If there was another angle, we compared the orientations; if the difference in orientations was ±90 degrees, the two angles formed a side of a rectangle. Whether this side was the longer or the shorter one, we found out by comparing the lengths of the line segments: if the line segment lying on the ray we were checking was longer than the other line segment of the angle, then that side of the rectangle was the longer one. We only searched for the shorter sides of the rectangle, because the user would probably be touching the longer sides. One side of the rectangle and the information about the model is enough to calculate the position of the center and the orientation of the rectangle. The dimensions of the model were scaled to fit the short side found by the algorithm. If two short sides were found, we calculated the position and orientation with both of them and averaged the results. III. RESULTS The system calculated the position and orientation of the rectangular object even if someone was holding it with his hand. The system could not detect the object if it was moving too fast, if it wasn't parallel with the plane it was lying on, or if the lighting was inadequate. The system runs at 15 to 20 frames per second. The output of the program can be seen in Figure 4.
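The final step, recovering the center and orientation of the rectangle from a single detected short side and the known model dimensions, can be sketched as below. For simplicity this sketch assumes the rectangle lies on a known side of the detected segment; the actual program resolves this from the orientation of the corner angles:

```python
import math

def rectangle_pose(short_side, model_short, model_long):
    """Estimate center and orientation of a rectangle of known proportions
    from one detected short side.

    short_side: ((x1, y1), (x2, y2)) endpoints of the detected short side
    model_short, model_long: known model dimensions (same units as the image)
    Assumes the rectangle lies to the left of the directed side (x1,y1)->(x2,y2).
    """
    (x1, y1), (x2, y2) = short_side
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # midpoint of the short side
    dx, dy = x2 - x1, y2 - y1
    n = math.hypot(dx, dy)
    scale = n / model_short                         # scale model to detected side
    # Unit normal pointing into the rectangle (left of the directed segment).
    ux, uy = -dy / n, dx / n
    half_long = scale * model_long / 2.0
    cx, cy = mx + ux * half_long, my + uy * half_long
    orientation = math.degrees(math.atan2(uy, ux))  # direction of the long axis
    return (cx, cy), orientation

center, theta = rectangle_pose(((0, 0), (4, 0)), model_short=4, model_long=10)
print(center, theta)   # -> (2.0, 5.0) 90.0
```

When both short sides are found, the program runs this computation for each and averages the two results, as described above.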
Fig. 4 Rectangle detected by the system

IV. DISCUSSION The system worked well if the lighting was good and if the object wasn't moving too fast. This could be improved with higher shutter speeds, which would, however, also affect the sharpness and brightness of the image. This system could be used as a cheaper alternative to OPTOTRAK. Using this system, a simple rehabilitation method could easily be developed. The display of the PC would show a reference object and the object in the patient's hand. The patient would then have to move the object he is holding into the position indicated by the reference object. The reference object would move around the screen and the patient would have to follow it. The progress of the patient would be measured by calculating the mean square distance between the objects and the mean square difference in the orientations of the objects over a certain amount of time. The faster the reference object moved around the screen, the more difficult the exercise would be. The system would be much more useful if it worked in three dimensions. This application indicates that determining the position and orientation of a rectangular box in three dimensions could be done with algorithms similar to the ones used in our program. Each face of the rectangular box would have to be a different color and a color camera would be necessary.

REFERENCES
1. OpenCV library at http://sourceforge.net/projects/opencvlibrary/
2. Russ JC (1995) The Image Processing Handbook. CRC Press, Boca Raton
3. Sveistrup H (2004) Motor rehabilitation using virtual reality. Journal of NeuroEngineering and Rehabilitation 1:10

Author: Jaka Katrasnik
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Trzaska cesta 25
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
A Hierarchical SOM to Identify and Recognize Objects in Sequences of Stereo Images
Giovanni Bertolini, Stefano Ramat, Member IEEE
Dip. Informatica e Sistemistica, University of Pavia, Pavia, Italy

Abstract— Identification and recognition of objects in digital images is a fundamental task in robotic vision. Here we propose an approach based on clustering of features extracted from the HSV color space and depth, using a hierarchical self-organizing map (HSOM). Binocular images are first preprocessed using a watershed algorithm; adjacent regions are then merged based on HSV similarities. For each region we compute a six-element feature vector: median depth (computed as disparity), median H, S, V values, and the X and Y coordinates of its centroid. These are the input to the HSOM network, which is allowed to learn on the first image of a sequence. The trained network is then used to segment other images of the same scene. If, on the new image, the same neuron responds to regions that belong to the same object, the object is considered as recognized. The technique achieves good results, recognizing up to 82% of the objects.

Keywords— Artificial vision, hierarchical SOM, binocular
I. INTRODUCTION Object identification and recognition plays a fundamental role in human interactions with the environment. Building artificial systems able to automatically understand images is one of the greatest challenges of robotic vision [1]. Image segmentation is the first important process in many vision tasks since it is responsible for dividing an image into homogeneous regions so that the merging of two adjacent regions would produce a non homogeneous region [2]. Humans often achieve recognition using semantic characteristics to group parts of complex objects; an approach that goes well beyond the homogeneity of a single feature of the pixels in the image. The algorithm described here represents a hybrid approach to image segmentation, but it can be broadly considered as belonging to the class of region merging processes. Once the sought objects are identified and correctly segmented, the problem of recognition is often formulated as that of finding a description suitable for comparing the objects across the different frames or for building models that the objects in subsequent images have to match [3]. The aim of this work is to present a way to process natural binocular images for distinguishing meaningful objects from the background and storing a description of these objects for
recognizing them in subsequent frames. This can be seen as a two-step problem: segmentation and recognition. The key idea of our system is to store the description of the objects in terms of the rules learned for identifying them in the first image and then use these rules to segment the subsequent frames. This approach allows us to combine segmentation and recognition. II. ALGORITHM OVERVIEW Despite the importance of segmentation and the great amount of literature on this topic, there is currently no optimal solution to it. The reasons for this lack are well explained by Fu and Mui in [4]: "the image segmentation problem is basically one of psychophysical perception and therefore not susceptible to a purely analytical solution". Thus, we reasoned that trying to mimic some of the putative processing of the Central Nervous System (CNS) in building such psychophysical perception could provide interesting hints for solving the problem at hand. It is currently believed that the CNS processes information in retinal images through parallel neural pathways that build up a description of different features of the visual scene. These can then be combined at a higher level to obtain the visual perception of the world as we see it [5]. Such processing allows the CNS to dynamically build different interpretations of the environment by varying the weights it assigns to the different features. The approach to segmentation described in this work combines information on color, edges and depth of the image, which are computed using independent algorithms, to build a first segmentation. The combined output of the above procedures is used to train a hierarchical self organizing map network that builds the final segmentation by merging the regions that it assigns to the same cluster. Using this approach the network also stores a distributed model of the observed world, which works as an "expected image" to be used in the analysis of the subsequent frames.
Overall, the described processing allows us to produce a segmentation while simultaneously recognizing the objects present in the observed scene. To acquire the binocular images, we developed a simple vision system using two commercial web-cams with aligned
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 977–981, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
optical axes, so that the two images differ only by a translation perpendicular to such axes. Since depth information is available only for the part of the visual scene that is visible in both images, we applied all the processing techniques described in the following only to this common region. III. EXTRACTION OF FEATURES We chose to combine color space information with edge information derived from the grayscale version of the images, and with depth information derived from the comparison of each image pair acquired by the binocular camera system. As we reasoned that it could be profitable to imitate the CNS approach to processing visual information, the extraction of each of these three features is performed by a different algorithm, so that the overall algorithm could be run in parallel. A. HSV color space Although color is usually represented in terms of its intensity at the red, green and blue wavelengths (RGB space), such coding may not be the most appropriate for image segmentation because, in that space, the distance between two colors does not resemble that perceived by humans [2]. RGB coding therefore does not allow one to reliably establish the similarity between two colors, which may instead be a useful criterion for determining whether two regions pertain to the same object in the scene. We therefore decided to transform the data into the HSV (Hue, Saturation, Value) color representation, which separates color and intensity information, making it especially efficient in representing similarities when non-uniform illumination creates differences between pixels of the same surface. In HSV space, color information is represented by the hue (the dominant wavelength in the spectral distribution) and the saturation (the purity of the color), while the value is the intensity. B. Edge detection and watershed Edges, which can be identified as discontinuities in the grey level of the pixels in the image, are detected using a Sobel filter applied to the grayscale image.
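The Sobel filtering step can be sketched as follows (a direct, unoptimized NumPy version for illustration only):

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude of a grayscale image using the 3x3 Sobel kernels.
    img: 2-D float array; returns an array of the same shape (zero border)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)   # horizontal derivative
            gy[i, j] = np.sum(patch * ky)   # vertical derivative
    return np.hypot(gx, gy)

# A vertical step edge: the gradient is strongest along the step.
step = np.zeros((5, 5))
step[:, 3:] = 1.0
g = sobel_gradient(step)
```

The resulting gradient map `g` is what feeds the watershed stage described next.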
The resulting gradient map is then used as the input of the watershed algorithm [6], which builds an over-segmented map. The output of this processing step is an early, raw segmentation of the image that is the basis for all subsequent elaborations. It is therefore important not to overlook any boundary between regions, even though this may lead to an over-segmented output. For this reason the watershed algorithm, which intrinsically produces an over-segmentation since it returns all the boundaries as having the same 'height', appeared to be a good approach to this task. C. Depth estimate Knowledge of the distance of every point in the image from the cameras can provide valuable additional information in the image analysis process and in identifying the objects in the scene. The problem faced by algorithms that compute depth from visual information (stereo vision algorithms) is that they need to know the correspondences between the pixels of the two images, i.e. the two projections needed for 3D reconstruction. A stereo vision algorithm outputs a map of the size of the common region discussed in section II. In this map each element represents a disparity value, that is, the difference between the positions of two corresponding pixels in the two images: the larger the disparity, the closer the object is to the cameras. Based on a recent work by Di Stefano and colleagues [7], the proposed algorithm attempts to estimate the disparity of a point in the reference image by computing the sum of absolute differences (SAD), in gray level, between a square window centered on the pixel of interest (the reference window) and an equally sized window (the sliding window) sliding along the corresponding scan line of the other image (the search image). In addition, following the suggestions in [8], we developed a multiple windows approach combining information over areas larger than the single window, which allows us to determine the matching pixel with greater confidence. To estimate the uniqueness of each match, besides the tests of sharpness and distinctiveness suggested in [7], we used the following additional criterion. For each scan line in the reference image we apply an iterative procedure that finds the collisions (multiple matches, i.e.
groups of pixels of one image that correspond to the same pixel of the other image), keeps only the pair of pixels with the best SAD value, and replaces the 'losing' matches with their successive candidates within a subset of their best four solutions. This step can be seen as implementing a global constraint over each row of pixels, yet it is applied only to a subset of locally established matches. Two more procedures scan the resulting disparity map to remove unreliable matches pertaining to low-texture areas and to fill missing matches caused by occlusions. D. Merging of regions To perform a first merging of the small regions generated by the watershed, we implemented a slightly modified version of an algorithm [9] that merges adjacent regions based on a local measure combining the difference in color
and the mean intensity of the edges that separate two regions. At each step the variance of the gray level within the new region is calculated, and a threshold on variance is computed from the merging history of the region itself. This threshold is then used to decide when to exclude a region from the merging. In the original algorithm [9] the color difference is measured as the distance between hue levels and is combined with edge intensity, weighting them 80% and 20%, respectively. However, the HSV color space has some singularities that make the hue level less reliable as the saturation diminishes [2]. To avoid this pitfall, we chose to use an adaptive algorithm for selecting the weights of the two components of the similarity measure. Before beginning the merging process we subtract the background (the pixels that lie further than a chosen depth threshold). This forces the algorithm to consider the background as a single element, speeding up the merging. The resulting region map is made up of regions that represent homogeneous and possibly meaningful parts of the objects in the scene. Each identified region is then summarized as a six-value feature array containing the coordinates of its centroid and the median of each of the four computed features: hue, saturation, value and disparity. These feature arrays represent the input to the HSOM network, which produces the final output of the recognition process. IV. HSOM NEURAL NETWORK The Self Organizing Map [10] belongs to the class of unsupervised learning neural networks. SOMs are basically a clustering tool, able to build a map of the distribution of the input data, grouping them in clusters topologically ordered within the SOM structure. SOMs are a popular choice for clustering problems in image segmentation, which can be seen as a clustering process where each cluster encloses the portion of the feature space that represents a homogeneous region.
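Returning to the stereo matching of section III.C, the basic SAD window search, without the multiple-window and uniqueness refinements, can be sketched as follows (an illustrative single-pixel version):

```python
import numpy as np

def disparity_at(ref, search, row, col, win=1, max_disp=8):
    """Estimate the disparity of pixel (row, col) of the reference (left)
    image by minimizing the sum of absolute differences (SAD) between a
    (2*win+1)^2 reference window and windows sliding along the same scan
    line of the search (right) image."""
    r0, r1 = row - win, row + win + 1
    c0, c1 = col - win, col + win + 1
    ref_win = ref[r0:r1, c0:c1].astype(float)
    best_d, best_sad = 0, float("inf")
    for d in range(max_disp + 1):
        if c0 - d < 0:                   # sliding window left the image
            break
        cand = search[r0:r1, c0 - d:c1 - d].astype(float)
        sad = float(np.abs(ref_win - cand).sum())
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic pair: the right image equals the left shifted 3 pixels to the
# left, i.e. a uniform disparity of 3.
left = (np.arange(7 * 16).reshape(7, 16) * 7) % 13
right = np.zeros_like(left)
right[:, :13] = left[:, 3:]
print(disparity_at(left, right, row=3, col=8))   # -> 3
```

The real system repeats this search for every pixel of the common region and then applies the collision-resolution and filtering passes described above.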
On the other hand, one of the main shortcomings of SOMs in our context is that they require the user to define the number of neural units before the segmentation begins, although this affects the number of regions in the result, which is a priori unknown. In our work we use a Hierarchical Self Organizing Map, an evolution of the classical SOM that tries to overcome this shortcoming by building a hierarchical structure in which each layer is a single-layer SOM. The main idea is that it is possible to segment an image by grouping its features at different levels of analysis, supposing that each layer of the HSOM can take a viewpoint from a different scale [11]. This can be very useful in an object identification task since it allows us to
gradually group the elements composing the objects without defining the homogeneity criterion a priori. Moreover, this hierarchical structure grows during the segmentation process, thus partially overcoming the limit imposed by the fixed number of neurons of classical SOMs. The inputs to the first layer of the network are the six-element feature vectors characterizing each identified region. The first four are the medians of the H, S and V components of the HSV color representation and of the depth over the pixels of each region, while the last two are the X and Y coordinates of the centroid of the region. Thus, half of the feature space accounts for the color description, while the other half represents the position of the regions in 3D space. The size of each SOM layer is N×N, where N depends on the number n of input vectors (i.e. the number of regions for the first layer) through the formula:

N = 0.8 ∗ n    (1)
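Taking Eq. 1 at face value, the dynamic growth of the hierarchy can be sketched as below; note that `winning_counts` is a hypothetical input, since in the real system the number of winning neurons only becomes known after each layer is actually trained:

```python
import math

def layer_size(n):
    """N for an N x N SOM layer with n input vectors (Eq. 1 as printed),
    clamped so the hierarchy can terminate at the 3 x 3 final layer."""
    return max(3, math.ceil(0.8 * n))

def hsom_layer_sizes(n_regions, winning_counts):
    """Sizes of successive HSOM layers. winning_counts[i] is the (assumed
    known) number of layer-i neurons that won for at least one input;
    growth stops once a 3 x 3 layer is reached."""
    sizes = [layer_size(n_regions)]
    for w in winning_counts:
        if sizes[-1] <= 3:
            break
        sizes.append(layer_size(w))
    return sizes

print(hsom_layer_sizes(20, [9, 3]))   # -> [16, 8, 3]
```

Each layer's inputs are the weight vectors of the previous layer's winning neurons, so the layer size shrinks as the winning count shrinks, down to the final 3×3 map.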
One of the main drawbacks of this approach is that it does not take into account the size of the regions when building clusters. This could lead to errors in cluster formation, because small regions are usually less reliable than larger ones. To enhance the role of large regions, we increase the number of times that their vectors are presented to the HSOM in proportion to their area. Once the first layer is trained, the weight vectors of each neuron that won for at least one input become the input vectors for the second layer. The size of the second layer is also determined using Eq. 1, where n is now the number of winning neurons in the first layer. Thus, the network grows dynamically, adding new layers until the desired size of a 3×3 neuron layer is reached. The choice of nine neurons in the final layer, allowing up to nine different objects to be recognized, is empirical and depends on the structure of our images, which usually contained at most three objects. At the end of this process the trained network has learned a description of the visual scene that can be used to recognize the same objects in other frames of the same scene. V. EXPERIMENTAL RESULTS Following the idea suggested by Smith et al. [12], we consider an object shown in different images as recognized if it is correctly labeled in all the images and if its position in the different images is correctly estimated. We therefore decided to evaluate the whole system only in terms of its object recognition performance, which is the overall goal of the approach proposed here. To this end, we acquired sequences of stereo images of a fixed visual scene while moving the binocular system along
a straight line perpendicular to the optic axes of the cameras, by a predetermined amount between two frames. Therefore, when an object is detected in one image, the estimate of its position in another image of the sequence can easily be computed. This estimate is used for evaluating the correctness of the position of the recognized objects, and for modifying the weights of the neurons to move them according to the information on the displacement of the cameras. Our "recognition test" can be summarized as follows:
1. Train the HSOM with one image of the sequence
2. Modify the coordinates stored as network weights using the information about the displacement of the cameras
3. Use the trained network for the segmentation of another image of the sequence, identifying the main region generated by each neuron in each image
4. Compute a dissimilarity measure (Eq. 2) between each possible pair of regions belonging to the two images
5. Match each region in the training image with the region having the lowest dissimilarity measure in the test image
6. Consider as recognized only those regions of the training image that match regions of the test image identified by the same neuron and having a dissimilarity measure I below a predefined threshold
We repeated this procedure by training the network on each image in the sequence and testing the trained network on all the other images in the sequence. The dissimilarity measure I consists of two parts that consider the variation of the area and the error in the centroid position, respectively:

I = \frac{\left| Area(R_i) - Area(R'_j) \right|}{Area(R_i)} + \frac{\left\| Centroid_e(R_i) - Centroid(R'_j) \right\|}{\left\| Centroid_e(R_i) - Centroid(R_i) \right\|}    (2)
where R represents regions in the training image and R' regions in the test image; i and j represent the i-th and j-th neuron of the network, respectively, while the subscript e refers to the expected centroid position based on camera motion information. We consider that an object represented by R_i is recognized if the minimum of I is less than 2 and is obtained for j = i. The recognition performance is then evaluated as the percentage of recognized objects over the total number of expected objects (based on the number identified in the training image), and in terms of the mean error in area and in centroid position. We tested our system on three sequences of six binocular images each; two sequences showed three objects in the foreground, while the other showed only two. Due to the arrangement of our binocular system, the foreground is defined as the portion of the observed scene lying between 50 cm and 150 cm from the cameras. Altogether we tested our system on a total of 18 training images, 90 test images and 240
Fig. 1 Images 1, 2 and 5 of the 2nd sequence (a, b, c) and the respective output of the algorithm (d, e, f). Each color represents a label, i.e. an object. The network is trained only on the first image (a).
objects that had to be recognized. The system recognized 82% of the objects, with an average error of 14% in terms of area and of 25% in the position of the centroids. VI. CONCLUSIONS We developed a binocular vision system for detecting unknown objects in the foreground of the visual scene and recognizing them in other views of the same scene. This is achieved using an HSOM network combining information from color space, edges and depth derived by independent algorithms. The HSOM shows good clustering ability, almost always correctly grouping the regions that compose the objects. The information stored in the network allows recognition of the same objects in the other frames of the sequence. Thus, the network weights represent an "expected image", helping both segmentation and recognition. Our preliminary results appear to be very promising. Future developments of this work will consider continuously adjusting the description of the scene by moving the HSOM neurons, partially retraining the network on each new image pair, and using camera motion information provided by inertial sensors. The ultimate goal will be to detect and track unknown objects in an unconstrained environment.
REFERENCES
1. Kragic D, Björkman M, Christensen H, Eklundh J (2005) Vision for robotic object manipulation in domestic settings. Robotics and Autonomous Systems 52:85-100
2. Cheng HD, Jiang XH, Sun Y, Wang J (2001) Color image segmentation: advances and prospects. Pattern Recognition 34:2259-2281
3. Roy SD, Chaudhury S, Banerjee S (2004) Active recognition through next view planning: a survey. Pattern Recognition 37:429-446
4. Fu KS, Mui JK (1981) A survey on image segmentation. Pattern Recognition 13:3-16
5. Kandel ER, Schwartz JH, Jessell TM (1999) Principles of Neural Science. Elsevier
6. Beucher S (1992) The watershed transformation. Scanning Microscopy
7. Di Stefano L, Marchionni M, Mattoccia S, Neri G (2004) A fast area-based stereo matching algorithm. Image and Vision Computing 22:983-1005
8. Hirschmüller H (2002) Real-time correlation-based stereo vision with reduced border errors. International Journal of Computer Vision 47:229-246
9. Navon E, Miller O, Averbuch A (2005) Color image segmentation based on adaptive local thresholds. Image and Vision Computing 23:69-85
10. Kohonen T (1982) Self-organized formation of topologically correct feature maps. Biological Cybernetics 43:59-69
11. Bhandarkar SM, Koh J, Suk M (1997) Multiscale image segmentation using a hierarchical self-organizing map. Neurocomputing 14:241-272
12. Smith K, Gatica-Perez D, Odobez JM, Ba S (2005) Evaluating multi-object tracking. IEEE Conference on Computer Vision and Pattern Recognition

A Hierarchical SOM to Identify and Recognize Objects in Sequences of Stereo Images
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A model arm for testing motor control theories on corrective movements during reaching
D. Curone, F. Lunghi, G. Magenes and S. Ramat
Dip. Informatica e Sistemistica, Università degli Studi di Pavia, Pavia, Italy
Abstract— Based on a simple robotic toolkit, we have developed a robotic arm control system to be used as a humanoid benchmark for testing trajectory planning models and control hypotheses for both reaching and corrective movements during reaching. The developed system integrates visual sensory feedback of the end-effector of the arm, allowing its movement to be controlled online. The system may operate in: 1) open-loop configuration, providing the servos at the joints of the robot with one point of an initially planned trajectory every 20 ms; 2) correction closed-loop configuration, using visual feedback only for planning the corrective movement trajectory; or 3) continuous closed-loop configuration, using initial conditions derived from visual feedback information and computing the next trajectory point at every time step. Although the planning and control of reaching movements has been extensively investigated, not much is known about the planning of corrective movements. The research tool we developed will be used to implement and test different trajectory planning and movement control models.
Keywords— Anthropomorphic robotics, movement planning, biomimetic artifacts
I. INTRODUCTION

When planning a movement, the central nervous system (CNS) has a redundant number of degrees of freedom (DOF) at its disposal, and the problem of determining the joint angles that take the end effector to the target is thus ill-posed. Also, the CNS may choose any one trajectory (determining hand position as a function of time) among the infinite possible trajectories that would allow reaching the intended target. The same occurs if the movement is performed on a planar surface, reducing the dimensionality of the space to two. Yet, experimental results have shown that planar reaching movements are characterized by a number of invariants common to all normal subjects, e.g. the quasi-linearity of the trajectory of the end-effector (the hand) and the unimodal, bell-shaped velocity profile of the hand [1]. Many studies have thus faced the 'reverse engineering' problem of understanding which principles may govern the selection of the trajectory to be carried out. It is generally accepted that the CNS plans the trajectory based on an optimization principle, that is, it determines the trajectory that minimizes some cost function. Different models able to produce reasonably human-like trajectories have been proposed in the scientific literature, such as the minimum jerk model [2] and the minimum torque-change model [3]. These models can be, and have been, used to plan the movement trajectories of humanoid robots, both because they produce human-like behavior and because of their intrinsic ability to reduce the stress on the actuators by producing smooth movements. On the other hand, reaching movements are not ballistic movements that are preprogrammed and carried out in an open-loop context, without the possibility of intervening on the planned trajectory to modify it. A number of studies have proved the ability of the CNS to correct the planned trajectory following a target jump during pointing and reaching movements [4-6]. Moreover, recent studies [7] have highlighted the ability of the CNS to make online trajectory corrections when a visible target is displaced, even unconsciously, at the beginning of the reaching movement. An interesting question is therefore that of understanding how hand trajectories are modified online when the intended target is displaced or follows an unpredictable motion pattern. Does the CNS compute a new trajectory using an optimization model with new initial conditions imposed by the ongoing movement, or is the correction controlled through different mechanisms? An important advantage of model-based representations is that they can be implemented and tested, and thus provide a benchmark for the theories and hypotheses they rely upon. One constraint imposed on any humanoid model to be used for studying motor responses to target displacements is that of human reaction times. Experimental findings have shown that humans can respond to visual target displacements within about 110 ms [5;6] and that they are able to correct the direction of the trajectory in about 250 ms [8;9].
Our goal in the work described here was to build one such benchmark model, to be able to test motor control theories on reaching movements through the reproduction of real experimental conditions. The following describes our software and hardware implementation of a three DOF robotic arm control system integrating online visual sensory information on target and hand location. Initially we implemented motor planning based either on the minimum jerk [2] or on the minimum variance model [10], both for preprogramming and for online correction of the end-effector trajectory. For completing this task we employed very limited economic resources, which represented our cost function in this context.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 986–989, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. MATERIALS AND METHODS

A. Hardware

The toolkit we used was a Lynx6 (Lynxmotion, Inc.) five DOF robotic arm. The three degrees of freedom of the arm that were exploited in this project are actuated by four Hitec HS475 servo motors, two moving the shoulder, and one each for the elbow and wrist joints. The robotic arm has an arm length of 12.0 cm, a forearm of 12.0 cm and a hand of 4.7 cm. The servos are controlled in position using a PWM code with a pulse cycle of 20 ms and a range of pulse duration between 0.5 and 2.5 ms. The angular excursion of each servo is about 200 deg; thus a discretization of the 2 ms maximum pulse duration in 1000 steps of 2 μs each yields angular increments of 0.2 deg, which allow for a good accuracy in determining the angular position of each servo and thus the position of the hand in space. The angular velocity of each joint is therefore controlled by modulating the increments of pulse duration that are sent to the servos every 20 ms. The minimum velocity for each servo is therefore 0.2 deg in 20 ms, or 10 deg/s. To cope with the timing requirements imposed by the experimental results, we chose to avoid the controller provided with the modeling kit and decided to directly drive the motors through the digital output lines of a National Instruments PCI 6224 card. The 'visual' sensory feedback is provided by an infra-red (IR) camera system with its controller hardware, a Hamamatsu PHS controller C2399, which provides the X and Y planar coordinates of an IR emitter (typically an IR LED) in its workspace. The controller outputs two voltages (range −5 V to +5 V) that are proportional to the X and Y coordinates of the LED mounted on the end-effector tip. These are acquired through two analog input lines of the PCI 6224 card. The overall system is pictured in Figure 1.

Figure 1. A: overview of the complete system with the IR camera providing the visual feedback on hand position. B: detailed view of the robotic arm.

B. Software control
In order to meet the timing requirements imposed by the need to close the loop of the control system and intervene on the servo commands to correct the overall end-effector trajectory, we have developed a custom software architecture comprising three subsystems:
• A Matlab interface allowing the user to set the movement parameters (2 DOF or 3 DOF, target position, timing, etc.) and to graphically visualize the performance of the system by comparing it to the ideal trajectory.
• A C++ ActiveX server providing the procedures for planning the end-effector trajectory. The higher computational efficiency of this language allowed us to develop a component that closes the arm control loop. It is able to use feedback sensory data (currently vision only) in planning the new position of the end-effector, determine the corresponding joint positions, and compute and send the new command to the servos within the 20 ms interval between one command and the following one.
• A LabView dynamic link library (dll) component that is invoked by the ActiveX module and interacts with the PCI 6224 card.
C. Software architecture

Overall, the user interacts with the Matlab program, sets the parameters of the desired trajectory, defines, if needed, where and when the target will be displaced within the working area, and selects the operating mode of the control: open loop, correction closed loop or continuous closed loop. In the first mode the planned trajectory is sent, one command every 20 ms, to the servos without considering the visual feedback of hand position. In correction closed loop mode, instead, the visual feedback provides information on the ongoing movement only when a displacement of the target is perceived. Then, a new trajectory is planned based on the new target position and on the initial conditions determined by the ongoing movement. In continuous closed loop mode, the visual feedback information is used together with the target position to compute a new trajectory at each 20 ms step. The trajectory is planned in Cartesian coordinates using either the minimum jerk model or the minimum variance model implemented in the ActiveX control. The general solution of the minimum jerk model implemented in the ActiveX component is (for each movement dimension):
x(t) = v0*t + (a0/2)*t^2 + (10*l/d^3 − 6*v0/d^2 − 3*a0/(2*d))*t^3 + (−15*l/d^4 + 8*v0/d^3 + 3*a0/(2*d^2))*t^4 + (6*l/d^5 − 3*v0/d^4 − a0/(2*d^3))*t^5    (1)
where d is the movement duration, l is the distance to the target, v0 is the initial velocity and a0 the initial acceleration. The final velocity and acceleration are always set to zero. For planning the correction allowing the displaced target to be reached, v0 and a0 are computed based on the visual feedback. This is accomplished using the current sample and one or two previous data points for computing the average velocity and acceleration, respectively. At each step, the next position to be reached is then transformed into joint coordinates through an inverse kinematic model. While for the two DOF configuration we have implemented the analytical solution (Cartesian coordinates are univocally mapped onto the two joint coordinates of the arm), for the three DOF configuration the inverse kinematic problem is solved using a multilayer perceptron (MLP) neural network previously trained offline. The target joint configurations used for the training of the MLP were obtained through numerical optimization procedures. This step therefore determines the angular positions of the servos allowing the next planned position to be reached. The desired position of each joint is coded in the 1 to 1000 scale of the servos and sent, every 20 ms, to the LabView dll, which encodes it in terms of the duration of the square wave controlling the servos.

III. RESULTS

When facing a target displacement, the described system computes and sends the commands for correcting the trajectory within two sampling intervals (40 ms), which is much shorter than the reaction times reported in the literature [4;7]. Therefore the timing performance of the system is more than sufficient to realistically reproduce the features of human movements in terms of response times to perturbations of the experimental conditions. Thus, to reproduce the timing aspects of the different experimental conditions, we have added a further parameter that specifies the latency of the correction of the arm commands.

Open loop mode. The planned minimum jerk trajectory and the recorded one for a diagonal movement on the XY plane are shown in Figure 2. The arm moves from point (0,10) to the target in (18,18) with a diagonal movement of amplitude 19.7 cm. We chose to plan the movement so as to accomplish it in one second. The data show how accurately the end-effector follows the planned trajectory, resulting in a mean squared error over the whole movement of 0.07 and 0.03 cm along the X and Y axis, respectively.

Figure 2. Planned and recorded minimum jerk trajectories for a 19.7 cm diagonal movement on the XY plane. Open loop. Top panel: end-effector position traces. Bottom panel: velocity traces.

Correction closed loop mode. Figure 3 shows the planned and the recorded trajectory of a reaching movement during which the target is displaced 8 cm to the right 600 ms after movement onset. The initially planned trajectory had only an X component, to move the end-effector from its starting point (0,10) to the target's initial position in (0,18). The target displacement to the right causes the system to plan a new Y component, also following the minimum jerk model, which begins after about 720 ms and brings the end-effector on target. The mean squared error between the performed and the planned movement was 0.03 cm on the X trajectory and 0.09 cm on the Y trajectory. Finally, Figure 4 shows the behavior of the end-effector during a movement in which the target is displaced further along the same direction of the initially planned trajectory.
Figure 3. Planned and recorded trajectory for a reaching movement with lateral target displacement. The movement sets off to reach a straight-ahead target, which jumped 8 cm to the right 600 ms after the onset of the trial. Top panel: end-effector position traces. Bottom panel: velocity traces.
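Equation (1) and the correction scheme described above can be sketched in a few lines; this is our illustrative reconstruction, not the authors' ActiveX code, and the finite-difference estimation of v0 and a0 from the fed-back samples is an assumption based on the description in the text:

```python
def min_jerk_position(t, d, l, v0=0.0, a0=0.0):
    """Minimum jerk position at time t for one movement dimension (Eq. 1).

    d: movement duration, l: distance to the target,
    v0, a0: initial velocity and acceleration; the final velocity
    and acceleration are zero by construction.
    """
    c3 = 10*l/d**3 - 6*v0/d**2 - 3*a0/(2*d)
    c4 = -15*l/d**4 + 8*v0/d**3 + 3*a0/(2*d**2)
    c5 = 6*l/d**5 - 3*v0/d**4 - a0/(2*d**3)
    return v0*t + 0.5*a0*t**2 + c3*t**3 + c4*t**4 + c5*t**5

def estimate_v0_a0(x_prev2, x_prev1, x_now, dt=0.02):
    """Average velocity and acceleration from the last three fed-back samples,
    used as initial conditions when replanning after a target displacement."""
    v_prev = (x_prev1 - x_prev2) / dt
    v_now = (x_now - x_prev1) / dt
    return v_now, (v_now - v_prev) / dt
```

For instance, with d = 1 s and l = 19.7 cm (the open-loop movement of Figure 2), the profile starts and ends at rest and passes through l/2 at t = d/2; with nonzero v0 and a0 it still lands exactly on the target at t = d.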
Figure 4. Position (top) and velocity (bottom) traces from a reaching trial during which the target is displaced in depth with respect to the original position. Mse was 0.04 cm for the X axis component and 0.02 cm for the Y axis.

The movement is therefore prolonged, and the minimum jerk trajectory yields a velocity profile that shows an acceleration of the end-effector corresponding to the movement correction. The corrected movement lasted 200 ms longer than planned for the initial target.

IV. DISCUSSION

The prototype system presented in this work represents our attempt at developing a cheap humanoid robotic model that can be used as a benchmark for testing motor control hypotheses on reaching movements. The system implements a three DOF robotic arm based on a cheap commercial kit using servos for radio-controlled models. Of the original kit, only the hardware of the arm was used in our prototype system, while the control software and hardware were custom developed for this project. The system was coupled to an IR camera system able to output the planar coordinates of an IR LED mounted on the arm end-effector, providing a 'visual' feedback of hand position that is used to control the movement online. Specifically, the system uses the fed-back hand position to estimate its instantaneous velocity and acceleration (computed as a mean over the most recent data samples). This information is in turn used to provide the initial conditions for programming corrective movements following unexpected target displacements, or for continuous closed loop control. Currently, both the initial and the corrective movement trajectories are planned based on the minimum jerk or on the minimum variance model, although we plan to integrate further models in the system. The continuous closed loop configuration of the control system allows reprogramming a new trajectory considering the real hand position at every time step, and thus implementing and evaluating the performance of a next-state planner control [11;12]. Overall, the proposed solution represents a valuable tool for evaluating motor control hypotheses, by allowing easy reproduction of the experimental conditions found in the literature.

REFERENCES
1. Morasso, P., "Spatial control of arm movements," Exp. Brain Res., vol. 42, no. 2, pp. 223-227, 1981.
2. Flash, T. and Hogan, N., "The coordination of arm movements: an experimentally confirmed mathematical model," J. Neurosci., vol. 5, no. 7, pp. 1688-1703, July 1985.
3. Uno, Y., Kawato, M., and Suzuki, R., "Formation and control of optimal trajectory in human multijoint arm movement. Minimum torque-change model," Biol. Cybern., vol. 61, no. 2, pp. 89-101, 1989.
4. Soechting, J. F. and Lacquaniti, F., "Modification of trajectory of a pointing movement in response to a change in target location," J. Neurophysiol., vol. 49, no. 2, pp. 548-564, Feb. 1983.
5. Brenner, E. and Smeets, J. B., "Fast responses of the human hand to changes in target position," J. Mot. Behav., vol. 29, no. 4, pp. 297-310, Dec. 1997.
6. Paulignan, Y., MacKenzie, C., Marteniuk, R., and Jeannerod, M., "Selective perturbation of visual input during prehension movements. 1. The effects of changing object position," Exp. Brain Res., vol. 83, no. 3, pp. 502-512, 1991.
7. Sarlegna, F., Blouin, J., Bresciani, J. P., Bourdin, C., Vercher, J. L., and Gauthier, G. M., "Target and hand position information in the online control of goal-directed arm movements," Exp. Brain Res., vol. 151, no. 4, pp. 524-535, Aug. 2003.
8. van Sonderen, J. F., Denier van der Gon, J. J., and Gielen, C. C., "Conditions determining early modification of motor programmes in response to changes in target location," Exp. Brain Res., vol. 71, no. 2, pp. 320-328, 1988.
9. Castiello, U., Paulignan, Y., and Jeannerod, M., "Temporal dissociation of motor responses and subjective awareness. A study in normal subjects," Brain, vol. 114 (Pt 6), pp. 2639-2655, Dec. 1991.
10. Harris, C. M. and Wolpert, D. M., "Signal-dependent noise determines motor planning," Nature, vol. 394, pp. 780-784, Aug. 1998.
11. Hoff, B. and Arbib, M. A., "Models of trajectory formation and temporal interaction of reach and grasp," J. Mot. Behav., vol. 25, no. 3, pp. 175-192, Sept. 1993.
12. Shadmehr, R. and Wise, S. P., The Computational Neurobiology of Reaching and Pointing, Cambridge: MIT Press, 2005.

Author: Stefano Ramat
Institute: Dip. Informatica e Sistemistica, Università degli Studi di Pavia
Street: Via Ferrata, 1
City: Pavia
Country: Italy
Assessment of hand kinematics and its control in dexterous manipulation M. Veber1, T. Bajd1 and M. Munih1 University of Ljubljana/Faculty of Electrical Engineering, Ljubljana, Slovenia Abstract— The aim of our work was to design a method for assessment and training of human hand dexterity while manipulating an object. A virtual environment was used to display a target object in various poses. The target poses were first recorded for a single person – a virtual trainer. The poses of a real object, held by the subjects included in the investigation, were assessed by a motion tracking device and displayed within the virtual environment. The subjects were asked to align the 3D images of real object and the target object. The target poses were normalized with respect to the different sizes of arms and hands. In this way all subjects were able to reach the desired target postures. Satisfactory repeatability of hand movements was observed in a single subject and across a group of twelve unimpaired subjects. Keywords— Human hand, Dexterity Assessment, Training, Rehabilitation, Virtual environment
I. INTRODUCTION

The human hand is controlled at the central nervous system level [1], where visual and proprioceptive information is processed into a motor action, and also at the peripheral level, where motion is further determined by biomechanical constraints [2]. From the kinematic point of view there are at least two problems which have to be solved in order to perform a motion of the forearm, wrist, and fingers [3]. The first one deals with the selection of a proper trajectory among an infinite number of possible trajectories from the starting to the final hand posture. The second problem is related to the transformation of hand pose into the angles of individual wrist and finger joints. In the studies of human prehension the solution of both tasks is frequently referred to as optimal hand control [4]. The hand transport and preshaping phases of prehension have already been extensively studied [4, 5, 6]. In contrast to the many studies related to the first two phases of prehension, the hand kinematics during the manipulation of a grasped object is not extensively described in the literature. The lack of an appropriate method for the assessment of hand and finger kinematics may be a reason why the dexterous manipulation phase of prehension has not received the attention it deserves. The aim of this paper is to present a virtual environment (VE) system for assessment and training of manipulation performed by fingertips, hand, and forearm. The proposed
approach employs an optical tracking system to assess the poses of hand and arm segments, while only the collective activity of the fingers is acquired. The method enables training of different poses of a target object displayed within the VE with the help of a virtual trainer, and normalization of the poses so that they can be reached by hands and arms of different sizes.

II. METHODS

A. Description of the system

The experimental setup aimed at the assessment of hand kinematics and motion of the object consists of an optical tracking system (Optotrak® Northern Digital Inc.) and an ergonomically designed brace firmly attached to a desk, as shown in Figure 1. The brace is designed to prevent elbow flexion-extension while allowing free pronation-supination of the forearm. The wrist and the fingers, which predominantly influence the dexterity of the hand, are free to move within their range of motion. The angle between the forearm and the desk is adjusted to approximately 40°. Elbow flexion is kept fixed at approximately 55°. Infrared markers are attached to the hand and the object to assess their motion. Three infrared markers are placed on the brace, representing the reference frame. The other markers are placed on anatomical landmarks established by palpation. One marker is attached to the elbow (olecranon), two are placed on the forearm (ulnar and radial styloid process), one is fixed to the wrist (capitate bone), and two markers are attached to the dorsal aspect of the hand (end of 2nd and 4th metacarpals). Six markers are placed on the prismatic object to ensure that at least three markers are visible in every reachable pose. Three-dimensional (3D) positions of markers measured by the camera system are sent via local area network to the client computer for processing and visualization. A virtual reality library, Maverik, is used to develop the VE. The real object is displayed within the VE with an opaque color. At the same time, a semi-transparent target object with bright colored vertices is shown on the screen. The goal of the task is accomplished when the real object displayed within the VE is moved inside the target object. A new pose of the target object is shown when the deviations of position and rotation are reduced below the threshold values of the required accuracy. At that time the color of the displayed real object changes to indicate matching of the poses of both objects. The target poses shown in Figure 2 are chosen based on the poses of objects encountered in different daily activities, such as holding a glass (Figure 2b), fastening a light bulb (Figure 2c) or throwing an object (Figure 2d). The initial pose, which is displayed at the beginning of each trial, is shown in Figure 2a. Once the initial pose is reached, the first target pose is displayed. When the real object coincides with the target object, the subject is instructed to bring the virtual object back to the initial pose. The described sequence is repeated for all target poses of the object. Prior to the tests, subjects are allowed to practice to get accustomed to the task presented in the VE. The display of virtual objects is additionally improved by adding texture gradient and linear perspective.

Fig. 1 The right arm supported by a custom designed brace

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 982–985, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
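The pose-matching criterion described above (a target pose is accepted when the position and rotation deviations fall below accuracy thresholds) might be sketched as follows; the threshold values, the function names, and the use of the rotation-matrix trace for the orientation error are our assumptions, not details given by the authors:

```python
import math

def rotation_angle_deg(R):
    """Angle of a 3x3 rotation matrix R (list of rows), from its trace."""
    tr = R[0][0] + R[1][1] + R[2][2]
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
    return math.degrees(math.acos(c))

def pose_matched(p_real, p_target, R_real_to_target, pos_tol=0.5, rot_tol_deg=5.0):
    """True when both the position error (same units as the inputs) and the
    rotation error are below the required-accuracy thresholds."""
    dp = math.dist(p_real, p_target)
    return dp < pos_tol and rotation_angle_deg(R_real_to_target) < rot_tol_deg
```

Here `R_real_to_target` is the relative rotation between the real and the target object frames; the identity matrix means perfect alignment.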
The scale of VE is also equalized with the scale in the real world. Shallow holes are drilled
into the surface of the real object to ensure repeatability of the position of the fingertip contacts across different trials and subjects. The proposed system is aimed at the training and evaluation of hand dexterity in patients with neuro-muscular diseases. A patient with an impaired hand is prone to compensate for the reduced functionality by using the healthy hand or by moving the whole arm instead of using the wrist and fingertips. The constraint imposed by the brace forces the patient to practice dexterous hand manipulation. However, as the motion of the whole arm is prevented, some target poses of the object might not be reachable by arms and hands of different sizes. For this reason an adaptation of target poses is implemented.

B. Postprocessing

The poses of the coordinate frames (Figure 3) attached to the elbow (HE), the forearm (HF), the dorsal aspect of the hand (HD), and the object (HO) can be estimated from the 3D positions of the markers for each subject. Transformation matrices [7] describing the poses of the forearm with respect to the elbow (TEF), the dorsal aspect of the hand with respect to the forearm (TFD), and the object with respect to the dorsal aspect of the hand (TDO) can be decomposed into the product of position (P) and rotation (R) matrices (PEFREF, PFDRFD, PDORDO). If the axes of the coordinate frames are aligned with the axes of
rotation of the joints, and the origins of the coordinate frames are positioned at the centers of joint rotation, then hand anthropometry and the motion of the joints can be described by the corresponding position and rotation matrices. Each rotation matrix can be represented by a sequence of three rotations around the axes z, x, and y, by the rotation angles roll (R), pitch (P), and yaw (Y). Three sets of RPY angles are calculated from the rotation matrices (REF, RFD, and RDO) comprised in the transformation matrices (TEF, TFD, and TDO). The transformation matrices are assessed at the time instant when the task is completed, i.e. when the real object displayed in the VE is aligned with the target object.

Fig. 2 Hand postures: the initial pose (a), holding a glass - hold (b), fastening a light bulb - screw (c), throwing an object - throw (d)

Fig. 3 Coordinate frames describing postures of elbow, forearm, hand, and object

III. RESULTS

Twelve volunteers with no deficiencies in the functionality of their right hand participated in this study. All subjects were able to reach the target poses of the object. The results are shown as mean values of the RPY angles with the accompanying standard deviations (Figure 4). In panel (a) we present the relative rotations of the forearm with respect to the elbow. The relative rotations of the dorsal aspect of the hand with respect to the forearm are depicted in panel (b), while the collective activity of the fingers with respect to the dorsal aspect of the hand is presented in panel (c). The RPY angles calculated from the rotation matrices can be associated with movements of the elbow, wrist, and finger joints in the following way. P angles calculated from the matrices TEF describe forearm rotation (Figure 3, HF, P). Flexion/extension of the wrist is related to the R angles obtained from the matrices TFD (Figure 3, HD, R), while the Y angles calculated from the matrices TFD describe radial/ulnar deviation of the wrist (Figure 3, HD, Y). The matrices TDO contain information on the collective movements of the fingers. R angles are related to the rotation of the object in the plane parallel to the dorsal aspect of the hand (Figure 3, HO, R). Y angles describe the rotation of the object around the axis which goes through the centre of gravity (COG) of the object and is parallel to the side of the object where the fingertip contacts occur (Figure 3, HO, Y). P angles obtained from the matrices TDO describe the rotation of the object around the axis passing through the tip of the thumb and the COG of the object. The task "hold" requires repositioning the object from the initial into the upright pose. This is achieved primarily by rotating the forearm (P, Figure 4a). Radial/ulnar deviation (Y) and flexion/extension (R) of the wrist (Figure 4b) translate the object to the target position, while fine tuning of the object orientation is achieved by rotating the object in the plane parallel to the dorsal aspect of the hand (R, Figure 4c).

Fig. 4 RPY angles assessed after rotation of arm and hand segments when the real object coincides with a target pose: forearm with respect to elbow (a), dorsal aspect of hand with respect to forearm (b), and object with respect to dorsal aspect of hand (c)
The elbow joint is close to the limit of its range of motion when performing the "screw" movement. The forearm rotates in the opposite direction (P, Figure 4a), and the rotation is notably smaller than in the "hold" movement. A considerable amount of the object rotation is performed by the fingers (R, Figure 4c). The same observation is valid for the movement shifting the object into the "throw" pose. The forearm is rotated even less (P, Figure 4a), while the rest of the movement is
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
distributed between the wrist joint (Figure 4b) and the fingers (Figure 4c). Standard deviations of the RPY angles remain below 10° in almost all cases, except for the forearm rotation when performing screw movements and the P angles extracted from the matrices TFD when performing the hold movements. In these two cases the standard deviations reached 12°.
IV. CONCLUSION

The described VR system for assessment and training of hand dexterity is based on a conventional PC and a motion tracking device. The VE is used to provide augmented feedback, while the haptic information is completely preserved. The poses of the target object are first recorded for one subject (a virtual trainer) and are adapted online for different hand sizes. The system requires minimal intervention when adding new manipulation tasks. The results of the investigation show that a precise model of the hand does not necessarily have to be built to design the tasks. However, several dimensions of the hand have to be acquired online during system start-up in order to perform the normalization. Standard deviations of the rotation angles obtained for the whole group of subjects are relatively high when compared to the range of motion of the wrist and elbow joints. The large standard deviations can be explained by the kinematic redundancy of the hand. When performing, e.g., a screw movement, a subject can choose between rotating the forearm and moving the fingers. The angle of rotation performed by the forearm, fingers, and wrist also depends on the current posture of the hand and especially on the vicinity of the joints to the limits of their range of motion. The method can be implemented with any other cost-efficient motion tracking system, such as a PC-compatible magnetic tracking system. The proposed approach enables offline analysis of a subject's performance. Angles in individual joints cannot be assessed accurately; nevertheless, an indication of control abilities over the elbow and wrist joints can be obtained. It can also be established to what extent the fingers can change the orientation of the object. In this investigation, typical movements of the forearm, hand, and fingers of unimpaired subjects performing manipulation of an object were obtained. These data can serve as a reference in further studies of patients with neuromuscular impairments.

ACKNOWLEDGMENT

This project was supported by the Slovenian Research Agency. The authors are grateful to Gregorij Kurillo for many productive ideas and comments during the work.
Address of the corresponding author:
Author: Mitja Veber
Institute: Faculty of Electrical Engineering
Street: Trzaska cesta 25
City: Ljubljana
Country: Slovenia
[email protected]
Can haptic interface be used for evaluating upper limb prosthesis in children and adults

H. Burger1, D. Brezovar1, S. Kotnik1, A. Bardorfer2 and M. Munih2

1 Institute for Rehabilitation, Linhartova 51, Ljubljana, Slovenia
2 Faculty of Electrical Engineering, Tržaška 25, Ljubljana, Slovenia
Abstract— There is a lack of objective measurement methods for assessing upper limb prosthetic use in adults and children. The aim of the present study was to find out whether a haptic interface could be used for that purpose. Fifty-five adults and twenty-three children were included in the study. All were tested with the UNB observational test and a haptic interface, and filled in one or two questionnaires. The haptic interface showed differences between hands and prostheses; the results depended on the age of the child or adult and correlated with the amputation level and the stump length. Correlations were also found among the results of the haptic interface, the UNB test and the questionnaires. It was not demonstrated that the results of the haptic interface depended on the time from amputation to fitting of the first prosthesis, or on amputation of the dominant hand. It was not possible to test subjects after shoulder disarticulation or very high trans-humeral amputation. The haptic interface seems a promising tool for assessing upper limb prosthetic use in adults and children after trans-radial amputation.

Keywords— upper limb amputation, prosthesis, rehabilitation, outcome measurement, haptic interface
I. INTRODUCTION

The human hand is a very complex organ with many different functions, and its functional integrity is essential for many activities. Upper limb amputation is a great catastrophe for an individual. A primary rehabilitation goal after upper limb amputation is the achievement of maximal functional ability and independence at home, at work or at school, and in the community [1]. Engineers have developed new, sophisticated and also very expensive prosthetic components; however, their benefits for users have not really been explicitly demonstrated. Unfortunately, there are not many measurement methods and outcome measures for assessing rehabilitation outcome after upper limb amputation or the benefits of different prosthetic components, especially not for adults. In general, there are two methods that can be used for the assessment of hand and upper limb prosthetic function: observational tests and questionnaires. By means of an observational test, a person is assessed in a clinical environment, which is not necessarily the same as the home environment. Furthermore, the test is subject to the therapist's judgement and is more time-consuming than questionnaires. Questionnaires are subjective for both adults and children, but less time is needed to fill them in, and subjects report how they perform an activity in their real environment in everyday life. For very young children, the responses are based on parental answers, which are not always completely realistic. There is no absolutely objective measurement method that would measure hand function at the activity level and not only at the level of body function. The aim of the present study was to find out whether a haptic interface could be used for the evaluation of upper limb prosthetic use in children and adults.

II. SUBJECTS AND METHODS

Fifty-five adults with upper limb amputation who had finished rehabilitation at the Institute for Rehabilitation in Ljubljana at least one year prior to the study, and twenty-three children, the sum total of the patients at the children's upper limb prosthetic clinic at the Rehabilitation Institute in Ljubljana, Slovenia, were included in the study. All of them had been fitted with a prosthesis at least one year before the testing. The skill at performing five different tasks in a virtual environment was tested with a haptic interface (robot) as a measuring device. The tasks can be divided into three groups: tasks measuring accuracy (points hitting, linear and circular tracking), a task measuring velocity and accuracy (labyrinth), and a maximal force task. At "points hitting" the subjects had to hit a point which was randomly changing its position. At "linear and circular tracking" the subjects had to follow a ball on a straight or circular line in two directions: firstly outwards or counter-clockwise, and secondly inwards or clockwise. At "labyrinth" the subjects had to go through a labyrinth as fast as possible without hitting its walls. At the "maximal force task" the subjects had to follow a straight line against an opposing force of 50 N.
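The accuracy metrics reported for the tracking tasks (maximal and average deviation from the target trajectory) can be sketched as follows for the circular-tracking case. The exact definitions used by the simulator are not given in the paper, so the function below is an assumed, illustrative implementation:

```python
import numpy as np

def circular_tracking_deviation(points, center, radius):
    """Maximal and average radial deviation of recorded cursor positions
    from a circular target trajectory (same units as the input).

    points : (N, 2) array of sampled cursor positions
    center, radius : the target circle
    """
    dist = np.linalg.norm(np.asarray(points, float) - np.asarray(center, float), axis=1)
    dev = np.abs(dist - radius)  # unsigned deviation of each sample
    return dev.max(), dev.mean()

# Illustrative samples lying 5% outside a unit circle centred at the origin
t = np.linspace(0.0, 2.0 * np.pi, 100)
noisy = np.column_stack([1.05 * np.cos(t), 1.05 * np.sin(t)])
print(circular_tracking_deviation(noisy, (0.0, 0.0), 1.0))
```

The linear-tracking case is analogous, with the point-to-line distance in place of the radial error.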
All the tasks were measured with a custom-built virtual reality simulator consisting of a PHANTOM Premium 1.5 haptic interface and a graphics workstation [2]. The simulator provided visual and tactile (force) feedback to the subjects.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 965–968, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

The healthy (non-amputated) upper extremity was always tested first, followed by the amputated side. All the subjects used their prostheses. All the adults filled in two questionnaires in the form of an interview: firstly the ABILHAND [3], and secondly the part of the OPUS [4] questionnaire on upper extremity function. All the children and adults were tested with the University of New Brunswick (UNB) test of prosthetic function [5], which was specially developed for assessing children. The UNB test assesses ten developmentally based, age-appropriate activities in 2- to 13-year-old children. It is also appropriate for testing adolescents above the age of 13, using the activities for 11- to 13-year-olds [6]. It assesses the spontaneity and skill of prosthetic use. The subtests for 11 to 13 years were used for the adults. For children younger than seven years, their parents filled in the Child Amputee Prosthetics Project – Functional Status Inventory for Preschool children (CAPP-FSIP) [7], while older children filled in the Child Amputee Prosthetics Project – Functional Status Inventory (CAPP-FSI) [8] by themselves.

III. RESULTS

A. Adults

Among the adults there were 42 (76.4%) men and 13 (23.6%) women, 56 years old on average (SD 17 years). Two thirds had trans-radial amputation. At testing, 75% had an esthetic prosthesis, 13.5% a body-powered, 3.8% a myoelectric, and 7.7% a passive prosthesis with a terminal device for work. The measurement with the haptic interface was not possible on the amputated side in both subjects after shoulder disarticulation and in six out of nine subjects after trans-humeral amputation, due to limited ROM in the shoulder joint. In 12 subjects after trans-radial amputation who had an esthetic prosthesis, it was not possible to fix the haptic interface stick properly, and their measurements were excluded from further analysis. Valid measurements on both sides were obtained in 28 adult subjects.
At all five measured tasks, the results were poorer when performed with the prosthesis. Differences were observed in almost all of the measured parameters. Many measured parameters correlated with the clinical parameters, the results of the UNB test, and the questionnaires. The older subjects had greater maximal and average deviation from the tracking trajectory at linear (Figure 1) and circular tracking with the prosthesis (linear: maximal deviation r = .33, p < .01, average deviation r = .28, p < .05; circular: maximal deviation r = .35, p < .05, average deviation r = .27, p < .05). The results achieved by the hand correlated only with the maximal deviation on both tests.

Fig. 1 Results of linear tracking without disturbances in a 74-year-old (upper part) and a 45-year-old (lower part), both with the prosthesis

The subjects with a higher UNB activity score achieved a higher end-point velocity using their hand on both linear (r = .55, p < .05) and circular tracking (r = .63, p < .01), performed both tasks more quickly (linear r = .52, p < .05, circular r = .70, p < .01), and had greater absolute power of the patient movement contribution at both tasks (linear r = .62, p = .01, circular r = .61, p < .01). The results of the CIR questionnaire had no influence on linear and circular tracking with the prosthesis. Likewise, the results of the ABILHAND questionnaire correlated with the results of the hand only. The older subjects needed more time to get through the labyrinth (r = .36, p < .01), made more collisions with the labyrinth walls (r = .37, p < .01), and collided with greater force (r = .37, p < .05).
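The relationships above are Pearson correlation coefficients. A minimal sketch of the computation, with invented data rather than the study's measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally long samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Invented example: age vs. maximal tracking deviation (arbitrary units)
age = [74, 45, 62, 58, 30, 67, 51, 39, 70, 48]
max_dev = [14.1, 6.2, 10.5, 9.8, 4.9, 12.3, 8.0, 6.8, 13.0, 7.4]
print(f"r = {pearson_r(age, max_dev):.2f}")
```

A positive r, as here, means that larger deviations tend to accompany higher age; the p-values quoted in the text additionally test r against the null hypothesis of no correlation.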
B. Children

Among the children there were 13 boys and 10 girls, from three and a half to eighteen years old, seven years old on average. Children younger than three years were not able to follow the instructions. Two had trans-humeral amputation; all the others (85.7%) had trans-radial amputation and wore myoelectric prostheses. The younger children often released the haptic interface stick, and it had to be fixed to the prosthesis with adhesive tape. In all subjects, there was a difference between the results achieved by the hand and with the prosthesis. The older children had smaller maximal and average deviations from the tracking trajectory with the prosthesis (maximal r = -.53, p = .01, average r = -.47, p < .05) and the hand (maximal r = -.68, p = .01, average r = -.58, p < .05) at linear tracking, and with the hand only at circular tracking (maximal r = -.68, p = .01, average r = -.99, p < .001) (Figure 2). The older children also had fewer collisions (r = -.67, p < .01), with smaller force (r = -.43, p < .05) and shorter duration (r = -.50, p < .05), with the prosthesis and by the hand (number of collisions r = -.70, p < .01, force r = -.49, p < .05, time r = -.55, p < .05) (Figure 3).

Fig. 2 Results of circular tracking of a 4-year-old (left) and a 7-year-old (right), by hand (above) and with the prosthesis (below)

Fig. 3 Results of labyrinth of a 4-year-old (left) and a 7-year-old (right), by hand (above) and with the prosthesis (below)

Spontaneity and skill of prosthetic use on the UNB test correlated with the results of the haptic interface with the hand only. The maximal and average (spontaneity r = .76, p < .001, skill r = .57, p < .05) deviations from the tracking trajectory were larger at linear tracking and smaller (maximal: spontaneity r = -.51, p < .05, skill r = .73, p < .01; average: spontaneity r = -.45, p = .05, skill r = .66, p < .01) at circular tracking. High correlations were also observed between several parameters and the results of the questionnaires, but none were statistically significant due to the small number of children or parents who filled in each questionnaire (10 in the younger and 13 in the older group). Since there were only two children after trans-humeral amputation, it was not possible to compare the results between those after trans-radial and those two after trans-humeral amputation. No correlation was found between the haptic interface results and the stump length.

IV. DISCUSSION
The haptic interface showed differences between the hand and the prosthesis in both adults and children. Several measured parameters correlated with different clinical data, such as age, the level of amputation and the stump length. However, for several others, such as the time from amputation to fitting of the first prosthesis and amputation of the dominant hand, the pattern was not so clear, although other studies have demonstrated a relevant influence of these factors on the rehabilitation outcome.
It was surprising that in children the measurements with the prosthesis did not correlate with the skill and spontaneity of prosthetic use, but with the results of the hand only. In adults, the measurements correlated with both the spontaneity and the skill of prosthetic use. Most adults had used their prosthesis for several years and already knew exactly how to benefit from it the most. In children, different questionnaires had to be used for different ages. The number of children in each age group was small, and although there were high correlation coefficients for most of the measured parameters, many were not statistically significant. In older children, there were more significant correlations for the children's answers than for the parents' answers. This indicates that parents did not necessarily know for which activities, and how often, their child used the prosthesis. In younger children, correlations were observed mainly in the measurements with the prosthesis and in the questions on prosthetic use, whereas in older children the correlations were observed mainly in the questions on the frequency of performing an activity, for both the hand and the prosthesis. This means that the more often an older child did an activity, the better was his or her performance at the haptic interface tasks, either by the hand or with the prosthesis. In younger children, the more often they used their prosthesis for an activity, the better was their performance on the haptic interface with the prosthesis. In order to be able to do the haptic interface tasks, subjects needed at least some activity in the shoulder joint. In subjects after shoulder disarticulation or very high trans-humeral amputation, it was not possible to perform the test with the prosthesis. All the other subjects after trans-humeral amputation who did the test did it with a locked prosthetic elbow.
Due to the small number of adults with a body-powered or myoelectric prosthesis, it was not possible to compare the results of the subjects using an active and those using a passive prosthesis. The subjects with a passive prosthesis were not able to grasp the haptic interface stick actively, and it had to be taped to the hand. Although some subjects in the study were quite elderly and thought that they would not be able to perform the test, all of them finished it and were also very proud that they succeeded in doing something new. Several young children had to pause and do something else for a while, since they found some tasks rather boring. It would be better if, in the future, the tasks tested with the haptic interface were the same as the activities tested by the UNB test. This would allow testing the objectivity of UNB
scores, and the results would be easier to interpret by a clinician.

V. CONCLUSIONS

The haptic interface is a promising objective tool for assessing the skill of upper limb prosthetic use in subjects after trans-radial and low trans-humeral amputation, especially if they have an active prosthesis (body-powered or myoelectric). It cannot be used in subjects after shoulder disarticulation or in very young children who are not able to follow the instructions. Tasks that mimic everyday activities would make the test easier to interpret.
REFERENCES

1. Wright VF, Hubbard S, Jutai J, Naumann S (2001) The prosthetic upper extremity functional index. Development and reliability of a new functional status questionnaire for children who use upper extremity prostheses. J Hand Ther 14:91-104
2. Bardorfer A (2003) Haptični vmesnik pri kvantitativnem vrednotenju funkcionalnega stanja gornjih ekstremitet [Haptic interface for quantitative evaluation of the functional state of the upper extremities]. PhD thesis, Ljubljana
3. Penta M, Thonnard JL, Tesio L (1998) ABILHAND: A Rasch-built measure of manual ability. Arch Phys Med Rehabil 79:1038-1042
4. Heinemann AW, Bode RK, O'Reilly C (2003) Development and measurement properties of the orthotics and prosthetics users' survey (OPUS): a comprehensive set of clinical outcome instruments. Prosthet Orthot Int 27:191-206
5. Sanderson ER, Scott RN (1985) UNB test of prosthetic function: a test for unilateral amputees [test manual]. Bio-Engineering Institute, University of New Brunswick, Fredericton, New Brunswick, Canada
6. Wright VF, Hubbard S, Jutai J, Naumann S (2001) The prosthetic upper extremity functional index. Development and reliability testing of a new functional status questionnaire for children who use upper extremity prostheses. J Hand Ther 14:1491-1504
7. Pruitt SD, Varni JW, Seid M, Setoguchi Y (1998) Functional status in children with limb deficiency: Development of an outcome measure for preschool children. Arch Phys Med Rehabil 79:405-411
8. Pruitt SD, Varni JW, Setoguchi Y (1996) Functional status in children with limb deficiency: Development and initial validation of an outcome measure. Arch Phys Med Rehabil 77:1233-1238

Address of the corresponding author:

Author: Helena Burger
Institute: Institute for Rehabilitation, Republic of Slovenia
Street: Linhartova 51
City: Ljubljana
Country: Slovenia
[email protected]
FreeForm modeling of spinal implants

R.I. Campbell1, M. Lo Sapio2 and M. Martorelli2

1 Loughborough University, Department of Design and Technology, Loughborough, UK
2 University of Cassino, Department of Mechanics, Structures and Environment, Cassino (FR), Italy
Abstract— Designing a customized prosthesis tailored to the size and shape of a patient's unique anatomy provides better medical treatment and outcomes, along with improved comfort and quality of life. In this paper an innovative approach to spinal implant design is proposed which relies on freeform modeling software and a haptic interface. The system mimics working on a physical replica of the patient's spine and allows the user to model a prosthesis which might represent a promising concept for correcting a curved vertebral column.

Keywords— Spinal deformities, spinal implants, FreeForm, haptic modeling, shape memory alloy springs.
I. INTRODUCTION

Scoliosis is a condition that involves complex lateral and rotational curvature and deformity of the spine. Looking at the vertebral column from the frontal view, scoliosis is a lateral bending of the spine, or a curvature in the anatomical plane referred to as the coronal plane. The deformity is essentially three-dimensional, but pathological curvatures may also occur in the lateral, or sagittal, plane, when the normal inward (lordosis) or outward (kyphosis) curves of portions of the vertebral column are abnormal [1]. Although the causes of scoliosis are poorly understood, the mechanical factors that contribute to its progression are now better defined. For reasons that can be related to congenital vertebral anomalies, an imbalance in the coronal and/or sagittal planes is created that leads to increased compressive loads across one side of a vertebral body. According to the Hueter-Volkmann law, a differential is created that causes diminished growth in the area subject to compression (the concave side), while, conversely, distractive forces induce exuberant growth on the convex side. Thus, in the setting of an established spinal asymmetry such as scoliosis, a vicious cycle of increased load asymmetry and progressive deformity is created: the greater the growth imbalance, the more severe the deformity, and this only stabilizes at skeletal maturity when the growth plates fuse [2].

II. BACKGROUND

Strategies for managing a case of scoliosis depend on the patient's age, the severity of the curve and the predictable outcome, and range from doing nothing to performing spinal surgery. Bracing currently represents the standard of care for immature patients with curvatures measuring 20 to 40 degrees, but 18% to 50% of these will progress in spite of bracing [3]. So even though bracing is noninvasive and preserves growth, motion and function of the spine, it is only modestly successful in preventing curve progression. Moreover, it must be considered that bracing for teenagers is not an easy treatment; it is uncomfortable and may have a negative psychological impact due to compliance and aesthetic issues. For teenagers who have curves of around 40 degrees or more, surgery is usually chosen. Spinal deformity surgery aims to achieve a reasonable correction of the curvature that prevents further progression, as well as restoration or preservation of function and optimization of cosmetic issues [4]. Spinal fusion is the most widely performed surgery for scoliosis. Bone is implanted into the vertebrae so that two or more of them are combined when the bone itself heals. Patients with fused spines and permanent implants tend to have normal lives with unrestricted activities. The main limitation of spinal fusion is the loss of flexibility in the fused segments of the spine. To overcome this drawback, new techniques are being developed that exploit the patient's growth and redirect it to achieve correction [5]. Rather than holding the spine with rods and screws, these new methods rely on vertebral bodies being stapled on the convex side of the curvature. Fusionless scoliosis surgery has many potential advantages over either bracing or fusion techniques. Bracing, in fact, only transmits forces indirectly, by means of the ribs, pelvis and torso. On the other hand, spinal stapling can preserve motion, lessening the chance of back pain in adulthood [6]. The first attempts to modulate vertebral growth with skeletal fixation devices date back to the 1950s.
Despite the initial enthusiasm, the outcomes were disappointing, mainly because the staples proved to be unstable and the loosening rate was high. The dislodgment of the implants seemed to be the main cause of failure when performing vertebral stapling, until these implants began to be made of Nickel-Titanium, an alloy that exhibits a particular property called the shape memory effect. Before the surgery, the staples are first immersed in an ice bath and the prongs, initially bent, are deformed to a straight position; when finally applied to
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 969–972, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 2 The Phantom haptic device developed along with the FreeForm software to work on complex geometries
the bones, through the heat of the body the staples clamp down to a "C" position for a secure fixation (Fig. 1). Studies have shown that vertebral body stapling for the treatment of adolescent idiopathic scoliosis is feasible and safe, as well as capable of stabilizing curves. Moreover, experiments carried out on animals have demonstrated bone growth modulation determined by the stapling, even though the strain provided by the implants was not able to fully reverse the Hueter-Volkmann effect. The use of shape memory alloys is relatively new and, although these results are encouraging, it is generally believed that a long-term follow-up is needed, as well as a better understanding of the forces acting on the vertebrae when the staples are inserted [7].

Fig. 1 Implantation of shape memory alloy staples in a goat model

III. AN INNOVATIVE APPROACH TO SPINAL IMPLANT DESIGN

In spinal fusion surgery, to keep the straightened spine still until the bones are fused together, multiple hooks or wires are attached to the back of the individual vertebrae, and these are connected to one or two metal rods which have been pre-bent to the desired contour. On the one hand, the implant length must be sufficient to apply the necessary bending moment to the spine. On the other hand, it must not be so long that it creates excessive spinal stiffness. Vertebral stapling provides the great advantage of preserving spinal flexibility, while representing at the same time a less invasive procedure, but the corrective force is not great enough to significantly correct the deformities.
An innovative implant that aims to address the limitations of both these techniques could be conceived as having a design which embraces a wider surface of the vertebral body. The purpose is to generate the corrective forces needed and, at the same time, not to use fusion, which makes a spinal segment rigid, just to maintain the new vertebral position. Achieving a good design that fits anatomical shapes and naturally occurring geometries is not a simple task with traditional CAD packages, as these are mainly used to define regular mathematical geometries. However, software known as FreeForm (from SensAble Technologies) has been developed to manipulate solid, complex, unconstrained three-dimensional shapes and forms. It provides tools analogous to those used in physical sculpting and works through a haptic interface (Fig. 2), allowing the user to "feel" the object being worked on in the software. The models are geometrically represented through voxels and referred to as virtual 'clay' owing to the fact that they can be modified in an arbitrary way. FreeForm has been successfully employed in the design process of implants which perfectly fit naturally occurring geometries [8].

IV. IMPLANT DESIGN

Working with a tactile-feedback stylus, the user can design surfaces on already existing models, the latter representing in this case a three-dimensional reconstructed geometry of an anatomical site. The surface can be edited through a set of control points while remaining constrained to the bone of the vertebral body (Fig. 3). Once the surface has been extruded, a model which is tailored to the size and shape of the anatomical bone is obtained.
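The voxel-based "virtual clay" idea can be illustrated with a toy occupancy grid. FreeForm's internal representation is proprietary, so everything below (grid size, spherical tool, boolean subtraction) is only an assumed conceptual sketch:

```python
import numpy as np

# A solid block of "clay" represented as a boolean voxel occupancy grid,
# from which a spherical carving tool removes material.
n = 32
clay = np.ones((n, n, n), dtype=bool)            # solid block: all voxels filled

# Spherical "carving tool" of radius 10 voxels, centred on the top face
zz, yy, xx = np.mgrid[0:n, 0:n, 0:n]
tool = (xx - n // 2) ** 2 + (yy - n // 2) ** 2 + zz ** 2 < 10 ** 2
clay &= ~tool                                    # subtract the tool volume

print(clay.sum(), "voxels of clay remain")
```

Carving, smoothing and adding material are all boolean or scalar-field operations on such a grid, which is why the model can be edited in an arbitrary, sculpting-like way rather than through constrained parametric features.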
Fig. 3 An anatomical patch drawn on the vertebral body. This is the first step in creating an implant that perfectly fits the bone geometry
Using these guidelines, a plate has been designed that surrounds the whole vertebral body and attaches to the transverse processes to anchor the implant on the back side (Fig. 4). The starting model will be a curved spine with a complex three-dimensional deformity that can occur in more than one anatomical plane, also involving a torsion of the vertebra. Our hypothesis is that, by holding the vertebral body through a custom-made plate that perfectly fits the anatomical form, it might be possible to generate corrective forces that act on the whole bone rather than through a single point, as happens with the staples. Thus, as well as preserving spinal flexibility, the design could help more effectively to reach the goal of bone growth modulation and progressive correction of the deformity, according to the direction of the load.
Fig. 4 The anatomical shape integrates the implant with the bone
Fig. 5 The implant closes on the transverse processes

As pointed out by previous works [9], FreeForm excels in quick organic form-giving but is weak in the creation of engineering details. So the implant model has to be exported to an external CAD package in order to be detailed (Fig. 5). The plates shaped around the vertebral body could be connected to each other through a coil spring made of a shape memory biocompatible alloy. The placement of the coil spring between the vertebrae should be chosen according to the plane in which the deformity occurs. Such springs exhibit a property known as superelasticity, due to a reversible solid-solid austenite-martensite transformation that happens when stress is applied. Superelasticity means that a strain range with approximately constant stress can be observed in the stress-strain diagram of a Ti-Ni alloy (Fig. 6).
Fig. 6 Theoretical stress-strain curve of an SMA. AB: elasticity of austenite; BC: loading plateau, transformation from austenite to stress-induced martensite; CD: elasticity of martensite; DE: unloading plateau, retransformation from martensite to austenite; EA: elasticity of austenite.
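The loading branch of the curve in Fig. 6 (segments A-B-C-D) can be sketched as a piecewise function. The moduli, plateau stress and plateau strain below are invented round numbers for illustration, not measured Ti-Ni properties:

```python
def sma_loading_stress(strain, E_A=40e3, sigma_pl=400.0, eps_pl_end=0.06, E_M=25e3):
    """Stress (MPa) on the loading branch of an idealized superelastic curve.

    A-B: linear elastic austenite (modulus E_A, MPa)
    B-C: transformation plateau at ~constant stress sigma_pl
    C-D: linear elastic stress-induced martensite (modulus E_M)
    All default values are illustrative assumptions.
    """
    eps_b = sigma_pl / E_A                 # strain at the plateau onset (point B)
    if strain <= eps_b:
        return E_A * strain                # elastic austenite
    if strain <= eps_pl_end:
        return sigma_pl                    # constant-stress transformation plateau
    return sigma_pl + E_M * (strain - eps_pl_end)  # elastic martensite

# On the plateau the stress is independent of elongation:
print(sma_loading_stress(0.02), sma_loading_stress(0.05))  # both 400.0 MPa
```

On segment B-C the stress, and hence the corrective force delivered by a coil spring working in that range, stays approximately constant over a wide strain interval, which is the behaviour the proposed implant relies on.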
Superelasticity is the most important advantage afforded by this material: if the extension of the spine between the two points to which the spring is hooked falls in the elongation range where constant stress is provided, then a constant force can be applied to straighten the spine, while at the same time allowing the flexibility required during the patient's movements (Fig. 7). Shape memory alloy coil springs are commercially available for orthodontic applications; they provide orthodontists with the ability to exert a light, constant spring-back force over a large range of deformations, which helps to gradually straighten the teeth [10]. The material properties can be finely tuned through proper thermomechanical treatments to optimally meet the demands of the specific clinical situation; thus, the effectiveness of a particular appliance is closely related to the specific material properties. The biocompatibility of Ni-Ti alloys is an issue that was investigated shortly after these materials were discovered [11]. Although nickel is a highly poisonous element, titanium and its compounds are biocompatible. What confers good biocompatibility on Ni-Ti alloys is the innocuous layer of TiO2 produced by the oxidation of the titanium. This layer surrounds the sample, making the alloy harmless to the human body.

V. CONCLUSIONS

Shaping implants to the exact features of the anatomical bones is a promising concept for developing more effective spinal device designs in the future. The forces exerted by the prosthesis can be directed more precisely, and several issues, such as the accessibility of the sites during surgery, can be addressed even before the real intervention takes place, using the CAD interface.
It must be pointed out that FreeForm is easy-to-learn software: the interface is friendly, and once a new user has become familiar with some basic concepts, it is straightforward to work with, and the design tasks can be carried out without many problems. In our view, the design process should be thought out and configured by drawing upon experience and skills from different professional and cultural backgrounds. Due to the complexity and specificity of the issues faced in both the medical and engineering fields, only through a collaborative project can a reliable and robust design be achieved. Indeed, future studies should focus on how the innovative implants integrate with muscles and ligaments; moreover, the design should be modified and updated according to the best insertion strategy, to meet the demand for safe medical procedures and an ever better quality of life for patients who undergo spinal surgery.
Fig. 7 The implants between two vertebrae
REFERENCES
1. Scoliosis at http://en.wikipedia.org
2. Michael J (2006) Spinal growth and congenital deformity of the spine. SPINE 31(20):2284-2287
3. Betz R, Kim J, D'Andrea L et al. (2003) An innovative technique of vertebral body stapling for the treatment of patients with adolescent idiopathic scoliosis: a feasibility, safety and utility study. SPINE 28(20S):S255-S265
4. Schlenk R, Kowalski R, Benzel E (2003) Biomechanics of spinal deformity. Neurosurg Focus 14(1):Article 2
5. Braun J, Hines J, Akyuz E et al. (2006) Relative versus absolute modulation of growth in the fusionless treatment of experimental scoliosis. SPINE 31(16):1776-1782
6. Braun J, Akyuz E, Ogilvie J et al. (2005) The efficacy and the integrity of shape memory alloy staples and bone anchors with ligament tethers in the fusionless treatment of experimental scoliosis. J Bone Joint Surg Am 87:2038-2051
7. Braun J, Akyuz E, Udall H et al. (2006) Three dimensional analysis of 2 fusionless scoliosis treatments: a flexible ligament tether versus a rigid shape memory alloy staple. SPINE 31(3):262-268
8. Bibb R, Rocca A, Evans P (2002) An appropriate approach to computer aided design and manufacture of cranioplasty plates. Journal of Maxillofacial Prosthetics & Technology 5(1):29-31
9. Sener B, Pedgley O, Wormald P, Campbell I (2002) Incorporating the FreeForm haptic modelling system into new product development, EuroHaptics Conference, Edinburgh, United Kingdom, 2002
10. Raboud D, Faulkner M, Lipsett A (2000) Superelastic response of NiTi shape memory alloy wires for orthodontic applications. Shape Memory Alloy 9(5):684-692
11. Machado L, Savi M (2003) Medical applications of shape memory alloys. Brazilian Journal of Medical and Biological Research 36:683-691

Author: Massimo Martorelli
Institution: University of Cassino
Street: Via G. Di Biasio, 43
City: 03043 Cassino (FR)
Country: Italy
E-mail: [email protected]
Grip force response in graphical and haptic virtual environment
J. Podobnik1 and M. Munih1
1 Laboratory of Biomedical Engineering and Robotics, Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia

Abstract— The current state of the art in virtual environment development allows different levels of immersion, from graphical environments, where only visual information is conveyed to the user, to haptic environments, where the whole set of visual and kinesthetic information is conveyed to the user. This paper presents the results of two experimental sets, one conducted in a haptic virtual environment (HVE) and the second in a graphical virtual environment (GVE). The grip force response to a haptic or visual cue is investigated and compared. Although the underlying neural control mechanisms triggered by a haptic or visual cue are different, both responses are well pronounced, have a similar shape, and can thus be compared. The response triggered by the haptic cue has a shorter delay, is stronger, and is shorter in duration than the response triggered by the visual cue.

Keywords— grip force, precision grip, virtual environment, haptic interface.
I. INTRODUCTION
Virtual environments and haptic devices have been recognized as very suitable for rehabilitation and have shown wide potential use [1][2][3]. Computer-generated virtual and haptic environments allow different levels of immersion [4], to which users respond differently. Haptic devices convey a kinesthetic sense of presence to a human operator interacting with a computer-generated environment; through this interaction, a virtual sense of touch is achieved [3]. Haptic technology allows the exact dynamics and behavior of a virtual environment to be defined [5]. This paper presents a comparison of the results of experiments conducted in two kinds of virtual environment that convey different levels of immersion. The first set of experiments was conducted in a graphical virtual environment (GVE) setup composed of force transducers for measuring grip force and a graphical interface. The second set of experiments was conducted in a haptic virtual environment (HVE) setup composed of force transducers for measuring grip force, the HapticMaster haptic interface, and the graphical interface.
Injuries of the central nervous system, hand injuries, and neural or neuromuscular diseases affect hand function. Understanding the development and control of the human precision grip in healthy subjects is fundamental for understanding and developing the techniques and technology for rehabilitation [2]. In a study of the development of the human precision grip, Eliasson et al [6] conducted experiments on triggered grip actions during sudden loading. The experiments showed that when an abrupt vertical force perturbation acts on an object held by a precision grip, somatosensory input from the digits triggers an increase in grip force to restore an adequate safety margin, preventing frictional slips. In the study by Eliasson et al, the unpredicted increase in load force was induced by dropping a small disc onto a receptacle attached to the object held by the subject. In the HVE experimental conditions, the haptic interface produced the programmed unpredicted increase in load force at the top of the end-effector. A grip force-measuring device was installed on the robot end-effector to measure the subject's grip force response. In the HVE experimental conditions, somatosensory input from the digits triggered the increase in grip force, while in the GVE experimental set a visual cue, in the form of a falling sphere shown on the graphical display, triggered the increase in grip force, which stopped the sphere.

II. METHODS
A. Apparatus
The grip force measuring handle was constructed from two load cells for measuring the grip force FG. The load cells are attached to a frame, which was attached to the end-point of the haptic interface. Parallel contact surfaces attached to each load cell are covered with a layer of cork, measure 33x30 mm, and are 42 mm apart. The measuring range of the grip force-measuring handle is 50 N. The haptic interface used in this study was the three-degree-of-freedom admittance-controlled haptic interface HapticMaster, developed by FCS Control Systems. A new control algorithm for controlling the haptic interface arm was designed and implemented on RTLinux with a 2.5 kHz sampling loop frequency. A standard PC with a 21-inch LCD display was used for displaying the 3D graphical interface.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 973–976, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
B. Subjects Seven healthy male adults (25-28 years old) participated in the experiments. The participants had no history of neuromuscular or musculoskeletal disorders related to the upper extremities and gave their informed consent to participate. C. Procedure and experimental protocol
The graphical virtual environment consists of a sphere on which the loading force acts and two cones representing virtual fingers (see Fig. 1). If the grip force was smaller than 3.5 N, the distance of the cones from the sphere was proportional to the grip force. If the grip force was larger than 3.5 N, the cones became attached to the surface of the sphere. Movement of the sphere was allowed only in the direction of the z-axis (up-down movement), and the cones moved with the sphere. The subject was seated in front of the haptic interface and the LCD display and was instructed to lightly grip the grip force-measuring handle with a pinch grip. The minimum grip force for resisting slip at a downward load force of 5 N was in the range 2.5-3.5 N; hence, a light grip was defined as a grip with a grip force smaller than 3.5 N. The subject was instructed to stop the sphere from falling.

Fig. 1. Graphical virtual environment.

The load force FL was a step function with an amplitude of 5 N, initiated randomly between 2 and 7 s after the start of each experiment.

HVE experiment: The end-point position of the haptic interface was displayed as a sphere. Two forces act on the sphere with mass m: the programmed load force FL and the measured end-point force FM. The measured end-point force FM was the force applied by the human operator, measured with a three-degree-of-freedom force sensor attached to the end-point of the haptic interface:

m·a(t) = FL(t) + FM(t)    (1)

GVE experiment: In the GVE experiment the end-point of the haptic interface was held in a fixed position throughout the experiment. Hence, at the initiation of the load force the sphere on the graphical display started to move downward while the end-point of the haptic interface stayed in the fixed position. In the GVE experiment, FM was substituted with a friction force FF proportional to the measured grip force FG:

FF(t) = kf·FG(t)
m·a(t) = FL(t) + FF(t) ≤ 0    (2)

The force FF is a model of the friction force that opposes the load force FL. If the friction force FF was greater than the load force FL, equation (2) was forced to m·a(t) = 0, to prevent the sphere from starting to move upward at excessive grip force. The friction coefficient kf = 5 N / 3.5 N = 1.43 was calculated so that a grip force of FG = 3.5 N would result in FL + FF = 0 at FL = 5 N. To fully stop the sphere after the sum of forces reached zero, the velocity was gradually decreased to zero by applying the following rule:

v(t) = ∫0t a(t)dt − 2·kf·FG·s(t)·t|v=0,a=0 / m    (3)

D. Data analysis
Signals were recorded at a 250 Hz sampling frequency; 9-10 trials were recorded for each of the two experiments per subject. Each experiment can be divided into three phases (see Fig. 2):
• Preloading phase: the phase before the initiation of the load force FL.
• Dynamic loading phase: the phase of the abrupt response of the subject to stabilize the object, from the onset of the grip response to the settling of the grip force.
• Static loading phase: the phase of suboptimal grip force for resisting slip of the object due to the constant (static) load force.

The following parameters were extracted from the grip force signals (see Fig. 2):
• Baseline grip force FGb: the grip force in the preloading phase.
• Grip force peak FGmax.
• Static grip force FGs: the grip force in the static loading phase.
• Grip response latency Tl: the time from the initiation of the load force FL to the onset of the grip force increase.
• Grip response rise time Tr: the time from the onset of the grip force increase to the grip force peak FGmax.
• Grip response fall time Tf: the time from the grip force peak to the settling of the grip force.
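As an illustration of the GVE sphere model, the following sketch integrates the sphere motion with simple Euler steps: the load force acts downward, friction proportional to the grip force opposes it, and the net force is clamped so the sphere never accelerates upward at excessive grip force. The mass and grip force values are assumptions for the example, and the gradual velocity-decay rule of equation (3) is omitted for brevity.

```python
import numpy as np

def simulate_gve(m=0.5, k_f=1.43, f_load=5.0, grip_force=3.0,
                 dt=0.004, t_end=2.0):
    """Euler simulation of the GVE sphere dynamics (illustrative only).

    f_load acts downward (negative); friction k_f * grip_force acts
    upward. The net force is clamped to be non-positive, so the sphere
    never moves upward, mirroring the m*a(t) <= 0 constraint.
    Mass m and the grip force are hypothetical values.
    """
    n = int(t_end / dt)
    v = z = 0.0
    zs = np.empty(n)
    for i in range(n):
        f_net = -f_load + k_f * grip_force   # load (down) + friction (up)
        a = min(f_net, 0.0) / m              # clamp: no upward acceleration
        v += a * dt
        z += v * dt
        zs[i] = z
    return zs

traj = simulate_gve(grip_force=3.0)   # grip below 3.5 N: sphere keeps falling
hold = simulate_gve(grip_force=3.5)   # k_f * 3.5 N balances the 5 N load
```

With a grip force of 3.5 N the friction term equals the 5 N load (since kf = 1.43), so the clamped acceleration is zero and the sphere stays put; any lighter grip lets it fall.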
Fig. 2. The grip force response (red line) to the step load force (black line). A - preloading phase, B - dynamic loading phase, C - static loading phase.
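A minimal sketch of how the phase-based parameters could be extracted from a sampled grip-force trace is shown below. The onset threshold and the use of the final second as the static phase are assumptions for the example, not choices stated in the paper.

```python
import numpy as np

def grip_parameters(f_grip, t_load_idx, fs=250.0, onset_delta=0.5):
    """Extract grip-response parameters from a sampled grip-force trace.

    f_grip      : grip force samples [N]
    t_load_idx  : sample index at which the load force FL is initiated
    fs          : sampling frequency [Hz] (250 Hz in the paper)
    onset_delta : rise above baseline [N] taken as response onset
                  (an assumed threshold, not a value from the paper)
    """
    baseline = float(np.mean(f_grip[:t_load_idx]))        # FGb: preloading phase
    post = f_grip[t_load_idx:]
    onset = t_load_idx + int(np.argmax(post > baseline + onset_delta))
    peak_idx = t_load_idx + int(np.argmax(post))
    return {
        "F_Gb": baseline,                                 # baseline grip force
        "F_Gmax": float(f_grip[peak_idx]),                # grip force peak
        "F_Gs": float(np.mean(f_grip[-int(fs):])),        # last second = static phase
        "T_l": (onset - t_load_idx) / fs,                 # response latency [s]
        "T_r": (peak_idx - onset) / fs,                   # rise time [s]
    }
```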
The Mann-Whitney U test was used to test for significant differences in the parameter values between the HVE and GVE conditions; p < 0.01 was selected as the level of significance.

III. RESULTS
For both the HVE and the GVE experiments, the grip responses have a similar and distinctive skewed bell-shaped profile. After the initiation of the load force FL, a small latency is present in the grip force response, after which the grip force rapidly increases to the peak grip force and then decreases to a constant force higher than the grip force before the initiation of the load force FL. In the preloading phase, the baseline grip force FGb was constant: the median value for all subjects was 2.6 N in the HVE experiments and 2.8 N in the GVE experiments, and the values do not differ significantly (p > 0.01; see Fig. 3(b)). The response latency in the HVE experiment was much shorter than in the GVE experiment (p < 0.01; see Fig. 3(a)): the median response latency time Tl for all subjects was 180 ms in the HVE experiment and 480 ms in the GVE experiment. The median grip response rise time Tr was around 150 ms in both experiments, indicating that the rise time is not experiment dependent (p > 0.01; see Fig. 3(c)). The grip force peak FGmax in the HVE experiment (FGmax = 38 N) was significantly larger, by more than a factor of two, than in the GVE experiment (FGmax = 17 N; p < 0.01; see Fig. 3(d)). The grip response fall time Tf in the HVE experiment was shorter (Tf = 1.5 s) than in the GVE experiment (Tf = 2.5 s; p < 0.01; see Fig. 3(e)).
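A comparison of per-condition parameter values like the one above can be reproduced with SciPy's Mann-Whitney U test. The latency values below are illustrative numbers chosen around the reported medians (0.18 s HVE vs. 0.48 s GVE), not the measured data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-subject median response latencies [s], 7 subjects each.
tl_hve = np.array([0.16, 0.18, 0.17, 0.19, 0.18, 0.20, 0.17])
tl_gve = np.array([0.45, 0.48, 0.50, 0.47, 0.49, 0.46, 0.52])

# Two-sided test, as appropriate for "do the conditions differ?"
u_stat, p_value = mannwhitneyu(tl_hve, tl_gve, alternative="two-sided")
significant = p_value < 0.01   # significance level used in the paper
```

For such fully separated samples the U statistic is 0 and the exact two-sided p-value falls far below the 0.01 threshold.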
Fig. 3. Statistical values for measured parameters for the HVE (blue box) and GVE (green box) experiments: (a) grip response latency Tl, (b) baseline grip force FGb, (c) grip response rise time Tr, (d) grip force peak FGmax, (e) grip response fall time Tf, (f) static grip force FGs.
In general, the median static grip force FGs did not differ significantly between the HVE and GVE experiments (p > 0.01; see Fig. 3(f)): FGs = 6.8 N in the HVE experiments and FGs = 6.2 N in the GVE experiments.

IV. DISCUSSION
The grip force response in the HVE experimental conditions is initiated by the cutaneomuscular reflex [6][7]. The cutaneomuscular reflex is elicited by cutaneous stimulation of mechanoreceptors, which produces short- and long-latency EMG responses in hand muscles [8]. Short-latency EMG responses are of spinal origin, while long-latency EMG responses are probably of cortical origin and involve higher CNS centers [6][7][8]. Long-latency EMG responses occur at around 60 ms, are larger and more robust than short-latency responses, and are more important for the automatic control of grip force [9]. The response latency of the grip force response in the HVE experiments was 180 ms, while Eliasson et al [6] report a grip response latency of around 70 ms. In the GVE experiments, the grip force response was triggered by a visual stimulus. Visual information is processed by the visual cortex, an area of the cerebral cortex; processing is organized hierarchically, with extensive feedback and parallel processing [10]. The average reaction time to a visual stimulus in humans is around 380 ms [11]. The median response latency of the grip force response in the GVE experiments was 480 ms. For both the HVE and GVE experimental conditions, the grip force response latency is therefore at least 100 ms longer than the values cited in the available literature [6][11].
The grip response rise time Tr was found to be 150 ms and independent of the virtual experiment; Eliasson et al [6] report a grip force rise time of about 100 ms, which is 50 ms shorter. This longer rise time can be attributed to the fact that the peak grip force reported by Eliasson et al was 12 N, while in the present experiments the grip force peak FGmax was in the range 25 to 50 N for the HVE experiments and 12 to 25 N for the GVE experiments. The decrease of the grip force after the grip force peak is slower in the GVE experimental conditions than in the HVE experimental conditions: the cutaneous information from mechanoreceptors and the cutaneomuscular reflex result in a much faster and stronger grip response and a shorter dynamic loading phase. While the grip force peak FGmax in the HVE experiments depends on the amplitude of the load force [12], in the GVE experimental conditions a grip force of 3.5 N would have been sufficient to stop the sphere; yet all the subjects responded with an increase of grip force above the necessary level, though still significantly lower than in the HVE experiments.

V. CONCLUSION
The grip force response has a distinctive skewed bell shape for both the HVE and GVE experimental conditions. The differences lie in the latency, amplitude, and duration of the grip force response in the dynamic loading phase. The underlying neural control mechanisms triggered by haptic and visual cues were also briefly discussed; though different, both result in a muscular response and increased muscular work toward the goal of the experiment: stopping the sphere. The current investigation clearly shows the differences in responses to haptic or visual virtual environments that can be expected in healthy adult subjects.
Since haptic-plus-visual or visual-only environments are common in advanced rehabilitation technology, this investigation shows the potential of the two kinds of virtual environment.
ACKNOWLEDGMENT The authors wish to acknowledge the financial support of the Slovenian Research Agency.
REFERENCES
1. Krebs HI, Hogan N, Aisen ML et al (1998) Robot-aided neuro-rehabilitation. IEEE Trans Rehabil Eng 6:75-87
2. Kurillo G, Zupan A, Bajd T (2004) Force tracking system for the assessment of grip force control in patients with neuromuscular diseases. Clin Biomech 19:1014-21
3. Mali U, Goljar N, Munih M (2006) Application of haptic interface for finger exercise. IEEE Trans Neural Syst Rehabil Eng 14:352-60
4. Holden MK (2005) Virtual environments for motor rehabilitation: review. Cyberpsychol Behav 8:187-211
5. Podobnik J, Munih M (2005) Improved haptic interaction control with force filter compensator, IEEE 9th International Conference on Rehabilitation Robotics, Chicago, USA, 2005, pp 160-163
6. Eliasson AC, Forssberg H, Ikuta K et al (1995) Development of human precision grip: V. Anticipatory and triggered grip actions during sudden loading. Exp Brain Res 106:425-33
7. Evans AL, Harrison LM, Stephens JA (1989) Task-dependent changes in cutaneous reflexes recorded from various muscles controlling finger movement in man. J Physiol 418:1-12
8. McNulty PA, Macefield VG (2001) Modulation of ongoing EMG by different classes of low-threshold mechanoreceptors in the human hand. J Physiol 537:1021-1032
9. Macefield VG, Johansson RS (2003) Loads applied tangential to a fingertip during an object restraint task can trigger short-latency as well as long-latency EMG responses in hand muscles. Exp Brain Res 152:143-9
10. Van Essen DC, Anderson CH (1995) Information processing strategies and pathways in the primate visual system. In: An Introduction to Neural and Electronic Networks. Academic Press
11. Schiefer U, Strasburger H, Becker ST et al (2001) Reaction time in automated kinetic perimetry: effects of stimulus luminance, eccentricity, and movement direction. Vision Res 41:2157-64
12. Podobnik J, Munih M (2006) Evaluation of coordination between grasp and load forces in power grasp in humans with a haptic interface, IEEE International Conference on Robotics and Automation, Orlando, USA, 2006, pp 2807-2812
Institute: Laboratory of Biomedical Engineering and Robotics, Faculty of Electrical Engineering, University of Ljubljana
Street: Trzaska c. 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
A Novel Testing Tool for Balance in Sports and Rehabilitation
N. Sarabon1,2,3, G. Omejec3
1 University Medical Centre/Institute of Clinical Neurophysiology, Ljubljana, Slovenia
2 Terme Krka/Terme Smarjeske Toplice/Prevention and Rehabilitation Sports Centre, Smarjeske Toplice, Slovenia
3 University of Ljubljana/Faculty of Sport, Ljubljana, Slovenia
Abstract— The aim of our study was to test the sensitivity and reliability of two commonly used balance tests (Romberg and Flamingo) and to compare them with our new method for balance testing, the Clever Balance Board (CBB). The study was carried out on 102 pupils (39 men, 41 women; 14.3±2.7 yrs; 148±23 cm; 44.1±10.1 kg; 21.5±7.8% body fat). Every subject performed all three balance tests, each of them three times, with 3 to 6 minute rest intervals between consecutive trials. All of the measured parameters related to a single test were analyzed for sensitivity (SD, min/max) and repeatability (correlation-based test-retest analysis). The results showed that the Romberg and Flamingo tests have poor sensitivity, since a large number of subjects achieved the best possible result; accordingly, the frequencies showed a right-asymmetrical distribution for these two tests. In contrast, a normal distribution and high sensitivity were observed for all parameters of the CBB test. The CBB also dominated in terms of repeatability, the intraclass correlation coefficients being 48.9%, 61.1%, and 73.0-81.2% for the Romberg, Flamingo, and CBB tests, respectively. It was concluded that the metric characteristics of the CBB are superior to those of the two clinical balance tests. Because of its portability and moderate price, the CBB could easily be applied in routine balance diagnostic procedures.

Keywords— testing, balance, repeatability, sensitivity.
I. INTRODUCTION
Maintaining postural equilibrium requires the central nervous system to process and integrate afferent information from the somatosensory, visual, and vestibular sensory systems into the selection and execution of appropriate and coordinated musculoskeletal responses throughout the joints of the lower extremities [1]. Physical training that challenges these balance reactions has become a prominent component in the rehabilitation of sports injuries and has also quickly gained recognition as an important element in injury prevention and athletic performance enhancement programs [2-7]. The ability of the knee or the ankle to remain stable during jumping, running, throwing, etc. is referred to as dynamic joint stability [8,9] and depends on a complex interaction of numerous neuromuscular mechanisms [10]. Equilibrium boards are commonly used to assess tilting reactions, where lateral destabilization [11] appears. In such cases, the hip abductors and trunk provide primary control [8,11], while the ankle is activated with a narrow stance [12].
Standardized tests and measures of balance are available to evaluate functional performance. Traditional tests of balance have focused on the maintenance of posture (static balance), balance during weight shifting or movement (dynamic balance), and responses to external perturbations. Items for static control typically include double limb stance, single limb stance, tandem stance (heel-toe position), the Romberg test (eyes open to eyes closed), and the sharpened Romberg test (tandem foot position, eyes open to eyes closed). Dynamic test items include standing up, walking, turning, stopping, and starting [11]. Many of these tests do not correlate with the dynamic nature of sport activities and do not present enough challenge to elicit balance deficiencies in healthy active athletes [13]; they are therefore less appropriate, or even inappropriate, for use in sports and medical routine. Sophisticated laboratory measurements (static force plate, movable force plate) are reliable, sensitive, and valid, but unfortunately expensive and not portable [8]. In clinical or field balance testing, subjective grading scales are normally used, and reliability measures are therefore lacking [3]. It is important to use measures with established reliability and sensitivity [14]. Measurement reliability is the degree to which a measure is consistent and can be replicated [15]. If one wishes to use a clinical balance measurement as an outcome measure in a rehabilitation or injury prevention setting, the reliability of the test used is crucial [3]. A test is sensitive when it can measure relevant changes in the results [15]. The most common methods of statistical analysis to describe test-retest reliability are Pearson correlation coefficients (r) and intraclass correlation coefficients (ICCs) [16]. An ICC > 0.75 is used to describe excellent reliability [17].
Therefore, the aim of our study was to test the sensitivity and reliability of two commonly used balance tests and to compare them with a novel method for balance testing, the Clever Balance Board (CBB).

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 998–1001, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

II. MATERIALS AND METHODS
Subjects: One hundred and two pupils from five different primary schools participated in the study (46♂, 56♀; 14.3±2.7 yrs; 148±23 cm; 44.1±10.1 kg; 21.5±7.8% body fat). Participants' parents or legal guardians signed an informed consent form, approved by the Ethical Committee, after being advised of the purposes and methods of the study. None of the participants sustained any injury within the study.

Study Design: Each participant completed a test battery with two commonly used functional tests (static unipedal stance with eyes closed (Advanced Romberg Test) and the Flamingo Test) and a novel method for balance testing (Clever Balance Board (CBB)). Each of the tests was carried out three times, using 3 to 6 minute rest intervals between consecutive trials. The order of the tests was randomized between participants, with each test lasting 60 s.

Balance Tests: In the Advanced Romberg Test, participants were instructed on the required stance: hands placed along the body and the knee of the non-support leg placed beside the knee of the support leg. During the measurement the thighs have to be close to each other, with the knee of the non-support leg at a right angle and the eyes closed. Participants were asked to stand quietly and remain as motionless as possible in the stance position. They were told that if they lost their balance, they should make any necessary adjustments and return to the testing position as quickly as possible. The time of any touch-down was recorded in a prepared table.
In the Flamingo Test, participants stood unipedally on a special wooden crossbar (length 50 cm, height 4 cm, width 3 cm). Participants were instructed on the required stance: the non-support leg bent and held with the arm on the same side above the ankle. During the measurement the thighs have to be close to each other. The instructor helps the participant assume the balance position and starts measuring when the participant releases the instructor's hand. Participants were asked to stand quietly and remain as motionless as possible in the stance position. After losing balance (touch-down, release of the non-support leg) the time was stopped, and the whole procedure was repeated until one minute had run down. If a participant lost balance 15 times in the first 30 s, the test was ended and scored with 0 points. Scores are given for the number of completed trials, not the number of interruptions.
In the CBB Test, participants were asked to stand on the tilt board bipedally and symmetrically (stance width approximately 50 cm) and were instructed to remain as quiet and stable as possible, avoiding any unnecessary touch-down of the edges of the board. An electronic goniometer and processing unit embedded in the CBB acquired the changes in the angle of the board over time (60 s) and then calculated several parameters (time on edges, sum of the angular change, average angular velocity while actively balancing, average frequency of oscillations, etc.).

Analyses: Data were analyzed using SPSS for Windows 13.0. Bivariate correlational analysis was first conducted between the scores of the three successive trials for each of the three tests to determine the existence of significant correlations between the three measures. The Pearson correlation coefficient (r) was used to assess test-retest reliability. To determine the sensitivity, the SD and min-max were calculated.

III. RESULTS
The results demonstrated high values of the correlation coefficients between the three successive trials for the six different CBB parameters (Table 1), with ICCs ranging from 73.0 to 81.2% for the different parameters. They showed a normal distribution and good sensitivity (Graph 1). The Flamingo Test has the lowest values of the Pearson correlation coefficients between the three successive trials (Table 2), with an ICC value of 61.1%. In the Advanced Romberg Test the correlations are higher (Table 3), with an ICC value of 48.9%. Both tests have poor sensitivity, detecting only extremes (Graphs 2, 3).

Table 1: Test-retest reliability (Pearson correlation coefficients, %) between three successive repetitions of the 6 CBB parameters

CBB parameter   1/2    2/3    1/3
parameter 1     68.9   88.7   65.0
parameter 2     76.2   90.5   63.3
parameter 3     73.2   71.9   71.7
parameter 4     70.7   67.6   54.5
parameter 5     76.6   81.4   72.2
parameter 6     70.8   78.7   65.9

Graph 1: CBB – Average frequency of board tilting (histogram omitted)
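The test-retest analysis described above can be sketched as follows. The `icc_oneway` function implements one possible ICC form (one-way random effects, ICC(1,1) in the Shrout and Fleiss notation of reference [16]); the paper does not state which variant was used, so this is an assumption.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two trials (test-retest reliability)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def icc_oneway(scores):
    """One-way random-effects ICC(1,1), after Shrout & Fleiss [16].

    scores : (n_subjects, k_trials) array of test scores.
    This is one of several ICC variants; the choice here is an
    assumption, since the paper does not specify one.
    """
    scores = np.asarray(scores, float)
    n, k = scores.shape
    grand = scores.mean()
    # Between-subjects and within-subjects mean squares
    ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() \
                / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Three perfectly consistent trials per subject give ICC = 1:
trials = np.array([[10., 10., 10.], [20., 20., 20.], [30., 30., 30.]])
```

Applied to an (n_subjects × 3) matrix of trial scores, this yields the kind of per-test reliability figures reported in Tables 1-3.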
Table 2: Test-Retest Reliability (Pearson correlation coefficients) between three successive repetitions in Flamingo Test
Test
1/2
2/3
1/3
Flamingo
18
36,3
24,7
Table 3: Test-Retest Reliability (Pearson correlation coefficients) between three successive repetitions in Advanced Romberg Test
Test
1/2
2/3
1/3
Advanced Romberg
73,5
81,4
62,5
Graph 2: Flamingo Test (frequencies)
30 25 20 15 10 5
IV. CONCLUSION The main purpose of this study was to compare some basic metric characteristics of a novel method for balance testing with two well established clinical balance tests. We namely wanted to optimize diagnostics on this field by developing a handy, portable tilt board with embedded sensory and processing electronic unit that enables us to get well quantified data. This measurement tool is simple to use and, apart from the clinical tests, much less testerdependent. Routinely, in Romberg we normally use also less intense variations that are carried out first, before shifting to the advanced test with hands akimbo and eyes closed. However, including non-advanced variations would not change the leveling effect, if only differentiation among less balanced subjects would be better. Based on the results of our study it can be concluded that metric characteristics of the CBB in comparison to the other two clinical balance tests (Flamingo and Romberg) are dominating. Because of its portability and moderate price CBB could be easily applied to routine balance diagnostic procedures in sports and rehabilitation. Our results prove that the CBB dominates other two tests regarding both metric characteristics, sensitivity and repeatability, tested.
56 to 60
51 to 55
45 to 50
41 to 45
36 to 40
31 to 35
26 to 30
21 to 25
16 to 20
11 to 15
6 to 10
1 to 5
0
REFERENCES 1. 2.
Graph 3: Romberg Test (frequencies)
60
3.
50 4.
40 30
5.
20 10
6.
56 to 60
51 to 55
45 to 50
41 to 45
36 to 40
31 to 35
26 to 30
21 to 25
16 to 20
11 to 15
6 to 10
1 to 5
0
7.
8. 9.
Riemann BL, Guskiewicz KM, Shields EW (1999) Relationship between clinical and force plate measures of postural stability. J Sport Reh, 8:71-82 Emery CA, Cassidy JD, Klassen TP et al. (2005) Effectiveness of a home-based balance-training program in reducing sports-related injuries among healthy adolescents: a cluster randomized controlled trial. CMAJ 172(6):749-754 Emery CA, Rose MS, McAllister JR et al. (2007) A prevention strategy to reduce the incidence of injury in high school basketball: a cluster randomized controlled trial. Clin J Sport Med 17(1):17-24 McGuine TA, Keene JS (2006) The affect of a balance training program on the risk of ankle sprains in high school athletes. Am J Sports Med 34(7):1103-1111 Caraffa A, Cerulli G, Projetti M et al. (1996) Prevention of anterior cruciate ligament injuries in soccer. A prospective controlled study of proprioceptive training. Knee Surg Sports Traumatol Arthrosc 4(1):19-21 Myer GD, Ford KR, Palumbo JP et al. (2005) Neuromuscular training improves performance and lower-extremity biomechanics in female athletes. J Strength Cond Res 19(1):51-60 Holm I, Fosdahl MA, Friis A et al. (2004) Effect of neuromuscular training on proprioception, balance, muscle strength, and lower limb function in female team handball players. Clin J Sport Med 14(2):8894 Shepard TN, Telian AS (1996) Balance Disorder Patient. Singular Publishing Group, San Diego Emery CA (2003) Is there a clinical standing balance measurement appropriate for use in sports medicine? A review of the literature. J Sci Med Sport 6(4):492-504
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A Novel Testing Tool for Balance in Sports and Rehabilitation
Corresponding Author: Nejc Šarabon
Institute: Institute of Clinical Neurophysiology, UMC
Street: Zaloška cesta 2
City: Ljubljana
Country: Slovenia
Email: [email protected]
Acceleration driven adaptive filter to remove motion artifact from EMG recordings in Whole Body Vibration
A. Fratini, M. Cesarelli, P. Bifulco, A. La Gatta, M. Romano, G. Pasquariello
Dept. of Electronics and Telecommunications Engineering, University of Naples “Federico II”, Naples, Italy
Abstract — Whole Body Vibration training is increasingly used in sports medicine to enhance athletic performance; recently, vibration treatment has also been used for therapy and rehabilitation of patients affected by different pathologies. It is also worth mentioning that excessive vibration can be hazardous. The treatment is based on the hypothesis that, under some circumstances, vibration loads induce specific responses from the neuromuscular system; some clinical evidence suggests mechanical and metabolic reactions. Usually, vibrations are transmitted to the patient's body by means of a platform oscillating at different frequencies (10-80 Hz); mainly, limb muscles are involved. Many studies on the subject employ surface EMG recordings to evaluate muscle activity during vibration training. This condition generates large motion artifact on the electrodes at the vibration frequency. To remove this artifact, an adaptive filter was designed: accelerometers placed on the platform or directly on the muscles provide the shape of the error signal to be cancelled from the raw EMG. In particular, surface EMG was recorded from the leg quadriceps muscles of volunteers during whole body vibration therapy sessions at different oscillation frequencies. A standard RLS adaptive filter was used to cancel the motion artifact from the EMG in real time. Results show effective cancellation of the vibration frequency from the raw EMG; the RMS value of the cleaned EMG is lower than that of the unprocessed signal, in some cases by up to 50%.
Keywords — Whole body vibration, surface electromyography, adaptive filtering.
I. INTRODUCTION
Vibration treatment, perhaps better known as "Whole Body Vibration training", has become a popular training method in the past years. It is well known that sustained mechanical vibration applied to muscles or tendons can elicit a reflex muscle contraction: the so-called Tonic Vibration Reflex (TVR). Some evidence suggests that vibrations induce activity of the muscle spindle Ia fibers, mediated by monosynaptic and polysynaptic pathways [1, 2]. In general, Whole Body Vibration training consists of the application of mechanical vibrations to the entire body by means of an oscillating platform in contact with the patient's limbs. In this way, vibrations are transmitted through the skeletal and muscular systems, generating different vibratory stimuli to various muscles, which are expected
to elicit the reflex. Many studies and some clinical evidence have been reported on the subject [3, 4], but the effects of vibration treatment are not yet fully understood and are still debated. Physiological studies show that the human response to vibration depends mainly on three factors: amplitude, frequency, and muscle tension or segment stiffness (the latter depends on patient positioning, training, etc.). Some controversy in previous results could be explained by the wide differences in the stimulations and output parameters considered; indeed, it is very difficult to know the actual mechanical stimulation delivered to a specific muscle. It is also worth mentioning that excessive vibration loads can be hazardous [5, 6]; this is mainly related to long-term exposure. The present study concentrates on leg muscle training with vibrating platforms. A standard application consists of a number of vibratory sessions at predetermined frequencies and intensities followed by resting periods, while the patient's feet are on the oscillating platform and the knees are bent at a certain angle. Before treatment, it is often suggested to run a "discovery session" to determine an optimal training frequency. Such an evaluation can be achieved by maximising the RMS value of the surface electromyogram (SEMG) recorded from leg muscles while the patient is subjected to vibrations of increasing frequency (usually from 10 to 80 Hz, in increments of a few Hz). Finally, the vibration frequency that generates the maximum EMG RMS value (as a proxy for the highest muscular response) is used to deliver the subsequent sequences of vibration treatments [7]. However, it is well known that during surface biopotential recording motion artifacts may arise from relative motion between the electrodes and the skin. Polarisation at the electrode-electrolyte interface, but also different concentrations of electrolytes, even between different skin layers, play important roles in this phenomenon [8].
If the electrical double layer that forms between the electrode metal and the electrolyte is disturbed, a variation of the electrode potential is measured; the use of floating electrodes, in which the electrode-electrolyte double layer is recessed from the skin, actually reduces the artifact. The presence of a thick electrolytic layer (e.g. the use of conductive gel) provides a shock-absorbing function. The electrode material obviously contributes as well (e.g. the use of a silver chloride coating). In general, non-polarizable wet electrodes (e.g. Ag/AgCl) give significantly lower motion artifact than dry and insulating electrodes (such as steel, Ti, Al) [9]; they are also less sensitive to electrically charged bodies near the electrodes.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 990–993, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Moreover, a potential difference is observed between the inside and the outside of the skin [10]. Different skin layers act as permeable membranes through which charged ions can diffuse. Skin stretch causes the skin potential to change; this phenomenon contributes to the recorded motion artifact. Skin abrasion and skin punctures (a sort of short-circuiting of the skin-layer potentials) have been suggested to reduce this artifact. Cable movement can also cause motion artifact: triboelectric noise can arise from friction and deformation of the electrode lead insulation, which acts as a piezoelectric movement transducer. Placing the biopotential amplifier as close as possible to the electrode site helps reduce this artifact. In general, the literature concentrates on clinical recordings. Motion artifact amplitude can be even ten times larger than the ECG signal and can be particularly troublesome in ambulatory ECG recordings, and even more so during exercise ECG such as stress tests or Holter monitoring [9]. In clinical EMG recordings, the frequency content of motion artifact is typically considered to lie below 10-20 Hz [11]; consequently, the general approach to motion artifact reduction is to high-pass filter the EMG (e.g. with a cut-off frequency of 20 Hz): little of the true EMG signal power is lost, while most of the motion artifact is rejected. Motion artefacts also hinder EEG recordings, electrical impedance pneumography, etc. However, in particular situations, such as during whole body vibration, motion artifacts contribute substantially to the EMG recordings: the raw EMG contains a relevant motion artifact signal strictly related to the mechanical vibration frequency. This study proposes a method to remove motion artefacts by exploiting knowledge of the muscle motion obtained by means of accelerometers.
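The conventional high-pass remedy mentioned above can be sketched as follows. This is an illustrative sketch, not the paper's method: the sampling rate, filter order, and the 5 Hz synthetic drift standing in for motion artifact are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_emg(emg, fs, cutoff=20.0, order=4):
    """Zero-phase Butterworth high-pass: the conventional way to reject
    low-frequency motion artifact from clinical EMG recordings."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, emg)

# Synthetic demonstration: broadband noise standing in for EMG, plus a
# slow 5 Hz "motion artifact" that the 20 Hz high-pass removes.
fs = 1000.0                        # assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
emg = rng.standard_normal(t.size)
artifact = 3.0 * np.sin(2 * np.pi * 5.0 * t)
cleaned = highpass_emg(emg + artifact, fs)
```

Under whole body vibration this simple approach breaks down, since the artifact frequency (15-45 Hz in this study) sits inside the useful EMG band — which is what motivates the adaptive scheme the paper proposes.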
In particular, a time-varying adaptive filter able to remove motion artefacts is presented, providing a better estimate of the "real" muscular activity during vibration treatment.
II. METHODS
A. System set-up
A platform (XP Multipower, Tsem S.p.a.) was used to deliver vibrations to the patients. The platform was modified by the manufacturer to allow remote control of the principal parameters (i.e. vibration frequency and intensity) from an external PC. Moreover, a mono-axial piezoelectric transducer was embedded into the oscillating platform. A multichannel, isolated biomedical signal amplifier (BM623, Biomedica Mangoni) was used to record surface EMGs. A tiny and lightweight three-axial MEMS accelerometer (Freescale Semiconductors MMA7260QT) was used to measure accelerations on the patient's skin at the level of the EMG electrodes; the sensor was set to measure acceleration in the range ±6 g. EMG and acceleration signals were acquired using a 16-bit multi-channel PC data acquisition card (National Instruments DAQCard 6251); the same card was also used to control the platform oscillations. Specific software was designed to perform interactive platform control and the acquisition and processing of the signals, using the LabWindows/CVI IDE (National Instruments).
B. Subjects
Ten males (age 29.2 ± 5.8 years, weight 77.5 ± 18.5 kg) volunteered for the preliminary tests. Subjects gave their informed, written consent to participate. They were students at the University of Naples "Federico II" and not trained athletes.
C. Protocol
Vibrations were applied via the lower extremities while the subjects kept a hack squat position on top of the vibrating platform. Surface EMG of the quadriceps (rectus femoris) was recorded using small circular Ag/AgCl cup electrodes (5 mm in diameter, inter-electrode distance of 20 mm, arranged in the direction of the muscle fibres; a conductive paste was used), in accordance with the guidelines of the SENIAM Project [11]. EMG signals were amplified with a gain of 1000 V/V and a 3 dB frequency band ranging from 10 to 450 Hz; no notch filters were used to suppress line interference. The MEMS accelerometer was stuck onto the skin as close as possible to the electrodes. A set of 10-second vibrations at different frequencies in the range 15-45 Hz was delivered to the patients. The peak-to-peak displacement of the vibrating platform (vibration intensity) was set to about 6 mm. Before each vibratory stimulus, 5 seconds of EMG and acceleration signals were acquired to study the signals without any vibration artefacts: the subject held the hack squat position before the vibration started. All signals were sampled at 1000 Hz.
Figure 1 shows a typical EMG signal at vibration onset. On the left of Figure 1 (A), the basal rectus femoris EMG activity is visible (the patient is hack squatted). When the vibration starts, the EMG signal combined with the motion artifact (at the vibration frequency) can easily be noted; the lower subplot (B) shows the acceleration recorded simultaneously at the electrode level.
Fig. 1: Motion artifact at vibration onset: (A) raw EMG signal recorded at the electrodes, (B) acceleration modulus recorded by the MEMS accelerometer.
D. EMG Processing
The acceleration signals provide information about the patient's oscillation. The signals acquired from the 3-axis MEMS accelerometer were pre-processed in order to exclude the influence of gravity. An adaptive filter was designed to cancel out the motion artefacts: Figure 2 depicts the block scheme. A standard Recursive Least Squares (RLS) implementation [14] was used for the adaptive filter. Filter performance was first evaluated using simulated EMG (Gaussian noise filtered with a Hanning band-pass filter, 10-100 Hz) and motion artifact (a pure sine added to the EMG). The adopted RLS filter had 64 weights; the forgetting factor was set to 0.85. Either the acceleration signal from the platform piezoelectric transducer or one (or a linear combination) of the three accelerations provided by the MEMS can be used as the noise reference.
Fig. 2: Adaptive filter block schematic. The primary input (EMG + motion artifact) and the noise reference (acceleration signal) feed the adaptive filter, whose output is subtracted from the primary input to yield the "true" EMG.
III. RESULTS AND CONCLUSION
The implemented filter effectively reduces the vibration-dependent motion artefacts in the EMG recordings. To appreciate the filter performance, as an example, Figure 3 shows a 700 ms tract of the raw EMG (A) recorded from the rectus femoris of a patient vibrated at 25 Hz; along the same time axis are shown the filtered ("true") EMG (B), the cancelled motion artifact (C) and, finally, the reference signal (acceleration on the muscle) (D). The power spectrum of the raw EMG clearly shows the presence of sharp peaks: the tallest at the mechanical frequency chosen on the vibrating platform, and relatively smaller contributions at the even and odd harmonics. The three acceleration signals recorded on the muscle belly are generally not in phase with each other and also show different amounts of harmonics. The peak-to-peak amplitude of the acceleration depends on frequency; in some cases a resonant-like amplitude profile can be observed, with a resonant frequency that depends on the patient. The adaptive filter shows an adequately fast response to variations of the system condition (transients due to the onset and stop of the vibration). The RMS values of the surface EMG differ significantly between the processed and unprocessed signals. In these preliminary results, the RMS value of the processed EMG was in some cases reduced by up to 50%; this means that the power of the motion artifact is comparable (or at least not negligible) with respect to that of the muscular activity.
Fig. 3: Example of EMG adaptive filtering. (A) raw EMG signal, (B) filtered EMG, (C) motion artifact removed and (D) acceleration recorded by the MEMS.
The filtering of the motion artifact helps reduce the uncertainty concerning the neuromuscular response to vibration. Further studies are planned to investigate in more detail the motion of different muscles and body parts of a patient subjected to whole body vibration. More subjects will be involved in the experiments. A multiple configuration is being set up in order to measure SEMG simultaneously from the vastus medialis and vastus lateralis. Multiple knee angles will be considered, and approximate information about the geometric anatomy of the muscles involved (such as muscle length, leg diameters, etc.) will be taken into account. Furthermore, it is reasonable to try to estimate the displacement of the skin (near the electrode site) from the acceleration data recorded by the MEMS accelerometer and, in turn, to retrieve information about the frequency-dependent mechanical behaviour of the muscles.
ACKNOWLEDGMENT
The authors are particularly grateful to TSEM S.p.a. for providing the vibration training device and custom hardware modifications, and for generously funding the research activity.
REFERENCES
1. Roll JP, Vedel JP, Ribot E (1989) Alteration of proprioceptive messages induced by tendon vibration in man: a microneurographic study. Exp Brain Res 76:213-222
2. Romaiguère P, Vedel JP, Azulay JP, Pagni S (1991) Differential activation of motor units in the wrist extensor muscles during the tonic vibration reflex in man. J Physiol (Lond) 444:645-667
3. Bosco C, Iacovelli M, Tsarpela O et al. (2000) Hormonal responses to whole-body vibration in men. Eur J Appl Physiol 81:449-454
4. Bosco C, Colli R, Introini E et al. (1999) Adaptive responses of human skeletal muscle to vibration exposure. Clin Physiol 19:183-187
5. Mester J, Spitzenfeil P, Schwarzer J, Seifriz F (1999) Biological reaction to vibration - implications for sport. J Sci Med Sport 2(3):211-226
6. Mester J, Kleinoder H, Yue Z (2006) Vibration training: benefits and risks. J Biomech 39:1056-1065
7. Cardinale M, Lim J (2003) Electromyography activity of vastus lateralis muscle during whole-body vibrations of different frequencies. J Strength Cond Res 17(3):621-624
8. Tam H, Webster JG (1977) Minimizing electrode motion artifact by skin abrasion. IEEE Trans Biomed Eng 24:134-139
9. Searle A, Kirkup L (2000) A direct comparison of wet, dry and insulating bioelectric recording electrodes. Physiol Meas 21:271-283
10. de Talhouet H, Webster JG (1996) The origin of skin-stretch-caused motion artifact under electrodes. Physiol Meas 17:81-93
11. Hamilton PS, Curley M, Aimi R (2000) Effect of adaptive motion-artifact reduction on QRS detection. Biomed Instrum Technol 34(3):197-202
12. Clancy EA, Bouchard S, Rancourt D (2001) Estimation and application of EMG amplitude during dynamic contractions. IEEE Eng Med Biol Mag 20(6):47-54
13. Hermens HJ, Freriks B, Merletti R et al. (2000) European recommendations for surface electromyography - results of the SENIAM project. Italian version by Merletti R, CLUT
14. Haykin S (1996) Adaptive Filter Theory, 3rd edn. Prentice-Hall
Corresponding Author: Antonio Fratini
Institute: Dept. of Electronic and Telecommunication Engineering, University of Naples "Federico II"
Street: Via Claudio, 21
City: Naples
Country: Italy
Email: [email protected]
Change of mean frequency of EMG signal during 100 meter maximal free style swimming
I. Stirn1, T. Vizintin2, V. Kapus1, T. Jarm2 and V. Strojnik1
1 University of Ljubljana, Faculty of Sport, Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— Changes in the EMG signal spectral parameters of selected arm muscles were monitored during 100 m maximum front crawl swimming. MNF decreased linearly, to approximately the same extent, in all observed muscles during swimming, and no plateau of stabilized MNF values was observed at the end of the swim. However, when normalized with respect to the endurance-level MNF value obtained with a 90-second maximal voluntary isometric contraction after swimming, some differences between the analyzed muscles emerged.
Keywords— swimming, fatigue, electromyography, mean frequency
I. INTRODUCTION
Fatigue is an important limiting factor for swimming performance. Using EMG spectral analysis it is possible to monitor the fatigue process continuously from the onset of swimming in different muscles simultaneously. The electrical manifestations of muscle fatigue are reflected in a shift of the power spectrum parameters (mean (MNF) or median (MDF) frequency of the power spectrum) towards lower frequencies [1]. MNF decreases in two phases: an initial steep linear decrease, which has been labeled the fatigue phase, followed by a plateau with very small or no further decrease, which has been labeled the endurance level [2]. The two-phase MNF decrease has been observed in different conditions: during repeated maximum isokinetic contractions [3], dynamic contractions [4], treadmill uphill running [5] and cycling [6]. We found only one study that analyzed the MNF decrease during swimming [7]. The subjects had to swim 200 meters in a 50 m pool. The IMNF (instantaneous MNF) during the first 25 m of the first 50 m dash was compared to the second 25 m of the last 50 m dash. A decrease of MNF in the extensor carpi ulnaris and flexor carpi ulnaris (11.41% and 8.55% respectively) occurred and was attributed to the fatigue of these muscles due to their wrist stabilization role during swimming. The aim of our study was to observe changes in the EMG signal spectral parameters of selected arm muscles during 100 m maximum front crawl swimming. However, a direct comparison of EMG parameters among different muscles is not
possible without proper normalization. To normalize the MNF shift, the lower limit (the endurance level) of MNF for each individual muscle was assessed during a 90-second maximal voluntary isometric contraction performed after swimming. It was assumed that the muscle with the MNF closest to its endurance level, or the muscle which reaches its endurance level sooner, is the one that exhibits the greatest level of fatigue.
II. METHODS
Subjects: Eleven swimmers (22.0 ± 2.9 years of age, 184.8 ± 8.2 cm, 77.2 ± 4.9 kg) volunteered for the study. They were all experienced competitive swimmers, involved in swimming for 13.6 ± 3.1 years on average, with an average personal best result in the 100 m front crawl of 53.05 ± 1.72 s, although not all of them were crawl specialists. In the 2-3 days before testing they did not perform any intensive strength or swim training.
Organization of the measurements: The measurements took place in a 25 m indoor swimming pool. After being equipped with all the necessary equipment, the subjects performed a 10 x 50 m front crawl warm-up series at a medium level of effort. After the warm-up, the subjects swam 100 m front crawl with maximum effort. Because of the measurement equipment, the swimmers started by pushing off the side of the pool and were not allowed to perform underwater turns. Each swimmer was recorded with a video camera moved along the pool in parallel with the swimmer, perpendicular to the head. These recordings were used to calculate the "clean" swimming speed, stroke length and stroke rate. After the swimming, 90-second maximal voluntary isometric contractions were performed for each of the three observed muscles - pectoralis major (PM), latissimus dorsi (LD) and triceps brachii (TB). Specific body positions which emphasized the functional role of the observed muscle were used on a rescue bed beside the pool.
Blood lactate concentration analysis: Blood samples from the earlobe were taken before the swimming test, immediately after the test, and 3 and 5 minutes after the test.
Blood lactate concentrations were measured using an EPPENDORF (Germany) lactate analyzer.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1002–1005, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Collection of the EMG data: EMG signals from m. pectoralis major (PM), m. latissimus dorsi (LD) and m. triceps brachii (TB) were collected during both fatiguing protocols - swimming and isometric contractions. These muscles were chosen because of their importance in front crawl swimming. Because the pectoralis major and latissimus dorsi are large muscles and separate muscle compartments might be involved in the contractions, EMG was collected separately from the upper (PM1, LD1) and lower parts (PM2, LD2) of these muscles. The skin was shaved, rubbed with sandpaper and cleaned with alcohol prior to attachment of the electrodes, so that the inter-electrode resistance did not exceed 5 kOhm. Bipolar Ag-AgCl skin electrodes (9 mm diameter, Hellige, Freiburg, Germany) with an inter-electrode distance of 20 mm were used. The ground electrode was positioned on the cervical vertebrae. Transparent dressings with a label (Tegaderm, 6 cm x 7 cm, 3M, USA) were used to cover the electrodes and isolate them from the water. All cables were fixed with ordinary adhesive tape at several spots in order to minimize their movement and, consequently, their interference with the signal. To further prevent movement of the cables, the swimmers were dressed in a long-sleeve swimming suit. The cables from the electrodes were fixed to a coaxial cable. All coaxial and trigger cables were sheathed and connected to the telemetric EMG transmitter (Biotel 88, Glonner, München, Germany), which was carried above the swimmer on a rod during swimming. Data were recorded using DasyLab software (Dasytec, Amherst, NH) at a sampling frequency of 2000 Hz and afterwards analyzed with MATLAB (2004, The MathWorks, Inc., Natick, MA). EMG signals were filtered with a Butterworth band-pass filter with lower and upper cut-off frequencies of 10 Hz and 500 Hz respectively.
Analyzing data during swimming: For every muscle and every stroke, an envelope of generalized energy was calculated using a sliding 250 ms window. An "active" part of the stroke was then extracted, defined as the part of the
Fig. 1 MNF during swimming. Each dot represents the MNF of one stroke. The MNF on the fitted curve at the time of the last stroke (MNFends = 78.6 Hz) is marked.
signal where the EMG energy was greater than 50% of the maximum energy within the stroke in question. From the extracted segment, the power spectral density (PSD) was calculated via the Fourier transform using the periodogram method, and the PSD was characterized by its mean frequency (MNF). A linear model was fitted to the scattergram of the MNF values belonging to the individual strokes for each muscle. We used this model to estimate the values of MNF at the time of the first and the final stroke within every 100 m swim (see Figure 1).
Analyzing data during isometric contraction: The EMG signal during the 90-second maximal voluntary isometric contraction was recorded and afterwards split into 250 ms non-overlapping intervals. The power spectral density was calculated for each interval using the periodogram method and evaluated by calculating its mean frequency (MNF). An exponential model was fitted to the MNF values and the lowest (plateau) MNF value was then calculated. The fatigue index was calculated from the corresponding MNF values as
Findex = (MNFends - MNFp) / (MNFbegs - MNFp) * 100
where MNFends represents the MNF value at the end of swimming, MNFbegs the value at the beginning of the swim, and MNFp the MNF value at the endurance level (the plateau value). Lower values of Findex indicate a state of relatively greater fatigue.
Statistics: Standard procedures were used to calculate means and standard deviations. The one-way ANOVA procedure was used to find differences between the parameters of different muscles, and Tukey's post hoc test was used to find out which means differed. The level of significance was set at P < 0.05 (two-sided tests).
III. RESULTS
The subjects swam the 100 m distance in an average time of 62.7 ± 2.4 s. The average blood lactate value, measured 5 min after swimming, was 14.1 ± 2.9 mmol/l; the highest value measured was 17.7 mmol/l. MNF decreased linearly during swimming in all observed muscles.
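The per-interval spectral estimate described in the Methods can be sketched as follows; a pure 80 Hz tone stands in for a 250 ms EMG segment (the 2000 Hz sampling rate matches the acquisition described above), and the MNF values fed to the fatigue index are hypothetical.

```python
import numpy as np
from scipy.signal import periodogram

def mean_frequency(segment, fs):
    """MNF: the PSD-weighted mean of frequency, with the PSD estimated
    by the periodogram method, as in the analysis described above."""
    f, pxx = periodogram(segment, fs=fs)
    return np.sum(f * pxx) / np.sum(pxx)

def fatigue_index(mnf_end, mnf_beg, mnf_plateau):
    """Findex = (MNFends - MNFp) / (MNFbegs - MNFp) * 100."""
    return (mnf_end - mnf_plateau) / (mnf_beg - mnf_plateau) * 100.0

# Sanity checks on synthetic inputs: the MNF of a pure tone is the tone
# frequency, and lower Findex means the muscle is closer to its plateau.
fs = 2000.0
t = np.arange(0, 0.25, 1.0 / fs)           # one 250 ms analysis interval
mnf = mean_frequency(np.sin(2 * np.pi * 80.0 * t), fs)
findex = fatigue_index(75.0, 100.0, 55.0)  # hypothetical MNF values (Hz)
```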
An example of the MNF decrease is presented in Figure 1. No plateau of stabilized MNF was observed in any of the analyzed muscles during swimming. A comparison of the average MNF values of all muscles at the beginning (at the time of the first stroke) and at the end (at the time of the last stroke) of swimming is presented in Figure 2 (left). The differences among the MNF changes of the different muscles were not statistically significant (Figure 2, left). However, the greatest decrease of MNF was detected in TB
Fig. 2 MNF at the end of the 100 m swim expressed as a percentage of the MNF at the beginning of swimming (left), and MNF at the end of the swim normalized to the endurance-level MNF (right). ** p < 0.05. TB - triceps brachii, PM1 - pectoralis major, upper part, PM2 - pectoralis major, lower part, LD1 - latissimus dorsi, upper part, LD2 - latissimus dorsi, lower part.
(from 103.4 ± 8.4 to 78.3 ± 11.3 Hz, i.e. 25.1 ± 8.9%), followed by LD1 (95.0 ± 7.2 to 70.0 ± 10.5 Hz, i.e. 22.8 ± 8.8%) and PM1 (88.0 ± 17.3 to 67.8 ± 10.7 Hz, i.e. 20.2 ± 11.3%). During the 90 s isometric contractions, a shift of the MNF values towards lower frequencies was regularly observed, and in almost all trials the lowest MNF value (MNFp) could be calculated (Figure 3). The MNFp values were significantly lower than the MNFends values at the end of swimming (Table 1). When the MNFends values were normalized to MNFp (Findex), statistically significant differences between the indexes of different muscles were found (Figure 2, right). The lowest Findex values were found for TB and LD1: 37.3 ± 19.6 and 42.1 ± 24.5, respectively.
IV. DISCUSSION
Fig. 3 Example of the MNF curve during a 90-second isometric contraction for the LD1 muscle (Subject 3). Each dot represents the MNF value calculated for one 250 ms interval. Using the curve equation y = y0 + a*e^(-δt), the endurance-level MNF was calculated; in this case MNFp = 36.3 Hz.
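The endurance-level estimate described above can be reproduced with a standard nonlinear least-squares fit. The MNF series here is synthetic, generated from the same model with a true plateau of 36.3 Hz; the decay rate and noise level are assumed values, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def endurance_model(t, y0, a, delta):
    """MNF decline towards a plateau: y = y0 + a * exp(-delta * t)."""
    return y0 + a * np.exp(-delta * t)

# Synthetic MNF series: one value per 250 ms interval of a 90 s
# contraction; plateau 36.3 Hz, decay and noise levels assumed.
rng = np.random.default_rng(2)
t = np.arange(0.0, 90.0, 0.25)
mnf = endurance_model(t, 36.3, 45.0, 0.08) + rng.normal(0.0, 1.5, t.size)

# Fit the model and read the endurance level off the asymptote y0.
popt, _ = curve_fit(endurance_model, t, mnf, p0=(30.0, 40.0, 0.1))
mnf_plateau = popt[0]
```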
Table 1 Comparison between MNFp and MNFends (Hz). TB - triceps brachii, PM1 - pectoralis major, upper part, PM2 - pectoralis major, lower part, LD1 - latissimus dorsi, upper part, LD2 - latissimus dorsi, lower part.

Muscle   MNFp mean (SD)   MNFends mean (SD)   p
TB       58.4 (12.4)      75.7 (7.6)          0.003
PM1      35.0 (10.0)      68.5 (11.0)         0.003
PM2      41.7 (13.9)      64.3 (10.0)         0.001
LD1      53.4 (13.3)      70.0 (10.5)         0.005
LD2      45.1 (10.4)      64.1 (9.5)          0.002
The results of the 100 m swim were considerably worse than the swimmers' best results. This was due to the lack of a competitive dive start, the execution of above-water instead of underwater turns, the equipment fastened to the swimmer, and the lack of competitive conditions in general. However, the high blood lactate concentrations after swimming revealed that the subjects swam close to their maximum effort, since the lactate values were close to those obtained after competition [7]. MNF decreased linearly during swimming in all analyzed muscles. Neither a plateau of stabilized MNF, labeled the endurance level, nor a change in the shape of the declining curve towards a plateau was observed in any of the analyzed muscles during swimming (Figure 1). The decreases of MNF ranged from 16.7% to 25.1% with respect to the initial values, which corresponds to 15-25 Hz in absolute values. No statistically significant differences between the MNF decreases of the different analyzed muscles during swimming were obtained. When the MNFends values were normalized, TB and LD1 approached their endurance levels most closely and therefore fatigued more than LD2, PM1 and PM2. Changes in MNF occur more rapidly in muscles composed of a high proportion of fast-twitch (FT) muscle fibers than in muscles composed of a high proportion of slow-twitch (ST) fibers [1, 8, 9]. The TB is composed of 65-75% fast-twitch fibers [9], the highest proportion among the analyzed muscles. However, the activation pattern of the individual muscles during front crawl swimming might be of crucial importance when evaluating fatigue. By analyzing the rectified EMG signal it is possible to observe very clearly the differences between the activation and resting periods of the muscles (Figure 4). LD and especially TB were activated longer than the PM muscle and therefore rested for less time during one stroke. Because we were not able to evaluate the exact
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Change of mean frequency of EMG signal during 100 meter maximal free style swimming
V. CONCLUSIONS
Fig. 4 Rectified EMG of one arm stroke of one subject. From top to bottom: PM1 – pectoralis major, upper part; PM2 – pectoralis major, lower part; LD1 – latissimus dorsi, upper part; LD2 – latissimus dorsi, lower part; TB – triceps brachii
intensity of contraction for an individual muscle, we can only speculate that the longer activation period during a single stroke was the reason for greater fatigue. The greatest activation of the TB occurred at the end of the activation period, during the upsweep, when the elbow extension was executed (figure 4, marks 5). At that moment the forces applied to the hand are the greatest, because the hand is moving through the water with the greatest velocity and the water resistance is therefore greatest as well [11]. This could be another reason why the TB muscle fatigued the most. Significant fatigue of the LD muscle was expected as well. Contraction of the LD produces internal rotation, extension and adduction of the shoulder joint, which is nearly a description of the crawl arm stroke. That is why the LD has been labeled "the swimming muscle" [12]. MNF values decreased more in the upper part of the LD than in the lower part. This might be related to the swimming technique. LD2 is most activated at the beginning of its involvement (during the downsweep), where the speed of the hand is low. The resistance, and therefore the force applied by the LD, is smaller than at the end of the underwater part of the stroke, during the upsweep, where the speed of the hand is greatest. Therefore LD1 contracts against greater resistance and produces greater forces, which leads to a greater extent of fatigue in LD1 than in LD2. As in time-domain EMG analysis, normalization should be performed in frequency-domain EMG analysis as well, in order to directly compare the MNF of different muscles. After normalization of MNFends (Findex), some significant differences in muscle fatigue were observed among the analyzed muscles. It seems that the endurance level of MNF, used to describe the lowest border of the MNF change, is a suitable solution for normalization.
MNF decreased to approximately the same extent in all observed muscles during swimming at maximum effort. No plateau of stabilized MNF values was observed at the end of the 100 m swim. Yet, when normalized with respect to the endurance-level MNF value obtained with a 90-second isometric contraction at maximum effort, differences between the analyzed muscles can be shown.
REFERENCES

1. Komi P V, Tesch P (1979) EMG frequency spectrum, muscle structure and fatigue during dynamic contractions in man. Eur J Appl Physiol 42:41-50
2. Gerdle B, Eriksson N E, Hagberg C (1988) Changes in the surface electromyogram during increasing isometric shoulder forward flexions. Eur J Appl Physiol Occup Physiol 57:404-408
3. Wretling M L, Henriksson-Larsen K, Gerdle B (1997) Interrelationship between muscle morphology, mechanical output and electromyographic activity during fatiguing dynamic knee-extensions in untrained females. Eur J Appl Physiol Occup Physiol 76(6):483-490
4. Ament W, Bonga G J, Hof A L, Verkerke G J (1993) EMG median power frequency in an exhausting exercise. J Electromyogr Kinesiol 3(4):214-220
5. Strojnik V, Jereb B, Colja I (1997) Median frequency change during 60 s maximal hopping and cycling. In: Book of Abstracts, Second Annual Congress of the European College of Sport Science, Copenhagen, Denmark, pp 370-371
6. Caty V Y et al. (2006) Time-frequency parameters of wrist muscles EMG after an exhaustive freestyle test. Rev Port Cien Desp 6:28-30
7. Bonifazi M, Martelli G, Marugo L, Sardela F, Carli G (1993) Blood lactate accumulation in top level swimmers following competition. J Sports Med Phys Fitness 33(1):13-18
8. Larsson B, Karlsson S, Eriksson M, Gerdle B (2003) Test-retest reliability of EMG and peak torque during repetitive maximum concentric knee extensions. J Electromyogr Kinesiol 13:281-287
9. Moritani T, Gaffney F, Carmichael T, Hargis J (1985) Interrelationships among muscle fiber types, electromyogram and blood pressure during fatiguing isometric contraction. In: Winter D et al. (Eds) Biomechanics. Human Kinetics Publishers, Champaign, Illinois
10. Scheilhauf R E (1979) A hydrodynamic analysis of swimming propulsion. In: Terauds J, Bedingfield E W (Eds) Swimming III, International Series of Sport Sciences. University Park Press, Baltimore
11. Behnke R S (2001) Kinetic Anatomy. Human Kinetics, United States

Author: Igor Stirn
Institute: University of Ljubljana, Faculty of Sport
Street: Gortanova 22
City: Ljubljana
Country: Slovenia
Email: [email protected]
Repeatability of the Mean Power Frequency of the Endurance Level During Fatiguing Isometric Muscle Contractions

I. Stirn1, T. Jarm2 and V. Strojnik1

1 University of Ljubljana, Faculty of Sport, Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract— A criterion for deciding whether a sustained isometric contraction was long enough for reliable determination of the so-called "endurance level" (MNFp) was tested. It was shown that if a maximal voluntary muscle contraction is sustained long enough that the fitted exponential curve fulfills certain predefined derivative criteria, as described in the text, the data can be used for reliable estimation of MNFp without the experimental MNF data actually reaching the plateau level. Using these criteria, it was found that the differences in the MNFp values of two consecutive series of maximal voluntary isometric muscle contractions were not statistically significant. Keywords— muscle fatigue, EMG, MNF, endurance level, repeatability
I. INTRODUCTION

Surface electromyography (EMG) can be used to detect and monitor the progression of muscle fatigue. For this purpose the EMG signal is usually analyzed in the frequency domain. The power spectral density (PSD) of the EMG signal is estimated, and the mean and median frequency of the PSD (MNF and MDF respectively) are calculated and used as measures of the central frequency of the PSD. It is well known that MNF (and MDF) shift to lower frequencies during the progression of muscle fatigue [1,2] when a continuous isometric contraction of the muscle is performed. This decrease of MNF has been predominantly attributed to the decrease in muscle fiber conduction velocity, even though there are also other possible causes [2,3,4]. The results of several studies indicate that the observed shift of MNF to lower frequencies is mostly due to biochemical changes in the type II muscle fibers [5,6]. The decrease of MNF as a function of time during a long-lasting sustained isometric contraction can be efficiently modeled using the exponential model [7]:

    f_MNF(t) = f_MNFp + a · e^(−t/τ),    (1)
where fMNFp is the so-called plateau level of MNF sometimes referred to as the endurance level [8]. It has been
hypothesized that both the initial level of MNF at the beginning of the muscle contraction (corresponding to f_MNFp + a in equation 1) and especially the final plateau level reached after a long sustained isometric contraction (f_MNFp) may be relatively insensitive to the actual intensity of muscle contraction over a wide range of intensities. On the other hand, the rate of MNF decrease (time constant τ), and therefore the time needed to reach the plateau, depends greatly on the intensity of muscle contraction (refs.). With the progression of fatigue the muscle gets closer to its endurance level, represented by f_MNFp. If the value of this parameter in the model described by equation 1 can be estimated reliably, and if it exhibits a high degree of repeatability over various intensities of contraction, it could be used as a normalization criterion for the extent of muscle fatigue. In order to reach the plateau level of MNF (the endurance level), the muscle in question has to be subjected to long sustained contractions. However, the endurance level may also be estimated by f_MNFp from the model in equation 1 without the muscle actually reaching the endurance level, which is less exhausting for the subject. The main question then is how long the EMG has to be recorded during a sustained isometric contraction in order to arrive at an accurate estimate of the endurance level based on the fitted model. The main goal of the present study was therefore to develop a criterion for deciding whether a sustained isometric contraction was long enough for reliable determination of the endurance level.

II. METHODS

A. Measurements

Subjects: Eleven healthy male volunteers (22.3 ± 2.7 years of age) were involved in this preliminary study. For two days prior to the experiment they were requested to refrain from any intensive strength training.
Prior to the experiment they were given a full explanation of the study, and all signed a consent form to participate. Testing protocol: The protocol consisted of a series of three consecutive isometric contractions of the triceps
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1009–1012, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
brachii muscle at the maximum voluntary effort level. The idea was to approach the endurance level as quickly as possible, based on the assumption that the endurance level (the plateau level of MNF) does not depend on the actual intensity level used to reach it. The duration of each bout of muscle contraction was 75 seconds, followed by 60 seconds of rest. A specific body position emphasizing the functional role of the triceps muscle was chosen for the study. The subject was positioned horizontally, face down on a cushioned table, with the upper arm abducted by 90 degrees. The forearm was left to hang down off the table freely, and a soft hand cuff connected to a fixed length of rope was attached to the wrist. The contraction of the triceps muscle was performed with the forearm flexed by 30 degrees at the elbow. Collection of the EMG data: EMG signals from m. triceps brachii (TB) were collected during isometric contraction. The electrodes on the long head of the TB muscle were placed in accordance with the SENIAM recommendations. The skin was shaved, rubbed with sandpaper and cleaned with alcohol so that the inter-electrode resistance did not exceed 5 kOhm. Bipolar Ag-AgCl skin electrodes (9 mm diameter; Hellige, Freiburg, Germany) with an inter-electrode distance of 20 mm were used. The ground electrode was positioned on the elbow. Data were recorded using DASYLab (Dasytec, Amherst, NH) with a sampling frequency of 1000 Hz and afterwards analyzed with MATLAB (The MathWorks, Inc., Natick, MA).

B. Analyzing the data

Signal processing: The raw EMG signals were first filtered using a 5th-order Butterworth band-pass filter with lower and upper cut-off frequencies set to 10 and 500 Hz respectively. The optimal length of the data window used for frequency analysis had been determined previously (data not shown) and was set to 250 ms (250 samples).
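As a rough sketch of this preprocessing step (the study used DASYLab and MATLAB, not Python, and all function names here are my own), the filtering and windowing could look like the following. Note that with a 1000 Hz sampling rate the stated 500 Hz upper cut-off coincides with the Nyquist frequency, so in a digital implementation it has to be set just below it:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000   # sampling frequency used in the study (Hz)
WIN = 250   # analysis window length in samples (250 ms)

def preprocess_emg(raw, fs=FS):
    """Zero-phase band-pass filter of the raw EMG (10-500 Hz as in the paper).

    The upper edge is placed at 499 Hz because a digital Butterworth design
    requires the cut-off to be strictly below the Nyquist frequency (fs/2).
    """
    b, a = butter(5, [10 / (fs / 2), 499 / (fs / 2)], btype="band")
    return filtfilt(b, a, raw)

def segment(emg, win=WIN):
    """Split the filtered signal into non-overlapping 250-sample windows."""
    n = len(emg) // win
    return emg[: n * win].reshape(n, win)
```

The zero-phase `filtfilt` call is a convenience choice for offline analysis; a causal filter would do equally well for the spectral estimates used here.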
The power spectral density was estimated using the periodogram method on non-overlapping 250 ms segments derived from the filtered EMG signal. Before the calculation of the Fourier transform the segments were zero-padded to a total length of one second. The mean frequency of the PSD (MNF) was calculated for each data segment according to:

    f_MNF = ( ∫_0^(fs/2) f · P(f) df ) / ( ∫_0^(fs/2) P(f) df ),    (2)
where P(f) represents the PSD estimate and fs is the sampling frequency. By default, the MNF values belonging to the initial 5 seconds of the contraction were discarded to avoid the part of the signal where the maximum muscle force was still being established. The remaining MNF values were plotted as a function of time for visualization of the results. A model according to equation 1 was fitted to the whole MNF sequence thus obtained, and the value of parameter fMNFp was used as the estimate of the endurance level, labeled MNFp0. Ten sets of MNF data in which the plateau level of MNF was clearly reached were selected for further analysis. The model in equation 1 was then fitted to three progressively shortened versions of the original sequence of MNF values, and three new values of fMNFp were thus obtained. The cut-off points for the truncated MNF sequences were arbitrarily set based on the value of the derivative of the curve fitted to the original non-truncated MNF sequence. These values were 0.03, 0.05 and 0.1 Hz/s. The estimates of the endurance level obtained by fitting the exponential model to the truncated MNF sequences were labeled MNFp0.03, MNFp0.05 and MNFp0.1. The error in the estimation of the endurance level based on a truncated MNF sequence was defined as (for example):
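The whole pipeline — MNF per segment (equation 2), the exponential fit (equation 1) and the derivative-based cut-off — can be sketched as follows. This is an illustration on synthetic data, not the authors' code; the numbers in the example (endurance level 50 Hz, initial MNF 80 Hz, τ = 20 s) are assumptions chosen only to make the sketch runnable:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import periodogram

FS = 1000  # sampling frequency (Hz)

def mean_frequency(seg, fs=FS):
    """MNF of one 250 ms segment, zero-padded to 1 s before the FFT (equation 2)."""
    f, p = periodogram(seg, fs=fs, nfft=fs)  # nfft = 1000 -> zero-padding to 1 s, 1 Hz bins
    return np.sum(f * p) / np.sum(p)

def mnf_model(t, f_p, a, tau):
    """Exponential decay of MNF towards the endurance level f_p (equation 1)."""
    return f_p + a * np.exp(-t / tau)

def fit_endurance_level(t, mnf):
    """Least-squares fit of equation 1; returns (f_MNFp, a, tau)."""
    p0 = (mnf.min(), mnf.max() - mnf.min(), 10.0)  # rough starting values
    popt, _ = curve_fit(mnf_model, t, mnf, p0=p0)
    return popt

def cutoff_time(a, tau, thr):
    """Time at which the magnitude of the fitted curve's derivative,
    (a/tau)*exp(-t/tau), drops to thr Hz/s (thr = 0.1, 0.05 or 0.03)."""
    return tau * np.log(a / (tau * thr))

# Synthetic MNF sequence: one estimate per 250 ms window, first 5 s discarded.
t = np.arange(5.0, 75.0, 0.25)
mnf = mnf_model(t, 50.0, 30.0, 20.0) + np.random.default_rng(0).normal(0.0, 1.0, t.size)
f_p, a, tau = fit_endurance_level(t, mnf)
```

Truncating the sequence at `cutoff_time(a, tau, 0.1)`, `0.05` or `0.03` and refitting then reproduces the three progressively shortened estimates described in the text.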
    err_p0.05 (%) = (MNFp0.05 − MNFp0) / MNFp0 · 100    (3)
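Equation (3) translates directly into a small helper (a sketch only; the function name is my own):

```python
def err_percent(mnfp_trunc, mnfp0):
    """Relative error (equation 3) of an endurance-level estimate obtained from a
    truncated MNF sequence, with respect to the reference estimate MNFp0."""
    return (mnfp_trunc - mnfp0) / mnfp0 * 100.0
```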
Finally, the MNFp values obtained in three consecutive isometric contractions, and with respect to the different criteria for selection of the analyzed data, were tested for repeatability. Statistical methods: Differences between endurance-level estimates obtained by different methods, and the repeatability of the results, were evaluated using the paired t-test. Correlations were evaluated by Pearson's correlation coefficient.

III. RESULTS AND DISCUSSION

Figure 1 presents an example of the original and truncated MNF estimate sequences along with the corresponding fitted curves. Figure 1a shows that 75 seconds of sustained isometric contraction at the maximum voluntary effort level was enough to reach a relatively stable plateau level of the mean frequency of the PSD. The value of this plateau, the so-called endurance level MNFp0, was estimated from the fitted curve and was considered to be the reference (true) endurance level for this subject. Fitting the same exponential model to the truncated MNF sequences in Figures 1b to 1d resulted in different curves and thus in different estimates of the endurance level. Table 1 contains the reference values MNFp0 and the error in the estimate of the endurance level based on progressively truncated MNF sequences
Table 1 Differences in MNFp with respect to the original signal. S1–S10: selected curves. MNFp0 – original fitted curve; MNF003 – data cut off at dy(t)/dt ≤ 0.03, err003 – error with respect to MNFp0; MNF005 – data cut off at dy(t)/dt ≤ 0.05, err005 – error with respect to MNFp0; MNF01 – data cut off at dy(t)/dt ≤ 0.1, err01 – error with respect to MNFp0
       MNFp0 (Hz)  MNF003 (Hz)  err0.03 %  MNF005 (Hz)  err0.05 %  MNF01 (Hz)  err0.1 %
S1     75,3        74,0         -1,61      73,8         -1,91      68,9        -8,49
S2     48,9        46,1         -5,56      39,7         -18,83     33,5        -31,36
S3     49,8        50,3         0,97       49,4         -0,83      33,7        -32,33
S4     41,7        41,7         0,11       42,6         2,31       44,6        6,99
S5     53,0        53,1         0,21       51,5         -2,95      50,0        -5,78
S6     65,5        64,4         -1,79      64,0         -2,28      64,7        -1,35
S7     60,7        60,4         -0,51      58,0         -4,38      52,6        -13,38
S8     49,0        47,3         -3,50      45,4         -7,38      44,9        -8,29
S9     50,5        47,4         -6,18      42,7         -15,5      45,1        -10,74
S10    53,3        50,5         -5,32      50,0         -6,17      51,0        -4,39
me     54,8        53,5         2,58       51,7         6,25       48,9        10,81
SD     9,8         9,9          2,36       10,7         6,14       11,5        12,31
p                  0,016                   0,014                   0,013

Fig. 1 MNF plots (MPF in Hz vs. time in s) of the original signal (a) and of the data truncated at dy(t)/dt ≤ 0.1 (b), ≤ 0.05 (c) and ≤ 0.03 (d)

Table 2 Correlation matrix between the MNFp values of the cut-off data. MNFP – original fitted curve; MNF01 – data cut off at dy(t)/dt ≤ 0.1; MNF005 – data cut off at dy(t)/dt ≤ 0.05; MNF003 – data cut off at dy(t)/dt ≤ 0.03

         MNF01      MNFP       MNF005
MNF01    1
MNFP     ,852(**)   1
MNF005   ,956(**)   ,880(**)   1
MNF003   ,991(**)   ,846(**)   ,980(**)
for 10 subjects. It is clear that if the plateau is not reached during the exercise, the estimate of the endurance level deviates from the true value, and this deviation becomes greater with further reduction of the experimental data. Table 2 contains the results of the correlation analysis of the endurance-level estimates. The high level of correlation indicates that the endurance-level estimates were affected in a similar manner by the truncation of the MNF sequence in most of the subjects. Table 3 shows the comparison of the MNFp0 values for the second and third 75-second muscle contractions. Note that the first series of contractions was not considered, because most subjects did not reach the plateau level of MNF during the first contraction, which prevented a reliable estimation of the endurance level. Significant differences were found between the MNFp values obtained during the 2nd and 3rd series of contractions when the contractions of all 11 subjects (without any restrictions) were analyzed. For further comparison of the MNFp values we selected the appropriate subjects according to the derivative criteria described above (dy(t)/dt < 0.1, 0.05, 0.03 Hz/s). This led to a reduced number of subjects (n) considered in this part of the study. However, significant differences were no longer found, showing that the MNFp values of the two consecutive series of contractions were not different. Note that the difference of
the average absolute values of MNFp of the two series (in Hz) is progressively smaller as the derivative criterion becomes more severe.

Table 3 Comparison of MNFp (Hz) of the 2nd (SE2) and 3rd (SE3) consecutive series of isometric contractions between the original data and the truncated data (dy(t)/dt ≤ 0.1, 0.05, 0.03). av – average, SD – standard deviation, n – number of analyzed contractions, p – statistical significance

Series pair       av (Hz)        SD            n     p
SE2 / SE3         57,3 / 62,9    9,5 / 11,5    11    0,044
SE201 / SE301     59,3 / 62,1    9,4 / 12,3    9     0,077
SE2005 / SE3005   55,3 / 56,8    6,0 / 7,7     7     0,314
SE2003 / SE3003   56,5 / 57,9    5,6 / 7,8     6     0,422

IV. CONCLUSIONS
In the present study we have shown that if the muscle contraction is sustained for long enough so that the fitted exponential curve fulfills certain predefined derivative criteria as described previously in the text, the data could be used for reliable estimation of the MNFp without the experimental MNF data actually reaching the plateau level. The optimal protocol for data collection would therefore involve the on-line MNF calculation and exponential model fitting. This way the duration of exhaustive muscle contraction would be minimized while retaining the required quality of MNFp estimation.
REFERENCES

1. Komi P V, Tesch P (1979) EMG frequency spectrum, muscle structure and fatigue during dynamic contractions in man. Eur J Appl Physiol 42:41-50
2. Basmajian J V, De Luca C J (1985) Muscles Alive: Their Functions Revealed by Electromyography. Williams and Wilkins, Baltimore
3. Lindstrom L H, Magnusson R I (1977) Interpretation of myoelectric power spectra: a model and its applications. Proc IEEE 65:653-662
4. Masuda K et al. (1999) Changes in surface EMG parameters during static and dynamic fatiguing contractions. J Electromyogr Kinesiol 9:39-46
5. Moritani T et al. (1985) Interrelationships among muscle fiber types, electromyogram and blood pressure during fatiguing isometric contraction. In: Winter D et al. (Eds) Biomechanics. Human Kinetics Publishers, Champaign, Illinois
6. Gerdle B, Karlsson S, Crenshaw A G, Fridén J (1997) The relationship between EMG and muscle morphology throughout fatiguing static knee extension at two force levels in the unfatigued and the fatigued states. Acta Physiol Scand 160:341-352
7. Merletti R, Rainoldi A, Farina D (2004) Myoelectric manifestations of muscle fatigue. In: Merletti R, Parker P A (Eds) Electromyography. IEEE Press, New Jersey
8. Gerdle B, Eriksson N E, Hagberg C (1988) Changes in the surface electromyogram during increasing isometric shoulder forward flexions. Eur J Appl Physiol Occup Physiol 57:404-408
9. Larsson B, Karlsson S, Eriksson M, Gerdle B (2003) Test-retest reliability of EMG and peak torque during repetitive maximum concentric knee extensions. J Electromyogr Kinesiol 13:281-287

Author: Igor Stirn
Institute: University of Ljubljana, Faculty of Sport
Street: Gortanova 22
City: Ljubljana
Country: Slovenia
Email: [email protected]
Telemonitoring of the step detection: toward two investigations based on different wearable sensors?

G. Maccioni, V. Macellari and D. Giansanti

Dipartimento di Tecnologie e Salute, Istituto Superiore di Sanità, Roma
Abstract— The number of steps per time period is an important ambulatory measure describing an individual's locomotor function, with implications for psychological and physical health. Step counting is today widely used, especially for the prevention of obesity, in cardiological prevention and rehabilitation, and in diabetology. The accuracy of pedometers has been tested in the literature, and several limits of accuracy have been found for many of the commercial devices even in the case of healthy subjects; furthermore, these systems can be confused in the case of subjects with pathologies affecting balance, such as Parkinson's disease. A system for monitoring the step count should be highly accurate and should not alter performance by changing the subject's motion ability. In this paper we introduce two novel wearable devices for monitoring the step count. The first is based on an accelerometer, the second on a force sensing resistor (FSR) monitoring the muscular expansion during gait. Both systems were affixed at the calf, at the gastrocnemius level, an optimal position both for the muscular expansion and biomechanically. Preliminary trials showed that the FSR-based solution performed better. Keywords— step counting, telemonitoring, pedometers, telemedicine
I. INTRODUCTION

The number of steps per time period is an important ambulatory measure describing an individual's locomotor function, with implications for psychological and physical health. Key applications in neurology, psychiatry, psychopharmacology, and in sports, behaviour and rehabilitation medicine make it desirable to improve step-detecting devices. Step counting is today widely used, especially for the prevention of obesity, in cardiological prevention and rehabilitation, and in diabetology. One of the most widespread wearable systems designed for this purpose is the pedometer, which counts the steps during walking or running. It is usually worn at belt level, indifferently on the left or on the right side. It is based on a mechanical lever sensitive to movements along the vertical axis; at each step this lever activates a gear that increases the count. These pedometers have been widely used in the literature. The accuracy of the pedometers has been tested in
the literature, and several limits have been found for many of the commercial devices. Schneider et al. found, in a trial over about 10000 steps, that commercial pedometers were not accurate even in the case of healthy subjects [4]. In particular, the authors showed that some pedometers overestimated the measurement by about 45% and others underestimated it by about 25%. Furthermore, the measurement deteriorates as the subject's impairment increases; as indicated by Keenan and Wilhelm, in the case of Parkinson's disease the non-fluid movement may confuse the pedometer [5]. New, accurate solutions should therefore be investigated for the monitoring of subjects with a high degree of impairment. The aim of this paper was to investigate new solutions for the telemonitoring of the step count based on the optimization of the biomechanical position of the affixation and of the sensor to be used.

II. MATERIALS AND METHODS

The study identified: 1) two different types of sensors for the monitoring; 2) an optimized affixation position for the monitoring of the step count, common to both types of sensors.

A. Sensor solutions used for the monitoring

1) The first solution is based on accelerometers and uses the accelerometer (3031-Euro Sensors, US) with its axis oriented in the direction of motion (Fig. 1 A).
2) The second solution is based on a device with a force sensing resistor (FSR) (Interlink, USA) with a band to be affixed on a muscle, to monitor the muscular expansion during gait (Fig. 1 B).
Both solutions transmit to a homecare RX/TX unit based on the XTR-434H module (Aurel, USA) (Fig. 1 B).

B. Position optimized for the affixation

As the affixation point we referred to the vertical slice individuated by the vertex of the pit between the two gastrocnemius muscles (figure 1 C).

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1006–1008, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

This position has already been used by Lyons et al. [8, 9] for affixing the EMG sensor to assess muscular activity when monitoring the calf pump activity. It is an optimal position for the accelerometer, since it is an optimal biomechanical position for the monitoring of acceleration [11], especially compared to the belt-level position used for commercial pedometers. It is also an optimal position for monitoring the muscular expansion by means of the FSR-based solution. In fact the calf muscular expansion, called the triceps pump, is one of the most intense and representative muscular pump activities and is strongly related to the gait phases. From a functional point of view the triceps functions as an aspiration and compression pump. The activity of this pump is directly related to the biomechanics of the tibio-tarsal joint during flexion-extension, and monitoring its two phases furnishes accurate information about the gait, each cycle of the two phases marking a step:
A. Muscular distension: the blood is aspirated into the venous cavities during this phase.
B. Muscular contraction: the centripetal emptying of the venous cavities takes place during this phase.
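The two-phase distension/contraction pattern suggests a simple detection rule. As a hedged sketch (my own illustration, not the authors' algorithm; the thresholds and the synthetic signal are assumptions), a step can be counted once per contraction/distension cycle using two hysteresis thresholds on the expansion signal:

```python
import numpy as np

def count_steps(signal, high, low):
    """Count steps as contraction/distension cycles of a calf-expansion signal.

    A step is registered when the signal rises above `high` (muscular
    contraction) after previously having fallen below `low` (muscular
    distension). The hysteresis gap between the two thresholds rejects
    small fluctuations around a single threshold.
    """
    steps, armed = 0, True
    for x in signal:
        if armed and x > high:
            steps += 1
            armed = False        # wait for the distension phase before re-arming
        elif not armed and x < low:
            armed = True
    return steps

# Synthetic expansion signal: 100 gait cycles with mild sensor noise.
t = np.linspace(0.0, 100.0, 10000)
expansion = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
```

With `count_steps(expansion, 0.5, -0.5)` each of the 100 cycles triggers exactly one count, because the noise never bridges the gap between the two thresholds.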
Fig. 1 A Sensor unit based on the accelerometer (3031-Euro Sensors, US)
III. PRELIMINARY RESULTS

Both systems were validated on a healthy subject, with visual observation as reference, using the following protocol:
A) 5 trials of 100 steps with a fast gait.
B) 5 trials of 100 steps with a normal gait.
C) 5 trials of 100 steps with a slow gait.
Fig. 1 B Sensor unit based on the FSR (Interlink, USA) with the RX/TX homecare unit
The telemetric system based on the RX/TX module did not show failures. The system based on the FSR did not show errors. The system based on the accelerometer did not show errors in the normal-gait trials; in the fast trials it showed an error lower than 1%, and in the slow trials an error of about 2%, showing that smaller displacements can confound the accelerometer, as expected from theory [7].

IV. DISCUSSION

The number of steps per time period is an important ambulatory measure describing an individual's locomotor function, with implications for psychological and physical health. Step counting is today widely used, especially for the prevention of obesity, in cardiological prevention and rehabilitation, and in diabetology [1-3]. The accuracy of the pedometers has been tested in the literature, and
Fig. 1 C Position of the affixation

several limits of accuracy have been found for many of the commercial devices even in the case of healthy subjects [4]; furthermore, these systems can be confused in the case of subjects with pathologies affecting balance, such as Parkinson's disease [5]. A system for monitoring the step count should be highly accurate and should not alter performance by changing
the subject's motion ability. In this paper we have introduced two novel wearable devices for monitoring the step count, the first based on an accelerometer, the second on a force sensing resistor. Both systems were affixed at the calf, at the gastrocnemius level, an optimal position both for the muscular expansion and biomechanically. Preliminary trials showed that the FSR-based solution performed better. The next step (figure 2) will be the set-up of an environment for the comparison of the FSR-based system against a golden standard and on a wide range of subjects. The choice of a system as golden standard should consider the position of the scientific literature, the accuracy and the de-facto standard. A system useful as golden standard could be, for example, the Biometrics (Biometrics, UK) system: it is easily wearable, accurate and not expensive. Furthermore a similar system, the VivoMetrics, has already been used successfully for testing the accuracy of pedometers [5], demonstrating that such systems still represent a de-facto golden standard. The subjects to be involved in the test could be selected on the basis of their impairment as assessed, for example, by means of a methodology well established in the literature, such as the Tinetti test [10].

Fig. 2 Future work: test of the device based on the FSR sensor against a golden standard; selection of groups of subjects by means of the Tinetti test, well standardized in the literature

REFERENCES

1. Eisenmann J C, Laurson K R, Wickel E E, Gentile D, Walsh D (2007) Utility of pedometer step recommendations for predicting overweight in children. Int J Obes (Lond), Jan 30 [Epub ahead of print]
2. Dasgupta K, Chan C, Da Costa D, Pilote L, De Civita M, Ross N, Strachan I, Sigal R, Joseph L (2007) Walking behaviour and glycemic control in type 2 diabetes: seasonal and gender differences - study design and methods. Cardiovasc Diabetol 6:1
3. Albright C, Thompson D L (2006) The effectiveness of walking in preventing cardiovascular disease in women: a review of the current literature. J Womens Health (Larchmt) 15(3):271-280
4. Schneider P L, Crouter S E, Bassett D R (2004) Pedometer measures of free-living physical activity: comparison of 13 models. Med Sci Sports Exerc 36(2):331-335
5. Keenan D B, Wilhelm F H (2005) Classification of locomotor activity by acceleration measurement: validation in Parkinson disease. Biomed Sci Instrum 41:329-334
6. Tudor-Locke C, Williams J E, Reis J P, Pluto D (2002) Utility of pedometers for assessing physical activity: convergent validity. Sports Med 32(12):795-808
7. Giansanti D, Maccioni G (2005) Comparison of three different kinematic sensor assemblies for the locomotion study. Physiological Measurement 26:689-705
8. O'Donovan K J, O'Keeffe D T, Grace P A, Lyons G M (2005) Accelerometer based calf muscle pump activity monitoring. Medical Engineering & Physics 27(8):717-722
9. Lyons G M, Leane G E, Clarke-Moloney M, O'Brien J V, Grace P A. An investigation of the effect of electrode size and electrode location on comfort during stimulation of the gastrocnemius muscle
10. Kandel E R, Schwartz J H, Jessell T M (2000) Principi di neuroscienze (book). Ed. CEA
11. Cappozzo A (1982) Head and trunk mechanics in level walking. Ph.D. Thesis, University of Strathclyde, Glasgow, UK

Author: Daniele Giansanti
Institute: Istituto Superiore di Sanità
Street: via Regina Elena 299
City: 00161 Roma
Country: Italy
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
The Influence of Reduced Breathing During Incremental Bicycle Exercise on Some Ventilatory and Gas Exchange Parameters
J. Kapus, A. Usaj, V. Kapus, B. Strumbelj
Laboratory of Biodynamics, Faculty of Sport, University of Ljubljana, Slovenia

Abstract – The purpose of this study was to examine the ventilatory, gas exchange, oxygen saturation and heart rate responses to reduced breathing frequency during incremental bicycle exercise. Eight healthy male subjects performed an incremental bicycle exercise test on an electromagnetically braked cycle ergometer twice: first with spontaneous breathing (SB), and second with reduced breathing frequency (B10), defined as 10 breaths per minute. As work rates increased, significantly higher VE, Vco2 and R were measured during the exercise with SB than with B10. Consequently, PETco2 and PETo2 were higher and lower, respectively, during the exercise with B10 than with SB at 150 W. In addition, HR was greater during the exercise with SB than with B10; significant differences were reached at 90, 120 and 150 W. However, Vo2 showed no significant difference between the two breathing conditions. In summary, reduced breathing frequency during incremental bicycle exercise decreased VE and consequently decreased So2 and increased PETco2. However, this degree of breathing reduction appeared not to influence aerobic metabolism, as Vo2 was unchanged.
Keywords – reduced breathing frequency, incremental bicycle exercise

I. INTRODUCTION

During front crawl swimming, swimmers can use different breathing patterns. They usually take a breath every second stroke cycle, but they can reduce the breathing frequency by taking a breath every fourth, fifth, sixth or eighth stroke cycle. Swimming training with reduced breathing frequency is often referred to as "hypoxic training". It was thought that limiting inspired air would reduce the oxygen available for muscular work and therefore cause muscle hypoxia, similar to that experienced at altitude [1]. Previous studies demonstrated that a reduced breathing frequency during exercise elicits a decrease in pulmonary ventilation (VE) with a concomitant increase in tidal volume (VT) [2,3,4,5,6,7], as well as systematic hypercapnia. The latter was determined by analysing expired air during exercise [2,3,5,6,7] and by measuring capillary blood sampled during [4] and after exercise [8]. However, data on the influence of this kind of breathing during exercise on other metabolic parameters (O2 consumption – Vo2, CO2 production – Vco2, respiratory gas exchange ratio – R, arterial O2 saturation – So2, heart rate – HR) are scarce. It seems that the degree of breathing restriction, the lung volume during breath holding, the exercise intensity and the type of exercise (swimming, cycling, running) are important factors determining the subjects' response to reduced breathing frequency during exercise. For the restricted breathing condition, this study therefore used an incremental exercise protocol (multiple exercise intensities) and a breathing pattern known to significantly influence performance [4,7]. The purpose of this study was thus to examine the ventilatory, gas exchange, oxygen saturation and heart rate responses to reduced breathing frequency during incremental bicycle exercise.

II. METHODS

A. Subjects

Eight healthy male subjects (age 24 ± 2 years, height 180 ± 5 cm, weight 80 ± 7 kg and Vo2peak 42 ± 4 ml/kg/min) volunteered to participate in this study.

B. Procedures

The subjects performed an incremental bicycle exercise test on an electromagnetically braked cycle ergometer (Ergometrics 900, Ergoline, Germany) twice: first with spontaneous breathing (SB), and second with reduced breathing frequency (B10), defined as 10 breaths per minute. The reduced breathing frequency was regulated by a breathing metronome, composed of a gas service solenoid valve (Jaksa, Slovenia) and a semaphore with red and green lights, both controlled by a LOGO micro automation controller (Siemens, Germany). The subjects were instructed to expire and inspire during the 2-second period when the solenoid valve was open (green light on) and to hold their breath, using almost all lung capacity, during the 4 seconds when the valve was closed (red light on); the 6-second cycle thus yields 10 breaths per minute. Prior to the exercise testing, each subject was familiarized with breathing through the metronome. The protocol of the incremental exercise test consisted of measuring a baseline
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 994–997, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
followed by progressive exercise in which the work rate increased by 30 W every 2 minutes until volitional exhaustion. Pedalling frequency, digitally displayed to the subjects, was kept at ~60 revolutions per minute (rpm).

C. Instruments

During the incremental bicycle exercise test, the subjects breathed through a mouthpiece attached to a turbine device. The subjects' respired gas was sampled continuously by a VMAX 29 metabolic cart (SensorMedics, USA) for breath-by-breath determination of metabolic and ventilatory variables. The turbine device and the O2 and CO2 analysers were calibrated prior to the test with a standard 3-l syringe and precision reference gases, respectively. So2 was estimated using a TruStat pulse oximeter (Datex – Ohmeda, USA). The ear probe was attached to the earlobe after cleaning the area with alcohol. HR was measured continuously by a PE3000 monitor (Polar Electro, Finland).

D. Data analysis

Exercise values of VE, VT, Vo2, Vco2, R, So2 and HR were calculated as 60-s averages during the second minute of each work stage. The values are presented as means (M) ± standard deviations (SD). The paired t-test was used to compare the data between the incremental bicycle exercises in the two breathing conditions.

III. RESULTS

As shown in Table 1, 150 W was the last work stage of the incremental bicycle exercise with B10 that all subjects completed. Therefore, the data obtained from rest to the work stage at 150 W were used to compare the

Table 1 Wpeak obtained at incremental bicycle exercise with SB and with B10 (** - p<0.01)

Subject    SB (W)     B10 (W)
1          270        180
2          300        150
3          270        240
4          300        180
5          300        180
6          300        210
7          330        180
8          240        150
M (SD)     289 ± 27   184 ± 30 **
exercises in the two breathing conditions. The results for the breathing parameters, So2 and HR are given in the following figures (data up to the dashed line were included in the statistical analysis). Fig. 1 demonstrates that, as work rates increased, significantly higher VE, Vco2 and R were measured during the exercise with SB than with B10 (p<0.05; p<0.01). Consequently, PETco2 and PETo2 were higher and lower, respectively, during the exercise with B10 than with SB at 150 W (p<0.05). In addition, HR was greater during the exercise with SB than with B10; significant differences were reached at 90 W, 120 W (p<0.05) and at 150 W (p<0.01). However, Vo2 showed no significant difference between the exercises in the two breathing conditions.

IV. DISCUSSION

In the present study, the reduced breathing frequency decreased the subjects' performance in the incremental bicycle exercise by 36%. Despite insignificantly higher VT throughout the incremental bicycle exercise with B10, the reduced breathing frequency induced significantly lower VE from 90 W to the end of the test in comparison with the spontaneous breathing condition. A reduction of approximately 43% in VE was measured at 150 W, the last statistically analysed work stage. This reduction in VE is in accordance with the results of previous studies in which 10 breaths per minute was used for the reduced breathing condition [7]. Furthermore, a similar reduction in VE was reported when swimmers reduced their breathing frequency from the usual breath every second stroke cycle to a breath every fifth [2] or sixth [5] stroke cycle. The marked hypoventilation at 150 W with B10 induced higher PETco2 and lower PETo2 and So2 in comparison with the SB condition. These results are similar to those of previous studies in which dry-land activities (cycling on a cycle ergometer [4,7], treadmill running [9] or arm crank ergometer exercise [10]) were used as the experimental exercise. On the contrary, hypoxia (determined by analysing expired air during swimming [2,3,5] and by measuring capillary blood sampled after swimming [8]) has not yet been proven to result from reduced breathing during swimming. The reason could be the technical limitations of measuring So2 directly during swimming. Recently, Miyasaka, Suzuki and Miyasaka (2002) succeeded in measuring So2 during sprint swimming [11]; severe arterial desaturation was detected due to the hypoventilation during the swimming.
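As a sanity check, the Wpeak drop reported in Table 1 can be reproduced with a hand-rolled paired t-test (a plain-Python sketch; the threshold 3.499 is the standard two-tailed critical value for df = 7 at p = 0.01, taken from t-tables, not from the paper):

```python
import math

sb  = [270, 300, 270, 300, 300, 300, 330, 240]  # Wpeak with SB (W), Table 1
b10 = [180, 150, 240, 180, 180, 210, 180, 150]  # Wpeak with B10 (W), Table 1

diffs = [a - b for a, b in zip(sb, b10)]
n = len(diffs)
mean_d = sum(diffs) / n                                    # mean paired difference
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))                         # paired t statistic, df = n - 1

print(round(mean_d), round(t, 2))
```

The mean difference of 105 W gives t ≈ 7.56, far above the p = 0.01 critical value, consistent with the ** marking in Table 1.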
[Figure 1 appears here: nine panels plotting VE, VT, Vo2, Vco2, R, PETo2, PETco2, So2 and HR against work rate (0-330 W)]

Fig. 1 Relationship of VE, VT, Vo2, Vco2, R, PETo2, PETco2, So2 and HR to work rate during incremental bicycle exercise with SB (open circles) and with B10 (closed triangles) (* - p<0.05; ** - p<0.01)
Although So2 was lower at 150 W with B10 than with SB, there were no significant differences in Vo2 between the two breathing conditions. These results indicate that aerobic metabolism was not impeded by the reduced breathing frequency during the incremental bicycle exercise. According to these results, it could be argued that training with reduced breathing frequency elicits
aerobic adaptations similar to those of high altitude training [4]. On the contrary, in some previous studies the reduced breathing frequency during exercise induced lower Vo2 in comparison with the spontaneous breathing condition [3,6,7,10]. It seems that a higher degree of breathing reduction would be needed to decrease Vo2. However, it is doubtful whether the subjects could maintain a more reduced breathing pattern during exercise, given the distress they already reported at the end of the incremental bicycle exercise with B10. The lower Vco2 and consequently lower R at the last three completed work stages during the incremental bicycle exercise with B10 agree with the results of previous studies [5,6,12]. On the basis of such results, Lee, Cordain, Sockler and Tucker (1990) suggested that CO2 is retained in muscle, plasma and erythrocytes during exercise with reduced breathing frequency [12]. In addition, data on the influence of reduced breathing frequency during exercise on HR are scarce: reduced breathing frequency has been reported to increase [7], decrease [3,6] or leave unchanged [2,3,5,6] the HR in comparison with spontaneous breathing during exercise. Consequently, it has been suggested that HR targets (typically used for determining training intensity) should be adjusted when this kind of breathing is used during testing [6]. In summary, the reduced breathing frequency (10 breaths per minute) during the incremental bicycle exercise decreased VE and consequently decreased So2 and increased PETco2. However, this degree of breathing reduction appeared not to influence aerobic metabolism, as Vo2 was unchanged.
REFERENCES

1. Kedrowski V (1979) Hypoxic training. Swim Tech 13:55-66
2. Dicker SG, Lofthus GK, Thornton NW, Brooks GA (1980) Respiratory and heart rate responses to controlled frequency breathing swimming. Med Sci Sports Exerc 1:20-23
3. Holmer I, Gullstrand L (1980) Physiological responses to swimming with a controlled frequency of breathing. Scand J Sports Sci 2:1-6
4. Sharp RL, Williams DJ, Bevan L (1991) Effects of controlled frequency breathing during exercise on blood gases and acid-base balance. Int J Sports Med 12:62-65
5. Town GP, Vanness JM (1990) Metabolic responses to controlled frequency breathing in competitive swimmers. Med Sci Sports Exerc 22:112-116
6. West SA, Drummond MJ, VanNess JM, Ciccolella ME (2005) Blood lactate and metabolic responses to controlled frequency breathing during graded swimming. J Strength Cond Res 19:772-777
7. Yamamoto Y, Mutoh Y, Kobayashi H, Miyashita M (1987) Effects of reduced frequency breathing on arterial hypoxemia during exercise. Eur J Appl Physiol 56:522-527
8. Kapus J, Ušaj A, Kapus V, Štrumbelj B (2003) The influence of reduced breathing during swimming on some respiratory and metabolic values in blood. KinSI 9:12-17
9. Matheson GO, McKenzie DC (1988) Breath holding during intense exercise: arterial blood gases, pH, and lactate. J Sports Med Phys Fitness 64:1947-1952
10. Stager JM, Cordain L, Malley J, Stickler J (1985) Arterial desaturation during arm exercise with controlled frequency breathing. Med Sci Sports Exerc 17:227
11. Miyasaka KW, Suzuki Y, Miyasaka K (2002) Unexpectedly severe hypoxia during sprint swimming. J Anesth 16:90-91
12. Lee C, Cordain L, Sockler J, Tucker A (1990) Metabolic consequences of reduced frequency breathing during submaximal exercise at moderate altitude. Eur J Appl Physiol 61:289-293

Author: Jernej Kapus
Institute: Laboratory of Biodynamics, Faculty of Sport, University of Ljubljana
Street: Gortanova 22
City: 1000 Ljubljana
Country: Slovenia
Email:
[email protected]
Breast Ultrasound Images Classification Using Morphometric Parameters Ordered by Mutual Information
A.V. Alvarenga1, J.L.R. Macrini2, W.C.A. Pereira1, C.E. Pedreira3 and A.F.C. Infantosi1
1 Biomedical Eng. Program/COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
2 Instituto de Estudos em Saúde Coletiva – IESC/UFRJ
3 School of Medicine and COPPE-PEE-Engineering Graduate Program/Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

Abstract— This work aims to assess the potential of Mutual Information (MI) for ordering morphometric parameters, according to their relevance, to classify breast ultrasound images. Seven parameters were calculated over the normalised radial length and convex polygons from 246 segmented tumour images. MI was calculated between each parameter and the outcome, and between the parameters themselves. MI indicated that two parameters had negligible relevance. A feedforward neural network with the seven parameters as input was implemented to classify the images. The best performance (accuracy = 88%) was obtained with the first five parameters, thus confirming the poor relevance of the last two.

Keywords— Mutual information, breast, ultrasonography, morphometric parameters, neural network.
I. INTRODUCTION

Mammography is the screening exam for early breast cancer diagnosis, which is essential for increasing therapy efficacy. Nevertheless, the specificity of mammography is still questionable, as a considerable number of suspicious solid masses are recommended for surgical biopsy [1] although only 10% to 30% of them are malignant [2]. Therefore, in addition to this exam, breast ultrasound (US) imaging has been used to improve diagnosis and reduce the number of unnecessary biopsies for patients with palpable masses and inconclusive mammograms [3-5]. Malignant breast tumours tend to present irregular or ill-defined contours due to infiltration of the surrounding tissue [6]. Thus, the possibility of formulating a diagnostic hypothesis from the tumour contour has been pointed out [3-6]. With this aim, several morphological parameters extracted from US images have been investigated. However, accurate quantification of such parameters is often difficult, due to the nature of the problem and to limitations intrinsic to the imaging process itself [7]; these can undermine the significance of the clinical findings derived from them. Therefore, the adequate set of parameters is still an open problem. Hence, multivariate linear statistical methods, such as Linear Discriminant Analysis or Principal Component Analysis, have been used to evaluate the discrimination potential of the parameters [8].
In this work, we apply a non-linear approach, based on Mutual Information (MI) [9], to establish the most relevant morphometric parameters, extracted from the Convex Polygon [10] and Normalized Radial Length [6] techniques, for discriminating breast tumours in ultrasound images. The selected parameters are then taken as input of a Neural Network used as a classifier for breast tumours.

II. MATERIAL AND METHODS

A. Database

The database consists of 246 breast tumour US images from patients of the National Cancer Institute (Brazil), acquired with a 7.5 MHz linear array B-mode ultrasound probe (Sonoline Sienna; Siemens, Erlangen, Germany) with axial and lateral resolutions of 0.45 mm and 0.49 mm, respectively. The sonograms depicted 177 malignant and 69 benign tumours, all histopathologically proven. The tumour contour was estimated by the semi-automatic contour procedure (SAC), based on morphological operators [11]. The result of this procedure is illustrated for a malignant breast tumour (Fig. 1). It is worth emphasising that no statistically significant difference was observed between SAC and the contours established by radiologists [12].

B. Morphometric Parameters

The Normalized Radial Length (NRL) is obtained as [6]:

    d(i) = √[(x(i) − X0)² + (y(i) − Y0)²] / max(d(i)),   1 ≤ i ≤ N,   (1)

[Figure 1 appears here]
Fig. 1. Example of (a) a malignant breast tumour on a US image and (b) its respective contour established by SAC

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1025–1029, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
where (X0, Y0) and (x(i), y(i)) are respectively the coordinates of the centroid and of the boundary pixel at the i-th location, N is the number of contour pixels and max(d(i)) is the maximum value of the radial length. Three parameters were calculated from d(i): the standard deviation (DNRL), the area ratio (RA) and the roughness index (R). The DNRL is defined as [6]:

    DNRL = √[ (1/(N−1)) Σ_{i=1}^{N} (d(i) − d̄)² ],   (2)

where d̄ is the average value of d(i). DNRL gives a measure of the contour's macroscopic irregularities [6]. The area ratio (RA), defined as:

    RA = (1/(d̄·N)) Σ_{i=1}^{N} (d(i) − d̄),   (3)

where

    d(i) − d̄ = 0   ∀ d(i) ≤ d̄,   (4)

computes the percentage of the tumour outside the circular region defined by d̄. The more irregular the contour, the higher the value of RA. The roughness index (R), defined as [6]:

    R = (1/N) Σ_{i=1}^{N} |d_n(i) − d_n(i+1)|,   (5)

gives the average distance between neighbour pixels over the entire contour. Irregular contours provide high values of the roughness index.

The convex polygon [13] is the geometric shape that circumscribes the contour established by SAC. The more irregular the contour, the more it differs from the convex polygon. This difference can be quantified using two parameters: the overlap ratio (RS) and the normalised residual mean square value (nrv). The parameter RS is defined by [2]:

    RS = Area(Sm ∩ So) / Area(Sm ∪ So),   (6)

with Sm the binary image determined from SAC and So the binary image of its respective convex image. The symbols ∩ and ∪ indicate the intersection and union of the areas, respectively. Therefore, if the areas have the same shape and size, and are in the same position, the overlap ratio is unity. The application of the parameter nrv is based on the determination of a residue Sr defined as [10]:

    Sr = Area(So) − Area(Sm).   (7)

If the areas are identical (shape and size) and in the same position, Sr = 0. nrv is defined as [10]:

    nrv = ψr² / ψo²,   (8)

where ψr² is the squared average value of Sr and ψo² is the squared average value of the contour perimeter Po. We have also tested ψo² as the squared average value of the area, but it resulted in less sensitivity [11].

Two other parameters were also calculated: the circularity (C) and the morphological-closing area ratio (Mshape). The former has been pointed out as an important parameter for the correct classification of breast tumours [6] and is defined as:

    C = P² / A,   (9)
where P is the perimeter and A the area of the SAC-segmented tumour. The perimeter was measured by summing the number of pixels on the tumour contour, and the area is the number of pixels inside the contour. Mshape is defined as the ratio between the Sm area and its morphological-closing area [10]. This morphological operator fills small holes and gaps [14] on the SAC-defined contour. By applying this operator, the morphological-closing area tends to be greater than the Sm area. Hence, the more irregular the contour, the smaller Mshape.

C. Mutual Information

Let us consider two random variables (RV), X and Y. The amount of information X gives about Y is the Mutual Information between these two RVs, expressed as:
    I(X, Y) = Σ_{x∈χ} Σ_{y∈γ} p(x, y) log [ p(x, y) / (p(x) p(y)) ],   (10)
where p(x,y) is the joint probability density of the RVs X and Y. Note that statistical independence between the variables, p(x,y) = p(x)p(y), implies I(X,Y) = 0. A large (or small) mutual information means that the variables are strongly (or weakly) related. Whereas correlation measures linear relationships, MI is more general, in the sense that it assesses statistical dependency. By measuring the MI between an attribute and the outcome of a process (or system), one evaluates how much (in a non-linear way) the outcome is affected by this particular attribute. In the same way, the MI between two attributes indicates their degree of redundancy. The main drawback of this idea is that one way of addressing redundancy would be to verify the MI for all combinations of the attribute candidates, which, in most applications, is an unfeasible task. A way out comes from an algorithm called MIFS-U (Mutual Information Feature Selector under Uniform Information Distribution) proposed by [15]. A brief description of the MIFS-U algorithm follows. Let F be the set of all (say n) attribute candidates.

Step 1 – Initialize a set S as empty and compute the MI between each attribute candidate (i.e. each fi ∈ F, i = 1, …, n) and the output label, call it I(fi; OUT).

Step 2 – Select the attribute, among all fi ∈ F, that maximizes I(fi; OUT), call it fk. Next, remove fk from F and put it in S, i.e., set F ← F − {fk} and S ← S ∪ {fk}.

Step 3 – Choose, among all features in F, the one that, jointly with the ones already in S, maximizes the MI with the output, i.e. choose an attribute fi ∈ F such that I(fi, S; OUT) is maximized.

Unfortunately, the calculation of I(fi, S; OUT) in Step 3 is quite hard, since it involves conditional probability maximization (finding the attribute that maximizes the MI with the output, given that a set of attributes has already been chosen). The MIFS-U algorithm proposes a simplification that gives a good approximation of I(fi, S; OUT), as detailed in [15]. This is done by taking into consideration not only the MI between the attribute and the outcome but also the MI among all attributes in the set S. Here, the attributes and the outcome are, respectively, the morphometric parameters previously presented and the histopathological classification of the breast tumours: malignant or benign. Besides, considering the relatively small number of available samples, the leave-one-case-out resampling method was carried out to ensure the reliability and effectiveness of the MI approach.

D. Artificial Neural Network

The parameters selected by MIFS-U were used as inputs of a Feedforward Neural Network (FNN). The choice of the number of hidden units may decisively impact performance.
Here, we employed Bayesian Regularization, as proposed in [16], which has turned out to be quite successful in a number of applications. An objective function is optimized to force the NN to be effectively pruned by vanishing irrelevant weights during the training phase. The fundamental idea is to induce a balance between complexity, reflected in the number of parameters (weights), and goodness of fit. We started the Bayesian Regularization algorithm with 1 hidden layer with 10 neurons. As output, we used a single-unit layer with a zero-or-one target to represent the two classes. The leave-one-out procedure was performed as an estimate of the out-of-sample performance, and the results were assessed considering sensitivity (Se), specificity (Sp) and accuracy (Ac) as figures of merit [17].
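The greedy MIFS-U ordering described in Section C can be sketched for discrete attributes as below. This is an illustrative simplification: the redundancy penalty [I(s;C)/H(s)]·I(f;s) follows the criterion of [15], but the histogram MI estimator, the toy data and β = 0.5 are choices made here, not taken from the paper.

```python
import math
from collections import Counter

def entropy(xs):
    # Shannon entropy (bits) estimated from discrete samples
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_info(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def mifs_u(features, label, beta=0.5):
    # Greedy MIFS-U ordering: at each step pick the feature maximizing
    # I(f;C) - beta * sum_{s in S} [I(s;C)/H(s)] * I(f;s)
    remaining, selected = list(range(len(features))), []
    while remaining:
        def score(i):
            penalty = sum(mutual_info(features[s], label) / entropy(features[s])
                          * mutual_info(features[i], features[s]) for s in selected)
            return mutual_info(features[i], label) - beta * penalty
        best = max(remaining, key=score)
        remaining.remove(best)
        selected.append(best)
    return selected

# Toy check: f0 predicts the label, f1 duplicates f0 (redundant), f2 is noise.
f0 = [0, 0, 0, 0, 1, 1, 1, 1]
f1 = list(f0)
f2 = [0, 1, 0, 1, 0, 1, 0, 1]
label = list(f0)
order = mifs_u([f0, f1, f2], label)
print(order)  # → [0, 1, 2]
```

The redundant copy f1 keeps its raw MI of 1 bit with the label, but its penalized score at the second step drops to 0.5 because all of its information is already carried by the selected f0.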
III. RESULTS AND DISCUSSION

Among all studied parameters, the normalised residual mean square value (nrv), which presented the highest MI with the outcome (0.369) (Table 1), was chosen by MIFS-U as the most relevant for distinguishing malignant from benign tumours. This performance may be associated with the ratio between the residual area (Sr), which is sensitive to the contour irregularities, and the reference contour defined by the convex polygon. The circularity, the second parameter selected by MIFS-U, presented the second highest MI with the outcome (0.280) (Table 1). Although nrv and C present a large MI between themselves (1.17) (Table 3), they were kept as the two main parameters (Table 2), presumably due to their high MI values with the output (Table 1). In this sense, although they share an extensive amount of information, there still exists an appreciable quantity of information in parameter C that is not comprised in nrv. In a previous work, nrv had already been cited as the most relevant parameter for distinguishing malignant from benign tumours, using a smaller database [10]. Besides, tumour circularity (C), ranked as the second best parameter, has been considered an important parameter for classifying breast tumours as malignant or benign [6].

The overlap ratio (RS) had the third best individual result, taking into account its MI with the outcome. However, RS was placed as only the fifth most important parameter by the MIFS-U ordination. This seems to occur because RS has relatively large MI values with the parameters nrv (0.96) and C (1.25) (Table 3). This means that, since MIFS-U takes the conditional probability into account, the information contained in RS is largely redundant with the information enclosed in nrv and C. On the other hand, Mshape, which presented the fourth best MI value (0.173) (Table 1), was upgraded to the third position by the MIFS-U selection (Table 2). This behaviour seems to be explained by its small MI values in relation to nrv (0.67) and C (0.60) (Table 3), indicating that Mshape may contain a considerable amount of information concerning the outcome that is not revealed either by nrv or by C. Among the parameters calculated from NRL, the best performance was achieved by the roughness (R), which is pointed out by MIFS-U as the fourth best parameter (Table 2); taking only the MI value into consideration, R is just the fifth best one (Table 1). The other parameters calculated from NRL, the area ratio (RA) and the standard deviation (DNRL), presented the worst performances regardless of the ordination scheme applied. In addition, they have the highest MI value (2.14) (Table 3) between each other. These results indicate that RA and DNRL seem to make a weak contribution to the classification, therefore suggesting a cut-off at the sixth parameter.
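For concreteness, the NRL-based descriptors and the circularity compared above can be computed from an ordered boundary as in the sketch below. This is an illustration, not the authors' implementation: the centroid is taken as the mean of the boundary points, and a near-circular contour is used as the least-irregular reference case, for which DNRL, RA and R vanish and C approaches the circular minimum 4π.

```python
import math

def nrl_descriptors(pts):
    # pts: list of (x, y) boundary points, ordered along the contour
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    r = [math.hypot(x - cx, y - cy) for x, y in pts]
    mr = max(r)
    d = [ri / mr for ri in r]                                      # NRL, Eq. (1)
    dm = sum(d) / n
    dnrl = math.sqrt(sum((di - dm) ** 2 for di in d) / (n - 1))    # Eq. (2)
    ra = sum(max(di - dm, 0.0) for di in d) / (dm * n)             # Eqs. (3)-(4)
    rough = sum(abs(d[i] - d[(i + 1) % n]) for i in range(n)) / n  # Eq. (5)
    perim = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))
    area = 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                         - pts[(i + 1) % n][0] * pts[i][1] for i in range(n)))
    circ = perim ** 2 / area                                       # Eq. (9)
    return dnrl, ra, rough, circ

# A circle-like contour: DNRL, RA and R tend to 0, C tends to 4*pi (~12.57)
circle = [(math.cos(2 * math.pi * k / 360), math.sin(2 * math.pi * k / 360))
          for k in range(360)]
dnrl, ra, rough, circ = nrl_descriptors(circle)
```

Any contour irregularity raises all four values above this baseline, which is why they serve as malignancy indicators.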
Five different experiments concerning the order defined by the MIFS-U algorithm were performed (Table 4). The best performance was obtained using nrv, C, Mshape, R and RS as input parameters, achieving average values of 88%, 92.1% and 78.3% for accuracy, sensitivity and specificity, respectively. Note that adding the attributes DNRL and RA not only did not enhance the performance but marginally worsened it. This behaviour agrees with the MI results, which indicate that these two parameters present low mutual information with the outcome.
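The three figures of merit are simple ratios over the confusion counts. As a cross-check, counts consistent with the 177 malignant / 69 benign database and the best run in Table 4 would be as below; the counts 163 and 54 are inferred here for illustration, they are not stated in the paper.

```python
tp, fn = 163, 14   # malignant tumours: 163 of 177 correctly classified (assumed)
tn, fp = 54, 15    # benign tumours: 54 of 69 correctly classified (assumed)

se = 100 * tp / (tp + fn)                    # sensitivity (%)
sp = 100 * tn / (tn + fp)                    # specificity (%)
ac = 100 * (tp + tn) / (tp + fn + tn + fp)   # accuracy (%)

print(round(se, 1), round(sp, 1), round(ac, 1))  # → 92.1 78.3 88.2
```

These reproduce the "1 to 5" column of Table 4, illustrating how a class imbalance (177 vs 69) lets accuracy sit close to sensitivity.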
IV. CONCLUSIONS This paper proposes the application of Mutual Information to order the relevance of morphometric parameters calculated from breast ultrasound images. The results indicate that two parameters presented very low contribution to the outcome. A FNN was implemented to classify the images and the best performance was obtained without those two parameters. This initial result suggests that MI can be applied as a non-linear approach to select morphometric parameters.
Table 1 Mean (μMI) and standard deviation (σMI) of MI between parameters and the outcome, obtained using the leave-one-out procedure

Parameter   μMI     σMI
nrv         0.369   0.013
C           0.280   0.005
RS          0.203   0.006
Mshape      0.173   0.005
R           0.122   0.003
DNRL        0.113   0.004
RA          0.064   0.004

Table 2 Attribute relevance as ordered by the MIFS-U algorithm

Relevance order   MIFS-U
1                 nrv
2                 C
3                 Mshape
4                 R
5                 RS
6                 DNRL
7                 RA

Table 3 Mutual Information between parameters

         nrv    Mshape  RS     C      R      RA     DNRL
nrv      -      0.67    0.96   1.17   0.57   0.61   0.72
Mshape   0.67   -       0.45   0.60   0.55   0.48   0.56
RS       0.96   0.45    -      1.25   0.45   0.84   0.77
C        1.17   0.60    1.25   -      0.60   0.89   0.87
R        0.57   0.55    0.43   0.60   -      0.50   0.50
RA       0.61   0.48    0.84   0.89   0.50   -      2.14
DNRL     0.72   0.56    0.77   0.87   0.50   2.14   -

Table 4 Results from FNN considering the parameters ordered by the MIFS-U algorithm. The numbers in the first line indicate parameters used as input, according to Table 2.

Parameters →      1 to 3   1 to 4   1 to 5   1 to 6   1 to 7
Accuracy (%)      79.3     84.2     88.2     82.5     84.6
Sensitivity (%)   81.4     88.1     92.1     86.4     89.3
Specificity (%)   73.9     73.9     78.3     72.5     72.5

ACKNOWLEDGMENT

To CNPq for the financial support.

REFERENCES

1. Dennis M A, Parker S H, Klaus A J et al. (2001) Breast biopsy avoidance: the value of normal mammograms and normal sonograms in the setting of a palpable lump. Radiology 219:168-191
2. Horsh K, Giger M L, Venta L A et al. (2002) Computerized diagnostic of breast lesions on ultrasound. Med Phys 29:157-164
3. Huber S, Danes J, Zuna I et al. (2000) Relevance of sonographic B-mode criteria and computer-aided ultrasonic tissue characterization in differential diagnosis of solid breast masses. Ultrasound in Med & Biol 26:1243-1252
4. Rahbar G, Sie A C, Hansen G C et al. (1999) Benign versus malignant solid breast masses: US differentiation. Radiology 213:889-894
5. Skaane P (1999) Ultrasonography as adjunct to mammography in the evaluation of breast tumours. Acta Radiol Supplementum 40:1-47
6. Chou Y H, Tiu C M, Hung G S et al. (2001) Stepwise logistic regression analysis of tumour contour features for breast ultrasound diagnosis. Ultrasound in Med & Biol 27:1493-1498
7. Maes F, Vandermeulen D, Suetens P (2003) Medical image registration using mutual information. Proc of the IEEE 91:1699-1722
8. Lafuente R, Belda J M, Sanchez-Lacuesta J et al. (1997) Design and test of neural networks and statistical classifiers in computer-aided movement analysis: a case study on gait analysis. Clin Biomech 13:216-229
9. Cover T (1991) Elements of information theory. Wiley, New York
10. Alvarenga A V, Infantosi A F C, Pereira W C A et al. (2004) Normalised radial length and convex polygons to classify breast tumour contours in ultrasound images. IFMBE News 69:44-48
11. Alvarenga A V, Infantosi A F C, Azevedo C M et al. (2003) Application of morphological operators on the segmentation and contour detection of ultrasound breast images. Brazilian Journal of Biomedical Engineering 19:91-101
12. Alvarenga A V, Infantosi A F C, Pereira W C A et al. (2002) Contour detection of breast ultrasound tumor images using morphological operators. IFMBE Proc, vol. 2, 12th Nordic Baltic Conference on Biomedical Engineering and Medical Physics, Reykjavik, Iceland, pp 78-79
13. Castleman K N (1996) Digital image processing. Prentice-Hall International, New Jersey
14. Soille P (1999) Morphological image analysis. Springer-Verlag, Berlin, Heidelberg
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Obtaining completely stable cellular neural network templates for ultrasound image segmentation

M. Lenic, D. Zazula and B. Cigale
University of Maribor, Slovenia

Abstract— Cellular neural networks (CNNs) have been successfully applied to the image segmentation problem. Nevertheless, the main difficulty remains the process of creating appropriate templates to solve a given segmentation problem. In this paper we present a machine learning approach for obtaining completely stable CNN templates and compare the results to those of an unconstrained machine learning approach. Despite the introduced template-stability constraints, the results are comparable to the unconstrained ones.

Keywords— Cellular neural networks, machine learning, symmetric templates, ultrasound, segmentation.

I. INTRODUCTION

Various applications of cellular neural networks (CNNs) to complex image processing tasks raise the question of an appropriate selection or calculation of the template elements that determine the CNN's behavior. There are two major possibilities: either to resort to existing, published templates suitable for the problem under consideration, or to construct the templates by one of the well-known training methods, such as genetic algorithms, simulated annealing, support vector machines (SVMs), etc. Obtaining the templates by machine learning can be uncertain, because the target concept of the transformation is not necessarily captured in localized information. A typical example is the hole-detector template, where the final result, i.e. the presence of a hole in the original image, is determined from the edge states of the whole internal CNN state, independently of the hole's location in the original image. Such results are obtained through multiple iterations of the CNN, during which information can traverse the whole network. Such a concept is therefore difficult to capture by machine learning techniques that utilize local-operator-based information. This is especially true in the initial learning steps, where the target concept cannot always be captured, and therefore the convergence of the learning process cannot be assured.

In the case of segmentation, the information about edges and regions of interest is usually localized. Therefore, the class of interesting templates for segmentation is a subset of all available templates. We are especially interested in the templates that produce completely stable results, since we are not able to capture unstable solutions by machine learning. By introducing this constraint, the search space of the learning problem is reduced and focused only on acceptable solutions. In this paper, we show how to utilize SVMs to obtain the information needed for CNN template optimization while exploiting the theoretical background of CNN stability.

II. CELLULAR NEURAL NETWORKS
In 1988, Chua and Yang [1] introduced a special network structure which resembles the continuous-time Hopfield ANN, but implements strictly local connections. The proposal was influenced both by neural networks [2] and by cellular automata [3]. The new construct was called the cellular neural network (CNN). An initial learning phase is also needed for CNNs, as for any other neural network; it adapts their properties to the identification problem to be resolved. In general, learning approaches for CNNs are based on genetic algorithms or simulated annealing. Both methods share a rather unattractive characteristic: their convergence may be questionable, while the learning always takes a long time (several hours on today's best-performing PCs). Since training a CNN means iterative adaptation to positive and negative examples, the procedure could be much shorter if the examples were preselected into the two most discriminant groups.

The fundamental building block of a CNN is a cell designated by C(i,j), where i and j stand for the coordinates of the cell's position in a 2D representation of the net. Each cell is coupled only to its neighboring cells C(k,l) in the r-neighborhood N_r(i,j). The cell's state can be described by the following equation:

s_t(i,j) = Σ_{(k,l)∈N_r(i,j)} A_t(i,j;k,l) o_t(k,l) + Σ_{(k,l)∈N_r(i,j)} B_t(i,j;k,l) u_t(k,l) + I_t        (1)

where s_t(i,j) is the inner state of cell C(i,j), and o_t(k,l) the output and u_t(k,l) the input of cell C(k,l). Subscript t stands for the observed time instant. A_t(i,j;k,l) and B_t(i,j;k,l) are
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1013–1016, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
called feedback and control parameters, respectively. The parameter I_t is a bias which is added to the inner state of each cell. In this paper, the parameters A_t(i,j;k,l), B_t(i,j;k,l), and I_t are time-variant, in contrast to the traditional definition where they are time-invariant. In the sequel, we assume the cell's output is defined by the following non-linear equation:

o_{t+1}(i,j) = f(s_t(i,j)) = (1/2) (|s_t(i,j) + 1| − |s_t(i,j) − 1|)        (2)

although other non-linear transformations (e.g. the sigmoid function) can also be used. The parameters A_t(i,j;k,l) are collected in the template (matrix) A_t(i,j) and the parameters B_t(i,j;k,l) in the template B_t(i,j). In this paper it is assumed that all the CNN cells have equal templates at a specific time t. The spatial coordinates i and j can therefore be omitted, so A_t(i,j) becomes A_t and B_t(i,j) becomes B_t. The templates A_t, B_t, and I_t completely define the behavior of the network for a given input and initial condition (the initial inner state of the cells).

III. SUPPORT VECTOR MACHINES AND CELLULAR NEURAL NETWORKS
SVMs implement a general algorithm based on statistical learning theory [4], commonly applied in pattern recognition. They have been applied successfully in different research fields to various machine learning problems with highly dimensional data, such as text categorization, text mining, various classification tasks, face recognition, etc. SVMs solve classification problems by determining a decision function which separates the samples of different classes with the highest sensitivity and specificity. A learning set must be given with positive and negative samples for each class, i.e. the samples that belong to the observed class and those that belong to other classes, respectively. Assume a two-class case and a hyperplane which separates positive from negative samples x, defined as x·w + b = 0, where w is the hyperplane normal and b describes its distance from the origin. Each sample x_i must be given a classification (target) value y_i ∈ {−1, 1}, where the value 1 designates positive and −1 negative samples. In linearly separable cases, support vector machines simply look for the hyperplane with the largest possible margin, i.e. the hyperplane whose distance to both the positive and the negative samples is maximum. This can be formulated as:

y_i (<w, x_i> + b) ≥ 1        (3)

where <·,·> denotes the scalar product.
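The margin condition of Eq. (3) can be illustrated with a small numerical sketch. The solver below is a plain subgradient descent on the soft-margin SVM objective, a didactic stand-in for a real SVM solver (the paper does not prescribe an implementation); the toy data and hyperparameters are invented for illustration.

```python
import numpy as np

# Didactic sketch of Eq. (3): find a hyperplane w.x + b = 0 so that
# y_i(<w, x_i> + b) >= 1 for a separable toy set, via subgradient descent
# on the soft-margin objective lam*||w||^2 + mean(hinge loss).

def train_linear_svm(X, y, lam=1e-3, lr=0.1, epochs=5000):
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # samples violating the margin
        gw = 2 * lam * w - (X[viol].T @ y[viol]) / n
        gb = -y[viol].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Positive samples around (2, 2), negative around (-2, -2)
X = np.array([[2.0, 2.0], [2.5, 1.5], [3.0, 2.5],
              [-2.0, -2.0], [-2.5, -1.5], [-3.0, -2.5]])
y = np.array([1, 1, 1, -1, -1, -1])

w, b = train_linear_svm(X, y)
print((np.sign(X @ w + b) == y).all())   # the learned hyperplane separates the classes
```

The learned normal w and offset b play exactly the roles of the template coefficients extracted in the CNN-training scheme discussed below.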
In [5] the authors show that an SVM can be used to obtain CNN templates by vectorizing the templates and adjusting the margin of the learning problem. The main advantage of the approach in [5] is that the templates can be learned from positive and negative samples. In the learning process, every pixel of the image is transformed into a learning sample which contains the pixel's neighborhood and the CNN internal state. This can result in a large number of features and a huge number of learning samples. For example, for a template size of 15 the number of learning features is 450, while for an image of 352x288 pixels, 101376 learning samples can be generated per image.

IV. CNN STABILITY
The notion of stability of CNNs is very important from the application point of view, especially in the domain of image segmentation, since it is imperative to produce deterministic and consistent performance, independent of the input image, with completely stable solutions [6]. When SVMs are applied to CNNs, templates can be produced that do not necessarily meet the stability requirements; therefore, a modification of the learning problem is necessary to generate only completely stable solutions. It has been proven that CNNs with reciprocal templates are completely stable [7]. This defines a class of acceptable templates for the segmentation problem, within which different restraints that conform to the findings in [7] can be established. The main advantage of constraining the learning problem to symmetric templates is the lower number of independent template coefficients that have to be computed. This means a search-space reduction, obtained through the lower number of features per learning instance and, consequently, the smaller size of the learning problem. It is also important to notice that the introduction of constraints reduces the expressive power of the CNN, so the constrained set of templates cannot outperform the CNN with no constraints.

To constrain the learning problem, the constraints have to be introduced in the learning phase of the SVM. This requires a modification of the scalar-product calculation and of the corresponding alpha coefficients and weight vector of the SVM. Since SVMs with linear kernels are utilized, and the CNNs are based on the same scalar product as linear SVMs (Eq. (1), Eq. (3)), the same effect can be achieved by transforming the input vectors and extracting the output weights appropriately. New learning features are constructed by summing up the corresponding values that are multiplied by the same coefficient. The number of learning features is thus reduced to twice the number of independent coefficients (one set for every CNN template), so different constraints can be used for individual templates. The coefficients, extracted from the weight vector w, are then simply placed at the appropriate positions in the CNN templates. In this paper, we utilize a symmetry constraint based on the Euclidean distance from the template center: the pixels at the same distance from the center are multiplied by the same coefficient, producing circularly symmetric and completely stable solutions.

V. SEGMENTATION OF OVARIAN ULTRASOUND IMAGES USING SYMMETRIC CNN TEMPLATES TRAINED BY SVM
A complete understanding of ovarian follicle dynamics is crucial for the field of in-vitro fertilization. The main task is to determine the dominant follicles, which have the potential to ovulate. For credible results, a doctor must examine patients every day during their entire menstrual cycle, usually by ultrasonography. Because manual follicle segmentation is tedious, time-consuming, demanding, and inaccurate, an automated, computer-based method is desirable. Segmentation of ultrasound images using CNNs has been discussed in [8, 9], and the application of SVMs to this problem in [5].

Fig. 1 (a) Original ultrasound image, (b) segmented by a leading expert

In order to test our optimization method, we tried to obtain CNN templates for a rough detection of ovarian follicles. Our learning set consisted of 4 images and the testing set of 28 images, randomly selected from a database of 1500 ultrasound ovarian images. The selected images belong to 12 different patients. A leading expert manually annotated the positions of the follicles and the ovary in every image (an example of a segmentation is depicted in Fig. 1). The real, unfiltered ultrasound images, with the left and right top black regions removed, were used as input to our method in both the learning and the testing phase. The ultrasound images were sampled from a VHS tape using a MiroVideo DC30+ video card. A full-resolution sampling to 720x540 pixels at the highest possible JPEG movie quality (compression 2.66:1) was performed. However, it should be stressed that the VHS system is interlaced, so the effective image resolution is only 352x288 pixels for PAL. After sampling, all images were converted to 256 gray levels.

From each image in our experiment it is possible to generate 101376 learning samples. Bigger learning sets can reduce errors on the testing sets, but can also be quite time-consuming, especially in the case of inconsistent samples. These always appear in the annotated images, because even leading experts cannot precisely identify all the follicles. Therefore, we decided to subsample the images according to the CNN template size; thus, the size of a single learning sample (the number of features) varies with the template size. For this experiment we selected templates of size 7, which generated 2185 learning samples. In the case of fully independent coefficients, 98 features were generated (49 pixels for each of the two templates). Applying the symmetric Euclidean-distance constraint reduced the number of features from 98 to 20. The learning process took less than a minute per template on today's average PC hardware. Nevertheless, the speedup of the constrained template calculation compared to the unrestrained one was 1.77.

We verified the follicle recognition quality by using the so-called ratios ρ1 and ρ2 [10]. These metrics measure the sensitivity and specificity of an image recognition algorithm via the intersection of the recognized and the referential (annotated) image regions: ρ1 stands for the ratio between the areas of the intersection and of the annotated follicle, and ρ2 for the ratio between the areas of the intersection and of the recognized region. If a recognized region coincides exactly with an annotated region, both ratios ρ1 and ρ2 are 1. In general, the closer the values of ρ1 and ρ2 to 1, the better the matching of the regions being compared.
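The Euclidean-distance symmetry constraint can be made concrete with a short sketch (the helper name and the coefficient values are illustrative, not learned ones): template positions at the same distance from the center share one coefficient, which for a 7x7 template yields 10 distance classes, i.e. 2x10 = 20 features for the two templates together, matching the reduction from 98 reported above. For completeness, the sketch also runs one CNN update following Eqs. (1)-(2).

```python
import numpy as np

# All template positions at the same Euclidean distance from the centre
# share one coefficient. For a 7x7 template there are 10 distinct
# distances, so the two CNN templates need 2*10 = 20 independent
# coefficients instead of 2*49 = 98. Coefficient values are placeholders.

def distance_classes(size):
    """Map each offset of a size x size template to its distance class."""
    r = size // 2
    sq = sorted({di*di + dj*dj for di in range(-r, r+1) for dj in range(-r, r+1)})
    idx = {d: k for k, d in enumerate(sq)}
    cls = np.array([[idx[di*di + dj*dj] for dj in range(-r, r+1)]
                    for di in range(-r, r+1)])
    return cls, len(sq)

cls, n_classes = distance_classes(7)
print(n_classes)                           # 10 distance classes

coeffs = np.linspace(1.0, 0.1, n_classes)  # placeholder coefficients
A = coeffs[cls]                            # circularly symmetric 7x7 template

def cnn_step(state, inp, A, B, I):
    """One synchronous CNN update following Eqs. (1)-(2); the output
    non-linearity f(s) = (|s+1| - |s-1|)/2 equals clipping s to [-1, 1]."""
    pad = A.shape[0] // 2
    out = np.clip(state, -1.0, 1.0)
    po, pi = np.pad(out, pad), np.pad(inp, pad)
    new_state = np.empty_like(state)
    n, m = state.shape
    for i in range(n):
        for j in range(m):
            new_state[i, j] = ((A * po[i:i+2*pad+1, j:j+2*pad+1]).sum()
                               + (B * pi[i:i+2*pad+1, j:j+2*pad+1]).sum() + I)
    return new_state

# One update on a small random image with symmetric A and B templates
rng = np.random.default_rng(0)
img = rng.uniform(-1, 1, (16, 16))
s1 = cnn_step(np.zeros_like(img), img, A, 0.5 * A, I=-0.2)
```

Because the expanded template depends only on the distance from the center, it is automatically reciprocal, which is what guarantees complete stability per [7].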
We validated the proposed learning and detection algorithm by observing only ρ2: if the recognized, i.e. segmented, region gives ρ2 > 0.5, we consider it a proper detection of the corresponding annotated follicle. This criterion guarantees that more than half of any recognized follicle region overlays a particular annotated follicle. To assess our new method with completely stable templates, a comparison between the fully independent and the constrained calculation of the template coefficients was made. In each iteration, a new set of templates is obtained for the constrained and the unconstrained approach. The resulting CNN with multiple time-variant templates can be seen as a stacking-ensemble classification approach in the field of machine learning, with the major difference that it exploits the internal-state information about the confidence level, and not only the final classification.
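On binary masks, the ratios ρ1 and ρ2 described above reduce to simple overlap counts; a minimal sketch (the toy masks are invented for illustration):

```python
import numpy as np

# The region-overlap ratios rho1 and rho2 [10] on binary masks:
# rho1 = |annotated ∩ recognized| / |annotated|   (sensitivity-like)
# rho2 = |annotated ∩ recognized| / |recognized|  (specificity-like)

def overlap_ratios(annotated, recognized):
    inter = np.logical_and(annotated, recognized).sum()
    return inter / annotated.sum(), inter / recognized.sum()

# Toy example: the recognized square covers 3/4 of the annotated one.
ann = np.zeros((10, 10), bool); ann[2:6, 2:6] = True   # 16 annotated voxels
rec = np.zeros((10, 10), bool); rec[3:7, 2:6] = True   # 16 recognized voxels
rho1, rho2 = overlap_ratios(ann, rec)
print(rho1, rho2)   # 0.75 0.75
```

With these masks the detection criterion ρ2 > 0.5 used above would be satisfied, since three quarters of the recognized region overlays the annotated one.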
We compared the identified follicles to the annotated ones by observing true positive and false positive follicle identifications with respect to the previously defined criterion. This comparison was made for different numbers of iterations and the corresponding numbers of time-variant templates. The true positive rate is presented in Fig. 2. It can be seen that the true positive rate of the constrained templates is comparable to that of the unconstrained templates, and is even slightly better for certain iterations. On the other hand, as the detection rate of follicles increases, other structures in the ultrasound images are also increasingly recognized. Therefore, increasing the number of iterations also increases the false positive rate, for both the constrained and the unconstrained templates, as shown in Fig. 3. It has to be noted that all the templates were obtained with the same learning parameters despite the different problem sizes. By adjusting these SVM parameters, better results might be obtained compared to the unconstrained approach.

Fig. 2 True positive rate (60%–74%) with respect to the number of iterations/templates (1–20), for the symmetric and the full templates

Fig. 3 False positive rate (30%–60%) with respect to the number of iterations/templates (1–20), for the symmetric and the full templates

VI. CONCLUSIONS

A novel SVM-based approach to CNN template optimization with completely stable templates was presented in this paper. Because of the different way of learning, the CNN templates obtained in this way are time-variant, in contrast to the conventional ones, whose parameters are stationary. Despite the reduction of the expressive power of the CNN by the constraints that produce completely stable solutions, the first iterations of the segmentation results are comparable to those obtained with no restriction. The better results of the unconstrained approach are probably due to the application of a CNN with time-variant templates. We expect better results from an approach that utilizes time-invariant templates.

REFERENCES
1. Chua L O, Yang L (1988) Cellular neural networks: theory. IEEE Trans Circuits Syst 35:1257–1272
2. Hopfield J J (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79:2554–2558
3. Hänggi M, Moschytz G S (2000) Cellular neural networks: analysis, design and optimization. Kluwer Academic Publishers, Boston
4. Vapnik V N (1995) The nature of statistical learning theory. Springer, New York
5. Cigale B, Lenic M, Zazula D (2006) Segmentation of ovarian ultrasound images using cellular neural networks trained by support vector machines. Knowledge-Based Intelligent Information and Engineering Systems, pp 515–521
6. Manganaro G, Arena P, Fortuna L (1999) Cellular neural networks: chaos, complexity and VLSI processing. Springer, Berlin
7. Chua L O, Yang L (1988) Cellular neural networks: theory. IEEE Trans Circuits Syst 35:1257–1272
8. Zazula D, Cigale B (2006) Intelligent segmentation of ultrasound images using cellular neural networks. Intelligent Processing Paradigms in Recognition and Classification of Astrophysical and Medical Images, in press
9. Cigale B, Zazula D (2004) Segmentation of ovarian ultrasound images using cellular neural networks. IJPRAI 18:563–581
10. Potocnik B, Zazula D (2002) Automated analysis of a sequence of ovarian ultrasound images, Part I. Imag Vis Comput 20:217–225

Author: Mitja Lenic
Institute: University of Maribor
Street: Smetanova 17
City: Maribor
Country: Slovenia
Email: [email protected]
Segmentation of 3D Ovarian Ultrasound Volumes using Continuous Wavelet Transform

B. Cigale and D. Zazula
University of Maribor, Slovenia

Abstract— A novel algorithm for the segmentation of 3D ultrasound images of the ovary is presented in this paper. The algorithm is based on the continuous wavelet transform (CWT) and consists of two consecutive steps. In the first step, the centers of the follicles are determined by tracing the local maxima from higher to lower scales in the wavelet transform of the input images. The center of a follicle appears as a local maximum with value near 0 when the size of the follicle corresponds to the scale of the CWT. In the second step, the shape of the follicle is outlined. This is done by casting rays in different directions from the center of the follicle in order to find its border. The position of the border is determined by the wavelet scale and the position of the first local minimum on each ray. The method was tested on a small set of real 3D ultrasound images. The results were evaluated visually, since we do not have manually annotated images.

Keywords— 3D ultrasound images, ovarian follicles, continuous wavelet transform, segmentation.
I. INTRODUCTION

A complete understanding of ovarian follicle dynamics is crucial for the field of in-vitro fertilization. The main task is to determine the dominant follicles, which have a potential to ovulate. There are theories suggesting that the dominant follicle can be determined by studying the dynamics of follicle growth. To support or reject this theory, many examinations should be done on the same patient over a period of several days using an ultrasound machine. Such a procedure is laborious and error-prone, so an automated procedure is desired. One of the steps towards this goal is automated segmentation of the follicles in the images; in our case we used 3D ultrasound images. There are many algorithms for automated or semi-automated segmentation of 2D ovarian ultrasound images [1, 2, 3], but only a few of them can be extended to 3D images, which provide more information.

In this paper we present a novel approach based on the continuous wavelet transform (CWT). In Section II we give a short introduction to the CWT and make some observations about its behavior at abrupt changes in 1D signals. The observations are then extended to an algorithm which first detects a follicle in the image and then determines its shape. In Section III, we interpret the results of our algorithm on real images. The obtained results are discussed in Section IV, which also concludes the paper.

II. SEGMENTATION OF FOLLICLES

A. Homogeneous areas and the CWT

In this subsection we give a short introduction to the CWT and present some of its characteristics, which will be used later in the derivation of our algorithm. The wavelet transform is a convolution between the signal f(x) and a function ψ(x) called the wavelet, defined as
W(a,b) = (1/√a) ∫_{−∞}^{∞} f(x) ψ*((x−b)/a) dx        (1)
where a stands for the scale and b for the time shift. The wavelet function ψ(x) must be localized both in time and frequency, and should be admissible, which, for an integrable function, means that its average should be zero [4]. A wavelet can be dilated using the scale parameter a and translated by the parameter b. Since wavelets are zero-mean, a wavelet transform measures the variation of the function in a neighborhood of b whose size is proportional to a. Thus, sharp signal transitions create large-amplitude wavelet coefficients. The Mexican hat (MH) is a wavelet widely used in image processing for edge detection [5]. The function is symmetrical, and the axis of symmetry of the 1D MH wavelet is at x=0. Strictly speaking, the MH wavelet is not localized in time, but it deviates significantly from zero only in the vicinity of x=0. This interval is called the effective support and is usually limited to [-5,5], as in [6]. We define the localized Mexican hat wavelet as
ψ(x) = (2/(√3 π^(1/4))) (1 − x²) e^(−x²/2)  if |x| ≤ 5,  and 0 otherwise.        (2)

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1017–1020, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
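The remark that the truncated wavelet is no longer exactly zero-mean can be checked numerically; a quick sketch (NumPy, not part of the paper):

```python
import numpy as np

# Limiting the Mexican hat to the effective support |x| <= 5 cuts off
# part of its negative tails, so the localized wavelet of Eq. (2) has a
# slightly positive average instead of the exactly zero mean of the
# untruncated wavelet.

def mexican_hat(x):
    c = 2.0 / (np.sqrt(3.0) * np.pi ** 0.25)
    return np.where(np.abs(x) <= 5.0,
                    c * (1.0 - x**2) * np.exp(-x**2 / 2.0), 0.0)

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
avg = mexican_hat(x).sum() * dx      # small but positive residue
print(avg > 0.0)
```

The residue equals the (negative) tail mass that the truncation removes, on the order of 1e-5, which is exactly why W(a,0) turns slightly positive over wide homogeneous regions, as exploited below.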
Fig. 1 The wavelet transform W(a=2, m=5) of the model signal when m<5a and m≥2a

Fig. 2 The wavelet transform W(a=3, m=5) of the model signal when m<2a

Note that such a wavelet has an average slightly larger than 0, since some of the negative part is deleted. With the scale parameter a and the translation parameter b, the axis of symmetry of the MH wavelet moves to x=b and the interval of effective support to [b−5a, b+5a]. The zeros of the function are at {b−a, b+a}, and the function is positive in the interval (b−a, b+a). Eq. (1) can thus be approximated by

W(a,b) = (1/√a) ∫_{b−5a}^{b+5a} f(x) ψ*((x−b)/a) dx        (3)

since the MH wavelet is virtually zero elsewhere. An edge in an image can be defined as a rapid change in the level of grayness. We illustrate the behavior of the wavelet transform with the following model of two homogeneous areas in a 1D signal:

f(x) = 0.1 if |x| < m, and 1 otherwise.        (4)

Let us assume that m<5a and m≥2a, and that a is a positive integer. The wavelet coefficients obtained as the wavelet is translated by b then appear as in Fig. 1: two local minima lie close to the points −m+a and m−a, while three local maxima lie near the points 0, −m−a, and m+a. If m<2a, then just two local maxima remain, at points close to −m−a and m+a, and the local maximum at b=0 changes to a local minimum (Fig. 2). This clarifies that the wavelet at scale a is not appropriate for the search of the centers of objects narrower than 2a. Let us now assume that m>5a. It becomes clear that W(a,0)>0, since the MH wavelet defined as in (2) has a slightly positive average. If the support of the wavelet were not limited, W(a,0) would be a little less than 0.

B. Detection of the centers of follicles

For an automated segmentation of ovarian ultrasound images it is crucial that only follicles are detected. This can be a demanding task, since other objects with similar characteristics exist in the image (e.g. veins). Since follicles have a spherical shape in 3D, they are very suitable for detection by the 3D Mexican hat wavelet, which is isotropic and therefore has a spherical shape. The behavior of the 3D MH is similar to the 1D case presented in the previous subsection. We model ovarian follicles as homogeneous regions whose average grayness is darker than the surroundings. The follicle surroundings in a real image are usually very diverse, which could lead to unreliable results, since every sharp signal transition creates large wavelet coefficients. Therefore, we try to reduce the heterogeneity by setting all voxels brighter than a certain threshold T to the grayness T. Most voxels in follicles should have grayness below T.

The most suitable scales for the detection of follicles with radius r are those fulfilling the inequalities r<5a and r≥2a. At those scales we expect local maxima at the positions of the centers of the follicles in the obtained wavelet transform. The scales act like a magnifying glass: at a lower scale we see more details (like a high-pass filter), while higher scales act as a low-pass filter. Detecting the follicles at a lower scale has at least two advantages: the outlining of the detected follicle is more accurate, and the elimination of other maxima not representing follicles is easier. However, if the scale is too low (i.e. a=1), the results are unreliable and affected by the noise which is present in every ultrasound image. Therefore, we would like to find the lowest scale at which the wavelet still overlays the whole follicle. We find such a scale for each follicle by tracking the local maxima from scale to scale.
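The 1D analysis of Subsection II.A can be checked numerically: for the step model of Eq. (4) and a scale with 2a ≤ m < 5a, W(a,b) has a local maximum at the centre b = 0, and its value there is negative. A sketch (the model values m and a are illustrative):

```python
import numpy as np

# Numerical illustration of Subsection II.A: for the step model of Eq. (4)
# (f = 0.1 for |x| < m, 1 elsewhere) and a scale a with 2a <= m < 5a, the
# response W(a, b) of Eq. (3) has a local maximum at the centre b = 0,
# and the value of that maximum is negative.

def psi(x):
    c = 2.0 / (np.sqrt(3.0) * np.pi ** 0.25)
    return np.where(np.abs(x) <= 5.0,
                    c * (1.0 - x**2) * np.exp(-x**2 / 2.0), 0.0)

m, a = 6.0, 2.0                          # satisfies 2a <= m < 5a
x = np.linspace(-60.0, 60.0, 24001)
dx = x[1] - x[0]
f = np.where(np.abs(x) < m, 0.1, 1.0)

bs = np.linspace(-20.0, 20.0, 401)
w = np.array([(f * psi((x - b) / a)).sum() * dx / np.sqrt(a) for b in bs])

# strict local maxima of W(a, b); the one nearest the origin is the centre
peak = (w[1:-1] > w[:-2]) & (w[1:-1] > w[2:])
idx = np.where(peak)[0] + 1
ci = idx[np.abs(bs[idx]).argmin()]
print(abs(bs[ci]) < 0.5, w[ci] < 0.0)
```

This negative central maximum is the cue for scale selection described next: the correct scale is the lowest one at which the central local maximum is still negative.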
The correct scale is the lowest scale at which the local maximum is negative (recall the effect mentioned in the previous subsection, where W(a,0) becomes slightly positive if m>5a). Our detection procedure starts at the highest scale ab, at which we try to detect the largest follicles, and ends at the scale ae, where we try to detect the smallest ones. The coordinates of all local maxima at scale at are stored in Ct. Each maximum in the set Ct is first checked for adequacy by inspecting the voxel of the original image at the same coordinates: if the grayness of the voxel is not below the threshold T, the maximum is dropped from the set Ct. Then, for each coordinate kit in the set Ct, the nearest coordinate kjt+1 in the set Ct+1 is found. The coordinate kjt+1 and the scale t+1 are added to the set M if the Euclidean distance |kit − kjt+1| is sufficiently small.

C. Outlining the shape of follicles

After the follicle centers are detected and stored in the set M, the follicle shapes must be outlined. Our method for outlining was again inspired by the study of the 1D wavelet behavior described in Subsection II.A. It can be seen from Fig. 1 that, travelling from the point b=0 in either direction, we eventually hit a local minimum. As mentioned, these minima are positioned at the points b=−m+a and b=m−a. Taking just the positive b, the distance of the edge can therefore be calculated from the minimum position as m=b+a. This phenomenon gives a great tool for the calculation of the follicle border points. We calculate the border points on the wavelet transform at the scale at which the center was found. We generate 132 radial rays in a homogeneous orientation distribution from each detected center. We define rθ,φ(n) for the transform values along the rays at integer distances n, taken from the center kjt+1 in the direction of the spherical angles θ and φ; rθ,φ(n) equals the average grayness of the voxels in a volume at radius n in the direction θ and φ. Each averaging volume becomes bigger with the distance from the center, so more voxels are averaged. In this way, a graph similar to the one depicted in Fig. 1 is obtained. We calculate the distance of the follicle border point in a given direction as the distance to the first minimum on rθ,φ(n), incremented by the scale factor a. All border voxels are organized into a triangle mesh, which is eventually drawn onto the screen. If some border points are not correct, the mesh can be smoothed using standard algorithms.
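The border rule of this subsection can be sketched in 1D on a synthetic ray profile (the follicle model below is invented for illustration): the first local minimum of the wavelet response along the ray lies near m − a, so adding the scale a recovers the border distance m.

```python
import numpy as np

# 1-D sketch of the outlining rule: along a ray through a dark region of
# radius m, the wavelet response at scale a has its first local minimum
# near m - a, so the border distance is recovered as the position of that
# minimum incremented by a. The profile below is synthetic.

def psi(x):
    c = 2.0 / (np.sqrt(3.0) * np.pi ** 0.25)
    return np.where(np.abs(x) <= 5.0,
                    c * (1.0 - x**2) * np.exp(-x**2 / 2.0), 0.0)

a, m = 2.0, 8.0                         # scale and true border distance
x = np.linspace(-40.0, 40.0, 16001)
dx = x[1] - x[0]
f = np.where(np.abs(x) < m, 0.1, 1.0)   # dark interior, bright outside

n = np.arange(0, 25)                    # integer distances along the ray
r = np.array([(f * psi((x - b) / a)).sum() * dx / np.sqrt(a) for b in n])

# first local minimum along the ray, then add the scale a
k = next(i for i in range(1, len(r) - 1) if r[i] < r[i-1] and r[i] < r[i+1])
print(n[k] + a)   # 8.0, i.e. the true border distance m
```

In the 3D method the same rule is applied independently along each of the 132 rays, after which the recovered border points are meshed.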
Fig. 3 One slice from the segmented image. There are two recognized follicles and one phantom follicle (most probably vein) in the upper left corner of the image. It can be also seen that both follicles were detected a few times. The result in 3D is depicted in Fig. 4. We were especially interested in the number of recognized follicles and also in their shape. Our approach successfully detected the position of all follicles. We noticed that most of the follicles were detected more than once – usually the whole follicle was at first detected at higher scale, then also parts of it were detected at lower scales. This can be explained by local heterogeneity inside the follicles which are amplified by lower scales. Lower scales also contributed to all false positives – the areas misinterpreted as follicles appeared to be the veins or similar structures with similar ultrasound characteristics as follicles. The outlined shape of follicles roughly correspond to the real shape of follicles. If the follicle is spherical then the shape corresponds very well, but some other forms (i.e. banana-like) are usually not expanded enough – parts of the follicle remain annotated as the surroundings. This can be explained by the shape of the wavelet. Some parts of such follicles are then detected and outlined at lower scales. The method performs quite well on the follicles which are lying close together and the border between them is very dim, in some extent even non-existent. Such follicles were rarely joined and if they were, the joint could be easily broken by utilizing some morphological operators (i.e. erosion). This can be explained by the nature of wavelet transform.
III. RESULTS

The preliminary results of our approach were obtained on real 3D ultrasound images of the ovary. First, we transformed all the data from the spherical coordinates obtained from a Voluson 730 ultrasound device into Cartesian coordinates. Then we linearly scaled the grayness of the voxels in the image to the interval [0, 1]. The threshold parameter T was set to 0.15. We searched for follicles from scale 7 down to scale 1. Since the images were not annotated by an expert, the results were checked visually.
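As a rough illustration of this preprocessing (our sketch, not the actual Voluson pipeline), the linear rescaling and a per-point spherical-to-Cartesian mapping could look like:

```python
import numpy as np

def normalize(volume):
    """Linearly rescale voxel grayness to the interval [0, 1]."""
    v = volume.astype(float)
    return (v - v.min()) / (v.max() - v.min())

def spherical_to_cartesian(r, theta, phi):
    """Map acquisition coordinates (r, theta, phi) to (x, y, z)."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

vol = np.array([[10, 20], [30, 50]])
norm = normalize(vol)       # min -> 0.0, max -> 1.0, e.g. 20 -> 0.25
mask = norm > 0.15          # threshold T = 0.15 used in the experiments
```

In practice the spherical samples are resampled onto a regular Cartesian voxel grid by interpolation; the mapping above gives only the coordinate change.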
Fig. 4 The result of the segmentation from Fig. 3 presented in 3D.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
B. Cigale and D. Zazula
IV. DISCUSSION AND CONCLUSIONS

A novel approach for the detection and outlining of follicles in 3D ultrasound images, based on the continuous wavelet transform, was presented in this paper. The preliminary results are promising, especially for follicle detection; however, more extensive testing of the method must be performed in the future. We are planning tests with simulated 3D ultrasound images and also with a larger set of real ultrasound images. Even though the detection of follicles performs well, the method may have problems with shape outlining. The most serious problem is with follicles of non-spherical shape. To overcome this, we plan to utilize other algorithms, such as active contours and level-set methods, whose initial state will be given by the described algorithm. The level of false positives in the preliminary results is modest and could easily be corrected manually; however, we expect that erroneous regions could be eliminated automatically in a post-segmentation step, where additional statistics of the regions can be calculated. All false positives were added at lower scales; therefore, it would also make sense to make the condition for adding a follicle center stricter. In the future, we are planning to perform follicle tracking from image to image, which requires a reliable segmentation algorithm. On the other hand, tracking would give a very elegant solution to the detection of false positives, since they would appear in only one image and not in the others.
ACKNOWLEDGMENT We would like to thank Prof. Veljko Vlaisavljević from the Teaching Hospital of Maribor for the provided images and his valuable help.
REFERENCES
1. B. Cigale, D. Zazula, Segmentation of ovarian ultrasound images using cellular neural networks, International Journal of Pattern Recognition and Artificial Intelligence 18(4) (2004) 563-581.
2. A. Krivanek, M. Sonka, Ovarian ultrasound image analysis: follicle segmentation, IEEE Transactions on Medical Imaging 17(6) (1998) 935-944.
3. B. Potocnik, D. Zazula, Automated analysis of a sequence of ovarian ultrasound images. Part I: Segmentation of single 2D images, Image and Vision Computing 20(3) (2002) 217-225.
4. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 1999.
5. J.-P. Antoine, The Continuous Wavelet Transform in Image Processing, http://citeseer.ist.psu.edu/60699.html.
6. J. J. Rushchitsky, C. Cattani, E. V. Terletskaya, Wavelet analysis of a single pulse in a linearly elastic composite, International Applied Mechanics 41(4) (2005) 374-380.

Author: Boris Cigale
Institute: University of Maribor
Street: Smetanova 17
City: Maribor
Country: Slovenia
Email: [email protected]
Selected Applications of Dynamic Radiation Force of Ultrasound in Biomedicine
A. Alizad, J.F. Greenleaf, and M. Fatemi
Department of Biophysics and Biomedical Engineering, Mayo Clinic College of Medicine, Rochester, USA

Abstract— Ultrasound imaging has been used for decades in medical and industrial imaging. Ultrasound in the 1-10 MHz range has also been used for tissue characterization for many years. Recently, the use of low-frequency (audio-range) vibration for medical diagnosis and the evaluation of tissue properties has attracted increased attention. It has been shown that such vibration can reveal important information about tissue mechanical properties that are related to tissue pathology. We use the radiation force of ultrasound to vibrate tissue in the low (kHz) frequency range and record the resulting acoustic response to produce images that are related to the stiffness of the tissue. This method has been tested on human breast, liver, heart valve, and arteries. Results show that small microcalcifications can be detected in the human breast, calcium buildups can be seen in arteries, and mass lesions can be detected in liver tissue. In these tests, the vibration frequency ranged from 5 to 50 kHz. Another application of the radiation force method is the study of solid structures through modal analysis. This method is used to induce vibration in bones and measure their resonance frequencies. These experimental results suggest that the radiation force method may be a clinically useful tool for the detection of pathology in soft tissue and for bone evaluation.

Keywords— Radiation force, Ultrasound, Vibro-acoustography, Imaging
I. INTRODUCTION

During the past few decades, the ultrasound pulse-echo technique has been widely used to image various types of organs and to identify pathologies in soft tissue. In some cases, however, the echo pattern and the morphological structure of the lesion are not specific enough for differential diagnosis or even lesion detection. This problem has prompted investigators to try other noninvasive methods to visualize tissue in terms of various tissue properties, including its viscoelastic characteristics. Recently, attention has turned to the low-frequency portion of the spectrum. It is speculated that the properties of the tissue at audio frequencies (a few hundred Hertz to a few tens of kHz) would add information that is not available from conventional ultrasound methods. To study an object by low-frequency vibration, one may use various methods of excitation, including inducing vibrations directly in the object. Recently, the use of the radiation force of ultrasound for inducing low-frequency vibration in biological tissues has been studied for a number of applications [1-5]. In this paper, we present the principles and applications of low-frequency vibration methods based on radiation force for medical imaging and the evaluation of bone.

II. METHODS, EXPERIMENTS AND RESULTS

A. Low-frequency vibration methods based on radiation force

Focused ultrasound produces a localized radiation force on the object. By modulating the intensity of the ultrasound beam, the resulting radiation force is made to oscillate at a desired frequency, normally in the kHz range. The object's vibration in response to this force, at each point, produces an acoustic field that is recorded by a hydrophone and used to determine the brightness of the corresponding point on the image. The main feature of this method is that the image is a representation of the viscoelastic properties of the object [1-5]. The method has been successfully used to image objects that include regions with high stiffness values, such as calcifications in a soft tissue background. Results show that small microcalcifications can be detected in the human breast, calcium buildups can be seen in arteries, and mass lesions can be detected in liver tissue. In these tests, the vibration frequency ranged from 5 to 50 kHz [6-10]. We recently developed a vibro-acoustography system for in vivo breast imaging. This system is integrated in a clinical stereotactic mammography machine (Fischer Imaging, Mammotest™ system).
The combined system is designed in such a way that it enables us to produce matching (from the same view angle) vibro-acoustography and mammography images of the human breast. The results of our in vivo breast VA study show the ability of VA to identify calcifications and lesions [11, 12].
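The modulation principle behind the method can be checked numerically. In the sketch below (our illustration; the frequencies are chosen for a compact example, not taken from the clinical hardware), the squared sum of two tones, a stand-in for the beam intensity that the radiation force tracks, contains a low-frequency line at the difference frequency:

```python
import numpy as np

# Two CW tones at slightly different frequencies (values illustrative,
# chosen low so the sketch stays small; real beams are in the MHz range).
fs = 2_000_000                       # sampling rate, Hz
n = 20_000                           # 10 ms of data
t = np.arange(n) / fs
f1, f2 = 300_000.0, 305_000.0        # difference frequency: 5 kHz
p = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# The radiation force tracks the beam intensity (~ p squared), so its
# spectrum contains a low-frequency line at f2 - f1 = 5 kHz.
intensity = p ** 2
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
bins = np.arange(len(spectrum))      # bin k corresponds to k * fs / n Hz
low = bins * fs / n < 50_000         # keep only the kHz region
peak_hz = bins[low][np.argmax(spectrum[low])] * fs / n
print(peak_hz)                       # -> 5000.0
```

This is why a MHz-range beam can shake tissue at a kHz-range frequency: the force follows the intensity envelope, not the carrier.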
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1021–1024, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1. X-ray and vibro-acoustography images of a breast tissue sample. (a) X-ray image of the breast tissue specimen. (b) Vibro-acoustography of the specimen. Microcalcifications are seen as bright spots in both images.

Fig. 2. (A) Breast tissue x-ray and (B) the related VA scan show clear images of a calcified artery, marked by arrows.
B. Tissue imaging

B.1. Detection of microcalcifications in breast tissues: Tissue experiments were conducted on post-surgical excised human breast tissue specimens. The specimens were cut into pieces of approximately 3×3×0.5 cm and imaged using a high-resolution specimen radiography machine. The radiographs were then read to identify the presence of microcalcifications. Positively identified samples were each mounted on a scanning bracket and placed in a water tank for vibro-acoustography. The scanning bracket is a frame with a thin latex sheet. Each sample was carefully glued (only at a few spots close to the sample edge) to the latex sheet. Low-frequency vibration was then applied with the ultrasound beam perpendicular to the latex sheet; hence, we obtained the images from the same view angle as the x-ray images. The confocal transducer used in the experiment had a focal distance of 70 mm and a focal depth of about 10 mm. The scanning bracket was placed about 70 mm from the transducer, such that the sample was in the focal region and within the focal depth of the transducer. Thus, the resulting image represents the information from the entire thickness of the sample. Image points were recorded at spatial increments of Δx = Δy = 0.2 mm in rows and columns. Each x-ray image was used as a reference for comparison with the corresponding vibration image. The experimental results on 74 breast tissue samples imaged by this method confirmed 78.4% of the microcalcifications previously detected by x-ray; the result of a breast tissue vibro-acoustography is shown in Figure 1 [6].

B.2. Detection of breast arterial calcifications: Our study on the detection of breast microcalcifications by low-frequency vibration has demonstrated that this imaging method is highly sensitive to even small pieces of calcification in the breast [6], suggesting that vibro-acoustography may be a suitable modality for the detection of BAC. These results led to our study on breast arterial calcification, which described the appearance of BAC by this method and explored the potential of this imaging modality for BAC detection [8]. Experiments were conducted on ten post-surgical excised human breast tissue samples prepared in the fashion described in the previous section. The appearance of the artery in the vibro-acoustography image was highly correlated with its distinctive x-ray appearance (Figure 2).

B.3. Imaging of arteries and heart valve tissues: Heart valve and carotid artery tissues have been scanned by vibro-acoustography to detect calcification. Photographic, x-ray and low-frequency vibration images of a calcified
Fig. 3. Photograph, x-ray and VA images of a calcified heart valve. Note the close resemblance of the VA image to the corresponding x-ray image of this heart valve tissue. The arrow points to a small piece of calcification, about 1 mm in diameter, that is seen in both the x-ray and vibro-acoustic images.
Fig. 4. VA scan of two carotid arteries. A calcified plaque is seen at the bifurcation of artery no. 2 on the right side. On the left, a normal carotid artery is shown for comparison. Reproduced with permission [1].
Fig. 5. VA scan of a liver tissue sample shows an excellent image of the tissue structure.
aortic valve are shown in Figure 3. A very small calcium deposit in the heart valve is visible in this figure. This is because calcium and soft tissue have distinctive characteristics with respect to both x-rays and mechanical vibrations [9]. The result of vibro-acoustography of a carotid artery is shown in Figure 4 and demonstrates the ability of VA to identify calcification [1].

B.4. Imaging soft tissues: Another potential application of low-frequency vibration is imaging soft tissue. This study was conducted on breast tissue samples and liver tissue [7, 11]. The study showed promising results in depicting tissue structure and detecting mass lesions. Out of 74 scanned samples, 45 (61%) had a good structure appearance and 29 (39%) a fair structure appearance. In liver tissue imaging by low-frequency vibration, the demonstration of tissue structure (Figure 5) and the detection of a mass lesion (Figure 6) were excellent.

B.5. In vivo breast imaging: In the in vivo study, the patient lies prone on the examination bed with one breast hanging down through a hole. The breast is sandwiched between the back panel (the x-ray detector) and a sliding compression panel that keeps the breast slightly compressed and fixed for mammography and/or vibro-acoustography scanning. The transducer is
Fig. 6. (A) X-ray of a liver tissue sample; the borders of the lesion can be seen. (B) VA scan of the same tissue at 41 kHz shows the tumor almost exactly as in the x-ray.
Fig. 7. (A) X-ray shows a round lesion with a "∼"-shaped biopsy clip in it. (B, C, D) VA scans at 1, 2 and 2.5 cm depth, respectively. (F, G) VA scans at 2 cm depth in reverberation images. The round lesion can be seen in all VA images, and the clip at 2.5 cm depth.
Fig. 8. (A) X-ray, (B) ultrasound, (C) VA at 1.5 cm depth. The x-ray shows two large lesions in the upper left corner. The two lesions can be seen at the same location in the VA images, though not very clearly.
moved away during mammography. Acoustic gel is applied to ensure proper acoustic coupling. Figure 7 belongs to a subject with a biopsy-proven fibroadenoma in her right breast, with a "∼"-shaped biopsy clip at the site of biopsy. X-ray and VA images in the coronal plane are shown. The image covers a 5×5 cm area at a depth of 2.5 cm from the skin. Tissue structures are seen with remarkable contrast. The image was acquired at the frequency Δf = 50 kHz; the scan time was about 7 minutes. The preliminary results shown in Figures 7 and 8 demonstrate the potential of vibro-acoustography for in vivo breast imaging and lesion identification.
Fig. 9. (Left) X-ray shows a calcification at the top. (Right) VA scan at 2 cm depth and 50 kHz shows the larger calcification at the top as a dark spot and a very small one at the center as a white dot; the latter is too small to be seen in the x-ray.
The result in Figure 9 demonstrates the potential of vibro-acoustography for in vivo breast imaging and the detection of small calcifications.

B.6. Bone evaluation: Bone structural integrity may be studied by a vibration method. In this method, a vibrating force is applied to the bone, and the resulting response is recorded and used to evaluate bone properties. Two continuous-wave ultrasound beams at slightly different frequencies are focused at the same spot on the object. The interference of the two beams produces a small radiation force on the object at the difference frequency Δf. This force vibrates the object, and the resulting motion is detected by a laser vibrometer and recorded. Experiments were performed on excised rat femurs. The experiments demonstrate that the resonance frequency is indicative of bone fracture and healing, and that the radiation force method can be used as a remote and noninvasive tool for monitoring bone fracture and healing [13].
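The modal-analysis step reduces to finding the spectral peak of the recorded motion. A synthetic sketch (ours; the 500 Hz resonance and the decay rate are illustrative, not measured rat-femur values):

```python
import numpy as np

# Hypothetical laser-vibrometer record: a decaying resonance at 500 Hz.
fs = 10_000                          # sampling rate, Hz
t = np.arange(10_000) / fs           # 1 s of data
f_res = 500.0
v = np.exp(-5.0 * t) * np.sin(2 * np.pi * f_res * t)

# Modal-analysis step: the resonance frequency is the spectral peak;
# a shift of this peak after fracture/healing is the proposed indicator.
spectrum = np.abs(np.fft.rfft(v))
peak_hz = np.argmax(spectrum) * fs / len(v)
print(peak_hz)                       # -> 500.0
```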
III. CONCLUSIONS The above experimental results suggest that the radiation force method may be a clinically useful tool for detection of pathology in soft tissue and for bone evaluation.
ACKNOWLEDGMENT The authors are grateful to the following individuals for their valuable work during the course of this study: Randall R. Kinnick for laboratory support and scanning tissues, Joyce Rahn for her help in scanning the patients, and Jennifer Milleken for secretarial assistance. This research was supported in part by Grant EB00535 from NIH and grant BCTR #0504550 from Susan G. Komen Breast Cancer Foundation.
REFERENCES
1. Fatemi M., Greenleaf J. F. (1998) Ultrasound stimulated vibro-acoustic spectroscopy. Science 280: 82-85.
2. Fatemi M., Greenleaf J. F. (1999) Vibro-acoustography: an imaging modality based on ultrasound stimulated acoustic emission. Proc. Natl. Acad. Sci. USA 96: 6603-6608.
3. Fatemi M., Greenleaf J. F. (2000) Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound. Phys. Med. Biol. 45: 1449-1464.
4. Fatemi M., Greenleaf J. F. (2000) Imaging and evaluating the elastic properties of biological tissues. BMUS Bulletin (The British Medical Ultrasound Society) 8(4): 16-18.
5. Fatemi M., Manduca A., Greenleaf J. F. (2003) Imaging elastic properties of biological tissues by low-frequency harmonic vibration. Proc. IEEE 91(10): 1503-1517.
6. Alizad A., Fatemi M., Wold L. E., Greenleaf J. F. (2004) Performance of vibro-acoustography in detecting microcalcifications in excised human breast tissue: a study on 74 breast tissue samples. IEEE Transactions on Medical Imaging 23(3): 307-312.
7. Fatemi M., Wold L. E., Alizad A., Greenleaf J. F. (2002) Vibro-acoustic tissue mammography. IEEE Transactions on Medical Imaging 21(1): 1-8.
8. Alizad A., Fatemi M., Whaley D. H., Greenleaf J. F. (2004) Application of vibro-acoustography for detection of calcified arteries in breast tissues. Journal of Ultrasound in Medicine 23: 267-273.
9. Alizad A., Fatemi M., Nishimura R. A., Kinnick R. R., Rambod E., Greenleaf J. F. (2002) Detection of calcium deposits on heart valve leaflets by vibro-acoustography: an in vitro study. Journal of the American Society of Echocardiography 15(11): 1391-1395.
10. Alizad A., Wold L. E., Greenleaf J. F., Fatemi M. (2004) Imaging mass lesions by vibro-acoustography: modeling and experiments. IEEE Transactions on Medical Imaging 23(9): 1087-1093.
11. Alizad A., Greenleaf J. F., Fatemi M. (2005) Potential applications of vibro-acoustography in breast imaging. Technol. Cancer Res. Treat. 4(2): 151-158 (invited paper).
12. Alizad A., Whaley D. H., Greenleaf J. F., Fatemi M. (2006) Critical issues in breast imaging by vibro-acoustography. Ultrasonics 44: e217-e220.
13. Characteristics of fracture and fracture repair of an excised rat femur. Journal of Biomechanical Engineering 128(3): 300-308.
Author: Azra Alizad
Institute: Mayo Clinic College of Medicine
City: Rochester
Country: USA
Email: [email protected]
Cataract Surgery Simulator for Medical Education R. Barea1, L. Boquete1, J. F. Pérez1, M. A. Dapena2, P. Ramos3, M. A. Hidalgo4 1
Department of Electronics. University of Alcala. 28871 Alcalá de Henares. Spain. 2 Department of Surgery. University of Alcala. 28871 Alcalá de Henares. Spain. 3 Department of Mathematics. University of Alcala. 28871 Alcalá de Henares. Spain. 4 Department of Physics. University of Alcala. 28871 Alcalá de Henares. Spain.

Abstract— This work presents the initial results obtained in the development of a cataract surgery simulator for medical education and training. A 3D human eye model has been developed with haptic feedback, together with the surgical instruments needed to carry out cataract surgery. The system provides students and surgeons with an interactive learning tool that can be used for studying the anatomy and physiology of the eye, for diagnostic training, and for planning ocular surgery.

Keywords— Virtual reality, haptic feedback, surgical simulator.
I. INTRODUCTION

Continuing Medical Education (CME) consists of a group of educational activities that allow professionals providing services in the health sector to maintain, develop and improve their basic medical knowledge and the clinical practice they require. Surgical training, in turn, consists of the acquisition of knowledge supplemented by practical observation during surgery and, later, by the performance of surgical procedures under supervision. Similarly, surgeons also need training to improve or maintain their skills in non-routine procedures or operations.

One of the areas in which virtual reality (VR) has made its most important contributions is training and medical education. This technology allows users to interact with three-dimensional (3D) environments and with objects generated on-line. Surgical simulators that combine visual information (3D graphics) with tactile information (force feedback) can therefore be a great tool for training and medical education. These systems can provide students and surgeons with an interactive learning tool that includes a precise representation of human anatomy and physiology, and that can be used for the study of anatomy, for diagnostic training, or for the planning of operations. A VR simulator can reinforce hand-eye coordination, help with the mental formation of the 3D model, simulate the behavior of different pathologies, anticipate different events, etc. (Neumann et al., 1998). It also makes it possible to measure and quantify the performance of its users, to quantify the increase in their level of dexterity and the errors made, and to feed this information back on-line so that the errors can be corrected. Finally, this technology allows different patient models to be built and simulated, in such a way that models adapted to each patient can be obtained (Li et al., 2002).

In recent years the need for surgical simulators has grown considerably, since they avoid having to train on plastic models, on patients or even on cadavers, and they allow diverse configurations and let an operation be rehearsed as many times as desired. With these systems it is possible to overcome the initial lack of experience (avoiding risks or harm to patients) and to plan non-routine or complex techniques and procedures. Simulator architectures can be seen in (Niemeyer et al., 2004) and (Panchaphongsaphak et al., 2006). Among the diverse applications, it is worth mentioning neurosurgery (Wang et al., 2006), catheter insertion (Zorcolo et al., 2000), lumbar puncture (Barea et al., 2005), acupuncture (Heng et al., 2006), cricothyroidotomy (Liu et al., 2005), cataract surgery (El-Far et al., 2005) (Agus et al., 2006), posterior capsulotomy (Webster et al., 2004), etc.

In this work, the preliminary results obtained in the modeling of the human eye and in the development of a surgical simulator of the cataract operation are presented. The developed system allows the user to study the process of the cataract operation through the study of the anatomy and physiology of the eye, and to simulate the various stages of the operation and the instruments needed to carry it out. It is also possible to program different incidents that can arise during a real operation of this kind.

II. CATARACT SURGERY - PROCEDURE

A cataract is the loss of transparency of the crystalline lens. The crystalline lens is a transparent lens located behind the pupil that serves to focus objects sharply.
Owing to a series of circumstances, to illness or, more frequently, to the passage of the years, the crystalline lens can gradually lose its natural transparency and become an opaque lens. The treatment of cataracts is fundamentally surgical. The cataract operation consists of the extraction of the part of the crystalline lens that has become opacified and its substitu-
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1038–1042, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
tion for an artificial lens that is placed in the same position as the original crystalline lens (an intraocular lens), restoring the vision that had been lost as a consequence of the cataract. One of the most modern techniques for operating on cataracts is phacoemulsification. This procedure allows the extraction of the crystalline lens through an incision of only 3 mm. Phacoemulsification ("phaco") usually uses an ultrasound probe or a laser to mechanically fragment the crystalline lens and then aspirate it. Finally, an intraocular lens is implanted to replace the crystalline lens. In most cases no suture of the incision is required, since it is small enough to seal by itself. Recently, a novel technology has been developed for cataract extraction, called AquaLase™, which uses pulses of water to destroy the opacified crystalline lens (Alcon, 2006). On the other hand, it has been observed that cataract operations can cause changes in the curvature of the cornea and therefore cause astigmatism (Merriam et al., 2003). In this respect, corneal deformations (mainly changes in curvature) are also being studied as a function of the size and location of the incisions made in surgery, the force exerted by the muscles of the eye, and the intraocular pressure.

III. SYSTEM ARCHITECTURE

This section describes the hardware and software architecture of the cataract surgery simulator.

A. Hardware

The developed system is based on an immersive 3D virtual reality system with tactile feedback, the Reachin Display 2A (Reachin, 2006). This system has a CRT monitor that generates 3D virtual images of the patient or virtual organ — in this case, the human eye and its structures. The double image generated by the monitor is reflected in a dichroic mirror. The user of the system perceives the three-dimensional image by means of CrystalEyes shutter glasses.
The tactile sensation is perceived through a Phantom device (Sensable, 2006) located inside the visual workspace. Therefore, a user working in this environment can feel and see virtual objects. The information is processed on a dual-core 3 GHz Pentium IV computer with 2 GB of memory and a 3DLabs Wildcat graphics card. This system achieves a visual refresh rate above 20 Hz and a haptic rate of 1 kHz. Figure 1 shows an image of the system hardware.
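The two update rates can be sketched as one fast loop that redraws every Nth tick (an assumed structure for illustration; the Reachin API schedules this internally):

```python
# Minimal sketch of the two update rates in the simulator.
HAPTIC_HZ = 1000      # force feedback must run near 1 kHz
VISUAL_HZ = 20        # graphics only needs ~20 Hz or better

def run(seconds, on_haptic, on_render):
    """Drive the fast haptic callback and the slower render callback."""
    render_every = HAPTIC_HZ // VISUAL_HZ        # 50 haptic ticks/frame
    for tick in range(int(seconds * HAPTIC_HZ)):
        on_haptic(tick)                          # update forces at 1 kHz
        if tick % render_every == 0:
            on_render(tick)                      # redraw at 20 Hz

counts = {"haptic": 0, "render": 0}
run(1.0,
    lambda t: counts.__setitem__("haptic", counts["haptic"] + 1),
    lambda t: counts.__setitem__("render", counts["render"] + 1))
print(counts)    # -> {'haptic': 1000, 'render': 20}
```

The split matters because haptic stability requires roughly 1 kHz force updates, while the eye is satisfied with a much slower visual refresh.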
hardware described previously. In this way, 3D models can be built and then exported to the simulator. The programming language used is the Virtual Reality Modeling Language (VRML) for the models, and Python to configure certain actions. The Reachin API libraries allow physical properties such as stiffness, elasticity and texture to be added to the objects designed in VRML. It is also possible to design the instruments used in the operation; in this way, the pointer can adopt the form of the instrument used in each step of the procedure. •
Model of the eye
To build a cataract surgery simulator, a model of the eye and of the surgical instruments that is as precise and realistic as possible is required. To this end, we started from the commercial model "exchange3D" (Exchange3d, 2006) and modified it to obtain as realistic a model as possible (details, shape, size, color, texture, etc.). The haptic attributes of each of the structures that form the ocular globe have been implemented through a trial-and-error process, guided by surgeons with wide clinical experience. Figure 2 shows the eye model visualized as meshes in 3D Studio Max, and Figure 3 shows the model once textures are added. •
Surgical instruments
The surgical instruments needed for the operation have been modeled with 3D Studio Max. The design of the instruments (shape, size, texture and operation) was carried out in collaboration with the Faculty of Medicine of the University of Alcalá. Figure 4 shows the developed instruments: a syringe to apply anesthesia, a keratome to cut the cornea, and a phacoemulsification tool to break up and extract the crystalline lens.
B. Software

The software used is the Reachin API. This API allows visual and tactile rendering to be carried out on the
Fig. 1 Haptic virtual reality system
a) Needle
Fig. 2 Eye model
b) Keratome
c) Phacoemulsification tool
Fig. 4 Virtual surgical instruments

IV. RESULTS

The results available so far are being evaluated by practicing surgeons with wide experience; once the implemented platform is fully functional, several learning levels will be programmed, and the system is intended to be validated through tests.

Fig. 3 Developed eye model
A tool has been designed to study the process of the cataract operation through the study of the anatomy and physiology of the eye, and to simulate the various stages of the operation and the instruments needed to carry it out. In this way the user can switch among the different options offered by the application. Figure 5 shows the main menu, where one can select between an anatomical study of the eye and the simulation of the cataract operation. If the anatomical model is selected, the different parts of the eye can be visualized and their behaviour examined. Figure 6 shows the crystalline lens, but the cornea, the iris, the sclera, the choroid, the retina and the optic nerve can also be studied. If, on the contrary, the cataract operation is selected, the complete model of the eye is loaded and the user is guided through the procedure: the instruments needed in each step, and how the cuts in the cornea, the extraction of the crystalline lens and the later placement of the intraocular lens should be carried out. Figure 7 shows an image of this process.
Fig. 5 Main menu
V. CONCLUSIONS

The first results of a virtual simulator of the human eye that allows the cataract operation to be carried out have been presented. The developed system provides students and surgeons with an interactive learning tool that includes a precise representation of the anatomy and physiology of the human eye, and that can be used for the study of anatomy and for the training and planning of the cataract operation. At this moment the 3D designs are practically final, with their tactile properties included. The response of the system can be considered real-time. Future work includes the possibility of automatically generating virtual models from the images of a given patient, and the possibility of giving access to the training system over the Internet, in this case using a low-cost haptic device. The possibility of adapting the program to predict the effect of laser operations on the ocular system is also being considered.
Fig. 6 Anatomic model
Fig. 7 Cataract surgery procedure
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
R. Barea, L. Boquete, J. F. Pérez, M. A. Dapena, P. Ramos, M. A. Hidalgo
ACKNOWLEDGMENT The authors would like to express their gratitude to the Fundacion Medica Mutua Madrileña for their support through the project “3D Virtual Eye Model with Haptics Feedback for Training and Medical Education in Ocular Surgery” (MMA-2006-002) and to the Comunidad de Madrid for their support through the GATARVISA Net (P-DPI000235-0505).
Development of Alcohol craving induction and measurement system using virtual reality: Craving characteristics to social situation
S.W. Cho1, J.H. Ku1, J.S. Park1, K.W. Han1, Y.K. Choi2, K. NamKoong2, Y.C. Jung2, J.J. Kim2, I.Y. Kim1, and S.I. Kim1
1 Department of Biomedical Engineering, Hanyang University, Seoul, Korea
2 Institute of Behavioral Science in Medicine, Yonsei University Severance Mental Health Hospital, GyeongGi-Do, Korea
Abstract— Alcoholism is a disease that affects the parts of the brain that control emotion, decisions, and behavior. People sometimes face situations in their social life in which they are expected to drink alcohol. Alcoholics need cognitive behavior therapy to develop restraint against alcohol craving. Alcohol craving can be triggered by exposure to objects, environments, or social pressure situations related to alcohol, but conventional cognitive behavior therapy has the defect that it does not provide social pressure situations properly. Virtual reality (VR) is a human-computer interface in which the computer presents an immersive and interactive three-dimensional virtual environment resembling real space. In this study, we developed an alcohol craving induction system using virtual reality that provides social situations in which an avatar asks the participant to drink together. Nine males and one female (aged 21 to 27 years) with no history of alcohol-related disease were recruited for this experiment. In the results, in situations without social pressure, more alcohol craving was induced in the situation with alcohol than in the situation without alcohol; our alcohol craving induction system thus reproduces the result of conventional studies. In situations with social pressure, induced alcohol craving did not differ between the situation with alcohol and the situation without alcohol. In both the situations with and without alcohol, there were significant differences between the situation with an avatar (social pressure) and without an avatar (situation with alcohol, p=0.01; situation without alcohol, p=0.001). In cognitive behavior therapy, a social pressure situation is needed for alcohol craving induction, because people experience more alcohol craving in a social pressure situation, which is stressful and negative; this could make drinking refusal training more effective. In particular, the alcohol craving induction system developed in this study using VR can be used in alcohol refusal training for alcoholism therapy.
Keywords— virtual reality, alcohol craving, social pressure situation, avatar
I. INTRODUCTION Alcoholism is a disease that affects the parts of the brain that control emotion, decisions, and behavior [1]. Alcoholics cannot control their drinking. In conventional studies of alcohol craving induction factors, alcohol craving is induced in alcoholics as an unconditional reaction to alcohol, and a lack of refusal ability in social pressure situations can be one of the causes of alcohol abuse [2]. Alcoholics need cognitive behavior therapy to develop restraint against alcohol craving. In cognitive behavior therapy it is very important to evoke the patient's craving by exposing them to alcohol-related stimuli, and then let patients recognize and cope with the state of their craving. Alcohol craving can be triggered by exposure to objects, environments, or social pressure situations connected with alcohol [3]. One factor of alcohol craving induction is the social pressure situation: more alcohol craving is induced in a social pressure situation, which is more stressful and negative than other situations [4]. Therefore, cognitive behavior therapy in alcoholism treatment should use social pressure situations or objects connected with alcohol for alcohol craving induction. In conventional studies of alcohol craving induction factors, pictures containing objects (liquor bottle and glass), a pub, or a social pressure situation were used as stimuli. These studies have the defect that it is difficult to measure alcohol craving while people are in a social pressure situation. In cognitive behavior therapy, slides, videotapes, pictures, or unrealistic sets have been used as stimuli to induce alcohol craving [5]; these also have the defect that people have difficulty experiencing a social pressure situation for alcohol craving induction. Virtual reality techniques can be used to provide a social pressure situation. Virtual reality (VR) is a technique that can provide an immersive three-dimensional environment and dynamic social interaction like the real world [6, 7], and it has potential for neuropsychological assessment and cognitive rehabilitation treatment [8]. In virtual reality, an avatar can speak and behave like a human, and an avatar can continuously express a social pressure situation to an alcoholic; we therefore hypothesized that a social pressure situation can be created using virtual reality techniques, making it possible to induce alcohol craving with social pressure. In this study, we developed an alcohol craving induction system using virtual reality to provide social pressure situations and measured the induced alcohol craving. An experiment was performed to investigate whether the VR system and avatar could evoke craving for alcohol.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1034–1037, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. METHOD & EXPERIMENT
A. VR System
360° background panorama pictures were obtained with a Point Grey camera for the virtual reality environment. The obtained images were mapped onto the surface of a sphere that composes the virtual reality space, with the viewpoint located at the center of the sphere. The locations of the avatar and objects are arranged on the background image of the virtual reality contents so as to express the social pressure situation for alcohol craving induction.
Fig. 1 Production procedure of the VR contents, which combine a virtual avatar with a real-space picture.
B. VR Contents
The virtual reality contents are composed as shown in Figure 2. To investigate the craving induced by social pressure, a 2 × 2 (alcohol-related place × social pressure) experimental paradigm was used. The environments with alcohol are a barbecue restaurant and a pub; the environments without alcohol are an office and a street. In total, eight contents were composed: the four environments with an avatar and the four environments without an avatar.
C. Participants
Nine males and one female (aged 21 to 27 years) with no history of alcohol-related disease were recruited for this experiment. Participants completed a sociodemographic questionnaire before the experiment. According to this questionnaire, the participants' average drinking capacity was one bottle of soju (a Korean liquor around four times stronger than beer) about twice a week.
D. Procedure
Figure 3 shows the experimental procedure. 1. Alcohol craving is measured before performing a virtual task. 2. The participant performs a virtual reality task, after an explanation of the task; the tasks are presented in random order. 3. After experiencing the virtual reality task, alcohol craving is measured again. 4. Participants then relax long enough for the alcohol craving to subside; when a participant feels the craving has decreased, the next virtual reality task is performed. 5. The experiment continues until all eight virtual reality tasks have been performed.
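The randomized task sequence and the pre/post craving measurement described in this section can be sketched as a short loop. Here `measure_craving` and `run_vr_task` are hypothetical placeholders for the measurement panel and the VR task, not functions of the actual system:

```python
import random

# The eight VR contents: four environments (two alcohol-related,
# two not), each presented with and without an avatar.
CONDITIONS = [
    (place, avatar)
    for place in ("restaurant", "pub", "office", "street")
    for avatar in (True, False)
]

def run_experiment(measure_craving, run_vr_task, rng=None):
    """Present the eight tasks in random order, measuring craving
    before and after each one (steps 1-5 of the procedure)."""
    rng = rng or random.Random(0)
    order = CONDITIONS[:]
    rng.shuffle(order)                      # tasks come in random order
    results = []
    for place, avatar in order:
        before = measure_craving()          # step 1: pre-task craving
        run_vr_task(place, avatar)          # step 2: perform the VR task
        after = measure_craving()           # step 3: post-task craving
        results.append((place, avatar, after - before))
        # step 4 (relaxation until craving subsides) happens between
        # iterations and is not modelled here
    return results                          # step 5: all eight tasks done
```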
Fig. 2 Situations of the virtual reality tasks: situations with/without alcohol and with/without an avatar (social pressure).
Fig. 3 Experimental procedure using the developed alcohol craving induction system based on virtual reality.
Fig. 4 Alcohol craving measurement panel, which asks "How much do you want to drink?". Participants indicate the amount of alcohol craving with a mouse click.
Alcohol craving is measured to observe alcohol craving induction. Craving is measured on the measurement panel with the mouse: participants click on the red bar at the point corresponding to their amount of alcohol craving. On the panel, the minimum value (left side) is 0%, meaning "I never want to drink.", and the maximum value (right side) is 100%, meaning "I unbearably want to drink.". The alcohol craving induced by the VR experience is calculated from the amounts of alcohol craving measured before and after performing the VR task.
III. RESULTS
Figure 5 shows the alcohol craving induced after the virtual reality tasks in each of the four situations (with alcohol and avatar, with alcohol and without avatar, without alcohol and with avatar, and without alcohol and avatar). In the situations without an avatar (without social pressure), more alcohol craving was induced in the situation with alcohol than in the situation without alcohol (p=0.021). In the situations with an avatar (social pressure), induced alcohol craving did not differ between the situation with alcohol and the situation without alcohol (p=0.908). In the situations with alcohol, more alcohol craving was induced in the situation with an avatar (with social pressure) than in the situation without an avatar (p=0.01). Also, in the situations without alcohol, more alcohol craving was induced in the situation with an avatar (with social pressure) than in the situation without an avatar (p=0.001).
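The induced-craving computation (post-task minus pre-task craving on the 0-100% panel, grouped by the 2 × 2 condition) can be sketched as below; the numbers shown are illustrative only, not the study's data:

```python
from statistics import mean

def induced_craving_by_condition(measurements):
    """Mean induced craving (after - before, in % of the 0-100 panel)
    for each (alcohol present, avatar present) condition."""
    cells = {}
    for alcohol, avatar, before, after in measurements:
        cells.setdefault((alcohol, avatar), []).append(after - before)
    return {cond: mean(vals) for cond, vals in cells.items()}

# Illustrative measurements: (alcohol?, avatar?, before %, after %)
data = [
    (True,  False, 10, 25),   # alcohol, no social pressure
    (False, False, 10, 12),   # no alcohol, no social pressure
    (True,  True,  10, 40),   # alcohol + social pressure
    (False, True,  10, 38),   # no alcohol + social pressure
]
print(induced_craving_by_condition(data))
```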
Fig. 5 Induced alcohol craving in the four situations (with alcohol and avatar, with alcohol and without avatar, without alcohol and with avatar, and without alcohol and avatar).
IV. DISCUSSION & CONCLUSIONS In this study, we developed an alcohol craving induction system using virtual reality that presents a social pressure situation using an avatar. In the results, more alcohol craving was induced in the situation with alcohol than in the situation without alcohol when no social pressure was present (p=0.021). This is consistent with studies showing that alcohol-related pictures can evoke craving [9]. In the comparison of induced alcohol craving in the situations with an avatar (with social pressure) (p=0.908), there was no significant difference between the situations with and without alcohol. This can be explained by social pressure having a much stronger effect on alcohol craving induction than the alcohol stimulus itself, and by the avatar's ability to present a social pressure situation. According to these results, using a social pressure situation may be more important than directly using an alcohol stimulus to induce alcohol craving, because a social pressure situation can produce more stress or negative emotion than a simple alcohol stimulus. Considering the results, providing social pressure is more important for evoking craving; this provides a rationale for using VR techniques, and we found that VR can induce craving by providing an alcohol-related environment with social pressure. Alcohol craving induction is possible using the developed virtual reality system; therefore, alcohol craving induction using virtual reality could be used as a tool for drinking refusal training for alcoholics.
ACKNOWLEDGMENT This work was supported by grant No. (R01-2006-00010533-0) from the Basic Research Program of the Korea Science & Engineering Foundation.
REFERENCES
1. Blondell RD (2005) Information from your family doctor. Alcoholism—what should I know about it? Am Fam Physician 71:509-510
2. Monti P, Rohsenow D, Abrams D, Zwick W, Binkoff J, Munroe S, Fingeret A, Nirenberg T, Liepman M, Pedraza M, et al. (1993) Development of a behavior analytically derived alcohol-specific role-play assessment instrument. J Stud Alcohol 54:710-721
3. Litt M, Cooney N (1999) Inducing craving for alcohol in the laboratory. Alcohol Res Health 23:174-178
4. Grusser SM, Morsen CP, Flor H (2006) Alcohol craving in problem and occasional alcohol drinkers. Alcohol & Alcoholism 41:421-425
5. Janghan L, Jeonghun K, Kwanguk K, Byoungnyun K, Inyoung K, Byung-Hwan Y, Seokhyeon K, Brenda K. Wiederhold, Mark D. Wiederhold, Dong-Woo P, Youngsik L, Suni K (2003) Experimental application of virtual reality for nicotine craving through cue exposure. CyberPsychology & Behavior 6:275-280
6. Denise R (2002) Virtual Reality and the Person-Environment Experience. CyberPsychology & Behavior 5:559-564
7. Rodney L. Myers, Teresa A. Bierig (2000) Virtual Reality and Left Hemineglect: A Technology for Assessment and Therapy. CyberPsychology & Behavior 3:465-468
8. Jeonghun K, Wongeun C, Jae-Jin K, Avi P, Brenda K. Wiederhold, Mark D. Wiederhold, Inyoung K, Janghan L, SinI K (2003) A virtual environment for investigating schizophrenic patients' characteristics: assessment of cognitive and navigation ability. CyberPsychology & Behavior 6:397-404
9. Lee E, Namkoong K, Lee C, An S, Lee B (2006) Differences of Photographs Inducing Craving between Alcoholics and Non-alcoholics. Yonsei Medical Journal 47:491-497
Author: Jeonghun Ku, Ph.D.
Institute: Dept. of Biomedical Engineering, College of Medicine
City: Seoul
Country: Korea
Email: [email protected]
Development of Knee Control Training System Using Virtual Reality for Hemiplegic Patients and Feasibility Experiment with Normal Participants
J.S. Park1, J.H. Ku1, K.W. Han1, S.W. Cho1, D.Y. Kim2, I.Y. Kim1, and S.I. Kim1
1 Department of Biomedical Engineering, Hanyang University, Seoul, Korea
2 Department of Physical Medicine and Rehabilitation, Yonsei Medical School, Seoul, Korea
Abstract—Patients with hemiplegia have an asymmetric gait due to hemiplegic paralysis. Hip, knee, and ankle joint angles are measured for suitable gait training for hemiplegic gait disorder; the measured angles are used to grasp the difference between the normal and abnormal sides. The knee joint in particular is more important than the other joints for hemiplegic patients. We therefore developed a marker-based knee angle measurement system, able to measure the knee joint angle using a PC and a PC camera, and integrated it into a VR rehabilitation training system. The VR tasks were constructed around crossing stepping-stones to practice knee flexion and extension during walking. To validate the feasibility of the VR system, we performed an experiment using a treadmill at a speed of 2.0 km/h and various target angles (40, 50, and 70 degrees). The purpose of this study was to ascertain whether normal participants could succeed at the task at the usual target angle (50 degrees) as well as at unusual target angles (40 and 70 degrees). Four healthy males (aged 24 to 28 years) were recruited for this experiment, and we measured the average number of trials needed for success in each period; the training period was divided into a first stage, a middle stage, and a last stage for the analysis. In the results, the number of trials needed to succeed at each task showed a decreasing tendency at 50 and 70 degrees. The questionnaire results indicate a positive response to performing the knee control task using VR. In conclusion, we ascertained that the VR system helped gradually increase the participants' knee joint control ability at 50 and 70 degrees. We also expect that the developed VR rehabilitation training for knee control could be used for practical gait training for patients with hemiplegia, if they respond as the normal participants did. Keywords— gait training, virtual reality, hemiplegic gait disorder, knee control, rehabilitation
I. INTRODUCTION Stroke is the third leading cause of death after heart disease and cancer [1, 2], and its ratio is gradually increasing as average life expectancy increases. Ten percent of patients with stroke recover spontaneously, but 80 percent require rehabilitation [3]. In particular, 50% of stroke survivors exhibit some hemiparesis and require rehabilitation for the motor deficits [4]. Patients with stroke have gait disorder and frequently fall because of a lack of stability, which can cause fractures of the wrist, pelvis, lower limb, and other body parts. Moreover, if this condition persists for a long time, patients with stroke can develop a "Fear of Falling Gait" phobia, a fear of walking [5]. Therefore, rehabilitation training is required to improve stability and to prevent secondary injury caused by gait disorder. There are various causes of gait disorder, such as physical impairment, deformity, amputation, and neurological impairments [6], and its symptoms appear under various conditions [7]. Patients with stroke therefore perform the suitable training among force exercise, reduction of spasticity, gait symmetry, utilization of equilibrium reflexes, stepping automation, endurance training, repetition of rhythmic movements, etc. [8]. In particular, the gait disorder of hemiplegic patients shows an asymmetric gait due to hemiplegic paralysis [9]: one leg is stiff and is swung out and around because of the lack of ability to control the joint angles of the abnormal side [7]. Training of joint control ability is therefore required in order to walk normally. Training control of the knee angle is especially important, because the knee joint links the hip joint and the ankle joint. In this study, we developed a marker-based knee angle measurement system using a PC and a PC camera for applying the knee angle to a rehabilitation system. We also used VR technology to present visual feedback and to apply real-time interaction using the measured result; real-time interaction is necessary to distinguish between success and failure of the task. The developed VR rehabilitation training system is composed of a stepping-stone crossing task, which is appropriate for the training target of patients with hemiplegia. We also performed a feasibility test of whether the developed rehabilitation training system can be applied to practical rehabilitation training.
II. MATERIALS AND METHOD
A. System composition
The marker-based knee angle measurement system for knee control training was composed of a Personal Computer (PC), an IEEE1394 camera, and an IEEE1394a interface card.
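The paper does not spell out how the knee angle is computed from the markers; a common marker-based approach is to take the angle between the thigh and shank segment directions in image coordinates. A minimal sketch under that assumption (using three joint points rather than the paper's two knee markers):

```python
import math

def knee_flexion_angle(hip, knee, ankle):
    """Knee flexion angle in degrees from 2-D image coordinates:
    0 degrees for a fully extended leg, computed from the angle
    between the thigh vector (knee->hip) and shank vector (knee->ankle)."""
    tx, ty = hip[0] - knee[0], hip[1] - knee[1]
    sx, sy = ankle[0] - knee[0], ankle[1] - knee[1]
    cos_a = (tx * sx + ty * sy) / (math.hypot(tx, ty) * math.hypot(sx, sy))
    cos_a = max(-1.0, min(1.0, cos_a))   # guard against rounding errors
    # 180 degrees between the two segments means a straight leg
    return 180.0 - math.degrees(math.acos(cos_a))
```

For a straight leg (hip, knee and ankle collinear) the function returns 0 degrees; a right-angle bend returns 90 degrees.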
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1030–1033, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1 Hardware composition of the marker-based system
We created two trees, a background, and three stepping-stones using A6 Game Studio for the VR rehabilitation training system. The stones were placed at different distances for each target angle, to give more effective visual feedback: the distance between two stepping-stones is short when a low target angle is required and long when a high target angle is required. The developed VR rehabilitation training system is composed of a simple task that requires stepping down onto a stone by flexion and extension, like a walking motion. A red-cross image in the VR moves according to the knee angle, which is measured in real time. Participants can cross over to the next stepping-stone when they succeed in maintaining the target angle for 0.17 seconds. If they reach the target knee angle, the red-cross image changes into a foot image on the stepping-stone. If they then maintain the target angle for 0.17 seconds on the stone, the stepping-stone color changes to yellow. Finally, if they pull the stepping-stone to the blue line, they succeed in making a step.
Fig. 2 Display of stepping-stones: target angle 20 degrees (top left), 40 degrees (top right), 50 degrees (bottom left), 80 degrees (bottom right)
B. Experiment
Four normal male subjects (aged 24 to 28 years) participated in the feasibility experiment. They performed the task at the same treadmill speed (2.0 km/h) and at various target angles (40, 50, and 70 degrees). The speed was established by a rehabilitation specialist; it is the speed usually used for treadmill rehabilitation training with patients with hemiplegia. Each participant performed the task three times in total.
Fig. 3 VR task composition: the task process sequence runs clockwise from the upper left corner.
C. Factor extraction experiment
To apply a suitable factor to the VR task for rehabilitation training, we needed to know the knee angle of normal subjects at 2 km/h, the speed used with patients with hemiplegia. We measured the knee angle because our concern was knee control. For this measurement, five normal males (aged 24 to 29 years) participated; each walked approximately 10 gait cycles. The mean of the maximum knee angle at 2 km/h was 51.85 (SD=5.66). The participants never looked at the computer screen and only walked.
Fig. 4 Environment of experiment: extension state (left), flexion state (right)
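The task's success condition (holding the target angle continuously for 0.17 seconds) amounts to a dwell-time check on the stream of measured angles. A minimal sketch follows; the ±5-degree tolerance is an assumed parameter, not one given in the paper:

```python
def held_target(samples, target, tol=5.0, hold=0.17):
    """True if the knee angle stays within `tol` degrees of `target`
    for at least `hold` seconds of consecutive samples.
    `samples` is an iterable of (time_sec, angle_deg) pairs."""
    start = None                      # time the angle entered the window
    for t, angle in samples:
        if abs(angle - target) <= tol:
            if start is None:
                start = t
            if t - start >= hold:
                return True           # held long enough: the step succeeds
        else:
            start = None              # left the window: dwell timer resets
    return False
```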
Table 1 System Feasibility Questionnaire
Item No.  Question                                                                            Keyword
No. 1     It is difficult to control the knee angle at first after starting to walk.          control
No. 2     Through repeated performance, I could readjust the knee angle to the target angle.  readjust
No. 3     I felt that the VR task was difficult.                                              task
No. 4     I thought that the speed of the treadmill was fast.                                 speed

Table 2 Mean and standard deviation of the System Feasibility Questionnaire
Target angle  Control MEAN(SD)  Readjust MEAN(SD)  Task MEAN(SD)  Speed MEAN(SD)
40 degrees    3.31(1.37)        3.00(1.15)         2.75(1.50)     2.00(1.41)
50 degrees    2.75(0.96)        3.75(0.50)         2.00(0.81)     1.75(0.50)
70 degrees    4.50(0.58)        3.50(1.00)         3.25(1.50)     2.25(1.26)
The experiment procedure was as follows:
• Attach two markers to the left knee of the participant.
• Input information about the participant, the region of measurement, the movement mode, the target angle, and the required number of successes.
• Perform a practice run.
• Perform the practical task: the success condition is that the participant maintains the target angle for 0.17 seconds; the failure condition is that they do not.
• Repeat until the number of successes reaches 30.
We measured the number of trials until participants succeeded at each stepping-stone. We divided each task into three periods: the first stage, the middle stage, and the last stage. After performing the VR task, participants were asked to answer the System Feasibility Questionnaire. The questionnaire was composed of four items, each rated on a five-point scale from 1 to 5 with the responses "Strongly No," "No," "Maybe," "Yes," and "Strongly Yes."
III. RESULTS
In this experiment, we ascertained the feasibility of the developed VR rehabilitation training system for knee control ability.
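The per-stage analysis described above (mean number of trials over stepping-stones 1-10, 11-20, and 21-30) can be sketched as:

```python
from statistics import mean

def stage_means(trials_per_stone):
    """Mean number of trials per stage of a 30-stone task:
    first stage = stones 1-10, middle = 11-20, last = 21-30."""
    assert len(trials_per_stone) == 30
    return {
        "first":  mean(trials_per_stone[0:10]),
        "middle": mean(trials_per_stone[10:20]),
        "last":   mean(trials_per_stone[20:30]),
    }
```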
Fig. 5 The average number of trials for each period
This result shows the number of trials needed to succeed in the first stage (first to tenth stepping-stone), the middle stage (eleventh to twentieth), and the last stage (twenty-first to thirtieth) of each task. In Figure 5, the trial number at 50 degrees, which is regarded as the normal target angle, is 3.48 at the first stage and 1.83 at the last stage, a decrease of 1.65. The trial number at 70 degrees, which is regarded as an unusual target angle, is 5.00 at the first stage and 2.80 at the last stage, a decrease of 2.20. On the contrary, the trial number at 40 degrees, also regarded as an unusual target angle, is 2.45 at the first stage and 4.68 at the last stage, an increase of 2.23. In the questionnaire results, the mean score at 70 degrees is the highest and at 50 degrees the lowest for the first item; the mean score at 50 degrees is the highest and at 40 degrees the lowest for the second item; the mean score at 70 degrees is the highest and at 50 degrees the lowest for the third item; and the mean score at 70 degrees is the highest and at 50 degrees the lowest for the fourth item.
IV. DISCUSSION AND CONCLUSION
We developed a knee angle measurement system using two markers, and a VR rehabilitation training system for knee control ability using the developed measurement system and VR technology. Before the system feasibility experiment, we needed to know the suitable normal knee angle to apply to the VR rehabilitation training system, so we measured the knee angle of normal subjects through repeated experiments. From this experiment we know that the maximum knee angle at 2 km/h is 51.85 degrees, which is similar to the result of Zoltán Bejek's study (MEAN=52.8 degrees, SD=7.8) [10]. In other words, we can assume that a 50-degree target angle is normal; this also agrees with the result of the first questionnaire item. From the number of trials needed to succeed at each task, we know that the trial number at 50 and 70 degrees decreased. We could think that the ability of knee control is
increased by the VR knee control training. On the other hand, the trial number at 40 degrees increased: there was no training effect of the VR knee control training at this angle. We can find the reason for this result in the system feasibility questionnaire. The score of the second item, which asks about the effectiveness of the knee control training, was 3.0 points at 40 degrees, the lowest among the target angles; a score of 3.0 on this item means that the effect of the knee control training was flat. We could therefore expect that knee control ability would stay at its present state or decrease after the VR task at 40 degrees. The purpose of rehabilitation training is to increase motor performance; in knee control training, the target angle should therefore be set at the present ability or above, which is why we were interested in the 50- and 70-degree target angles. A limitation of this study is that the experiment was performed with normal participants. This matters for applying the developed system to practical rehabilitation training, because the system will be used by patients with hemiplegia in hospital. Even though normal participants reported that the developed VR rehabilitation system for knee control helped increase their task performance, they do not represent patients with hemiplegia. In spite of this limitation, if patients with hemiplegia respond as the normal participants did, we expect that the developed VR rehabilitation training for knee control can be used in practical gait training for their rehabilitation. We will therefore need a clinical test to apply this system to patients with hemiplegia in hospital.
ACKNOWLEDGMENT This work was supported by grant No. (R01-2006-00010533-0) from the Basic Research Program of the Korea Science & Engineering Foundation.
REFERENCES
1. Adams RD, Victor M (1989) Principles of Neurology, 4th Ed. McGraw-Hill, New York
2. Garrison SJ, Rolak LA (1993) Rehabilitation of stroke patients. In: DeLisa JA (ed) Rehabilitation Medicine: Principles and Practice, 2nd Ed. JB Lippincott, Philadelphia
3. Asuman D, Guldal FN, Meryem DA, Ayse ZK, Nese O (2004) The Rehabilitation Results of Hemiplegic Patients. Turkish Journal of Medical Sciences 34:385-389
4. Bambi RB (2006) Visual Feedback Manipulation for Hand Rehabilitation in a Robotic Environment. The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
5. Roger K (2005) Fear of Falling Gait. Cognitive and Behavioral Neurology 18:171-172
6. Neil BA, Allon G (2005) Gait disorders: Search for multiple causes. Cleveland Clinic Journal of Medicine 72:588-560
7. Deanna JF (1997) Pathology Forum: Characteristic Gait Patterns in Neuromuscular Pathologies. Journal of Prosthetics & Orthotics 9:163-167
8. Mauritz KH (2002) Gait training in hemiplegia. European Journal of Neurology 9:23-29
9. Carr JH, Shepherd RB (1985) Investigation of a new motor assessment scale for stroke patients. Physical Therapy 65:175-180
10. Zoltán B, Robert P, Arpad I, Rita MK (2006) The influence of walking speed on gait parameters in healthy people and in patients with osteoarthritis. Knee Surgery, Sports Traumatology, Arthroscopy 14:612-622
Author: Jeonghun Ku, Ph.D.
Institute: Department of Biomedical Engineering, Hanyang University
City: Seoul
Country: Korea
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A Clinical Engineering Initiative within the Irish Healthcare System toward a Safer Patient Environment

P.J.C. Pentony1, J. Mahady2 and R. Kinsella2

1 Connolly Hospital, Department of Medical Physics and Clinical Engineering, Dublin, Ireland
2 Adelaide and Meath Hospital Dublin, incorporating the National Children’s Hospital, Department of Medical Physics and Clinical Engineering, Dublin, Ireland
Abstract— Clinical Engineering within Ireland has developed significantly in recent years. This engineering discipline has proven an essential component of the modern healthcare environment. Nationally, Clinical Engineers participate as key members of multidisciplinary teams in every clinical specialty. The Clinical Engineering membership has grown to include participants from healthcare institutions, community medicine, academia and the medical device industry. A national voluntary registration scheme is now established and will further contribute to the professional development of Clinical Engineering. Central to the profession of Clinical Engineering is patient safety, and pivotal to this responsibility are the many elements of effective medical device management. Medical devices are both manufactured and utilized within Ireland under three different European Directives: the Medical Devices Directive 93/42/EEC, Directive 90/385/EEC for active implantable medical devices, and Directive 98/79/EC for in-vitro diagnostic medical devices. These directives are enacted into Irish law. To fully ensure compliance with these directives, a Clinical Engineering Department within an organization or community must be aware of all the medical devices within its remit. Only then can the medical devices be effectively managed, ensuring a medical device is safe for both patient and clinical user and that the treatment being administered or diagnosis being acquired is both safe and accurate. The Adelaide and Meath Hospital, Dublin, incorporating the National Children’s Hospital (AMNCH), instigated a Medical Device Audit as part of a Six Sigma process improvement initiative. This audit was an element of a larger Clinical Engineering process improvement initiative. The results indicate the importance of this initiative in establishing the identity and quantities of medical devices within an organization.
It is only with this essential knowledge that medical devices can be effectively managed and an organization can meet its statutory responsibility. Keywords— Clinical Engineering, Patient Safety.
I. INTRODUCTION

The Adelaide and Meath Hospital, Dublin, incorporating the National Children’s Hospital [1] has embraced a process improvement methodology called Six Sigma (6σ). GE describes Six Sigma as “A vision of quality, which equates with only 3.4 defects per million opportunities for each product or service transaction. Strives for perfection” [2]. AMNCH have engaged Rubicon [4] to foster the Six Sigma methodologies within AMNCH. A critical component of Rubicon’s position within AMNCH was the support of senior hospital management.

Figure 1. DMAIC [3]

DMAIC is a Six Sigma methodology. The acronym stands for:
1. Define
2. Measure
3. Analyze
4. Improve
5. Control
DMAIC is defined as “a structured, disciplined, rigorous approach to process improvement consisting of the five phases mentioned, where each phase is linked logically to the previous phase as well as the next phase”[5]. When analyzing a problem there is a natural instinct to have a “Problem – Solution” thinking process. For every problem that occurs, a solution is immediately offered. Six
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1055–1057, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Sigma challenges this mode of thought and, instead of offering solutions, examines every element and influence within the specific process, seeking a solution based on the DMAIC methodologies. A key element within this process is the establishment of the Process Team. Team members are chosen to reflect the areas associated with the process being examined. A process cannot be fully investigated without the key participants present. Furthermore, the success of this program was inextricably linked to the performance dynamic of the team. Rath and Strong’s Six Sigma TEAM Pocket Guide [6] states: “Having excellent technical skills and the best technical solution is not enough to ensure successful completion of your Six Sigma projects”. They identify the real challenges as “gaining cooperation and support from various stakeholders, getting data from people, getting team members to show up for meetings and maintaining momentum on the team and keeping the team focused”. The Process Team members were from:
• Clinical Engineering
• Medical Physics
• Nursing
• Materials Management
• Internal Audit Office (Observation)
The title and subject of the chosen process for investigation was the Medical Device Sourcing Process. Medical devices are utilized for the treatment or diagnosis of patients. A Clinical Engineer is "a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology" [7]. Medical devices are such healthcare technology, and the management of these medical devices is an important role of Clinical Engineers. Clinical Engineering in AMNCH uses a Medical Device Database, a software-based medical device asset register, as part of the Equipment Management System. This asset register software system is called HECS (Hospital Equipment Control System) and is supplied by ECRI (Emergency Care Research Institute) [8]. Each medical device has a unique identification number. Associated with each medical device number is all the relevant data required for the effective management of that medical device. Information such as original cost, supplier data, service agent, hospital location, all associated maintenance (both preventative and corrective), frequency of calibration, life expectancy and current status are just some of the key data points associated with each device. In the event that a medical device is not registered on HECS, it is not possible for Clinical Engineering to manage this device correctly or ensure the device is maintained.
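The kind of per-device record described above can be pictured as a small data structure. The sketch below is illustrative only: the field names, types and sample values are our own, not the actual HECS schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MedicalDeviceRecord:
    """Illustrative asset-register entry; field names are hypothetical,
    not the actual HECS schema."""
    control_number: str   # unique identifier assigned on registration
    serial_number: str
    model: str
    device_type: str
    date_acquired: str
    original_cost: float
    supplier: str
    service_agent: str
    location: str
    status: str           # e.g. "In Service", "Retired", "Not Found"
    calibration_frequency_months: Optional[int] = None
    maintenance_history: List[str] = field(default_factory=list)

    def log_maintenance(self, entry: str) -> None:
        """Append a preventative or corrective maintenance entry."""
        self.maintenance_history.append(entry)

# A device that is never registered has no such record, so none of
# this management information can exist for it.
pump = MedicalDeviceRecord(
    control_number="CE-04121", serial_number="SN-9981",
    model="PumpModelX", device_type="Infusion pump",
    date_acquired="2004-03-01", original_cost=2500.0,
    supplier="SupplierX", service_agent="AgentY",
    location="Ward 2B", status="In Service",
    calibration_frequency_months=12)
pump.log_maintenance("2006-09-14: preventative maintenance completed")
```

The point of the sketch is simply that the unique control number keys everything else: without it, maintenance history, calibration schedule and status cannot be tracked.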
There are many risks associated with this fact. The primary risk is that a medical device used for the treatment, diagnosis or resuscitation of a patient, unless correctly managed and maintained, will not function correctly, will not provide true data, or will not deliver essential therapeutic treatment when required. Equally, a medical device may function inappropriately, causing harm or even death to the patient or clinical user. In addition, the device history, detailing all relevant maintenance and calibration details, will be incomplete. Medical device records are essential from a number of perspectives. In the event of an adverse event or a serious adverse event [9, 10] relating to a medical device, full documentation regarding the history of the device is sought by medical device manufacturers or their agents, medical device agencies, healthcare agencies and administrators, insurance companies and legal representatives. This includes all aspects of maintenance and both user training and engineering training.

II. METHODOLOGY

Utilizing scoping templates provided by Rubicon, the process team identified three key requirements following their deliberations:
• 100% of ALL medical devices are made available to the MPCE Department prior to any use within AMNCH
• All medical devices are correctly labelled
• All medical devices require a lifetime service history

To assess the level of compliance with these criteria, a hospital-wide audit was completed over a three-day period. The reference material required to assess compliance was generated from a HECS report. Table 1 shows the categorization of the medical devices and the quantities in each category.

Table 1 Medical device categorization from HECS

Medical Device Status            Quantity
In Service                       4870
Out of Service                   90
Replaced / Service Exchange      90
Retired                          331
Not Found                        30
Transferred / In Service         346

In addition to the quantification of the medical devices, HECS also contained a large amount of information specific
to each device. The accuracy of this data also had to be checked during the audit and, equally, similar information had to be recorded for new devices encountered during the audit. The medical device data captured prior to the audit was:
• Control Number (Unique Identifier)
• Serial Number
• Model
• Type
• Date Device Acquired
• Cost
• Status
• Location
• Supplier
Authorization to complete the audit was received, together with a letter of introduction from the Hospital C.E.O. granting access to all clinical areas with full security clearance and key access. Five teams completed the survey. Each team received the complete listing of the medical devices in their area and a detailed method statement on how to complete the audit. Additional medical devices encountered were recorded and later entered in HECS. The time taken to complete this audit was approximately 214 hours.

III. RESULTS

The audit was completed in the allocated time. No logistical difficulties were encountered. Table 2 shows the results formulated after data analysis.

Table 2 Medical device audit results

Medical Device Status              Quantity
Total Allocation                   5216
Total Devices on HECS Recorded     3958
New Medical Devices Recorded       463
Miscellaneous Medical Devices      8
Not Found                          787
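As a quick consistency check on the audit figures in Table 2, the four outcome categories sum exactly to the total allocation, and the share of devices not found follows directly (a sketch using the numbers above):

```python
# Figures taken from Table 2 of the audit
total_allocation = 5216
on_hecs_recorded = 3958
new_devices = 463
miscellaneous = 8
not_found = 787

# The outcome categories account for every allocated device
assert on_hecs_recorded + new_devices + miscellaneous + not_found == total_allocation

not_found_rate = not_found / total_allocation
print(f"Devices not found: {not_found_rate:.1%}")  # roughly 15.1%
```

That roughly one device in seven could not be located on the day is exactly the kind of finding that motivates the further analysis mentioned in the conclusion.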
IV. CONCLUSION

The HECS database had been initiated seven years previously, and this was the first time such an audit had been performed on the system and across the whole hospital, with the exception of Diagnostic Imaging and Pathology. The quantity of new devices was increased by the inclusion of two new categories of medical devices, agreed during the audit process. Further analysis is being performed on the quantity and nature of the medical devices not found. In addition to the valuable information obtained during this audit, there was equally a greater understanding, from an organizational perspective, of the importance of medical device management.
ACKNOWLEDGMENT

The authors wish to acknowledge the support of Rubicon and the management, Process Team members and Medical Physics and Clinical Engineering departmental colleagues of the Adelaide and Meath Hospital Dublin, incorporating the National Children’s Hospital, for their constant support and tireless efforts during this process.
REFERENCES

1. AMNCH at www.amnch.ie
2. http://www.ge.com/en/company/companyinfo/quality/glossary.htm, accessed 4 September 2006
3. www.rubicon.uk.com
4. http://www.dmreview.com/editorial/dmreview/200410/200410_044_2.gif, accessed 23 September 2006
5. Rath and Strong’s Six Sigma Pocket Guide
6. Rath and Strong’s Six Sigma TEAM Pocket Guide
7. American College of Clinical Engineers at www.accenet.org
8. ECRI at www.ecri.org
9. http://www.medicine.manchester.ac.uk/arc/BSRBR/health professionals/whoadverse/, accessed 27 January 2007
10. http://www.who.int/patientsafety/events/05/Reporting_Guidelines.pdf, accessed 27 January 2007

Author: P.J.C. Pentony
Institute: Connolly Hospital Blanchardstown
City: Dublin
Country: Ireland
Email: [email protected]
A Pervasive Computing Approach in Medical Emergency Environments

J. Thierry, C. Hafner and S. Grasser

Carinthian University of Applied Sciences, Department of Medical Information Technology (MedIT), 9020 Klagenfurt, Austria

Abstract— The main priorities of an emergency physician are to save lives and to limit damage to the life and limb of patients as far as possible. From the analysis of the situation at the accident scene follow both the first aid methods and the follow-on medical techniques at the hospital. In order to allow an electronic and hands-free data acquisition process, CANIS – the Carinthian Notarzt (emergency physician) Information System – aims at establishing and optimizing the information stream between the emergency rescue vehicle (ERV) and/or emergency rescue helicopter (ERH) and the receiving hospital, as well as at developing a speech-based electronic Emergency Patient Care Report Form (sEPCRF). To meet these challenging requirements, a pervasive hardware setup must be established.

Keywords— Emergency Systems, Emergency Patient Care Report Form, Hardware Setup, Speech Recognition, Wireless Data Transmission, Mobile and Wearable Devices.
I. INTRODUCTION

The documentation of medical emergency events in Austria – as presumably in most European countries – is largely paper-based, using an Emergency Patient Care Report Form (EPCRF). After the emergency physician has arrived at the accident scene, he/she documents and reports the assessments and emergency treatments performed and, subsequently, sends the EPCRF to the receiving hospital together with the patient, where both are handed over to the emergency department simultaneously. Obviously, immediate electronic transmission of patient data and diagnoses would put the receiving institution in a more favourable position, since admission of the expected patients could be prepared much better in advance. However, in order to introduce an electronic emergency response information system into extreme environments such as an accident scene, considerable analyses of suitable data recording and transmission devices must be accomplished. After all, the major responsibility of emergency physicians is to save the lives of their patients, which must by no means be hampered by the important, but subordinate, task of data entry into a mobile device.
II. MOTIVATION This paper focuses on a pervasive hardware setup to support hands-free data entry for electronic emergency medical response systems with the emergency physician on-site as the primary actor. Therefore, speech recognition as a nonstandard data acquisition technology as well as computer monitor technologies as a feedback alternative will be evaluated. Our research is part of the CANIS project (www.fhkaernten.at/canis), and concentrates on the analysis of the current state of the emergency medical response system in the province of Carinthia (Austria), as well as the development and implementation of an electronic Emergency Patient Care Report Form (eEPCRF), which relies on mobile clients for on-site data acquisition and wireless data transmission to the receiving hospital. The project employs portable clients which consist of rugged Tablet PCs as well as smaller and handier Personal Digital Assistants (PDAs). Additionally, a combination of both in order to optimally support the emergency physician in any potential working scenario is provided. Features like an automatic GPS determination of the accident location, or even the capability of reading out data from the patient’s eCard, represent important privacy issues and concerns, and will therefore be handled with care. III. SYSTEM ARCHITECTURE Interoperability with other predefined components in the healthcare sector is a major requirement for the CANIS application. As Figure 1 illustrates, the components involved in the communication flow within CANIS can be divided into two categories. On the one hand, these components are independent systems, which are integrated in the overall IT management of a hospital (e.g. Hospital Information System (HIS), Medical Information System (MIS)), or medical devices which are operated by the emergency rescue team (e.g. electrocardiogram (ECG), or blood pressure meter). 
On the other hand, the CANIS communication architecture consists of co-dependent as well as independent institutions, like the Emergency Services Call Center (ESCC), or the Social Health Insurance (SHI).
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1058–1061, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Table 1. CANIS voice commands [1]
Fig. 1. CANIS communication flow and the important components of a medical emergency event
Additionally, Austria’s eCard is an optional component for the future, when eCard holders will be able to store their individual medical information on the card itself. Because of the sensitive nature of patient data and the legal restrictions with regard to data security and protection, this concept is not currently realistic. The standardized open interface Health Level 7 (HL7) forms the basis for communication between the existing clinical systems.

IV. SPEECH RECOGNITION

The Tablet PC as well as the PDA can be equipped with speech recognition, enabling the emergency physician to perform emergency-related tasks and data acquisition simultaneously [1]. The standard data entry scenario involves the user holding the input device and filling in data manually while keeping constant eye contact with the device’s display. When performing speech data entry, the major advantages are the ability to fill in data without holding the device and without constant eye contact with the screen, as spoken input is mapped to the correct data field automatically. The speech-based application of CANIS is being developed in order to ease and speed up data acquisition in emergency events. Therefore, the user should not be limited to a certain command or instruction when handling the application [1]. The user should not have to adapt to fit the application; rather, the application should be adapted to fit the user and his or her tasks. This
is an important rule of usability [4]. The voice commands being integrated into the CANIS speech-based application are a composition of the preferred voice commands gained from various user tests and the commands recommended by ETSI [5]. See Table 1 for a more detailed overview. However, one main drawback when entering data via voice is the lack of feedback, as the user does not have eye contact with the screen. Did the speech recognizer understand me correctly? Has a word been recognized at all? Is the speech recognizer even “listening”? Which data field has been selected and filled in? These questions may arise when direct visual feedback is absent, as the user does not receive information about the application’s progress. Following Shneiderman [2], who describes the importance of the continuous representation of objects of interest and the immediate visibility of operations that affect these objects, and Tanimoto [3], who introduced the term “liveness” to describe the immediacy of semantic feedback that should be automatically provided, we address this problem by introducing VGA computer monitor technology in the setting of speech-based data acquisition. The monitor on top of the left eyeglass in Figure 2 can be used for left- or right-eye viewing and displays the screen of the attached device. The SV-6 PC Viewer shows a high-resolution input on its 640x480 display and additionally accepts an 800x600 input as well. No scroll bars are added on the viewer’s display, as the input information is reformatted to fit the SV-6 screen [6]. By introducing this device into the environment of medical emergency events, the emergency physician is able to input data via voice without forgoing direct visual feedback.
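The automatic mapping of spoken input to the correct data field can be pictured as a small keyword-driven dispatcher that also returns an explicit confirmation, in the spirit of the "liveness" feedback discussed above. The commands and field names below are invented placeholders, not the actual CANIS/ETSI vocabulary.

```python
# Minimal sketch of mapping recognized speech to report-form fields.
# Keywords and field names are hypothetical, not the CANIS vocabulary.
FIELD_KEYWORDS = {
    "blood pressure": "blood_pressure",
    "pulse": "pulse_rate",
    "diagnosis": "diagnosis",
}

def handle_utterance(utterance: str, form: dict) -> str:
    """Map '<field keyword> <value>' utterances to form fields and
    return a short confirmation usable as audio or visual feedback."""
    text = utterance.lower().strip()
    for keyword, field_name in FIELD_KEYWORDS.items():
        if text.startswith(keyword):
            value = text[len(keyword):].strip()
            form[field_name] = value
            return f"{keyword} set to {value}"   # explicit 'liveness' feedback
    return "command not recognized"              # no field keyword matched

form = {}
print(handle_utterance("blood pressure 120 over 80", form))
```

The returned confirmation string is the crucial part: it is what the head-mounted viewer (or a synthesized voice) would show the physician, answering the "did it understand me?" questions raised above.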
Fig. 2. MicroOptical’s SV-6 PC Viewer

V. HARDWARE REQUIREMENTS

Tablet PCs and PDAs have to meet certain criteria when introduced to the field of medical emergency events. Because of the rough working environment, the devices have to pass standards such as the Ingress Protection (IP) Rating, the Military Standard (MIL-STD), the Military Specification (MIL-SPEC), or the National Electrical Manufacturers Association (NEMA) standards. An IP rating of IP54 represents the minimum requirement for medical devices (resistance against water, dust, shock and vibration). In wearable or ubiquitous computing, however, these military standards cannot always be met: since “miniaturized” devices are limited in size and/or weight, they sometimes have to give up their rugged features. In the environment of medical emergency events we put our main focus on the weight factor. Every part of the equipment has to be as light as possible, which sometimes means forgoing ruggedness. Most parts of the equipment (e.g. PDA [7], card reader [8]) will be sewn inside the emergency physician’s suit, in order not to hamper the user while performing first aid measures. Therefore, the overall weight of the entire electronic equipment is an important factor. A detailed overview of possible and, above all, available devices can be seen in Table 2. Using the equipment illustrated in Table 2, a fully equipped emergency physician would have to carry no more than an additional 0.8 kg.

Table 2. Hardware requirements for CANIS

Device: HTC P3300
  Dimension: 108 x 58 x 16.8 mm
  Display: 2.8” TFT Touch Screen (240 x 320)
  Weight: 130 g incl. batteries
  Connection: USB 2.0, SDIO/MMC
  Wireless: GSM/GPRS, EDGE, Bluetooth 2.0, WLAN
  Camera: 2.0 Megapixel
  OS: Windows Mobile 5

Device: Bluetooth Headset
  Dimension: info not available
  Weight: 14 g
  Connection: Bluetooth

Device: RFID Card Reader
  Dimension: 70 x 15 x 98 mm
  Weight: approx. 70 g
  Connection: USB

Device: Wireless CMOS Camera
  Dimension: 24 x 25 x 24 mm
  Weight: approx. 20 g
  Sensor: 628 x 582 pixel CMOS
  Display: no

Device: MicroOptical SV-6
  Dimension: standard eyeglass dimensions
  Display: VGA, 640 x 480, 60 Hz
  Weight: < 230 g (headset)
  Connection: DB-15 VGA connector
  Wireless: no; Camera: no; OS: no

Device: Smart Card Reader
  Dimension: 60 x 22 x 10 mm
  Weight: approx. 30 g
  Connection: SD Card slot
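Summing the device weights from Table 2 (with the SV-6 counted at its stated 230 g upper bound) supports the claim that the full kit adds less than 0.8 kg:

```python
# Device weights in grams, taken from Table 2; the SV-6 headset is
# counted at its stated upper bound of 230 g.
weights_g = {
    "HTC P3300 PDA": 130,
    "Bluetooth headset": 14,
    "RFID card reader": 70,
    "wireless CMOS camera": 20,
    "MicroOptical SV-6": 230,
    "smart card reader": 30,
}

total_g = sum(weights_g.values())
print(f"total additional weight: {total_g} g")  # 494 g
assert total_g / 1000 < 0.8  # well under the stated 0.8 kg bound
```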
VI. PERVASIVE HARDWARE SETUP

As stated before, the equipment needs to be attached to the emergency physician carefully so as not to hamper his/her work. Figure 3 illustrates the setup that has been defined. Wireless Camera: a tiny camera mounted on the user’s helmet enables the emergency physician to take photos of the accident scene as well as of the patient, which increases the value and quality of documentation.
PC Viewer: a device attached to a pair of protective glasses that provides the emergency physician with an image of the current PDA display. When speech recognition is used, the viewer provides significant visual feedback. Bluetooth Headset: the wireless headset supports the user while entering data into the sEPCRF via voice. Smart Card/RFID Card Reader: as both the emergency physician and the patient carry identification cards (Professional Health Card – PHC; insurance card – eCard), they can use these cards to identify themselves to the application. However, the use of the eCard for the purpose of identification is not yet allowed (see Section III). Since identification of the patient is essential, an RFID-tagged wrist band is placed on the patient and linked with his/her protocol. PDA: as the PDA is small and handy, it fits easily into a pocket of the emergency physician’s suit or is mounted on the suit’s sleeve. As the application is voice-featured, there is no need to operate the PDA manually. The following process steps of an emergency event can be improved by a pervasive hardware setup consisting of the components presented above:
• Identification: smart card reader (emergency physician); RFID reader/writer and wrist band including RFID tag (patient)
• Data acquisition: simultaneous with the performance of medical treatments, via voice and wireless microphone; feedback via monitor; snapshots via camera
• Information delivery to the hospital: wireless, as soon as all important data have been acquired
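The three process steps above can be sketched as a minimal event record that binds the physician's card ID and the patient's wrist-band tag to the protocol before wireless delivery. All identifiers and field names here are invented for illustration; the real CANIS data model is not described in this paper.

```python
import json

# Hypothetical sketch of the three-step emergency-event workflow.
def build_protocol(physician_card_id: str, patient_tag_id: str) -> dict:
    """Step 1 (identification): bind both actors to the protocol."""
    return {"physician": physician_card_id,
            "patient_tag": patient_tag_id,
            "findings": [],
            "photos": []}

def record_finding(protocol: dict, spoken_entry: str) -> None:
    """Step 2 (data acquisition): entries arrive via voice while
    treatment continues."""
    protocol["findings"].append(spoken_entry)

def deliver_to_hospital(protocol: dict) -> str:
    """Step 3 (information delivery): serialize for wireless transfer
    (the actual transport is out of scope for this sketch)."""
    return json.dumps(protocol)

p = build_protocol("PHC-001", "RFID-7F3A")
record_finding(p, "suspected fracture, left femur")
message = deliver_to_hospital(p)
```

The design point is the ordering: the patient's RFID tag is written into the record at identification time, so every later finding and photo is unambiguously linked to that patient before anything is transmitted.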
It should be noted that wearing a helmet is not mandatory in Austria, but doing so would increase the safety of the medical emergency team substantially.

VII. CONCLUSIONS

The environment of medical emergency events is affected by vitally important and time-critical processes that are in need of supportive methods for necessary but subordinate tasks. The emergency physician, as the primary actor, has to focus on patient-related measurements as well as on their seamless documentation. Speech recognition enhances the user’s mobility. This is accomplished in combination with visual feedback which is received not from the input device but from monitor-equipped protective glasses. This presents an all-around pervasive setup that eases data acquisition to a very high extent. Furthermore, improvements in the value and quality of the documentation can be achieved by introducing small electronic devices like a card reader and/or a wireless camera.
REFERENCES

1. Hafner, C. (2007) Evaluation of Mobile Data Acquisition Modalities in the Context of Medical Emergency Events. Diploma Thesis, Klagenfurt University, Klagenfurt, Austria
2. Shneiderman, B. (1992) Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Reading, MA
3. Tanimoto, S. (1990) VIVA: A Visual Language for Image Processing. Journal of Visual Languages and Computing 2(2):127–139
4. Cooper, A. (1999) The Inmates are Running the Asylum. SAMS, A Division of Macmillan Computer Publishing
5. ETSI (2002) ES 202 076 v1.1.2, Human Factors (HF); User Interfaces; Generic spoken command vocabulary for ICT devices and services. ETSI Technical Committee Human Factors (HF), France
6. The MicroOptical Corporation, VGA Monitor Applications, http://www.microoptical.net/Products/vga.html#SV6
7. High Tech Computer Corp. (HTC), http://www.htc.com; Tripod Data Systems, http://www.tdsway.com
8. Gemplus smart card solutions, http://www.gemplus.com

Fig. 3. Pervasive emergency physician setup

Author: Jürgen Thierry, Carmen Hafner, Simon Grasser
Institute: Carinthian University of Applied Sciences
Street: Primoschgasse 10
City: 9020 Klagenfurt
Country: Austria
Email: [email protected]
A preliminary setup model and protocol for checking electromagnetic interference between pacemakers and RFID (Radio Frequency IDentification)

R. Tranfaglia1, M. Bracale1, A. Pone1, L. Argenziano2, L. Pecchia1

1 University Federico II, Department of Electronic Engineering and Telecommunication, Biomedical Engineering Unit, Naples, Italy
2 Clinica Villalba, Napoli, Italy
Abstract— In the last few years, the enormous growth in electronic equipment communicating via radiofrequency has only emphasized the problem of so-called electromagnetic smog. At the same time, continuous attempts to improve patients’ quality of life, driven by research into new technology, have led to devices that monitor and treat health conditions, especially the electrical problems of the heart: pacemakers. The main conflict between these two evolutionary processes is the interference of such equipment with the pacemaker. Pacemakers are composed of electronic circuits and can sense and process the electrical signals of the heart, as well as conduct electricity into it. Therefore, they are very sensitive to electromagnetic interaction. RFID (Radio Frequency Identification) is the most important example of using radiofrequency signals to exchange different types of data at a distance. The aim of this paper is to create a setup and protocol for measuring the interactions between pacemakers and RFID technology. An electrical model has been realized to simulate the way in which the pacemaker regulates cardiac activity. In addition, we created a test protocol to evaluate the interaction between them.

Keywords— Pacemaker, RFID, electromagnetic interferences, Safety.
I. INTRODUCTION

There are many phenomena in the external environment that can cause problems for the correct operation of the pacemaker and its components. These influences are classified based on the action that they have on the pacemaker. One must consider the mechanical actions, that is, the forces to which the pacemaker and electrodes are subjected. There is also a chemical action due to the perfusion of organic liquid into the pacemaker. Above all, the action caused by electromagnetic fields is the principal effect that the model and protocol created here are intended to investigate [2]. Electromagnetic interference may change the operation of the implanted device [3]. The main pathways of EM interference are the pacemaker’s catheter, the antennae for telemetry, the magnetic switch and the other sensors. All of these internal components may be coupled to the external environment.

The internal electronic circuits are in principle protected from interference because they are located in a metallic case. The introduction of the sensing function of the pacemaker increases the problems. The input circuits of the sensor are protected by hardware and software that ensure immunity to external noise. An example of this protection is the ceramic EMI filter that blocks undesirable signals to the pacemaker. Even so, it is impossible to have complete immunity from interference. The main effect of electromagnetic interference is the induction of electromotive forces that create spurious signals, which may be recognized by the logic of the pacemaker as physiological signals [3, 4]. This means that the pacemaker may be forced into a functional state which is not actually required by the pathology. For example, in a VVI pacemaker a spurious signal may inhibit stimulation, with serious risk to the patient. In a more complex pacemaker such as a DDD, the functionality of the implanted device may be completely stopped [5, 6]. Recently, RFID technology has been used in many applications, often without a warning for patients with pacemakers. This means that a patient may unknowingly enter an unpredictable environment and conditions, due to the electromagnetic interference created by RFID technology [12]. In the present paper we have tried to simulate this critical situation in order to verify the correct functionality of a pacemaker subjected to electromagnetic interference. A setup model and a measurement protocol have been created to evaluate the functionality of the pacemaker in the presence of the RFID device.
AND METHODS
A. RFID SYSTEM RFID technology allows one to identify objects and persons using radiofrequency signals. A RFID system is composed of two main elements: tags and writer/reader. Tag is a microchip that stores the data in its internal memory. They can be active or passive.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1066–1069, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Passive RFID tags have no internal power supply. The minute electrical current induced in the antenna by the incoming radiofrequency signal provides just enough power for the CMOS integrated circuit in the tag to power up and transmit a response. Unlike passive RFID tags, active RFID tags have their own internal power source, which is used to power the integrated circuits that generate the outgoing signal. Active tags are typically much more reliable (e.g. fewer errors) than passive tags, due to the ability of active tags to conduct a “session” with a reader. Active tags, thanks to their onboard power supply, also transmit at higher power levels than passive tags, allowing them to be more effective in “RF-challenged” environments like water (including humans/cattle, which are mostly water) or metal (shipping containers, vehicles), or at longer distances. There is no global public body that governs the frequencies used for RFID; in principle, every country can set its own rules. Low-frequency (LF: 125–134.2 kHz and 140–148.5 kHz) and high-frequency (HF: 13.56 MHz) RFID tags can be used globally without a license. Ultra-high-frequency (UHF: 868–928 MHz) tags cannot be used globally, as there is no single global standard. In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power. In Europe, RFID and other low-power radio applications are regulated by ETSI recommendations EN 300 220 and EN 302 208, and ERO recommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz [13].

B. The analogical physical model

Pacemakers have to meet certain electromagnetic compatibility requirements. The European standard CEI EN 45502-2 regulates the characteristics of pacemakers and describes all the tests that they have to pass regarding electromagnetic interference [7, 8]. The frequency range considered in this standard is from 16.6 Hz to 3 GHz.
Following the guidelines of this standard, an electrical model was created to observe the interaction between pacemaker and RFID. The analog physical model is composed of a bicameral pacemaker (GUIDANT, INSIGNA I AVT), two connected catheters, and two 470 Ω resistors connected to the electrodes in order to simulate the equivalent electrical load of the atrium and the ventricle of the heart. As shown in Fig. 1, the model respects the normal distances of a pacemaker system. All the components have been fixed on a Plexiglas plane. The electrodes are connected to a digital oscilloscope in order to analyze the stimulation signal of the pacemaker.
Fig. 1 Analog physical model composed of the pacemaker, the electrodes and the equivalent heart load.
The RFID reader is positioned on another Plexiglas plane and is connected to a personal computer that controls it. The two Plexiglas planes are inserted in a system of slots that permits changing the distance between them. A temporary external pacemaker (VITATRON MEP 3000), connected to the electrodes, is used to simulate the natural activity of the heart. CEI EN 45502-2 prescribes the use of a triangular signal, with a duration of 15 ms and an amplitude depending on the configuration of the pacemaker, to simulate the natural electrical cardiac activity. To approximate this signal, we decided to use a temporary pacemaker that produces a rectangular impulsive signal; we verified that this signal is able to inhibit the pacemaker, so it can be taken as a simulation of the natural electrical activity of the heart. A pacemaker programmer (GUIDANT, Zoom Latitude Programming System 31/20) was used to set up the pacemaker configuration and stimulation values. Moreover, the programmer can read the episode diary in the internal memory of the pacemaker. The pacemaker records the intracardiac ElectroGraM (EGM) when particular episodes of arrhythmia occur; these episodes can be caused by a pathology of the patient or by a noise signal at the input stage of the sensing circuits. The pacemaker changes its stimulation modality in accordance with its interpretation of these data.

C. Measurement Protocol

The measurement protocol has two phases. The first tests the model and the correct simulation of the pacemaker system. The second tests the system in the presence of RFID interference.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
R. Tranfaglia, M. Bracale, A. Pone, L. Argenziano, L. Pecchia
Fig. 2 Temporary external pacemaker VITATRON MEP 3000 (on the left). Pacemaker programmer GUIDANT, Zoom Latitude Programming System 31/20 (on the right)
Each phase is divided into four steps according to the configuration of the pacemaker stimulation. A bicameral pacemaker must be tested in at least two typical configurations, DDD and VVI. Moreover, the steps must be repeated for at least two different stimulation rates, 35 bpm and 70 bpm. These values were selected as two practically relevant physiopathological rates: a typical normal rate (70 bpm) and that of a bradycardic patient with a sinus heart rhythm (35 bpm). The sensitivity of the pacemaker must be set to its minimum value for both atrium and ventricle, in order to operate in the worst case of possible interference on the sensing circuits; our settings were 0.15 mV for the atrium and 0.25 mV for the ventricle. The steps of the protocol are:
1) absence of a simulated cardiac signal;
2) simulation of a cardiac signal on the ventricular channel that does not inhibit pacemaker stimulation;
3) simulation of a cardiac signal on the ventricular channel that inhibits pacemaker stimulation;
4) simulation of a cardiac signal on the atrial channel in order to synchronize the artificial stimulation in the ventricular cavity.
Of course, the last step is not possible for a VVI or other monocameral configuration. All the steps of the protocol were monitored with an oscilloscope and with the pacemaker programmer. The figures below show the measurement system and results, in particular a simulation of A-V synchronization (fourth step of the protocol): the temporary pacemaker simulates natural atrial activity at 70 bpm, while the implanted pacemaker is programmed for stimulation at 40 bpm, so a synchronization between atrial and ventricular stimulation is simulated.

Fig. 3 Equivalent physical model

Fig. 4 A-V synchronization at 70 bpm (Channel B = Atrium, Channel A = Ventricle)

The second phase of the protocol was carried out in the presence of the RFID system. All the steps were repeated at different distances between the pacemaker and the RFID reader. The minimum distance was 0.5 cm, with steps of 1 cm up to 3.5 cm. The maximum distance was decided according to the operating range of the RFID reader at the frequency of 125 kHz. For example, an RFID device at 13.56 MHz should be tested up to a distance of about 1 m; if the RFID frequency changes, the test distances must likewise be adapted to the corresponding operating range. We recorded every phase of the test with the oscilloscope and analyzed the internal memory of the pacemaker with the programmer in order to find abnormal events in the EGM signal.

III. RESULTS

The result of the present work is the realization and validation of an equivalent physical model that simulates the heart-pacemaker-leads system. The measurement protocol, developed considering the main operational situations of a pacemaker, was very useful to test the validity of the model. Preliminary tests were then performed on the interaction between pacemaker and RFID, at an operating frequency of 125 kHz.
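The two-phase measurement protocol described above can be summarized as a test matrix. The following Python sketch is our own illustration of that matrix (the names and structure are assumptions; the original tests were performed manually with the oscilloscope and programmer):

```python
# Sketch of the measurement protocol as a test matrix.
# All identifiers are illustrative, not part of the original test setup.

MODES = ["DDD", "VVI"]          # the two typical bicameral configurations tested
RATES_BPM = [35, 70]            # bradycardic and normal simulated heart rates
STEPS = {
    "DDD": [1, 2, 3, 4],        # step 4 = atrial-channel A-V synchronization
    "VVI": [1, 2, 3],           # step 4 is not applicable to monocameral modes
}

def rfid_distances_cm(start=0.5, step=1.0, stop=3.5):
    """Reader-pacemaker distances: 0.5 cm, then 1 cm steps up to 3.5 cm."""
    d, out = start, []
    while d <= stop + 1e-9:
        out.append(round(d, 1))
        d += step
    return out

def protocol(with_rfid):
    """Enumerate (mode, rate, step, distance) test points for one phase."""
    distances = rfid_distances_cm() if with_rfid else [None]
    return [(m, r, s, d)
            for m in MODES
            for r in RATES_BPM
            for s in STEPS[m]
            for d in distances]

phase1 = protocol(with_rfid=False)   # model validation, no RFID
phase2 = protocol(with_rfid=True)    # same steps, repeated at each distance
```

With these assumptions, phase 1 contains 14 test points (7 mode/step combinations at 2 rates) and phase 2 contains 56 (the same 14 at each of 4 distances).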
A preliminary setup model and protocol for checking electromagnetic interference between pacemakers and RFID
Fig. 5 Tests in the presence of RFID interference at the minimum distance of 0.5 cm. The pacemaker stimulates atrium and ventricle at 40 bpm (on the left). The temporary pacemaker inhibits the implanted pacemaker with a simulated heart rate of 70 bpm; the pacemaker is programmed for stimulation at 40 bpm (on the right).
The measurements did not reveal any interaction between the two devices. Figure 5 shows some interference measurement results as oscilloscope screenshots. The regular stimulation of atrium and ventricle is easy to recognize; noise caused by the radio-frequency waves is visible on the two oscilloscope channels, but the stimulation is regular in all conditions. Moreover, the intracardiac EGM recorded in the internal memory of the pacemaker showed no arrhythmia episodes during any of the protocol tests.

IV. CONCLUSIONS

We can conclude that the equivalent physical model and the measurement protocol are able to evaluate the interaction and the functionality of a pacemaker in the presence of electromagnetic interference. The preliminary test with an RFID system operating at 125 kHz is a first step in studying the interactions between pacemakers and radio-frequency devices. Future work will evaluate the interactions between pacemakers and RFID systems operating at higher frequencies.
ACKNOWLEDGMENT

The authors thank the administration of the Villalba Clinic of Naples for its support and hospitality during the practical activities for this work.
REFERENCES

1. Decreto Legislativo 14 dicembre 1992, n. 507: "Attuazione della direttiva 90/385/CEE concernente il ravvicinamento delle legislazioni degli Stati membri relative ai dispositivi medici impiantabili attivi".
2. Bracale M. "Il controllo degli stimolatori cardiaci impiantati. Risultati clinici e misure di laboratorio". Università degli Studi di Napoli Federico II, Dipartimento di Ingegneria Elettronica e delle Telecomunicazioni.
3. Corbucci G, Riva U, Sciotto F, Venturini D. "Gli stimolatori cardiaci impiantabili e le interferenze elettromagnetiche". Giornale Italiano di Aritmologia e Cardiostimolazione 2001; 1: 180-189.
4. Barbaro V, Bartolini P, Cappellini A. "Immunità elettromagnetica dei pacemaker alle stazioni radio-base per telefonia GSM: distanze di sicurezza sulla base di normative attuali". Rapporti ISTISAN 01/21.
5. Pinski S, Trohman R. "Interference in implanted cardiac devices". Pacing and Clinical Electrophysiology 2002; 25: 1367-1381.
6. Erdogan O. "Electromagnetic interference on pacemakers". Indian Pacing and Electrophysiology Journal 2002; 2: 74-82.
7. CEI EN 45502-1. "Dispositivi medici impiantabili attivi. Parte 1: Requisiti generali per la sicurezza, la marcatura e le informazioni fornite dal fabbricante". Comitato Elettrotecnico Italiano; 2000.
8. CEI EN 45502-2. "Dispositivi medici impiantabili attivi. Parte 2: Prescrizioni particolari per i dispositivi medici impiantabili attivi destinati a trattare la bradi-aritmia (pacemaker cardiaci)". Comitato Elettrotecnico Italiano; 2005.
9. Hekmat K, Salemink B, Lauterbach G, et al. "Interference by cellular phones with permanent implantable pacemakers: an update". Europace 2004; 6: 363-369.
10. Altamura G, Toscano S, Gentilucci G, et al. "Influence of digital and analogue cellular telephones on implanted pacemakers". European Heart Journal 1997; 18: 1632-1641.
11. Mc Ivor M, Reddinger J, Floden E, et al. "Study of pacemaker and implantable cardioverter defibrillator triggering by electronic article surveillance devices". Pacing and Clinical Electrophysiology 1998; 21: 1847-1861.
12. Mugica J, Henry L, Podeur H. "Study of interactions between permanent pacemakers and electronic antitheft surveillance systems". Pacing and Clinical Electrophysiology 2000; 23: 333-337.
13. Firenze R. "Studio dei sistemi RFID e loro caratterizzazione sperimentale". Tesi di Laurea in Ingegneria Elettronica, Università degli Studi di Genova.
14. Angeloni A, Barbaro V, Bartolini P, et al. "Simulatore di attività cardiaca per lo studio dell'interferenza tra sistemi radiomobili e dispositivi cardiaci impiantabili attivi". Rapporti ISTISAN 02/33.

Author: Riccardo Tranfaglia
Institute: University Federico II, Department of Electronic Engineering and Telecommunication, Biomedical Engineering Unit
Street: Via Claudio 21
City: Naples
Country: Italy
Email: [email protected]
A prototype device for thermo-hygrometric assessment of neonatal incubators P. Bifulco1, M. Romano1, A. Fratini1, G. Pasquariello1, and M. Cesarelli1 1
Biomedical Engineering Unit – Dept. of Electronic and Telecommunication Engineering, University "Federico II", Naples, Italy
Abstract— Among Clinical Engineering activities to reduce risks in hospitals, functional assessment of medical devices is becoming more and more widespread. Once common risks, such as electrical and mechanical ones, have been evaluated, specific risks connected to device performance remain: functional assessment aims to verify the specific proper operation of a medical apparatus. In particular, neonatal incubators have the primary aim of maintaining an adequate microclimate for the newborn, especially premature and smaller babies, who may not be able to regulate their body temperature. To prevent newborn heat loss, the incubator mainly stabilizes the temperature of the baby compartment at a selected value, reducing the risk of hypothermia. As prescribed by the Particular Standard for the safety of baby incubators, temperature has to be measured at 5 specific points inside the infant chamber: the difference between the measured temperatures and the value selected on the incubator must be limited to fractions of a °C. The time needed to reach a given temperature must also be limited. Other tests should be performed to verify humidity, air velocity, noise level, CO2 concentration, etc. A prototype device was developed to perform functional assessment of neonatal incubators automatically. A microcontroller continuously performs a 12-bit analog-to-digital conversion of the voltage signals generated by 5 thermal probes and a humidity sensor and sends the data to a PC, where software processes the received signals and performs the tests. The hardware device is small, battery powered and capable of continuously transmitting data wirelessly; the software allows real-time signal display, average values, optic and acoustic alarms when limits are exceeded, data storage, etc. The device can easily be used to perform periodic functional tests on neonatal incubators. Further analysis of the recorded data can be used to evaluate more specific features, such as the modality of temperature control.
Keywords— Clinical Engineering, neonatal incubator, maintenance, functional assessment.
I. INTRODUCTION

The newborn infant regulates his body temperature, which has to be in the range 36.5-37.5 °C, much less efficiently than an adult and loses heat more easily [1]. The smaller and more premature the baby, the greater the risk. Just after birth, the newborn starts losing heat and, unless heat loss is prevented, hypothermia will develop. Hypothermia of the newborn occurs throughout the world and in all climates, and is more common than believed. This condition is harmful to newborn babies, increasing the risk of
illness and death. The temperature of the environment during delivery and the postnatal period has a significant effect on the newborn's risk of developing hypothermia. In general, newborns need a much warmer environment than adults; the smaller the newborn, the higher the temperature needs to be. Obviously, hyperthermia can also be harmful. Prolonged hypothermia is linked to impaired growth and may make the newborn more vulnerable to infections. Moreover, hypothermia, even if moderate, is associated with an increased risk of death in low birth-weight newborns. Sick or low birth-weight babies admitted to neonatal units with hypothermia are more likely to die than those admitted with normal temperatures, and preterm newborn babies are less likely to die if cared for in warm environments. Neonatal incubators are medical devices [2] that provide a controlled environment with appropriate temperature and humidity in a confined space (usually a transparent cabinet), to allow the survival of ill or premature newborns until they reach acceptable maturity parameters. Indeed, a warm environment minimizes the energy expended for metabolic heat production to attain thermal homeostasis. Usually, incubators are convectively heated with a thermoregulatory control system: heat is provided by forced circulation of air warmed by an electrical heater. The output of the heater is controlled manually by the operator or indirectly by thermostatic servocontrol to maintain either the abdominal skin temperature or the incubator air temperature at a constant level. Most incubators have a passive system for providing supplemental humidification: ambient humidity is increased by water evaporation [3]. There are also transportable baby incubators, equipped with an autonomous power supply and/or adapted for ambulances, to allow newborn transportation. Neonatal incubators, as medical electrical equipment, should comply with the general requirements for safety of IEC 60601-1 [4].
In particular, periodic assessment of electrical and mechanical safety is prescribed. In addition, they should also comply with the Particular Standard IEC 60601-2-19, Particular requirements for the safety of baby incubators [5], and, if applicable, IEC 60601-2-20, Particular requirements for the safety of transport incubators [6]. A prototype device for thermo-hygrometric assessment of neonatal incubators is described here; the device was designed to automatically perform the functional assessments prescribed by the Particular Standards and, more generally, to evaluate incubator performance.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1096–1099, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The device consists of a hardware part, including the sensors, that is battery powered, small, and capable of continuously transmitting data wirelessly to a receiving station (e.g. a PC or PDA), where software (C++) provides, in real time, display of the signal time course, average values, optic and acoustic alarms when limits are exceeded, and data processing and storage.

II. MATERIALS AND METHODS

Particular standards state specific requirements and assessment tests in order to support the safe (minimizing patient's and operator's risks) and effective operation of the device. Since incubators control the environment of the baby primarily by heating the air within the baby compartment, the main functional tests concern air temperature measurements. Temperatures have to be measured at five specific points inside the baby compartment: at the center and at the centers of the 4 main quadrants, in a plane parallel to and 10 cm above the mattress surface (see Fig. 1: the points are labeled A, B, C, D, E). For example, during steady temperature conditions the incubator temperature shall not differ from the average incubator temperature by more than 0.5 °C (at 32-36 °C, over at least 1 h); with the incubator working as an air-controlled incubator and the control temperature set at any temperature within its range, the average temperature at each of the above-mentioned points shall not differ from the average incubator temperature by more than 0.8 °C in normal use, and in any position of the tilted mattress it shall not differ by more than 1.0 °C. There are similar requirements for an incubator working in baby-controlled mode (skin temperature sensor). The warm-up time of the equipment also has to be measured. Besides temperature, further tests concerning humidity, air velocity (which has to be less than 0.35 m/s), sound level (which shall not exceed an A-weighted level of 60 dB), and the concentration of carbon dioxide (CO2) and possibly oxygen (O2) are also prescribed.
Therefore, in general, the design of a functional test device must include different specific sensors able to achieve the prescribed measurements, appropriate data display and storage, threshold limit verification, etc.
Fig. 1 Placement of the 5 temperature sensors with respect to the infant mattress, in accordance with the EN 60601-2-19 Particular Standard
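The numerical limits quoted above translate directly into simple pass/fail checks on the five measured temperatures. The sketch below is our own illustration (function names are assumptions, and real conformity testing requires the full measurement conditions given in the standard, such as steady state and averaging times):

```python
# Illustrative pass/fail checks for the air-temperature limits quoted in
# the text (0.8 degC in normal use, 1.0 degC with the mattress tilted).
# This is a sketch only; it does not replace the standard's test procedure.

def average_incubator_temperature(temps_c):
    """Average of the five point temperatures A..E (proxy used in this sketch)."""
    return sum(temps_c) / len(temps_c)

def check_point_deviation(temps_c, limit_c):
    """Each point must stay within limit_c of the average incubator temperature."""
    avg = average_incubator_temperature(temps_c)
    return all(abs(t - avg) <= limit_c for t in temps_c)

def assess(temps_c, tilted=False):
    # 0.8 degC in normal use, 1.0 degC with the mattress tilted
    return check_point_deviation(temps_c, 1.0 if tilted else 0.8)

# Example: a homogeneous compartment passes, a cold corner fails.
ok = assess([34.0, 34.2, 33.9, 34.1, 34.0])
bad = assess([34.0, 34.2, 33.9, 34.1, 32.5])
```

A test device only needs such comparisons (plus averaging over the prescribed observation time) to raise the opto-acoustic alarms described later.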
For temperature measurements, integrated semiconductor sensors were preferred over other kinds of sensors (thermocouples, thermistors, wire and thin-film thermoresistive elements, etc.), mainly for their intrinsic linearity, accuracy, simplicity of use and low current absorption. Silicon temperature sensors are based on the dependence of the forward-biased PN-junction current on temperature: a differential design provides an output voltage linearly dependent upon temperature. We tested the LM35 and AD22100 components. The LM35 is directly calibrated in °Celsius (centigrade); sensitivity: +10.0 mV/°C; 0.5 °C accuracy guaranteeable; -55 to +150 °C range; voltage supply 4-30 V; less than 60 µA current drain; low self-heating, 0.08 °C in still air; non-linearity: ±0.25 °C. For relative humidity measurements, both capacitive and resistive sensors were tested. Finally, a capacitive polymer relative humidity sensor (MK33 by IST) was utilized (dimensions: 3.81 by 10.8 by 0.4 mm; humidity operating range: 0-100% RH; operating temperature range: -40/+190 °C; capacity: 300 pF ±40 pF (at 30% RH); sensitivity: 0.45 pF/%RH (20-95% RH); loss factor: <=0.01 (at 90% RH); linearity: ±2.0% RH (20-95% RH); hysteresis: <2.0% RH; recovery time: <10 s). A conditioning circuit, operating at about 5 kHz, was designed to provide a linear voltage output: it mainly consists of an oscillator, whose frequency depends linearly on the sensor capacitance, followed by a linear frequency-to-voltage converter. The sensor output voltage signals are multiplexed (currently, up to eight channels), amplified and then sampled at 100 Hz using a 12-bit analog-to-digital converter. The microcontroller also provides data stuffing and serial transmission (currently set to 38400 bps), which feeds a commercial Bluetooth transmitter module. The wireless transmission can easily be received within a 10-meter range by any computer equipped with a Bluetooth transceiver: an inexpensive USB Bluetooth adaptor was used.
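Given the LM35's 10 mV/°C sensitivity and the MK33 figures quoted above (300 pF at 30% RH, 0.45 pF/%RH), the raw-to-physical-unit conversions are short calculations. The sketch below is illustrative only: the ADC reference voltage and front-end gain are our assumptions, since the paper does not state them.

```python
# Illustrative conversion of raw readings to physical units.
# V_REF and GAIN are assumptions for the example only; the actual
# front-end of the prototype is not specified in the paper.

V_REF = 2.5          # ADC reference voltage [V] (assumed)
GAIN = 1.0           # front-end amplifier gain (assumed)
ADC_BITS = 12
LM35_MV_PER_C = 10.0

def adc_to_celsius(code):
    volts = code * V_REF / (2 ** ADC_BITS)   # ADC code -> volts
    sensor_mv = volts * 1000.0 / GAIN        # undo amplification
    return sensor_mv / LM35_MV_PER_C         # LM35: 10 mV per degC

def capacitance_to_rh(c_pf):
    """MK33 figures from the text: 300 pF at 30 %RH, 0.45 pF per %RH."""
    return 30.0 + (c_pf - 300.0) / 0.45

# With these assumptions one ADC LSB corresponds to about 0.061 degC,
# so the reported ~0.05 degC resolution relies on averaging and gain.
lsb_c = adc_to_celsius(1)
```

The same structure applies to the humidity channel once the frequency-to-voltage converter output has been mapped back to capacitance.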
Dedicated software was developed (using the LabWindows/CVI environment) to receive and process the data. The software provides serial data interfacing, channel de-multiplexing, conversion from voltage to the appropriate measurement units, real-time data presentation and continuous storage to file. Moreover, because of the relatively high sampling frequency (all measured parameters are largely oversampled), data averages are computed, increasing measurement precision. A strip-chart display presents the parameters' evolution in real time. The current numerical values of all sensors are also displayed and updated every 20 ms. The software also provides continuous comparison against threshold limit values and the consequent activation of opto-acoustic alarms. For example, continuous comparisons are performed between each temperature (A, B, C, D and E) and the temperature set and displayed on the incubator (the control and displayed temperature have to be entered manually); temperature homogeneity and warm-up time measurements are also performed. To obtain accurate and precise measurements, the whole measuring device was calibrated. First, all the analog channels were calibrated using a reference voltage generator (very small differences between channels were measured, because the hardware uses a single ADC). Linear regression analysis was used to estimate the individual gains and offsets (the R2 coefficient always scored above 0.99). Then the sensors were connected and further adjustments were performed by comparing the obtained values with accurate reference instrumentation (small adjustments of tenths of a °C were performed for the temperature sensors, and temperature homogeneity was also obtained; adjustments of several %RH were required for the humidity sensor). Linear regression analysis was also used to obtain the definitive calibration within the range of normal device operation. A specific software module is dedicated to the calibration procedure, which should be repeated at fixed dates to ensure that performance is maintained over time.

III. RESULTS

The realized prototype device consists of a hardware part, including the sensors, and remote software. The hardware was enclosed in a plastic case (IP44), 15 by 9 by 6 cm in dimensions and about 200 grams in weight; it is battery powered (9 V disposable or rechargeable cell), providing a few hours of autonomy. The sensors have to be connected to the main hardware and manually arranged to obtain the configuration suggested by the Particular Standards (the design of an adjustable plastic support is planned).
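The per-channel calibration described in Section II (linear regression of gain and offset, with R2 as a quality check) can be sketched as an ordinary least-squares fit. The code below is our own illustration of that step, not the original calibration module:

```python
# Illustrative least-squares calibration of one analog channel:
# fit gain and offset mapping raw readings to reference values,
# and compute R^2 as a quality check (the paper reports R^2 > 0.99).

def fit_gain_offset(raw, ref):
    n = len(raw)
    mx = sum(raw) / n
    my = sum(ref) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, ref))
    sxx = sum((x - mx) ** 2 for x in raw)
    gain = sxy / sxx
    offset = my - gain * mx
    # coefficient of determination R^2
    ss_res = sum((y - (gain * x + offset)) ** 2 for x, y in zip(raw, ref))
    ss_tot = sum((y - my) ** 2 for y in ref)
    r2 = 1.0 - ss_res / ss_tot
    return gain, offset, r2

# Synthetic example: a channel with gain 2 and offset 0.5.
raw = [0.0, 1.0, 2.0, 3.0]
ref = [0.5, 2.5, 4.5, 6.5]
gain, offset, r2 = fit_gain_offset(raw, ref)   # -> 2.0, 0.5, 1.0
```

In practice the reference values come from the voltage generator (first pass) and from the accurate reference instrumentation (second pass), as described above.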
The overall resolution achieved was better than 0.05 °C for the temperature measurements and better than 0.1% RH for the relative humidity. Figure 2 presents a screenshot of the software while measurements are running. On the left, the continuous strip-chart-like signal display is visible (left y-axis scaled in °C, right y-axis scaled in %RH, x-axis calibrated in seconds). At the top right, the control temperature and any other parameters set on the incubator can be entered manually; below, the instantaneous numerical values of each parameter are visible and, to the right, the set of visual alarms activated when threshold limits are exceeded; the circular gauge displays relative humidity.

IV. DISCUSSION AND CONCLUSIONS

The availability of a device to simply and automatically perform functional assessment of neonatal incubators may help in periodic safety testing. Neonatal incubators are devices that guarantee life support to newborns; it is also worth mentioning that, in general, newborns cannot clearly communicate a threatening situation. Therefore, the assessment of incubators has greater importance with respect to other medical electrical equipment. A hot-wire anemometer is being added to sense air velocity, as is a circuit (an amplified electret microphone followed by a true-RMS IC) for sound level measurement. Future developments will include a CO2 sensor to perform the prescribed concentration test and also to estimate the air exchange rate within the baby compartment.
ACKNOWLEDGMENT

The authors gratefully thank Analog Devices, Innovative Sensor Technology and GE Sensing, which kindly provided sensor samples. Special thanks go to R. Barbieri and M. Cammarosano, who helped greatly in the realization and testing of the prototype. Thanks also go to Prof. G. Riccio for helpful discussions and suggestions.
Fig. 2 Software module screenshot: five thermal signals and one humidity signal are plotted in the same graph; digital indicators and gauges show the actual values recorded by each sensor.

REFERENCES

1. Various authors (1997) Thermal protection of the newborn: a practical guide. Maternal and Newborn Health/Safe Motherhood Unit, Division of Reproductive Health (Technical Support), World Health Organization.
2. Various authors (1998) Neonatal and Neonatal Transport Incubators - Premarket Notifications. FDA (Food and Drug Administration), Center for Devices and Radiological Health, General Hospital Devices Branch.
3. Bouattoura D, Villon P, Farges G (1998) Dynamic programming approach for newborn's incubator humidity control. IEEE Trans. Biomed. Eng. 45(1): 48-55.
4. International Standard (1998) IEC 60601-1: General requirements for safety for medical electrical equipment.
5. Particular Standard (1998) IEC 60601-2-19: Particular requirements for the safety of baby incubators.
6. Particular Standard (1999) IEC 60601-2-20: Particular requirements for the safety of transport incubators.
7. Barbieri R (2006) 'Development of a system for functional assessment of neonatal incubators'. Degree thesis, Biomedical Eng. Unit - D.I.E.T., University "Federico II" of Naples.
8. Cammarosano M (2007) 'Development of a prototype for thermo-hygrometric assessment of neonatal incubators'. Degree thesis, Biomedical Eng. Unit - D.I.E.T., University "Federico II" of Naples.

Address of the corresponding author:
Author: Paolo Bifulco
Institute: University 'Federico II' of Naples
Street: Via Claudio, 21 (I-80125)
City: Napoli
Country: Italy
Email: [email protected]
A QFD-based approach to quality measurement in health care F. Dori1, E. Iadanza1, D. Bottacci1 and S. Mattei1 1
Department of Electronics and Telecommunications, Università di Firenze, Firenze, Italy
Abstract— The general problem of process control requires a great commitment in terms of technologies and specific competencies regarding the many aspects that must be controlled, according to the complexity of the process and the particularities of the health structure. This demand drove us to plan and define a methodological tool that can be applied to a generic process in a health structure. The requirements for this kind of tool relate to the possibility of producing numerical, synthetic, and objective indexes, according to the idea that a numerical index has the intrinsic property of giving synthetic and comparable information, especially when it is linked to a qualitative definition. Hence, we propose a methodological tool based on the QFD (Quality Function Deployment) approach, characterized by a "semi-quantitative" and at the same time objective approach to quality measurement in health care structures. Such an instrument may be applied to several processes in the health care area or, given a "target process", several times to the same process at distinct moments (for example, before and after particular changes to critical aspects), to assess the contribution supplied by these improvements to process performance.

Keywords— QFD, quality measurement, health process, methodological tool, index.
I. INTRODUCTION

In this work we propose a methodological tool for use in the control and measurement of health processes. The first core step in approaching quality measurement is to identify exactly, in the design phase of the tool, the function or service provided, and to assess all the means necessary for the realization of that target. In this phase, all elements have to give their contribution in a correct way, according to regulatory standards and well-defined schemes. A further requirement that emerges is the ability to produce numerical, synthetic, and objective indexes; this need goes together with a reduction of the information necessary for control, and shows in an evident way which aspects actually have to be improved. So this instrument could be defined as "semi-quantitative" and at the same time objective. From another point of view, to realize a tool "as objective as possible" means to define an instrument able to work well irrespective of who is using it. Downstream of these preliminary considerations, the arrival point of the activity is to obtain a method that merely requires one to
associate numbers (input) with process elements, depending on the type of measures to be achieved. In short, this control and measurement instrument is intended:
a) to be "objective";
b) to produce indexes that allow the supplied service to be evaluated;
c) to be as simple as possible to apply;
d) to be versatile.
Such a tool may be applied to processes of different types, or several times to the same process at distinct moments (for example, before and after particular changes to critical aspects), to assess the contribution supplied by these improvements to process performance. The last feature, due to the interchangeability of the scenario that can be linked to the tool, makes it clear that a process or a service involved in a health structure's life cycle is a very suitable candidate for the application of the tool itself [1]. In the remainder of this paper we describe the main characteristics of the method developed, to explain how deep the interest in designing this kind of instrument for health care can be.

II. MATERIALS AND METHODS

Several typologies of methodological tools have been identified that are able to estimate process quality at different levels of examination. These tools are both qualitative (that is, the measure is related to descriptive terms) and "semi-quantitative" (that is, the process measure is supplied by numerical and synthetic indexes linked to objective definitions). Nowadays, a widespread approach to quality control is based on the idea that such control has to be carried out in every single phase of the service, not just at the end, enabling a reduction of the costs of progressive re-intervention in the improvement of the process; this also leads to preventive action against "non-working" service realization, meaning by this term the lack of agreement with client requirements, which must have priority importance.
Many of these control techniques are anticipatory, meaning that they study the process beforehand, from an "a priori" point of view. This "universe" of methodological tools can be framed with reference to TQM (Total Quality Management), which is one of the earliest methods in
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1102–1106, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
the field of process management directed to the client; it tries to find earliest possible faults and mistakes, allowing to give to client product and service at the lowest possible cost, therefore implying more client satisfaction, less rejects and faults, and an improvement of staff commitment and motivation. It’s based on a series of roles selected for the staff and it uses several support techniques. Main goal is to follow clients and their needs, providing them fine goods and services; the organization is increased thanks to mix of innovation and continuous and graduals improvement to products and processes with the help of all staff levels. Very important is the information role for the horizontal coordination between workers, and continual staff formation. The correct TQM implementation brings some advantages: • • •
more client satisfaction; reduction of faults and rejects; improvement of staff care and motivation.
Among other techniques, TQM uses the cause-and-effect diagram, the Pareto diagram, quality tests, FMEA/FMECA and QFD. After a detailed analysis of these methodologies, our attention focused on one particular technique, Quality Function Deployment (QFD), which is in itself a complete design, control and improvement technique.

A. QFD introduction
The ASI (American Supplier Institute) defines QFD as "a system able to translate customer demands into suitable technical specifications internal to the company". QFD therefore represents an instrument able to orient design and service toward the actual demands of the user [2], [3]. A requirement is an explicit or implicit demand from the user or customer. The characteristics described in the method are derived directly from requirements. They fall into many categories and can be divided into functional characteristics (execution time, uses), time characteristics (how long the characteristics hold over time, hence reliability, maintainability, safety) and ergonomic characteristics (ease of use). As a design and evaluation tool for quality, QFD has the property of giving a multidimensional quantity as output; its implementation must therefore involve all concerned staff, from the "supervisor" level down to all workers involved in the activities of the business area of the structure.

B. Description of the QFD-based approach

In spite of the increasing success of the methodology in Japan and in the USA, in Italy there are, to our knowledge, few companies that use it to improve new products/processes or to develop existing ones. Indeed, one of the main problems is the team, often because of poor communication between the staff of different departments. Another drawback is that many companies may consider the application of QFD a pointless method, underestimating the fact that a considerable amount of working time is usually spent on error correction: errors arise from the loss of contact between requirements and characteristics. In Clausing's opinion, QFD resolves three problems of Western industries:
1. inattention to the client's "voice";
2. lack of information during the development cycle;
3. different interpretations of specifications by the many departments involved.

QFD is based on the construction of the so-called House of Quality (Fig. 1). The fundamental steps are:
i. identification of Customer Requirements;
ii. identification of Product/Engineering Design Requirements;
iii. construction of the Relationship Matrix;
iv. planning and deployment of expected quality (through Competitive Benchmarking);

Fig. 1: Scheme of the House of Quality
F. Dori, E. Iadanza, D. Bottacci and S. Mattei
v. technical comparison (Technical Importance Ranking of characteristics);
vi. analysis of correlations between characteristics (Correlation Matrix).
In the scheme of Fig. 1 we can identify:
• In part 1: customer requirements, obtained through personal interviews, group interviews, qualitative techniques, and service/product analysis techniques.
• In part 2: the importance customers give to those requirements; it is fundamental to state the qualitative meaning corresponding to each numeric value (Table 1).

Table 1: Numeric scale of importance given to attributes
  Numeric value   Qualitative meaning
  1               Not important
  2               Preferable
  3               Important
  4               Very important
  5               Fundamental

• In part 3: Engineering Characteristics.
• In part 4: the correlation matrix between requirements and characteristics, which expresses the influence of each characteristic on expected quality according to its satisfaction degree. These influences are captured by quantitative and semi-qualitative components, known as intensities of correlation rij (high, medium, faint, doubtful or nonexistent), encoded with alphabetic characters, numbers or symbols at the matrix crossing points. We use a four-level scale (Table 2).

Table 2: Numeric (or symbolic) scale of correlation
  Numeric value   Qualitative meaning
  9               High correlation
  6               Medium correlation
  3               Faint correlation
  0               Doubtful or nonexistent correlation

• In part 5: indexes of interaction between the characteristics. These symbols represent the positive or negative direction and the intensity of each correlation. An example of the symbols/numbers that may be used is given in Table 3.

Table 3: Numeric or symbolic scale of interaction between characteristics
  Numeric value   Qualitative meaning
  9               High positive interaction
  3               Positive interaction
  -3              Negative interaction
  -9              High negative interaction
The correlation matrix serves to identify which technical characteristics reinforce one another and which are in conflict. Negative correlations represent situations that probably demand specific compromises, and that should never be ignored ("trade-off" situations).
• In part 6: the weights assigned to the characteristics, depending on their importance. The classic calculation method, called the Independent Scoring Method [6], consists of two simple steps:
1. the first step converts the relations, expressed symbolically, between customer needs and product characteristics into "equivalent" values: 1-3-9 (the most used), 1-3-5 or 1-5-9;
2. the second step determines the level of importance of each technical characteristic: it is the sum, over every requirement related to characteristic "j", of the product of that requirement's relative degree of importance and the quantified value of the tie between the characteristic and the requirement.
The latter represents the importance that, indirectly, the customer attributes to each product characteristic, and can be used to rank the level of attention that the service designer must reserve for the technical-engineering characteristics in the activity-planning phase.
• In part 7: classification and ranking of the quality expected by customers, in order to estimate which requirements are most meaningful for the improvement of process quality. From the comparison between the results obtained and the relative priority weights of the requirements, additional weights may be defined for attributes known as "strength-points". A conventional score of 1.5 is assigned to "strength-points"; for demands whose satisfaction is considered "possible" strength-points, the assigned score is 1.2; a weight of 1 is attributed to demands that are not considered in this way.
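The two steps of the Independent Scoring Method can be sketched as follows. This is an illustrative example only: the requirement names, the technical characteristics and the symbolic encoding ("strong"/"medium"/"weak") are hypothetical and not taken from the case study; the 1-3-9 equivalence scale is the one described above.

```python
# Independent Scoring Method (illustrative sketch; all data are hypothetical).
# Step 1: convert symbolic relationships into "equivalent" values (1-3-9 scale).
SYMBOL_TO_VALUE = {"strong": 9, "medium": 3, "weak": 1, None: 0}

# Customer requirements with importance degrees on the 1-to-5 scale of Table 1.
importance = {"short waiting time": 5, "clear reporting": 3, "ease of access": 2}

# Relationship matrix: requirement -> {technical characteristic: symbol}.
relationships = {
    "short waiting time": {"process automation": "strong", "staff training": "medium"},
    "clear reporting":    {"staff training": "strong", "report templates": "strong"},
    "ease of access":     {"process automation": "weak"},
}

def technical_importance(importance, relationships):
    """Step 2: for each characteristic j, sum over the requirements i of
    (importance of i) * (equivalent value of the i-j relationship)."""
    scores = {}
    for req, links in relationships.items():
        for char, symbol in links.items():
            scores[char] = scores.get(char, 0) + importance[req] * SYMBOL_TO_VALUE[symbol]
    return scores

scores = technical_importance(importance, relationships)
# Rank the characteristics by the importance the customer indirectly attributes
# to them, i.e. the attention the designer should reserve for each of them.
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With these hypothetical inputs, "process automation" scores 5·9 + 2·1 = 47 and tops the ranking, which is exactly the information the designer uses in the activity-planning phase.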
• In part 8: the deployment of quality. In developing the product quality plan/design it is also necessary to fix target values for the satisfaction of the needs, taking into account business strategies and the values obtained from competitor analysis: these are defined as the Objective values of the new model (using the same 1-to-5 scale used in the benchmarking analysis). The satisfaction values of the current model are also fixed. It is then possible to calculate the Ratio (or Degree) of improvement (or Upgrading Factor), which measures the improvement necessary to reach the "goal values". It is calculated as the ratio between the target value and the customer's appraisal of the current model.
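As a numeric illustration (all values hypothetical): the improvement ratio is the target value divided by the customer's appraisal of the current model, both on the 1-to-5 scale. Combining it with the requirement's importance and the strength-point multiplier (1.5, 1.2 or 1.0) to obtain an absolute requirement weight is a common composition in QFD practice, assumed here rather than prescribed by the text.

```python
def improvement_ratio(target, current_appraisal):
    """Ratio (degree) of improvement: target value over the customer's
    appraisal of the current model, both on the 1-to-5 benchmarking scale."""
    return target / current_appraisal

def requirement_weight(importance, target, current_appraisal, strength_point=1.0):
    """Absolute weight of a requirement: importance (1-5, Table 1) times the
    improvement ratio times the strength-point multiplier (1.5, 1.2 or 1.0).
    This composition is a common QFD convention, assumed for illustration."""
    return importance * improvement_ratio(target, current_appraisal) * strength_point

# Hypothetical example: a fundamental requirement (importance 5), currently
# rated 2 by customers, with target 4, flagged as a "strength-point" (1.5).
w = requirement_weight(importance=5, target=4, current_appraisal=2, strength_point=1.5)
# w = 5 * (4 / 2) * 1.5 = 15.0
```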
III. TOOL'S ADVANTAGES AND DRAWBACKS

The short-term advantages include fewer process start-up problems, a shorter product development cycle, fewer modifications to the initial design, and better quality and reliability under all profiles. Fundamental conditions for the application of QFD are:
1. results in a short time (customers' expectations change very quickly);
2. deep knowledge of the instruments, which requires substantial investment in staff training;
3. strong commitment and conviction from top management.

From a strictly operational point of view, QFD:
1. defines process characteristics;
2. ensures consistency between requirements and process characteristics (which can be measured);
3. decreases the need for corrections and modifications in advanced phases of development;
4. increases the intrinsic reaction capability of the process itself, so that errors arising from wrong interpretation of priorities and objectives can be minimized;
5. promotes self-documentation of processes;
6. defines unique reference documents, both for the customer and for the documents' author.

The main disadvantages tied to the application of QFD (and some risks which emerge during module compilation) are:
1. creation of large worksheets that become difficult to handle and analyze (the risk is wasting time on excessive detail, which is not in accordance with the operative level of the design);
2. confusion in the definition of customer requirements, which can lead to collecting wrong data;
3. difficulty in identifying requirements and process characteristics;
4. difficulty in establishing the correlation intensity between customer needs and the technical characteristics of products.

Possible further problems are:
1. cultural barriers;
2. lack of instruments (software environments);
3. exponential growth of management difficulty with the size of the project; large projects require the definition of many requirements and technical characteristics, which cannot be managed without software assistance.
IV. DISCUSSION

The fundamental aspects that justify the choice of this methodology are the following:
• it starts from the requirements of the customer;
• it is a "semi-quantitative" technique: numerical data reduce the amount of purely qualitative information;
• it is versatile;
• it has a simple user interface;
• it gives a global vision of the characteristics of each process;
• it can be used at the planning stage (the technique foresees the process before it runs).
These features seem to fit health processes perfectly, given the complexity of process activities and the great importance of client (patient) needs. The tool is now in an application-test phase for its validation, but preliminary results show its capacity to measure and control the output quality of a process. This test step concerns the application to a "laboratory medicine structure", which needs to comply with requirements set out by regional and international bodies for accreditation [4]. The tool can also be applied to the revision and assessment of different processes that supply the same service; it can likewise be used to take a picture of the same process at different moments in time, or to analyze the same process in order to estimate which of the various services it supplies is best satisfied [5].
REFERENCES
1. Ministero della Salute, http://www.ministerosalute.it
2. QFD Capture, http://www.qfdcapture.com
3. QFD Institute – The official source for QFD, http://www.qfdi.org
4. Baglioni B O, Dubini S (2002) I requisiti minimi per l'accreditamento tra federalismo e norme tecniche. Progettare per la sanità 70
5. Gentili E, Antonelli C (2003) Miglioramento continuo e sanità. Tecnica ospedaliera
6. Akao Y (1995) QFD toward development management. Proceedings of the International Symposium on Quality Function Deployment '95, Tokyo, pp 1–8

Author: Fabrizio Dori
Institute: Department of Electronics and Telecommunication
Street: V. S. Marta, 3
City: Florence
Country: Italy
Email: [email protected]
BME Education at the University of Trieste: the Higher Education in Clinical Engineering
P. Inchingolo¹ and F. Vatta¹
¹ Higher Education in Clinical Engineering, University of Trieste, Trieste, Italy
Abstract—This paper presents the Higher Education in Clinical Engineering (HECE) program of the University of Trieste, Italy. HECE has been running since Academic Year 2003-04, following the long tradition and experience of the former post-graduate School in Clinical Engineering, started in 1991 with the cooperation of many Italian and foreign universities, hospitals, health ministries and biomedical industries. The paper focuses on the conceptual design and educational structure of HECE's extensive educational program in Clinical Engineering, specifically conceived to provide prospective Clinical Engineering professionals with the appropriate education level and the related skill sets needed to prepare them for the future role of the Clinical Engineering profession, in line with current and future developments of modern healthcare systems.
Keywords— BME, higher education, Clinical Engineering, healthcare systems, ICT.
I. INTRODUCTION

In the past few decades, medicine and health care have evolved into a highly specialized technological branch, offering tremendous possibilities for the prevention, diagnosis and treatment of diseases. This evolution was made possible by breakthroughs in many medical fields supported by biomedical engineering innovations, such as three-dimensional high-resolution medical imaging, new biomaterials and biomechanics, robot-assisted minimally invasive surgery, artificial organs, automation of laboratory research, bioinformatics, information and communication technology, and many others. Modern health care is no longer the domain of clinicians alone: it depends on versatile, multidisciplinary teams, in which biomedical engineers play an important role that goes beyond problem-solving. Given the continuous development of the technological aspects of medical practice, hospitals need more specialized personnel, namely Clinical Engineers, for the selection of large equipment (e.g., medical imaging), for the training of clinical personnel confronted with informatics systems and other high-technology equipment, and for the operation and maintenance of complex systems [1]. The International Federation for Medical and Biological Engineering (IFMBE) as well as the American College of Clinical Engineering
(ACCE) define Clinical Engineering as follows: “A Clinical Engineer is a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology”. Clinical Engineering is then taken to mean the application of medical and biological engineering within the clinical environment for the enhancement of health care. Clinical Engineers are Biomedical Engineers based in the clinical environment, usually a hospital or other healthcare provider environment, responsible for the design, management and quality assurance of patient-connected equipment in hospitals and involved at many levels in the safe, appropriate and economical use of technology in the health care system. The primary focus of Clinical Engineering services in the early years was on incoming and routine inspections (with an emphasis on electrical safety testing) and on repairs of biomedical equipment. The Clinical Engineering profession has then changed its focus over time from equipment safety and control to healthcare technology management. Today, healthcare technology extends into information and communications systems and traditional medical equipment is more complex than ever. Assessing, managing and solving problems in this hyper-tech world currently constitute the work of the Clinical Engineer [2]. As a result of such developments in industry and the healthcare systems, there is a rapidly increasing need for the educational systems to provide the necessary human resources for Clinical Engineering services, making sure that the quality of education and training satisfies the needs of the employers. This paper presents the Higher Education in Clinical Engineering (HECE) program of the University of Trieste, Italy [3]. 
HECE has been running since Academic Year 2003-04, following the long tradition and experience of the former post-graduate School in Clinical Engineering, started in 1991 with the cooperation of many Italian and foreign universities, hospitals, health ministries and biomedical industries. The paper focuses on the conceptual design and educational structure of HECE's extensive educational program in Clinical Engineering, specifically conceived to provide prospective Clinical Engineering professionals with the appropriate education level and the related skill sets needed to prepare them for the future role of the Clinical Engineering profession, in line with current and future developments of modern healthcare systems.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1077–1080, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
P. Inchingolo and F. Vatta
II. THE NEED FOR CHANGE IN CLINICAL ENGINEERING EDUCATION
To reasonably predict the future role of Clinical Engineers, we must consider the future nature of healthcare systems, as Clinical Engineering evolves within the context of our healthcare delivery system. There are revolutionary changes occurring within healthcare delivery systems which are the result of a combination of technological, demographic and economic forces. Information technology promises to play a greater role in both the clinical and the business aspects of healthcare. Incorporating current technological advancements has greatly increased the amount of diagnostic and therapeutic data that clinical systems can collect, store, and process. The number of diagnostic and therapeutic systems being linked is on the rise. The overall effect is synergistic: the benefits gained from integrated systems far exceed those available when the individual devices and systems are used in stand-alone mode. The healthcare technology landscape is therefore undergoing a rapid connectivity-based transformation. Furthermore, networking and telecommunications have the potential to bring healthcare resources to any near or remote location and to facilitate medical data and personal communications between any combination of patients and providers. The key technological developments affecting healthcare thus include the exponential growth in information processing capacity and availability and the connectivity of technologies. These emerging medical technologies have the potential to greatly improve the quality and availability of healthcare at a reasonable cost [4]. Furthermore, we are undergoing significant demographic changes that will have an important impact on the healthcare industry. Due to the aging population, there is a growing shift from acute, episodic care to care for chronic conditions [5]. Today, the medical care costs of people with chronic diseases account for more than 60% of national medical care costs.
By the year 2020, 80% of total medical care spending will be associated with the treatment of these individuals [6]. As a consequence of these trends, most Clinical Engineering efforts need to concentrate on the development of infrastructures for a healthcare system that provides long-term treatment programs for patients with multiple, chronic diseases. In recent years, issues related to healthcare quality and costs have led to government and industry initiatives directed at improving the quality and availability of healthcare and at reducing its costs. It has been demonstrated that large amounts of money could be saved if healthcare organizations adopted standardized data formats to exchange patient information [7]. In 1999, an industry- and user-sponsored initiative called Integrating the Healthcare
Enterprise (IHE) was launched. IHE brought together medical professionals and the healthcare information and imaging systems industry "to agree upon, document and demonstrate standards-based methods of sharing information in support of optimal patient care" [1]. The initiative was sponsored by the Radiological Society of North America (RSNA) and the Healthcare Information and Management Systems Society (HIMSS). After successful efforts in medical imaging, the program has since broadened its scope into the clinical laboratory, cardiology, and other areas that benefit from the effective integration of biomedical and information technology systems. IHE represents perhaps one of the most significant regulatory initiatives currently affecting the adoption of technology in the healthcare industry. Owing to the forces described above, healthcare will undergo substantial changes within the next 5 to 20 years. The healthcare industry is increasingly focusing on the long-term treatment of chronic conditions for an aging patient population. This population expects high-quality care that is readily available and reasonably priced, also outside hospitals. Technological advances facilitate the industry's ability to meet these demands, and regulatory pressures foster better integration. Given these technological, demographic and regulatory dynamics at work in the healthcare industry, Clinical Engineering will be transformed over the next few years. Consequently, all these significant changes in healthcare delivery result in a corresponding need for changes in Clinical Engineering education. In the following section, the educational approach and the innovative key-points characterizing the HECE educational program, designed to meet all the needs described above, are presented and discussed.

III. THE HIGHER EDUCATION IN CLINICAL ENGINEERING EDUCATIONAL APPROACH
The HECE educational program at the University of Trieste introduces the following innovative key-points in its Clinical Engineering educational programs: Adopts a systems and process approach. Clinical Engineering education has traditionally been oriented toward the management of discrete devices (i.e., equipment management). Consideration of systems and processes requires looking at the “big picture”, not focusing on discrete devices but understanding how individual devices must interconnect to accomplish a technical process. HECE’s Clinical Engineering educational programs have been conceived to be more systems and process oriented as biomedical devices, increasingly, are becoming part of integrated technology systems. Technology significantly contributes to quality of care, patient safety, patient outcomes, health data integrity and availability issues. These issues involve processes and require a systems approach in their management.
Adds basic information technology and telecommunications skills. For years the trend has been for biomedical devices and systems to process increasing amounts of data and for these systems to be networked together to share these data. The confluence of biomedical, information and telecommunications technologies will continue in healthcare, supplying the backbone along which integrated biomedical systems will operate. To ensure support and coverage of these merging technologies, Clinical Engineering must also be prepared to integrate its services at an appropriate level with those of information technology and telecommunications. HECE's Clinical Engineering educational approach develops basic proficiencies in all these areas. Adds expertise in the "business" of technology. Clinical Engineers must acquire economic expertise pertaining to the adoption and use of technology in healthcare, including cost/benefit analyses, return on investment and life-cycle cost analyses. These considerations typically support technology-related decisions made by healthcare executives. Adds management skills for planning the integration of existing and new medical technologies. Clinical Engineers must anticipate the need for integration, understand its implications, and possess the skills necessary to successfully manage the integration process. Prepares to develop systems and infrastructures to support technology in nontraditional venues. Healthcare is increasingly being delivered outside traditional venues (e.g., hospitals and clinics). Clinical Engineers must be prepared to incorporate and support medical technology in nontraditional locations (e.g., the patient's home, assisted living facility, office, school, and public areas) by developing the necessary systems and infrastructures. Incorporates continuing education. The pace of the healthcare technology revolution is quickening.
To contribute to the successful adoption of new technologies, Clinical Engineering has to embrace a regimen of continuing education available to already-graduated Clinical Engineers working in hospitals or in other healthcare provider environments. Provides a completely integrated distance learning system for the full on-line fruition of the HECE courses and educational resources. HECE makes extensive use of e-learning facilities by means of its E-HECE (E-Higher Education in Clinical Engineering) system, with its videoconferencing facilities, its streaming facilities and the e-learning platform. This system provides students with the means to participate actively, synchronously in live classes and/or asynchronously through recordings of the classes from any location, as HECE students are mainly personnel already working in hospitals or in healthcare provider environments, either in Italy or in other
European countries, and therefore need distance-learning cooperative instruments.

IV. HECE EDUCATIONAL PROGRAMS
Since Academic Year 2003-2004, an extensive educational program in Clinical Engineering has been instituted at the University of Trieste as a transformation of the former two-year post-graduate School in Clinical Engineering, started in 1991, which was active for 12 years as the unique point of reference for education in Clinical Engineering in Italy and was also widely recognized in Europe. HECE's educational programs have been formally activated within the Central European Initiative (CEI) University Network with the cooperation of many Italian and foreign universities, hospitals, health ministries and biomedical industries. Two Masters have been activated within HECE: the first-level "Master in Clinical Engineering" (MIC-MCE) and the International "Specialist Master of Management in Clinical Engineering" (SMMCE). Given the growing student constituency and demand, since Academic Year 2004-2005 a Magistral Laurea Degree in Clinical Engineering (LSIC), a two-year graduate program, has also been included in HECE's educational program in Clinical Engineering. HECE's education in Clinical Engineering provides a fundamental understanding not only of the relevant technologies but also of the physiological systems on which those technologies are applied, of the healthcare environment in which they are used, and of the regulatory framework in which they exist. HECE's graduated Clinical Engineers can be expected to develop organizational, project management, strategic planning and investigative skills to ensure the availability of safe and effective healthcare technology.
The aim of the MIC-MCE Master is the education of professional specialists in the field of Clinical Engineering, making them capable of working in Clinical Engineering Services and coordinating small operational units of technical staff, with the aims to: 1) manage, evaluate, install, maintain and upgrade the clinical-assistential biomedical and info-telematic instrumentation and equipment used in social-health services (either inside hospitals or in distributed care and home care structures); 2) take care of their safe, appropriate and economic use; 3) collaborate with health operators in the use of engineering methodologies to solve clinical and management problems. The aim of the SMMCE Master is, in turn, the education of Clinical Engineers who already have an education level corresponding to the one given by the MIC-MCE Master. These engineers become professional and managerial specialists in the field of Clinical Engineering, able to organize and coordinate large operational units of Clinical Engineers and
[Fig. 1 here: bar chart of students and theses per academic year, AY 1991-92 through AY 2005-06. Totals: 354 students, 226 (241) theses, 212 (227) people (Italian market: 90%). Per course — MIC: 54 students, 54 theses; LSIC: 44 students, 24 theses; SMMCE: 42 students, 42 theses; SSIC: 214 students, 106 theses.]
technical staff, and to design and organize biomedical technology systems, including distributed systems and systems interconnected at local and geographical level, in addition to the purposes cited above. The LSIC Degree in Clinical Engineering provides an advanced education in biomedical engineering with special regard to Clinical Engineering, through a wide spectrum of teaching courses and laboratories dedicated to methodological and advanced applicative themes of biomedical technologies in the Information Society. The LSIC degree educational offer comprises three curricular programs (hospital, information and management curricula) and provides the professional skills necessary to design and develop biomedical instrumentation, equipment and devices, advanced info-telematic systems for healthcare, artificial organs, prostheses and functional supporting devices. LSIC graduates are able to manage small Clinical Engineering services or a section of a Clinical Engineering service in a medium-large healthcare enterprise, or to operate, as managers of small operative sections, in Clinical Engineering service enterprises or in combined services of Clinical Engineering, medical informatics and health telematics. HECE lessons are held in Trieste and simultaneously, by multi-videoconference within the E-HECE system, in many other distributed classrooms located at a number of peripheral sites (University Roma Tre, Polytechnic of Turin, IRCCS San Matteo of Pavia, Institute of Biomedical Engineering-CNR in Padova, IRCCS Casa Sollievo della Sofferenza in San Giovanni Rotondo (FG), and the Universities of Graz, Maribor, Fiume-Rijeka and Zagreb).
Multi-videoconferencing effectively creates a multiple virtual classroom in which students at the different sites can fully interact with the teacher, who holds his/her lesson from one of these distributed sites or from another one: asking questions, requesting clarifications, debating, and discussing practical experiences. The E-HECE system has been extensively used for all the SSIC-HECE courses of the Biomedical-Clinical Engineering Program of the University of Trieste since September 2005, serving up to now a total student population of about 340 students in 150 courses. Fig. 1 shows some statistics on HECE's student population over the years, from AY 1991-92 up to the last one. Education and training are linked together in HECE educational programs. Trainees undergo a training period including supervision and experiential training in the core areas of Clinical Engineering. Trainings are normally organized in cooperation with prestigious healthcare enterprises, hospitals, biomedical industries and research institutions in national and international environments.
P. Inchingolo and F. Vatta
Fig. 1 Statistics of the HECE total student population and theses from AY 1991-92 to AY 2005-06. Data have been calculated as the sum of the students graduating through all the active courses each year: the two-year post-graduate School in Clinical Engineering (AY 1991-92 through AY 2002-03), the 1-year MIC-MCE Master and the SMMCE Master (since AY 2003-04) and the two-year LSIC Degree (since AY 2004-05).
ACKNOWLEDGMENT Work supported by Higher Education in Clinical Engineering, University of Trieste, Trieste, Italy.
REFERENCES

1. Inchingolo P et al. (2004) Integrated distance learning in biomedical sciences and engineering: the experience of the Higher Education in Clinical Engineering, in EuroPACS-MIR 2004 in the Enlarged Europe, P. Inchingolo & R. Pozzi Mucelli (eds), EUT:435-438
2. Inchingolo P, Beltrame M, Bosazzi P et al. (2006) O3DPACS Open-Source Image-Data Manager/Archiver and HDW2 Image-Data Display: an IHE-compliant project pushing the e-health integration in the world. Comput Med Imaging Graph 30(6-7):391-406
3. http://www.ssic.units.it
4. Heffler S et al. (2003) Trends: Health spending projections for 2002-2012. Health Affairs W3:54-65
5. Chronic Disease Prevention Web page, National Center for Chronic Disease Prevention and Health Promotion, available at http://www.cdc.gov/nccdphp/about.htm
6. Wu S, Green A (2005) Projection of chronic illness prevalence and cost inflation. RAND Health
7. Duncan M, Rishel W, Kleinberg K, Klein J (2004) A common sense approach to HIPAA. GartnerGroup

Author: Federica Vatta
Institute: SSIC-HECE - University of Trieste
Street: Via Valerio 10
City: Trieste
Country: Italy
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Certification of Biomedical Engineering Technicians and Clinical Engineers: Important or Not James O. Wear, PhD, CCE, CHSP, FASHE, FAIMBE Scientific Enterprises, Inc, North Little Rock, AR, USA Abstract— The clinical engineering staff, including engineers and technicians, is an important element in the use of medical technology in health care facilities. The clinical engineer is a member of the technology management team, which is involved in the selection of new technology and the design of facilities for the use of that technology. What is the impact on healthcare delivery of certified Clinical Engineers and Biomedical Engineering Technicians (BMETs)? Can a certified clinical engineering staff improve accreditation of hospitals? Certification of staff is one measure of quality control for medical technology. However, no country requires that BMETs and Clinical Engineers be certified in order to perform any functions with medical technology. The history of certification will be presented, including how it differs between countries. The recognition of certification of BMETs and Clinical Engineers by the healthcare community will be discussed. The first certification programs were initiated in 1972 in the United States. A clinical engineer may be called a biomedical engineer or medical engineer, and different terminology is also used for BMETs. Keywords— Certification, BMET, Biomedical Engineering Technicians, Clinical Engineers, CAHTMA
I. INTRODUCTION

There is a lot said about certification, and there are many different certification programs, so one might ask what certification means. Certification is not just a certificate for participating in a training program. At the same time, it is not a license, which is a legal document provided by a professional institution and recognized by a government. A certification is granted by a professional organization and requires experience, education and usually an exam. When one talks about a certified BMET, what is one really talking about? BMET can mean Biomedical Equipment Technician, Biomedical Electronics Technician, Biomedical Engineering Technician, or even Biomedical Instrumentation Technician, and there are degree programs with all of these names. Within the certification program, it is a Certified Biomedical Equipment Technician. When one talks about Clinical Engineering, this too can have different names in the hospital setting. Such engineers are generally called Biomedical Engineers or Clinical Engineers; in some cases they are Medical Engineers or Medical Physicists, the latter being the more common terminology in Europe. Under the certification program, it is a Certified Clinical Engineer, no matter what the degree or experience.

II. HISTORY – CBET

The BMET education program in the U.S. started with the U.S. Army training of Medical Equipment Technicians in World War II. BMET positions did not appear in medical centers until the late 1960s, when medical equipment became prominent in healthcare delivery. The Technical Education Research Center (TERC) was given a grant around 1967 by the U.S. government to develop a curriculum for the Biomedical Equipment Technician. As part of the grant, TERC funded a couple of pilot programs in community colleges. The program was designed as a two-year associate degree program. Within a few years, several associate degree programs had been developed in vocational schools around the country. Unfortunately, most of these programs did not follow the curriculum developed by TERC. As a result, the quality of the programs and the type of training that individuals received varied greatly, ranging from an electronics program with one course in biomedical instrumentation to programs with several courses in instrumentation and internships at hospitals. The Association for the Advancement of Medical Instrumentation (AAMI) promoted the training of Biomedical Equipment Technicians, as well as their hiring in hospitals. In order to assure some quality, based on a standard set of skills for a Biomedical Equipment Technician, AAMI initiated the certification program for Biomedical Equipment Technicians, or the CBET. The Board of Examiners included Lt. Col. Bert Dobson, who was in charge of the Air Force's medical equipment maintenance program, as well as educators, clinical engineers, and other clinical staff. They developed an exam based primarily on the skills required by the U.S. military training program and the curriculum developed by TERC.
Over 5000 technicians have been certified under this program, most of them in the United States. In 1973, the Department of Veterans Affairs initiated its own certification program based on the AAMI certification program. It was created because of the logistical difficulty of traveling to sites to take the AAMI exam. The exams
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1081–1084, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
continued in parallel, at times with some of the same members on the Boards of Examiners. In 1985, the exams were merged, and the AAMI/ICC program accepted all people certified by the Department of Veterans Affairs. In the same timeframe, the Electronics Technicians Association (ETA International) developed a certification for Biomedical Electronic Technicians (CET-BMD). More recently, it has developed a certification for the Certified Biomedical Imaging Equipment Technician (CET-BIET). In the mid 1980s, the International Certification Commission (ICC) was created as a result of certification programs being developed, primarily for Clinical Engineers, in other countries. The ICC is a coordinating commission that helps to maintain the quality of the various programs in different countries. The CBET exam was developed for the general Biomedical Equipment Technician. It was difficult to pass for technicians who worked for vendors of specific products, as well as for technicians who worked in specific areas of the hospital, such as radiological equipment and clinical lab equipment. To partially correct this situation, a certification exam was developed for technicians who work on imaging and radiological equipment, and another for technicians who work on clinical lab equipment. For imaging equipment there is the Certified Radiological Equipment Specialist (CRES). This has been a popular certification, and many technicians have been certified under this program, including vendor technicians. For clinical laboratory equipment there is the Certified Laboratory Equipment Specialist (CLES), which has not been a very popular certification program, primarily because changes in clinical laboratory equipment have reduced the number of technicians required to work in that area.

III. HISTORY – CCE

The term clinical engineer was coined by Dr. Caesar Caceres in 1970.
The Clinical Engineer is an engineer who works in the hospital setting with the clinical staff, as opposed to a Biomedical Engineer, who primarily does research. Very few training programs have been developed in the United States for Clinical Engineering, and at present there are only a handful. Certification for the Clinical Engineer was developed initially by AAMI in 1975 to recognize the engineers who were working in the clinical setting. Specialists from different engineering and physical science backgrounds were functioning as Clinical Engineers, and certification was one way to assure that engineers had the skill sets required of a Clinical Engineer.
Also in 1975, the American Board of Clinical Engineering developed a certification for Clinical Engineers. It was established as an independent group, not tied to any organization, because its founders did not consider accepting the initial group of Clinical Engineers on the basis of experience alone a sound way to certify people. As a result, five people certified themselves and developed an exam for others to take. The two Clinical Engineering programs continued until 1984, when they were merged into one program under the International Certification Commission. In 1999, the International Certification Commission discontinued the certification program for Clinical Engineers in the United States. This was done on economic grounds, since very few people, typically one or two per year, were applying to take the certification exam. At that time, about 400 clinical engineers had been certified. After some attempts to re-establish the certification program for Clinical Engineers under the ICC, a new certification program was developed under the Health Technology Certification Commission (HTCC). HTCC gave the first new clinical engineering exam in 2003. This certification exam is based on the body of knowledge required of Clinical Engineers developed by the American College of Clinical Engineering.

IV. CERTIFICATION OUTSIDE THE US

There are a variety of certifications of Biomedical Technicians and Clinical Engineers in countries other than the United States. Most of them involve certification of engineers rather than technicians. Several are spin-offs of the American Clinical Engineering certification program and are members of the International Certification Commission. The Canadian program was originally part of the United States certification for both BMETs and Clinical Engineers, but it has some slightly different requirements.
For instance, the Clinical Engineer has to be a Canadian Professional Engineer, and the BMET program in Canada is a Bachelor's degree program, as opposed to the Associate degree program in the United States. Approximately 50 Clinical Engineers are certified under the Canadian program. Individuals who were certified under the United States certification program started the Clinical Engineer certification program in Brazil. They have developed a program with an exam in Portuguese, but only a few engineers have been certified under it. Mexico started a certification program and certified its first Clinical Engineers in 1991. Again, the initial engineers who made up its Board of Examiners were certified under the United States program. Mexico has also developed
an exam in Spanish, which is available to other Latin American countries. Other countries have developed certification programs for Clinical Engineers, and in some cases for technicians, that are not affiliated with the International Certification Commission. Some of these programs have an exam; others have certified people based on their credentials, including both experience and academic coursework. South Africa has a voluntary registration program for Clinical Engineers, Clinical Engineering Technologists and Clinical Engineering Technicians, which is based on experience and academic requirements. All three of these groups are considered professionals and have academic training. South Africa also has medical equipment repair personnel. Germany has developed a Certified Clinical Engineering program, but it does not require an exam; it is based on experience and academic background. Germany is also planning to develop a Certified Biomedical Engineering Technician program. In most European countries there are more engineers than technicians, so certification of the engineer is more important. The United Kingdom initially developed a certification for Clinical Engineers in the 1990s, but this program was dropped for lack of interest. The UK now has a voluntary registration for Clinical Engineering Technologists, which includes Clinical Engineers, Medical Physics staff and other scientific people working in the clinical setting. Sweden has a voluntary Clinical Engineering certification program based on academic credentials and experience. Japan has a rather unique situation in that it has Clinical Engineering Technologists, who must be certified by the government to operate equipment. These certified operators also perform maintenance on some of the equipment. Hong Kong engineers have been certified under the Health Technology Certification Commission in the United States,
and technicians under the ICC program. Certification of clinical engineers is developing in China with the help of Yadin David. This program is not affiliated with any other program at present. Last year 35 engineers were certified. The written and oral exams are given in English, with a dictionary allowed.

V. COMMISSION FOR THE ADVANCEMENT OF HEALTHCARE TECHNOLOGY MANAGEMENT IN ASIA (CAHTMA)
CAHTMA was initiated in 2005 with the endorsement of the Asian Hospital Federation at the Asian Hospital Management 2005 meeting in Kuala Lumpur, Malaysia. It was established to provide a platform for healthcare professionals to discuss and exchange ideas
on healthcare technologies and practices. Central to these objectives are the promotion of best technology management practices, the certification of clinical engineering practitioners and healthcare professionals, and the dissemination of appropriate management tools through seminars and workshops. CAHTMA is registered in Singapore and is affiliated with the IFMBE. The board of directors includes industrial, academic and ministry-of-health representation and a WHO advisor. The author is the current Co-chairman and Co-founder. CAHTMA will certify technicians and engineers as Clinical Engineering Practitioners (CEPs). This certification is based on experience, with an oral and written exam covering aspects of the clinical engineering field. CEPs must have a basic knowledge of healthcare engineering standards such as IEC 60601, ANSI/AAMI EQ56, RD62, JCI, etc. CAHTMA will also certify training programs for clinical engineering practitioners in Asia. Workshops and seminars are conducted to help clinical engineering practitioners develop and advance their knowledge and skills in the field of healthcare technology. In 2008, Malaysia is going to have a Medical Device Act, which will require people with credentials in clinical engineering for medical equipment maintenance. The CAHTMA certification of CEPs will meet this requirement. The first (basic) exam will be given in the spring of 2007.

VI. CERTIFICATION REQUIREMENTS – UNITED STATES

In the United States program, the requirements for the CBET are as follows: a degree in Biomedical Equipment Technology plus two years of experience, or an electronics degree and three years of experience, or four years of experience. The degree in both cases is an associate degree. Candidates take a written exam of 150 questions, for which the passing score is 70%. To maintain their certification, they have to demonstrate that they are maintaining their skill level with continuing education every three years.
The requirements for the Clinical Engineer in the United States, and for the programs under the ICC umbrella, are as follows: a bachelor's degree in engineering or physical science; two years of experience in the clinical setting; and three references that demonstrate the candidate's work in the clinical setting as a Clinical Engineer, at least one of which must be from a healthcare provider. Candidates then take a 150-question written exam. If they pass, they take a two-hour oral exam given by two members of the Board of Examiners.
VII. VALUE OF CERTIFICATION

How valuable is certification to the Clinical Engineering professional? No one appears to require certification for technicians or engineers. Certification should reduce the likelihood of equipment-related accidents and misuse of medical devices, which would improve patient safety. It should lead to improved maintenance and repair of equipment and improved selection of technology, which would improve patient care in a cost-effective manner. There are increasing efforts to develop certification programs, and perhaps even to require certification for some functions in some parts of the world. There has been some discussion of requiring certification of technicians to work on particular equipment: that is, not certifying technicians in general but certifying them to work on a particular type of equipment, perhaps even down to brand and model. This would really be a certificate program as opposed to certification. If this occurs anywhere, it would be a serious mistake, because even manufacturers do not have their technicians fully trained on every piece of equipment they make. Certification can be a benefit in the hiring process. This is especially true as people cross country borders to work in different parts of the world. Certification should indicate a minimum level of qualifications and skills. In the hiring process, certification is probably more important for technicians than for engineers: there is a wide variety of training programs for technicians, whereas engineers at least have a standard engineering background, though it may not be in the clinical setting. Certification programs have provided some benefits to the profession of Clinical Engineering. For the BMET, certification provides recognition as a member of the healthcare team, whose other members have certification or licensing within their allied health professions.
With the accreditation of hospitals, there is a requirement for competency of the staff who maintain equipment, and certification of the technician is one way to demonstrate that competency. Certification has also helped hospitals in dealing with vendors, service contracts, and access to documentation. It is very difficult for a vendor to claim that a Certified Radiological Equipment Specialist is not qualified when its own technician does not have that certification. Therefore, certification of technicians can result in cost savings for the hospital's overall maintenance program. The certification of Clinical Engineers has provided the engineer with recognition in the clinical setting. This has aided their acceptance by the clinical staff, since it provides recognition of clinical knowledge and not just engineering expertise. An important area for certification of the Clinical Engineer is when there is an accident in the hospital involving equipment. The Clinical Engineer is the one who should conduct the accident investigation and, in the case of litigation, will probably be called as an expert witness. Certification as a Clinical Engineer recognizes the engineer as an expert in the field and helps the hospital establish that it has a good equipment evaluation and maintenance program.

VIII. CONCLUSION

The certification of Biomedical Engineering Technicians and Clinical Engineers is important as the world becomes smaller and engineers and technicians work in both developed and developing countries. Certification is important to establish at least a minimum level of qualifications. The interaction between certification programs in different countries will become more important as certification develops; perhaps some day there can be a global certification of both engineers and technicians. There should be some harmonization of certification standards and terminology while each certification body maintains its own program.
Author: James O. Wear, PhD, CCE
Company: Scientific Enterprises, Inc
Street: 5104 Randolph Road
City: North Little Rock, AR 72116
Country: USA
Email: [email protected]
Clinical Engineering in Malaysia – A Case Study Azman Hamid Commission for the Advancement of Healthcare Technology Management in Asia (CAHTMA), Kuala Lumpur, Malaysia
Abstract— Clinical engineering in Malaysia has received renewed interest as an engineering discipline. Growing concern about patient safety and the decision by the Malaysian government to privatize support services at all government hospitals beginning January 1, 1997 have pushed clinical engineering into the limelight and opened up opportunities for local biomedical engineering graduates seeking jobs as clinical engineers. Clinical engineering flourishes as new approaches to healthcare technology are introduced to enhance the quality of services provided to the hospitals. Keywords— clinical engineering, biomedical engineering, privatization, hospital support services, BEMS
I. INTRODUCTION

Clinical engineering was a relatively unknown field in Malaysia, where it remained an obscure engineering discipline until the late 1990s. With increased emphasis on proper maintenance of government assets to enable the delivery of quality services [1], the Malaysian government undertook to privatize hospital support services at all government hospitals. The contract, involving 500 million Malaysian ringgit in yearly expenditure, was awarded to three concession companies: Faber Mediserve (M) Sdn Bhd, Radicare (M) Sdn Bhd and Tongkah Medivest (M) Sdn Bhd (now Pantai Medivest). The contract to the three companies was for 15 years from January 1, 1997 [2]. With the privatization and increased concerns about patient safety, the future of clinical engineering in Malaysia took a different turn. Suddenly, there were abundant opportunities for local biomedical engineering graduates, who had previously been in lesser demand. Local universities and colleges hurried to introduce courses relevant to clinical engineering to meet the growing demand.

II. CLINICAL ENGINEERING AND HEALTHCARE INDUSTRY

Clinical engineering in Malaysia started late compared to the US and some other parts of the world. In the late 1960s, it had already become an accepted discipline in the US owing to concerns about patient safety and the rapid proliferation of biomedical equipment [3]. In Malaysia, however, the first course offered in biomedical engineering - a field very much related to clinical engineering - was discontinued
after only several batches of students had completed it. Its graduates had difficulties securing employment owing to the mismatch between the employment opportunities projected by the academic institutions and the actual recruitment by industry. Clinical engineering was then unpopular and could not attract a decent number of prospective students. Clinical engineering in Malaysia received renewed interest with the privatization of hospital support services in 1997. Biomedical engineering maintenance service (BEMS) is one of the five privatized services, and to carry out their obligations the concession companies sought local biomedical engineering graduates as clinical engineers. As the supply of local engineers could not meet the demand, the initial acute shortage was filled by engineers from India, Sri Lanka, the Philippines and Indonesia. With the privatization of hospital support services and strong government support, courses at local academic institutions in biomedical engineering, medical electronics and medical instrumentation mushroomed overnight. Eight local universities/colleges currently offer courses in these fields, and the rush to produce graduates for clinical engineering continues unabated as demand soars.

III. MALAYSIAN GOVERNMENT SUPPORT

From the outset, the Malaysian government set high expectations for clinical engineering and ensured that clinical engineering support and other privatized services at the hospitals remained strong. It established Unit Kawalselia under the Ministry of Health's Engineering Division to oversee the implementation of the privatization project. The government also established Technical Committees and State and Support Services Committees at the Ministry's head office, state offices, and hospitals to support the privatization effort [2].
In late 1997, the Malaysian government appointed Sistem Hospital Awasan Taraf (SIHAT) to provide consultancy services to the government and to assess the performance of the three concession companies [4]. SIHAT established various compliance monitoring mechanisms to ensure that the concession companies fulfilled their obligations. The support from the Malaysian government continues as the privatization enters its third and final phase.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1089–1091, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
IV. CHALLENGES IN CLINICAL ENGINEERING

Despite the strong government support for the privatization, the concession companies have faced underlying problems, especially with regard to clinical engineering. The shortage of experienced clinical engineering personnel at the hospitals persists in spite of the increasing number of engineering graduates produced each year by local academic institutions. This acute shortage is not expected to ease in the near future, as more high-end biomedical equipment is being purchased, requiring even more clinical engineering personnel to provide maintenance and technical support. It may take years of concerted effort by the government and local universities to produce adequate graduates for clinical engineering work before the issue can be completely addressed. To mitigate the shortage, concession companies have devised their own plans to fast-track the process, including engaging universities and consultants to retrain their local clinical engineers. In the meantime, the influx of expatriates is expected to continue, and delays in obtaining work permit approvals for expatriates have not helped. As 90% of the country's medical devices are imported [5], obtaining spares for biomedical equipment maintenance has never been easy. Local suppliers and distributors of biomedical equipment are at times unable to provide spares fast enough for effective support. As a result, urgent and critical repairs were sometimes delayed for months before breakdowns were finally rectified. The economic turbulence in East Asia in 1997-1998 also created difficulties for the concession companies. During and immediately after that period, the domestic prices of imported items rose drastically owing to the ringgit depreciation [6]. Clinical engineering was more heavily affected by the depreciation than the other privatized services, as spare parts from overseas cost more than they used to. The concession companies struggled to stay afloat as expenses for spare parts ballooned. The inability of manufacturers, appointed distributors and service maintenance organizations to solve complex equipment breakdowns is another challenge that needs to be systematically addressed. Given the government's high expectations of quality maintenance services, the response to breakdowns and the level of technical expertise of engineers from manufacturers and local distributors in supporting critical breakdowns have not been encouraging.

V. HEALTHCARE TECHNOLOGY MANAGEMENT

Apart from providing basic maintenance such as acceptance testing, preventive and corrective maintenance and safety testing of biomedical equipment at the hospitals, the concession companies also undertake other tasks as part of their obligations. Asset registration and inventory were undertaken during the first year of the privatization. Since then, the asset register has been updated progressively to include additional assets and to ensure its accuracy. Condition appraisals of biomedical equipment are carried out regularly, and equipment replacements are planned for the hospitals. Life cycle costs of biomedical equipment are analyzed to minimize healthcare expenses. Alerts of possible hazards related to biomedical equipment are systematically distributed to hospital management. Training on equipment operation and user maintenance is also planned and conducted regularly. The user training aims to minimize 'use and user errors' as well as to contain growing maintenance expenses. As nurses and other equipment users relocate to new departments, hospitals and/or facilities, repeat training is conducted for the new users as part of the concessions' contractual obligations. To enhance their effectiveness, the concession companies have conducted technical training for their clinical engineers, emphasizing proper clinical engineering methodology. In addition to the training provided by manufacturers and local distributors, the concession companies have also started to introduce in-house training programs on specific medical devices. Training is also conducted to ensure compliance with international standards.

VI. ACCREDITATION AND CERTIFICATION

As private hospitals in Malaysia compete for accreditation by the Joint Commission International (JCI) to improve their service quality and image, government hospitals race for accreditation by the Malaysian Society for Quality in Health (MSQH) [7]. The healthy competition among hospitals in Malaysia has created a challenging environment for clinical engineers and requires commitment from clinical engineering organizations. Certification of clinical engineering practitioners is currently a major concern of the healthcare industry in South East Asia. Endorsed by the Asian Hospital Federation, the Commission for the Advancement of Healthcare Technology Management in Asia (CAHTMA) was set up to address this growing need as well as to provide a platform for clinical engineering practitioners to exchange ideas and experiences.

VII. CONCLUSIONS

Clinical engineering in Malaysia has progressed by leaps and bounds over the last decade, spurred mainly by the privatization project. A structured clinical engineering service is now well established. The experience gained has made all concerned parties wiser about what can and cannot be achieved within a given time-frame. With their experience in handling the privatization project, clinical engineering organizations in Malaysia are now taking their expertise overseas, especially to countries in South East Asia. Their activities have shed new light on clinical engineering not only in Malaysia but throughout South East Asia.
REFERENCES
1. Prime Minister's Department, Malaysia (1995) General circular letter No. 2
2. Pillay, M.S. (2002) Privatisation of Hospital Support Services at http://www.rit.no/ifhe/Privatisation of Hospital Support Services.pdf
3. Bronzino, Joseph D. (2004) Clinical Engineering: Evolution of a discipline, in Clinical Engineering Handbook, ed. Dyro, Joseph F., p. 3
4. SIHAT at http://www.sihat.com.my
5. Gross, Ames (1999) Regulatory update on Malaysia's medical market. Pacific Bridge Inc at http://www.pacificbridgemedical.com
6. Economic Planning Unit (1998) National Economic Recovery Plan
7. Ministry of Health, Malaysia (1999) Director General's circular letter No. 2/99. Ministry of Health Policy on Accreditation of Healthcare Facilities and Services
Author: Azman Hamid
Institute: Commission for the Advancement of Healthcare Technology Management in Asia (CAHTMA)
Street: Suite (P4-10), 4th Floor, Building Information Centre, Lot 2, Jalan 51A/243, 46100
City: Petaling Jaya, Selangor
Country: Malaysia
Email: [email protected]
Clinical Engineering Training Program in Emerging Countries: Example from Albania
H. Terio
Department of Clinical Engineering, Karolinska University Hospital, Stockholm, Sweden
Abstract— Emerging countries developing their health care systems with support and donations from industrialized countries also need support to develop the know-how to manage the equipment they receive. The lack of knowledge and competence can be bridged by well-planned education and training programs based on established programs and routines from supporting countries where clinical engineering is an established profession. The education and training must have both short-term and long-term goals. It is essential that the country gains a good base from which to develop its own educational and managerial systems for health care support. The Albanian project shows an example of how this can be realized.
Keywords— Clinical engineering, Training program
I. INTRODUCTION
The USA, Japan and many European countries and donor organizations are investing in Eastern Europe, the Balkans and Central Asia. The investments include medical facilities, and a significant amount of modern medical equipment has been introduced in the recipient countries. However, there is very little support for installation, acceptance testing, maintenance, repair, documentation and training. In the recipient countries it is often difficult to find national competence that can meet all needs in medical technology. This lack of clinical and biomedical engineers, who could support health care technology management as well as research and development of new technology and thereby contribute to the improvement of the health care system, is of course a problem for governments trying to provide modern health care to all citizens. The university faculty programs of today do not include education in clinical and biomedical engineering. To ensure a long-term medical device management and maintenance function in the recipient countries, it is necessary to build such a competence base through capacity building. Therefore, engineering and medical institutions must be supported to provide courses in proper medical equipment use, safety and basic preventive maintenance. This education must also provide hospital managements with competence in device-related issues, such as physical infrastructure needs,
investment planning, procurement and training of medical staff. In order to address these problems, Sweden runs a number of aid programs in these countries. In Albania a Clinical Engineering program has been financed by SIDA, the Swedish International Development Agency. This program is part of a project for Strengthening the Management and Maintenance System for the Albanian Health Services carried out by Swedish Health Care, a Swedish health care consulting company (www.swedishhealthcare.com). This nationwide trainee program for Clinical Engineers started in November 2005 under the supervision of the Ministry of Health.
II. METHOD
22 newly graduated engineers from the Polytechnic University of Tirana were recruited into this program, which will end in 2007. Swedish hospitals and institutions, e.g. Karolinska University Hospital and Uppsala Academic Hospital, have participated as collaboration partners in planning and lecturing activities. Additional experts from the Faculty of Medicine as well as the Institute of Public Health in Tirana have participated as lecturers in some of the medical courses. The program has been one of several project deliveries intended to provide Albania with trained clinical/biomedical engineers within the project time frame 2005-2007. Swedish Health Care AB (SHC) was established fifteen years ago as an international consulting company within the health care field. Over the years, SHC has successfully implemented about 100 projects; its main markets are Eastern Europe, the Middle East and Japan. SHC works closely with 60-80 senior experts/specialists and maintains a network of about 200 trusted specialists and international consultants. The company strategy is to recruit the expertise requested for each particular contract. This ensures flexibility and enables better provision of best-qualified consultants.
The head office and central staff are based in Malmö, and they provide project management to follow up and assist the Site Manager in his activities in Albania. Part of these supporting activities is to mobilize the short-term experts and to organize and manage the training programs and the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1074–1076, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
study tours abroad. The head office shall provide all necessary support to allow all experts to perform their tasks efficiently. Since 1996, Swedish Health Care AB has operated a local office in Tirana with established local infrastructure. This allows the Site Manager to focus on the important tasks that need to be carried out to make sure that the comprehensive work can be concluded within the given time frame.
A. Program
The Clinical Engineering (CE) program is a two-year trainee program combining academic courses with practical training at hospitals. It is designed as a postgraduate course based on optional courses in accordance with the Bologna Declaration. The course plan has been developed in cooperation with the Karolinska Institute in Stockholm. The CE program follows the recommendations of the International Federation for Medical and Biological Engineering: Criteria for Accreditation of Biomedical Engineering Programs in Europe (Discussion Paper) and Protocol for the Training of Clinical Engineers in Europe. The CE program specifically aims at training for careers within the public and/or private medical sector. The objective is to equip the student with knowledge and experience in equipment maintenance, patient safety and medical applications. After the program, the student will be familiar with the technology and the clinical application, context and problems, and be able to communicate and work with experts in a meaningful way. The CE program is especially designed to meet the clinical and technical demands at hospital level with a high degree of patient safety. The CE Trainee Program is suggested to be based on 120 ECTS credits according to the European Credit Transfer System. One theoretical module is calculated to be equal to 2 credits and involves at least 25 hours of face-to-face teaching/laboratory exercises (when applicable) and/or study visits. Individual self-study is necessary, as is participation in practical training.
It is suggested that credits be awarded for practical training as well. In order to qualify for the program, candidates must meet the following requirements:
• A university degree in Electronic/Electrical Engineering, Computer Science, Mechanical Engineering or equivalent
• Good understanding of English
• Computer literacy
Apart from the compulsory participation in all lectures and practical training, the student has to pass examinations in order to obtain a Bachelor of Honours with a Diploma in
Clinical Engineering. The examinations will be held after each academic course module. Examinations have to be passed in each subject.
B. Main content of the program
The CE trainee program must teach a broad spectrum of expert knowledge and the basics necessary for a professional occupation. The students must be able to use scientific results and problem-solving concepts in practical applications. The program is a vocational program that includes both academic and practical modules. It is designed in four semesters, where the academic modules run for 7-9 weeks and are then followed by practical training for 10-12 weeks.
Academic modules: The theoretical subjects cover all types of devices, routines and techniques used at hospitals and in the health care sector in general. The main focus is on understanding the application area, the functioning of the human body, instrumentation, management, organisation, safety and regulation. Lectures are held in English by international lecturers or by Albanian experts within their field.
Practical modules: The practical part of the training is carried out at a hospital. The practical training at the hospitals is built on a high degree of participation in the day-to-day clinical work. It includes activities such as inventory, repair work, planning and carrying out preventive maintenance and safety checks, spare part specification, providing user training, etc. During the practical training modules, a mentor at the hospital as well as SHC will support the students. Reports and assignments will be handed out during the practical modules. The student shall keep a logbook to make notes on progress and experiences. SHC shall carry out follow-up seminars during the practical modules. The students will be asked to discuss and share their experiences from the practical training at such seminars.
C. Literature
The course literature used is mainly in English, provided by the lecturers as lecture notes and handouts. One textbook in Albanian has been used: "Inxhinieria Mjekesore dhe Klinike" by Bertil Jacobsson (1998), the Albanian translation of Medicine and Clinical Engineering. All students have also received some books, e.g. Medical Instrumentation by Webster and Physics of Diagnostic Radiology by Christensen. Literature from some companies has also been available for specific lectures, such as dialysis teaching material by Gambro and Acc. System Basics by Varian. Applicable regulations and standards and clinical
engineering guidelines and policy documents have been handed out. Furthermore, the Emerald and Emit training material has been used.
D. Evaluation method
The purpose of evaluation is: (1) to ensure that students are reaching the pedagogical objectives and (2) to maintain a high standard of training. The evaluation is based on the attendance, participation and performance of the students. The latter is assessed through tests and exams, as well as personal work during the practice. Each academic course module is evaluated separately by the lecturer of that course. Each student will receive a course evaluation on a scale of 0 to 100 points. In order to promote full attendance in all courses, final results are calculated with 70% allocated to the points received in the test for a given course and 30% to attendance during the teaching time of that course. The passing grade of a course is 50 points. Below that score, a student may ask for a review of his/her exam and/or a new examination. If a student does not succeed, he/she may ask for additional evaluation for at most two more courses. If, after this second evaluation, a student still does not pass the block evaluation, he/she will, after a review of his/her case, be excluded from the program.
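The 70/30 weighting and the 50-point pass mark just described can be expressed as a small calculation. The following is only an illustrative sketch; the function names are invented for this example and are not part of the program's documentation:

```python
def final_score(test_points: float, attendance_pct: float) -> float:
    """Final course result: 70% from the course test (0-100 points)
    plus 30% from attendance during the teaching time (0-100 %)."""
    return 0.7 * test_points + 0.3 * attendance_pct

def passed(score: float) -> bool:
    """A course is passed with 50 points or more."""
    return score >= 50.0

# A student scoring 60 points on the test with full attendance:
print(final_score(60, 100))   # 72.0 -> passed
# The same test score with only half the teaching time attended:
print(final_score(60, 50))    # 57.0 -> still passed
```

The weighting means that attendance alone cannot carry a course: even with 100% attendance, a student still needs at least about 29 test points to reach the 50-point pass mark.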
III. CONCLUSIONS
The need to continuously provide Albania with trained biomedical/clinical engineers is high, and the next step would be to develop a curriculum for permanent installation and to train Albanian lecturers. Within the Swedish Health Care project, work on a plan for such a curriculum has started in collaboration with the Polytechnic University of Tirana, the Faculty of Medicine and the Ministry of Health. It is being discussed and investigated in regular meetings of an academic workgroup with representatives of each faculty as well as the Ministry of Health. The Higher Education in Clinical Engineering programs at the University of Trieste, Italy, and the Karolinska Institute, Stockholm, Sweden, have also been involved in these discussions.
ACKNOWLEDGMENT
This paper is based on the Education Plan of the Clinical Engineering Program developed by Eva Wennerstrand, Swedish Health Care, and a team of experts in Sweden.
Author: Heikki Terio
Institute: Department of Clinical Engineering, C2:44, Karolinska University Hospital, Huddinge
City: 141 86 Stockholm
Country: Sweden
Email: [email protected]
Current Status of Clinical Engineering, Health Care Engineering and Health Care Technology Assessment in Austria
H. Gilly
Clinical Department for Special Anesthesia and Pain Therapy, Department of Anesthesia, General Intensive Care and Pain Therapy, Medical University Vienna and L. Boltzmann Institute for Anesthesia and Intensive Care Medicine, Vienna, Austria
Abstract— Professional organizations and activities covering the fields of clinical engineering, health care engineering and health care technology assessment in Austria are addressed. Thus far, the educational requirements for the hospital-based biomedical equipment technician have been covered by special courses, workshops etc. in a largely non-standardized manner. The same applies to training and education in clinical engineering and health care technology assessment. However, medical perfusionists ("Kardiotechniker") succeeded, already in the early 1990s, in implementing an appropriate curriculum aimed at fulfilling the requirements of the self-dependent execution of extracorporeal circulation and perfusion and related tasks. Recently a bundle of (academic) curricula for biomedical engineering and hospital technology/health care technology was established at various universities of applied sciences (FHs). Similarly, a course for formal postgraduate education of the biomedical equipment technician is offered. Several (university) institutes address technology assessment; one focuses on health care technology assessment. Cooperation between representatives of clinical engineering and health technology assessment is still lacking in Austria, which is one reason for the still insignificant initiatives to evaluate medical technology on a critical but sound scientific basis.
Keywords— Clinical Engineering, hospital engineering, health care technology, assessment, Austria.
I. INTRODUCTION The aim of this article is to highlight and identify professional activities covering the fields of clinical engineering, health care engineering and health care technology assessment in Austria and their interaction. II. CLINICAL ENGINEER, BIOMEDICAL EQUIPMENT TECHNICIAN, PERFUSIONIST AND HEALTH CARE TECHNOLOGY ASSESSOR
1. The definition of a clinical engineer (CE)
"A Clinical Engineer is a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology." (American College of Clinical Engineering (ACCE), definition adopted in 1991). As
clinical medicine has become increasingly dependent on more sophisticated technologies and the complex equipment associated with them, the clinical engineer has become the bridge between modern medicine and equally modern engineering. Clinical engineering education is considered to be based on classical engineering, supplemented with a combination of courses in physiology, human factors, systems analysis, medical terminology, measurement and instrumentation, as well as courses in social skills. It is often capped with a practicum in a university hospital setting, giving the student a firm grounding in hospital operations, protocols and ethics [1]. It is well known that great disparities exist between clinical engineering departments in various hospitals in the level and breadth of services offered, in the resources acquired to support the services and in the level of recognition and acceptance achieved [2]. This is particularly true for the situation in Austria, with its missing curriculum in "clinical engineering" as defined by the ACCE. With no public or private institution dedicated to providing formal (postgraduate) education for clinical engineers or hospital engineers, the question arises by which means the demand for such trained personnel in Austria has been covered. The answer is that in various fields the professional expertise was gained by training on the job in close cooperation with the medical staff. This is especially true for the medical perfusionists, a long-registered group of technicians.
2. The definition of a medical perfusionist
The professional perfusionists finally established a curriculum which in 1998 became a federal law (Bundesgesetz über den kardiotechnischen Dienst: Kardiotechnikergesetz). Pre-requirements for becoming a professional perfusionist comprise a diploma in nursing, including a specialisation in anesthesia or intensive care medicine, or having practiced as an assistant medical technician.
A top-up education (duration 18 months, following strict regulations („Rasterzeugnis", its content being regularly updated by the Österreichische Gesellschaft für Kardiotechnik)) is similar to the training in Germany (2 years, with the German Heart Center in Berlin being the single institution to offer this special course). In Austria the courses are offered by selected hospital centres. Examinations are taken by the Aus-
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1070–1073, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
trian Ministry for Public Health [3]. Whereas the perfusionist's service is directed more towards the patients' needs, the hospital engineer and the biomedical equipment technician rarely work at the patient level.
3. The definition of a Biomedical Equipment Technician (BMET)
A second professional group in the hospital is the Biomedical Equipment Technicians (BMETs). In reality, clinical engineers and BMETs perform different but equally valuable functions. The BMET is the person responsible for the direct support, service and repair of the medical equipment in the hospital. BMET education and training is usually of a more directly technical nature and is supplemented with specific schooling in servicing the equipment. BMETs provide repairs when medical equipment fails to function properly and must work closely with nurses and other hospital staff, as well as the equipment vendor, as they service and maintain the equipment. They are also responsible for safety in the OR and the intensive care ward. In Austria this group of professionals is organized in the "Österreichischer Verband der KrankenhaustechnikerInnen" (ÖVKT) [4], which is also a member of the IFHE (International Federation of Hospital Engineering). The ÖVKT membership is around 120 individuals. The ÖVKT primarily addresses professionals working in the following areas: structural engineering and facility management in health care (especially hospitals) and medical engineering. Similar to the clinical engineers' situation in Austria, no formal curricula for BMETs have existed so far. BMETs need not have training at university level (Dipl.Ing.), though many of them entered their job career after finishing a standard (electro)technical curriculum. On the international level, the educational requirements for the hospital-based clinical engineer, their importance and the educational system in the professional development process have been extensively reviewed (for more details refer to [5, 6]).
Where do CEs and BMETs work in Austria? Biomedical engineers are employed in industry, in hospitals, in research facilities of educational and medical institutions, in teaching and in government regulatory agencies. They often serve a coordinating or interfacing function, using their background in both the engineering and medical fields. In industry, they may create designs where an in-depth understanding of living systems and of technology is essential. They may be involved in performance testing of new or proposed products. Government positions often involve product testing and safety, as well as establishing safety standards for devices. In the hospital, the biomedical engineer may provide advice on the selection and use of medical equipment, as well as supervising its performance
testing and maintenance. In the past they may also have built customized devices for special health care or research needs. Some biomedical engineers are technical advisors for the marketing departments of companies. Some biomedical engineers also have advanced training in other fields, but rather few biomedical engineers in Austria also have an M.D. degree, thereby combining an advanced understanding of the newest technology with direct patient care or clinical research. The BMETs' opportunities lie largely in facility management within the hospitals at the supervisory level. They may also serve as safety officers, in this case being responsible for all the medical equipment in one of the 300 Austrian hospitals.
4. The definition of a health care technology assessor
A health care technology assessor tries to produce evidence in the health-care sector, predominantly by analyzing literature reviews and randomized controlled trials, but also using modeling and/or evaluations. In Austria there are no formal training courses offered by public teaching institutions. Professional assessors usually graduated from a university and, in addition, were trained on the job according to a postgraduate curriculum.
5. Professional Organizations in Austria
The Austrian Society for Biomedical Engineering is a member of the IFMBE and the representative body for biomedical engineering in Austria. It provides a knowledge and information platform and is supposed to foster all aspects of biomedical engineering. Working groups for hospital engineering ("Krankenhaustechnik") and Health Care Technology Assessment (as a merely distributive information platform) are active. As yet there is no clinical engineering division or working group. Membership is around 200.
Österreichischer Verband der KrankenhaustechnikerInnen: This organization (approx.
120 members) is becoming more and more visible to the public, predominantly by organizing an annual meeting (this year the 2nd European Conference on Health Care Engineering will address Mainstreams in Healthcare Engineering, http://www.eche2007.eu/). The Austrian Society for Kardiotechnik was founded in 1987 with the aims of certifying the profession, promoting continuing education and exchanging knowledge with similar organizations at the international level.
III. HEALTH (CARE) TECHNOLOGY ASSESSMENT IN AUSTRIA
Health technology assessment is a multidisciplinary process that summarises information about the medical, social, economic and ethical issues related to the use of a health technology in a systematic, transparent, unbiased and robust manner. Its aim is to inform the formulation of safe, effective health policies that are patient-focused and seek to achieve best value. HTA must always be firmly rooted in research and the scientific method. In Austria several institutions are involved in technology assessment. The former Health Technology Assessment Unit was a subdivision of the Institute of Technology Assessment (ITA) at the Austrian Academy of Sciences. It was restructured under the roof of the Ludwig Boltzmann Gesellschaft, and this research area was transferred in April 2006 to the newly founded Ludwig Boltzmann Institute for Health Technology Assessment [7]. The LBI for HTA is now the largest research institute in Austria within the health technology assessment sector, employing a full-time staff of 9 academicians (with no involvement of BMETs, CEs or basic scientists). Starting in 2006, it disposes of 3.2 million euros for the next four years, about 800,000 euros per year. The annual budget is funded by the partner institutions and the Ludwig Boltzmann Society. According to its mission statement, the LBI for HTA regards itself as an independent entity for scientific decision-making support in the health sector, providing the scientific basis for decisions in favour of an efficient and appropriate use of resources. In this process, a broad socially relevant view of medical interventions is adopted. It is emphasized that the institute works at a distance from interest groups and refuses to fall within their influence, be they fund providers or market suppliers. The scientific program line of the LBI for HTA includes
- Comprehensive assessments of health interventions & evidence-based health services research
- Scientific support of health policy and decision-maker networks
- Health Technology Assessment in hospitals
- Scientific decision support of the Health Ministry
- Public understanding and research transfer
- Development of policy instruments for medical decision-making: application studies and registries
- International cooperation / HTA Best Practice
With respect to the assessment of apparatus and medical equipment or new medical technologies, no research work has been published so far.
The Department of Public Health, Medical Decision Making and Health Technology Assessment [8] is located at UMIT, the Private University for Health Sciences, Medical Informatics and Technology in Hall in Tyrol. The permanent staff includes 12 academicians and a headcount of 10 lecturers. The goal of the research program is to develop and apply interdisciplinary methods to guide the comprehensive, systematic and practice-oriented assessment of measures and procedures in public health and medicine. Research is oriented towards supporting decision makers and providers in improving the quality and effectiveness of health care and reducing medical risks in order to enhance the health status of both the individual and society. Main topics cover the following aspects and methodological areas: Public Health, Medical Decision Making / Decision-Analytic Modelling, Epidemiologic and Biostatistical Methods, Health Technology Assessment, Evidence-Based Medicine, Systematic Reviews / Meta-Analysis, Quality-of-Life Research, Cost-Effectiveness Analysis, Causal Inference and Pharmacogenetics. The assessment of special apparatus and medical equipment does not seem to be addressed in the research work published so far.
The Institute of Technology Assessment (ITA) [9] is located at the Austrian Academy of Sciences. It has a headcount of 12 academicians. They presently focus on governance of technological knowledge, e-governance, privacy (the protection of the private sphere), technologies of the information society, innovative and sustainable environmental technologies, and security research. Recent papers addressed the planning of the infrastructure of intensive care units [10] and briefly highlighted robotic surgery [11].
The Institute of Risk Research [12] is located at the University of Vienna and comprises a staff of about 12 academicians. It uses interdisciplinary and trans-disciplinary approaches to risk topics, dealing with dangers but also, to give just one example, with economic questions.
In past years, nuclear safety was supplemented by additional topics such as socioeconomic research (accompanying technical research) on nuclear fusion, technology assessment, non-nuclear energy systems, biodiversity and genetic engineering (without claim to completeness). Activities in the field of medical equipment technology do not seem to have high priority.
IV. CONCLUSIONS
In Austria, especially in the last 5 years, the number of teaching institutions offering both Bachelor and Master
curricula in (classical) biomedical engineering and biomedical informatics has grown substantially. This increase is partly due to newly founded BME(-related) departments at the university level (restructuring of already established university departments, such as at the Graz and Vienna Technical Universities, on the one hand, and the founding of additional private universities (UMIT) on the other), and partly because the number of Universities of Applied Sciences offering BME studies has increased significantly. Following the present trend of growing awareness of the topic "evaluation and assessment", some of the new teaching units have included health technology assessment in their names and research goals. Whether research activities in this area will also expand to a substantial output remains open. Our analysis of published research papers and annual reports does not yet reflect such a trend.
REFERENCES
1. ACCE at http://www.accenet.org/default.asp?page=about&section=definition
2. Frize M. The clinical engineer: A full member of the health care team? Med Biol Eng Comput. 1988 Sep;26(5):461-5
3. http://www.kardiotechnik.at/ausbildung/ausbildungsverordnung/index.html
4. ÖVKT at http://www.oevkt.at
5. Goodman G.R. Images of the Twenty-First Century. Proc Ann Int Conf IEEE Engineering in Medicine and Biology Society, 1989, Vol 5:1615-1617
6. Brush LC. The BMET (biomedical equipment technician) career. J Clin Eng. 1993;18(4):327-33
7. LBI for HTA at http://hta.lbg.ac.at/de/index.php
8. UMIT at http://phgs.umit.at/page.cfm?pageid=437
9. ITA at http://www.oeaw.ac.at/ita/welcome.htm
10. Wild C, Narath M. Evaluating and planning ICUs: Methods and approaches to differentiate between need and demand. Health Policy 2005, 71(3), 289-301
11. Wild C. Roboterunterstützte Chirurgie. ÖKZ 2005; (12), 11
12. Institute of Risk Research at http://www.irf.univie.ac.at/indexEN.htm
Author: Dr. Hermann Gilly
Institute: General Intensive Care and Pain Therapy, Medical University; L. Boltzmann Institute for Anesthesia, Vienna
Street: Waehringerguertel 18-20
City: Vienna
Country: Austria
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Findings of the Worldwide Clinical Engineering Survey conducted by the Clinical Engineering Division of the International Federation for Medical and Biological Engineering

S.J. Calil1, L.N. Nascimento1 and F.R. Painter2

1 Department of Biomedical Engineering, School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
2 Biomedical Engineering Graduate Program, University of Connecticut, Storrs, United States of America
Abstract— Although the clinical engineering profession already exists in most parts of the world, one cannot say that its activities and profile are the same in every country. However, only a few countries have conducted a survey to identify the characteristics of their clinical engineers. This survey, developed by the International Federation for Medical and Biological Engineering, is the first attempt to identify the clinical engineer, the clinical engineering activities and the kind of employer worldwide. The results show significant differences according to the analyzed region.

Keywords— Clinical Engineering, Survey, Clinical engineering activities, clinical engineering profile
I. INTRODUCTION

Clinical Engineering today is a necessary profession all over the world. Public pressure for higher quality and safety requires skills that can no longer be provided by traditional health professionals alone. Despite sharing the same name, Clinical Engineering did not follow the same model throughout the world: each country, according to its needs and characteristics, developed its own model or adapted one from already established models.
II. THE SURVEY

During the BIOMEDEA meeting promoted by the International Federation for Medical and Biological Engineering (IFMBE) in Stuttgart (September 2005), a survey project was developed to identify Clinical Engineers and characterize their activities all over the world. To initiate the survey project, a ready-made Internet site, designed specifically for survey projects, was used. The form to be filled in was developed in three languages (English, Portuguese and Spanish), and links to the questionnaires were placed on the IFMBE home page. For the identification of clinical engineers, questions were asked regarding name, age, e-mail address, academic background, and years of experience within the clinical engineering area. To identify professional activities, there were questions about employer, job position and job activities.

Response Distribution: To obtain a better picture of what is happening within defined regions that supposedly share similar models of the Clinical Engineering profession, the answers were analyzed both as a whole and by dividing the world into five regions: Europe, USA/Canada, Latin America, Asia and Africa. Despite several contacts, there were no answers from Australia, New Zealand or the Caribbean countries. So far, a total of 559 questionnaires have been completed: 54% (300) of the respondents were from Latin America, 27% (153) from Europe, 10% (57) from USA/Canada, 8% (44) from Asia and 1% (5) from Africa (Fig. 1). Since only five questionnaires were returned from the African region, all comments below concentrate on the other four regions.

Fig. 1 Percentage of responded questionnaires per region

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1085–1088, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Age: The majority of the respondents (63%) are within the age range of 30 to 49 years. This age distribution is not homogeneous among the five selected regions. Taken per region (Fig. 2), about 63% of the respondent clinical engineers from USA/Canada are at least 50 years old, while respondents from Europe (68%), Latin America (66%) and Asia (63%) share a similar age profile of 30 to 49 years.

Fig. 2 Age distribution among the 5 selected regions

Though the profile of the respondents may not reflect the actual age profile in the USA and Canada, it is true that Clinical Engineering in this region began in the seventies; consequently, the age profile of clinical engineers there may indeed be older, and therefore more experienced, compared with the other regions.

Employer: Regarding the kind of employer, in worldwide terms around 37% of the respondents work in hospitals and health clinics and 12% in independent service organizations. Interestingly, around 8% of the respondents work in the academic environment. Again, this is the profile of the respondents and may not necessarily reflect the actual clinical engineering employer profile. The reason for such a high percentage may be better communication among clinical engineers in academia, thanks to Internet facilities and good relationships with colleagues at scientific events (congresses, symposia and workshops). Considering the employer per region (Fig. 3), there is a strong presence of clinical engineers within hospitals and health clinics in Europe (49.7%) and in USA/Canada (40%). The second largest employer, however, is not so easy to define: while in Latin America it is the "Independent Service Organizations" (around 16%), in Europe it is the "Medical Products Manufacturers" (14.4%) and in USA/Canada the "Health System" (14%).

Academic Degree: The majority of the respondents (53%) have a postgraduate degree, which shows a strong interest of clinical engineers in continuing education. Perhaps due to market pressure, this characteristic is quite pronounced in the USA/Canada region, where 70% of the respondents have a postgraduate degree. This number is not far from the one obtained in a survey conducted by the American College of Clinical Engineering [1], in which 61.5% of the respondents declared having completed a postgraduate course. The same characteristic can also be noticed in Latin America (59%), where, due to the lack of undergraduate courses on Clinical Engineering, graduate engineers who want to obtain knowledge in this area have to attend a specialization course of at least one year (Fig. 4).

Experience: The overall results for "Years of Experience" show that the largest percentage of the respondents (43%) have around 1 to 9 years of experience. There is, however, a significant percentage (15%) with more than 21 years of experience. Looking at the results per region, the "Years of Experience" distribution becomes quite different (Fig. 5) from the worldwide results. Respondents with more than 21 years of experience reach 58% in the USA/Canada region. This agrees with the data obtained in the ACCE survey [1], which found 36.4% for the range of 20 to 29 years of experience and 21.4% for people with more than 30 years of experience. For the Latin America (50.6%), Asia (36.4%) and Europe (44.4%) regions, the majority of the respondents are
Fig. 3 Distribution of employer per region

Fig. 4 Academic degree of the respondents per region
in the range of 1 to 9 years of experience. There is, however, in these three regions a significant percentage of respondents with 15 to 18 years of experience as clinical engineers.

Fig. 5 Years of experience in the clinical engineering area per region

Primary Position: The survey shows that 37% of the respondents work as managers. This feature is repeated in all four regions, where from 22.7% to 40% of the respondents work as managers (Fig. 6). As explained before, the African region cannot be analyzed due to the low number of responses. The second most common position varies according to the region (Fig. 6): while for Latin America it is "Professional Support" (14.6%), for Europe it is "Research" (17.5%) and for USA/Canada "Consulting" (21%). "Research" is also the second primary position for Asia, consistent with "Academia" being the second main employer there (see Fig. 3).

Activities: To avoid misunderstandings among the respondents regarding the question about the activities they are currently performing, a glossary explaining the meaning of each of the ten options was provided, using the same definitions adopted by the ACCE survey [1]. While all other questions allowed only a single answer, the question about current activities allowed multiple choices. Analyzing the data as a whole, "Technology Management" (60.8%) and "Service Delivery" (60.6%) are the most practiced activities. This picture is similar in each of the four regions. The other activities, however, vary quite significantly by region (Fig. 7). While in Latin America, Europe and USA/Canada the respondents are very much involved with "Education" (around 53% in general), only 36.4% are involved with this activity in Asia, even though "Academia" is the second main employer (16%) in the Asian region. The reason for this is difficult to explain, since the employer's identity is not asked in the questionnaire. Around 70% of the respondents from the USA/Canada region are involved with "Risk Management/Safety" activities, compared with about 42%, 36% and 26% of the respondents in the European, Asian and Latin American regions, respectively. "Information Technology" is another activity that varies quite significantly by region: while in Europe (35.3%) and USA/Canada (31.6%) many respondents declared involvement with the activity, only a few did so in Latin America (12%) and Asia (13.6%).
Fig. 6 Primary position occupied by clinical engineers per region

Fig. 7 Activities that are practiced by the respondents per region
III. CONCLUSIONS

More data are needed for a better understanding of the Clinical Engineering profile around the world. Though there is a good number of answers, they are highly concentrated in two regions: 53.6% are from Latin American countries and 27.4% from European countries, while the USA/Canada, Asian-Pacific and African countries account for the remaining 19%. Though the USA/Canada region concentrates the majority of Clinical Engineers, only 10.2% of the questionnaires came from that region. Even within each region, responses are concentrated in a few countries: of the 300 questionnaires from the Latin American region, 192 (64%) are from Brazil and 67 (22.3%) from Mexico, and in the European region Germany accounts for 32% of the 153 questionnaires. While a better distribution of questionnaires is necessary to increase the accuracy of this analysis, a very interesting picture of the clinical engineering profile is already being drawn. As one can see, quite similar activities are carried out in several parts of the world, more or less developed according to the culture and knowledge of the country or region. These similarities can be the basis for developing stronger international cooperation among clinical engineers and clinical engineering professional organizations.
ACKNOWLEDGMENT

The authors wish to thank CAPES.
REFERENCES
1. American College of Clinical Engineering at http://www.accenet.org/downloads/bok-survey06.doc

Author: Saide Jorge Calil
Institute: Universidade Estadual de Campinas, Centro de Engenharia Biomedica
Street: CP 6040
City: Campinas - SP
Country: Brazil
Email: [email protected]
Health Technology Assessment in Croatian Healthcare System

P. Milicic

KBC Zagreb, Department of Nuclear Medicine and Radiation Protection, Zagreb, Croatia

Abstract— The healthcare system in Croatia is in transition. HTA can improve its quality and increase its efficiency. In our feasibility study we analyzed the efficiency of the Croatian healthcare system and found that the system is expensive and of low efficiency. Health technology assessment is a systematic evaluation of the properties and effects of health care technology. It may involve the investigation of one or more of the following attributes of technologies. Performance characteristics include the sensitivity and specificity of diagnostic tests, and conformity with specifications of design, manufacturing, reliability, ease of use and maintenance. Safety is a judgment of the acceptability of risk (the possibility of an adverse health outcome and its severity) associated with using a technology in a particular situation. Efficacy refers to the benefit of using a technology for a particular problem under ideal conditions (e.g., within the protocol of a carefully managed randomized controlled trial, involving patients meeting narrowly defined criteria, or conducted at a "center of excellence"). Effectiveness refers to the benefit of using a technology for a particular problem under general or routine conditions (e.g., by a physician in a community hospital treating a variety of patient types).

Keywords— HTA, WHO Project, Healthcare system.
I. INTRODUCTION

Health care technologies can have a wide range of microeconomic and macroeconomic impacts. Microeconomic concerns may include the costs, charges or payment levels associated with individual technologies. Cost-effectiveness, cost-utility and cost-benefit analyses compare the resource requirements and benefits of technologies for particular applications. Macroeconomic impacts of health care technologies include the impact of new technologies on national health care costs, the effect of technologies on resource allocation among different health programs or among health and non-health sectors, and the effects of new technologies on outpatient versus inpatient care. Other macroeconomic issues that pertain to health care technologies include the effects of regulatory policies, health care reforms and new policies on technological innovation, technological competitiveness, technology transfer and employment. Many technologies raise social, legal, ethical and political concerns. For example, genetic testing, fertility treatments, organ transplants and life-support systems for the critically ill challenge legal standards and societal norms.
Ethical questions continue to prompt improvements in informed consent procedures for patients involved in clinical trials. Allocating scarce resources to technologies that may be expensive, inequitably used or non-curative raises broad social concerns. Clinicians, patients and insurers have different motives and expectations regarding whether to adopt and pay for new technologies. Ideally, these decisions should be knowledge-based. However, in the age of information overload, even the most informed clinician can find it virtually impossible to keep abreast of all the research results that are relevant to his or her practice. Patients and policy makers may have difficulty interpreting research results. Clinicians, patients and policy makers must deal with the dilemma of multiple studies of the same technology having conflicting results, making the information from these studies difficult to synthesize and interpret.

II. ANALYSIS OF THE HEALTHCARE SYSTEM IN CROATIA

The Croatian health system has a significant number of hospitals spread throughout the whole country:

- 2 Clinical Hospital Centers
- 5 Clinical Hospitals
- 7 Clinics
- 23 General Hospitals
- 28 Special Hospitals

These hospitals spend about 50% of all money allocated for health care. About 90% of these hospitals are public (local or state). The hospital capacity has been inherited from previous decades, but many changes have happened in recent years. Most of the hospitals are situated in the capital city of Zagreb, with several local hospitals in smaller cities; this situation decreases the efficiency of the healthcare system. During the war period (1991–1995) many hospitals were destroyed or heavily damaged. Statistics show that there are about 400 beds per 100,000 inhabitants. The number of beds is regulated by a contract between hospitals and the Croatian Health Insurance Agency (HZZO). There was no plan for the development of hospitals and healthcare. The end result is an uneven distribution of special hospitals. Out of 69 hospitals, 28 have the
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1100–1101, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
status of a special hospital. Most of them are rehabilitation centers and sanatoriums, psychiatric hospitals, and hospitals for chronic diseases. A large share of the national, regional and local hospital capacity is located in Zagreb, in hospitals and other hospital institutions. The capacity is uneconomically distributed, reducing efficiency and increasing costs. Some of the hospitals are located in unsuitable positions and in old buildings. As complex organizations, hospitals require different organization structures and multidisciplinary staff. In 2006, the salaries of all staff members without medical degrees were reduced by about 25%. As a result, a number of skilled people (managers, economists, clinical engineers and medical physicists) decided to leave their jobs in the healthcare system. The salary of staff with a Ph.D. in biology, medical physics or clinical engineering is now lower than the salary of a technician or nurse in the hospital. Hospital directors are in many cases clinical doctors without any experience or education in economics and hospital management. The last few years show that the number of diagnostic tests and medical examinations increased from 26,000,000 in 1994 to 56,000,000 in 2001; in other words, an increase of more than 120% in only 7 years, considering that Croatia has around 4,400,000 inhabitants. The proposed reform of the Croatian health care system anticipates the introduction of capacity planning as a way to harmonize the evaluation of hospital capacity and to ensure an even regional distribution of health care. Hospitals in Croatia are public, non-profit organizations, so there is no reason for competition between them. An important step in planning hospital capacity is to classify hospitals and to plan the number of beds and the quality of hospital capacity. Categorization of hospitals has to be part of the hospital capacity plan, but it must also reflect the real situation.

Categorization will provide the basis for future investigation of hospital capacity, as well as for reassigning present capacity and for merging several hospitals into one administrative unit. During the war many aid organizations sent drugs and medical equipment over which there was no control or supervision. There is no database of medical equipment, and it is hard to estimate the real situation in the Croatian healthcare system. The implementation of testing and calibration procedures for medical equipment, as defined in international standards, is also very slow. There is no official institution for controlling the equipment; only for X-ray and other ionizing radiation equipment is regular inspection and control prescribed by law. The Working Group for Clinical Engineering and Medical Physics, set up by the Ministry of Health, has created a project to establish a Croatian national agency for medical equipment.
III. CONCLUSIONS

The first project is to standardize Croatian terminology, adopt all relevant international standards and create a database of all medical equipment in use in Croatia. As part of the WHO project, the Working Group has to harmonize the regulations and legislation with the European ones. The Ministry of Health has limited capabilities to develop and introduce new health technologies, to reduce the costs of health care through the implementation of new technologies, and to increase health care efficiency. This situation calls for resources and assistance. The WHO HTA project will help policy makers increase efficiency with regard to the allocation of coverage, funding, reimbursement, regulation, patient education and planning. Since HTA is supported by the Croatian Ministry of Health, it is a good moment to establish an institution at the government level to improve the complete healthcare system. This institution has to establish a national agency to help policy makers improve the Croatian healthcare system. We identified the next few steps for developing the HTA idea in Croatia:

- Continue to raise awareness of and increase interest in the HTA program
- Initiate HTA activity in all health care fields
- Develop a system to analyze needs for health technology and inform decisions about future investments
- Initiate training of a core group in HTA research methods
- Develop a proposal for an HTA body and pursue potential funding sources
- Join international HTA networks
- Establish a viable network for Central and Eastern Europe and develop common activities and collaboration
REFERENCES
1. Milicic P (2003) Implementation of Health Technology Assessment in the Croatian Health Care System. ISTAHC 2003, 19th Annual Meeting, Canmore, Canada
2. Krzystof L. HTA in reimbursement policy – examples of the real impact. Proposition for international cooperation. 6th International HIT Conference, Cavtat, October 12, 2003

Author: Petar Milicic
Institute: KBC Zagreb, Department of Nuclear Medicine and Radiation Protection
Street: Kispaticeva 12
City: Zagreb
Country: Croatia
Email: [email protected]
Improving Patient Safety Through Clinical Alarms Management

Y. David1, J. Tobey Clark2, J. Ott3, T. Bauld3, B. Patail3, I. Gieras3, M. Shepherd3, S. Miodownik3, J. Heyman3, O. Keil3, A. Lipschultz3, B. Hyndman3, W. Hyman3, J. Keller3, M. Baretich3, W. Morse3 and D. Dickey3

1 Texas Children's Hospital/Biomedical Engineering, Houston, Texas, USA
2 University of Vermont/Instrumentation & Technical Services, Burlington, Vermont, USA
3 Healthcare Technology Foundation Task Force, Plymouth Meeting, Pennsylvania, USA

Abstract— Clinical alarms warn caregivers of immediate or potential adverse patient conditions. Alarms must be accurate and intuitive, and must provide alerts that can be readily interpreted and acted on appropriately by clinicians. Alarms and their shortcomings have been the topic of numerous studies and analyses. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) established a National Patient Safety (NPS) goal in 2002 to improve the effectiveness of clinical alarms. Despite the technological and healthcare improvements related to efforts to meet the NPS goal, adverse patient events related to alarm system design and performance, care management and the complexity of the patient care environment continue to occur. In 2004, the American College of Clinical Engineering Healthcare Technology Foundation started an initiative to improve clinical alarms. The HTF task force reviewed the literature on clinical alarm factors and analyzed adverse event databases. Forums, meetings and a survey of 1,327 clinicians, engineers, technical staff and managers provided feedback on alarm issues. Of particular value are the responses from nurses, who represented the majority of the respondents. Observations and recommendations have been developed to improve the impact of clinical alarms on patient safety. Future directions are aimed at raising awareness, a focused effort toward the reduction of false alarms, and bringing together all constituents involved in clinical alarms to develop action plans that address the key issues.
Keywords— Equipment Alarm Systems; Medical Device Safety; Monitoring, Physiological; Patient Care Management; Clinical Engineering.
I. INTRODUCTION

Alarms on clinical devices are intended to call the attention of caregivers to patient or device conditions that deviate from a predetermined "normal" status. They are generally considered a key tool in improving patient safety. The purpose of alarm systems is related to "communicating information that requires a response or awareness by the operator." In some cases the normal conditions are preset in the device, while in others the correct use of the device requires directly setting the parameter limits. The user often has the ability to turn the alarms on or off, and to set the volume of the audible alarm output. Alarm information may also be transmitted away from the bedside to a remote location that can be down the hall or at some distance away. Such transmission may also be disabled, either intentionally or inadvertently.

When an alarm is triggered, the caregiver is tasked with noting the alarm, identifying its source, and responding appropriately. Effective alarm setting, noting and responding is a design, user, and systems issue. From the design perspective, alarms should be easy to set, their status should be easily determined if not directly visible, and the identification and specificity of a triggered alarm should be unambiguous. From the use perspective, users must be trained, and the number of staff must be suitable to the setting and the number of patients.

Alarms are a primary source of information when the situation triggering them is not directly observable. When caregivers rely on alarms, it becomes essential that the alarms perform to their expectations; when they do not, patients may not receive the care they need, with potentially serious adverse consequences. Alarms must be set properly and be applicable to the clinical setting in which the device is used. While many non-performance issues may be associated with "use error", the culture of blaming the user is now recognized as both inappropriate and ineffective. For a clinical alarm to be effective, it must be triggered by a problem which adversely affects the patient, and personnel must identify the source and meaning of the alarm and correct the problem prior to an adverse patient event. This deceptively simple set of concepts has not yet resulted in clinical alarm systems that universally meet usability and other performance objectives directed toward improving patient safety.

This report presents the work of an ACCE Healthcare Technology Foundation (AHTF) task force focusing on an initiative to improve the management and integration of clinical alarms. ECRI provided valuable input into the task force work and contributed to this report.
It includes a review of relevant literature, an analysis of available adverse event databases, and results from a national survey containing constructive feedback from clinical users and other support staff. This offers insights into current clinical alarm issues and ways to enhance patient safety.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1051–1054, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
II. REPORTED PROBLEMS

As part of this study, the FDA Manufacturer and User Facility Device Experience (MAUDE) database and ECRI's Problem Report System were reviewed. It should be noted that the FDA has stated, "Adverse events related to medical devices are widely under-reported by device users." This underreporting hinders the ability of healthcare providers and the medical device industry to take appropriate corrective action to improve patient safety where clinical alarms are used. The FDA MAUDE database was queried over the period 2002-2004 using the search term "alarm" in the Product Problem field and "death" as the Event Type selection. Two hundred and thirty-seven reports were found using this search criterion, with breakdowns shown in Figure 1 (Deaths by Year) and Figure 2 (Deaths by Device Type).

III. AHTF INITIATIVE

The ACCE Healthcare Technology Foundation (AHTF) put forth an initiative in 2005:
• To improve patient safety by identifying issues and opportunities for enhancements in clinical alarm design, operation, response, communication, and appropriate actions to resolve alarm-related events.
A task force was formed to focus on clinical alarms management and integration. Activities have included open forums, audio conferences, literature and hazard reviews, the design, implementation and analysis of a clinical alarms survey, and the development of educational materials, including materials on the AHTF website http://www.acce.htf.org and the publication of this paper. The task force focus has been on the development, delivery and analysis of a national survey on clinical alarm usage, issues, and priorities for solutions. The American Association of Critical-Care Nurses offered valuable input into the development of the survey. A goal was to gain information on the extent to which the management of clinical alarms is a problem in hospitals, so that manufacturers and caregivers can take appropriate corrective actions.

The survey was divided into four main sections. The first section requested demographic information from the respondent. The second provided a number of general statements about clinical alarms and prompted the respondent to rate their level of agreement, with options for Strongly Agree, Agree, Neutral, Disagree, and Strongly Disagree. The third presented a listing of nine issues that inhibit effective clinical alarm management and asked respondents to rank them on a scale of 1 (most important) to 9 (least important). The final section requested commentary on what is needed to improve clinical alarm recognition and response. The survey was implemented on-line via SurveyMonkey™ on August 15, 2005 and closed on January 15, 2006. It was also made available in a paper version.

Fig. 1 Deaths by Year (2002-2004)

Fig. 2 Deaths by Device Type (2002-2004)

A. Clinical Alarm Survey Results

The survey was completed by 1,327 respondents, the large majority (94%) of whom worked in acute care hospitals. Over half of the respondents were Registered Nurses (51%), with a sizable portion of surveys completed by Respiratory Therapists (14%), Clinical Engineers and Biomedical Equipment Technicians (6% and 9%, respectively), and Clinical Managers (6%). Almost one-third of the respondents (31%) work in an intensive care unit, with the remainder fairly dispersed among various other departments. 66% of the respondents had more than 11 years of experience and only 8% had less than three years. Answers to the second section showed that a large majority of respondents (>90%) agreed or strongly agreed with the statements about the purpose of clinical alarms and the need for prioritized and easily-differentiated audible and visual alarms.
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Likewise, a large portion of respondents identified nuisance alarms as problematic, with the large majority agreeing or strongly agreeing that they occur frequently (81%), disrupt patient care (77%), and can reduce trust in alarms and cause caregivers to disable them (78%). 80% support smart alarms, which can help minimize some types of nuisance alarms. 49% of respondents believe that a dedicated central alarm management staff (i.e., monitor watchers) for disseminating alarm information to caregivers is helpful, while 34% were neutral; 54% of respondents see utility in integrating alarm information with communications systems (e.g., pagers, cell phones), while 30% were neutral. Responses were split on whether properly setting alarm parameters is overly complex on existing systems: 49% of respondents disagreed or strongly disagreed with this statement, while 28% agreed or strongly agreed and 23% were neutral. 72% of respondents agreed or strongly agreed that alarms are adequate to alert staff to changes in the patient's condition. The third section provided insight into the relative contributions of the various challenges of clinical alarm management. For most of the items, responses were well distributed across the range of importance. However, two items showed more consistency. 42% of respondents consider "frequent false alarms reducing attention and response to alarms" the most important of the presented issues, and 78% rated false alarms in the top four rankings. Conversely, 25% of respondents believe lack of training on alarms is the least important issue, and 63% rated it in the lowest rankings (6 through 9). Many nurses see alarms as one item on a long list of tasks to be managed, rather than as an enabling tool that improves the nursing staff's ability to stay informed of their patients' conditions.
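The top-four share reported for false alarms can be computed directly from the 1-9 rankings. The sketch below is illustrative; the sample rankings are hypothetical, not actual survey responses.

```python
def top_k_share(rankings, k=4):
    """Fraction of respondents who ranked an issue within the top k,
    where 1 is most important and 9 is least important."""
    return sum(1 for r in rankings if r <= k) / len(rankings)

# Hypothetical rankings given by eight respondents for one issue.
sample = [1, 2, 5, 3, 9, 4, 1, 7]
print(top_k_share(sample))        # share ranked in the top four
print(top_k_share(sample, k=1))   # share ranked most important
```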
By not recognizing the importance of training, nurses may underestimate their role in alarm management and see the "burden" of clinical alarms as solely a technology problem. Clearly, frequent nuisance alarms have played a role in breeding this mindset, and technology improvements are a necessary component in addressing the problem. Effective clinical alarm management thus relies on (1) equipment designs that promote appropriate use (e.g., easy to set, with obvious visual indicators when alarms have been disabled), (2) clinicians taking an active role in learning how to use equipment safely over its full range of capabilities, and (3) hospitals recognizing the complexities of clinical alarm management and devoting the necessary resources to develop effective management schemes. As one survey respondent stated, a "combination of technology and nursing process adjustments need to be implemented in order to effectively address this issue. Smart alarms, improved communication systems, directing alarms to the caregivers,
training, accountability regarding alarm response policies, etc., all should be helpful in reducing the risk."

IV. OBSERVATIONS

The studies presented revealed several themes:

• The number and complexity of alarm systems in critical care environments challenge human limits for recognition and action.
• Alarms in critical care environments may not significantly affect care management decisions.
• In general, alarms are a tool in assessing patient conditions and should be used in conjunction with direct clinical measurements and observations.
• The term "alarm" was found in the FDA MAUDE adverse event report Product Problem field most commonly for physiological monitoring systems, ventilators, and infusion pumps.
• Parameter acquisition improvements (e.g., pulse oximetry) are important in improving alarm accuracy and value.
• Remote alarm communication devices (e.g., pagers), if well designed, can be of value, but problems have occurred when they are used as the primary alert method.
• The IEC/ISO standards are viewed by many as a way to improve alarms by standardizing audible and visual alarms and priority and parameter differentiation.
• The alarm problem is a systems issue, and actions toward specific areas must consider their impact on the system.
• There is disagreement about the role of user operation in alarm system performance. Caregivers de-emphasize the need for alarm configuration and operation training, while adverse event analysts find many instances of improper setup and of improper subsequent action when alarms do occur.
• False alarms have been consistently reported as a major issue with alarm systems. They reduce staff confidence in alarms, which may result in deactivation of alarm systems, and they detract from care management.

V. RECOMMENDATIONS
A. Medical Device Industry

Manufacturers should consider the complexity of the healthcare environment in order to design alarm systems that are operationally intuitive and effective given the care tasks of users, and which are focused on the true need for
intervention. False alarms must be reduced for alarm systems to be effective. There must be additional emphasis on accurate parameter acquisition, human factors design, and a systems approach to alarm systems. The IEC/ISO standards for alarm systems represent an improvement in design and should be considered for implementation in the U.S. Standardization offers the opportunity to eliminate some of the confusion over what different alarms mean and how they are operated. The actual use of recognized standards by manufacturers must become the norm rather than the exception. Additional standards and standardization are also necessary so that devices that are commonly used together operate as a system rather than as a collection of individual components. Furthermore, how devices are configured must also reach a greater level of commonality.

B. Healthcare

Healthcare organizations and clinicians should recognize the limitations of alarm systems and utilize them only as one tool in the overall assessment of patient condition. It should be recognized that improper configuration and operation can result in adverse events in the complex patient care environment. Effective education and training must take place so that staff better understand proper operation, the implications of mis-configuring or defeating alarms, and the limitations of current alarm systems. False alarms should not result in reduced alarm vigilance or deactivation of alarms. The care of patients where clinical alarms are used should be planned with input from clinical staff, biomedical/clinical engineers, facilities staff, and others involved in the environment of care, so that alarm use is well integrated with other procedures and requirements.
Healthcare institutions should carefully evaluate the potential for devices to reduce false alarms and other cited problems through intelligent processing of incoming signals, the use of "smart alarm" technology, usability and human factors design principles, and the application of standardization and systems engineering measures. The implications of interfacing and environmental factors should be considered when adding remote annunciator systems.

C. Education

Effective education for clinicians is a critical part of the process of improving alarm-related safety. Clinicians need ample opportunities to learn the details of the alarm-based medical devices they are expected to operate. Such learning must reach the level of operational effectiveness rather than just intellectual knowledge. Planning for this education needs to start during the technology planning and procurement process. Specifically, the cost of training clinicians on how to use devices with alarms needs to be included in the budgeting and implementation timeline for new technology procurement. Clinicians need to be trained when devices arrive, with annual refresher courses and training of per diem and other staff. Training should include information on the institution's alarm setting and response protocols.

VI. FUTURE DIRECTIONS

The results of this study lay the groundwork for future efforts to improve clinical alarms, including:

• Developing awareness of the need to improve clinical alarms
• Soliciting the constituents to meet at focused forums to develop action plans to improve identified problem areas
• Promoting to the medical device industry the critical need to reduce false alarms through: (a) enhanced parameter acquisition accuracy and employment of proven "smart alarm" technology; (b) better human factors engineering in alarm systems, such as more intuitive graphical user interfaces; and (c) improved alarm integration and intelligence
• Bringing the data to standards bodies to promote alarm standardization improvements, including the use of scientific research data in developing alarm standards, such as a uniform method of annunciation (tone, display, etc.) for life-critical versus other types of alarms
• Developing a better awareness among clinical staff of the criticality of alarms and the deleterious effects of operational problems, so that there can be an enhanced emphasis on training and preparation in the area of alarms
• Re-evaluating the area of clinical alarms in 1-2 years by administering a similar survey and other measures to determine progress in clinical alarm improvement

Author: Yadin David
Institute: Texas Children's Hospital
Street: 6621 Fannin Street
City: Texas
Country: USA
Email: [email protected]
Medical Equipment Inventorying and Installation of a Web-based Management System – Pilot Application in the Periphery of Crete, Greece

Z.B. Bliznakov1, P.G. Malataras2 and N.E. Pallikarakis1

1 Department of Medical Physics, University of Patras, Patras, Greece
2 Institute of Biomedical Technology, Patras, Greece
Abstract— The development of an equipment inventory of the medical devices installed and used in the Peripheral Healthcare System (PHS) of Crete, Greece is considered the cornerstone for initiating a process for the evaluation, monitoring and management of biomedical technology in this institution. The medical equipment inventorying process is performed by the Institute of Biomedical Technology, in cooperation with the Biomedical Technology Unit of the Department of Medical Physics of the University of Patras, Greece. The whole procedure is divided and accomplished in three phases: 1) collection of medical equipment data on structured paper sheet forms; 2) data entry into a computerized management system; 3) installation of an in-house developed web-based medical equipment management system, called WEB-PRAXIS, used to store and manage the medical equipment data. As a result, the procedure leads to the creation of an electronic database containing essential information for the identification of each medical device, such as: equipment control number; device group, type and manufacturer; serial number; department and location; age and acquisition cost. A total of 4 958 medical devices from 22 healthcare institutions are recorded. Furthermore, the medical equipment is classified into 355 device groups, 2 050 device types and 715 manufacturers. The current project overcomes a number of problems present in the field of biomedical technology management in the PHS of Crete. The most important are: 1) the ineffective practice of keeping local inventory files, due to insufficient information on codification and nomenclature standards, lack of computerized systems and software, and lack of personnel experience; 2) the absence of a centralized database for the medical equipment in the PHS of Crete, resulting in poor technology management, assessment, planning and decision making.
The systematic use of WEB-PRAXIS is expected to improve the management of medical equipment, with significant benefits related to cost-efficiency and safety.

Keywords— biomedical technology management, equipment inventory, web-based management system
I. INTRODUCTION

The development of a medical equipment inventory is considered to be the cornerstone for initiating a process for the evaluation, monitoring and management of biomedical technology. Whether equipment is used for diagnosis, monitoring of patient condition, or therapy, the
healthcare facility should ensure that the equipment is performing as intended by the manufacturer. This makes the use of software tools specially designed for medical equipment management the only cost-effective solution. The current work presents the whole procedure of medical equipment inventorying and installation of a web-based management system in the Peripheral Healthcare System (PHS) of Crete, Greece.

II. MATERIALS & METHODS

A. Background information

Crete is the largest island in Greece. Situated in the Mediterranean Sea, it is the southernmost part of the country. It has an area of 8 300 square kilometers, a coastline of 1 040 kilometers, and a population of approximately 600 000 people. The Peripheral Healthcare System of Crete consists of 22 institutions, among which there are 8 hospitals and 14 medical centers. The procedure for inventorying medical devices is performed by the Institute of Biomedical Technology (INBIT), in cooperation with the Biomedical Technology Unit (BIT unit) of the Department of Medical Physics at the University of Patras, Greece. The whole process is divided and accomplished in three phases: 1) collection of medical equipment data on structured paper sheet forms; 2) data entry into a computerized medical equipment management system; 3) installation of a web-based medical equipment management system.

B. Collection of medical equipment data

The first step of the equipment inventorying procedure is to record the data of every single medical device on a structured paper sheet form. This approach is dictated by the need for time saving and quick completion of the work, the highest possible mobility of the working team, and the least possible interference in the hospitals' daily routines. For these purposes, a standardized data collection form was created. It comprises the following fields of information:
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1092–1095, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Medical Equipment Inventorying and Installation of a Web-based Management System
• Device code - a unique identification code assigned to every single medical device.
• Hospital - code and name of the hospital to which the device belongs.
• Parent system - applicable when the device is part of a multi-modular system.
• Device group - code and nomenclature of the device group.
• Manufacturer - code and name of the device manufacturer.
• Model - the model of the device.
• Serial number - the serial number of the device.
• CE mark - indicator for CE marking of the device.
• Department - code and name of the department/clinic responsible for the device.
• Location - code and name of the location (room, ward, cabinet) where the device is in operation.
• Status - indicates the current status of the device at a given time.
• Supplier - code and name of the device supplier.
• Acquisition date - the date of the device acquisition.
• Manufacture year - the year when the device was manufactured.
• Installation date - the date when the device was installed in the hospital.
• Warranty expiration date - the date when the warranty for the device expires.
• Software - the software accompanying the device, if any.
• Acquisition cost - the cost of purchase of the device.
• Comments - any other useful information.
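The fields above translate directly into a structured record. The following sketch models such an entry in Python; the field subset and example values are illustrative, not the actual PRAXIS schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InventoryRecord:
    device_code: str                      # unique code, matches the attached label
    hospital: str                         # code/name of the owning hospital
    device_group: str                     # nomenclature group (UMDNS-based)
    manufacturer: str
    model: str
    serial_number: str
    department: str
    location: str
    status: str = "in operation"
    parent_system: Optional[str] = None   # set when part of a multi-modular system
    ce_mark: bool = False
    acquisition_cost: Optional[float] = None
    comments: str = ""

# A hypothetical record for one device.
rec = InventoryRecord(
    device_code="CR-00001",
    hospital="GH RETHIMNO",
    device_group="Monitors, Bedside",
    manufacturer="ExampleMed",
    model="M-100",
    serial_number="SN-12345",
    department="ICU",
    location="Room 3",
)
print(rec.device_code, rec.status)
```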
The data collection procedure of the medical equipment inventory is performed by a team of 8 specialized biomedical engineers and medical physicists from INBIT and the BIT unit. The team is divided into 4 workgroups, each comprising 2 people. Based on preliminary information, a time-plan is created and the responsibilities of each workgroup are assigned. Hospital by hospital, room by room, item by item, the 4 workgroups simultaneously collect the medical equipment data. Pre-prepared labels, comprising the device code and hospital, are attached to each medical device, thus allowing easy and unique identification in the future.

C. Data entry in a computerized medical equipment management system

Once the medical equipment data are collected on paper sheets, they have to be converted to electronic format. For this purpose, an in-house developed computerized medical equipment management system, called PRAXIS [1,2], is used to carry out the data entry procedure and
store the collected data in a relational database. It is a powerful software tool supporting the overall management of medical equipment in healthcare. Among the many PRAXIS features, those most relevant to the medical equipment data entry procedure are:

• Special instruments used to facilitate and speed up the process of data entry.
• Preliminary inserted catalogs used for standardization of common names and nomenclatures.
• Network installation setup allowing several users to perform data entry simultaneously into the same database.
• Data consistency, security and backup.
The data entry procedure is carried out by the same team that performed the data collection in the hospitals. For this purpose, a customized network installation of PRAXIS with 5 workstations was set up at the main office of INBIT in Patras. This allows 5 users to operate the system at the same time and to enter data into the same database. A time schedule was created and job activities were designated. At the end of each working day, a backup of the database is taken. All medical devices are classified following the coding and classification of device groups in compliance with the Universal Medical Device Nomenclature System (UMDNS) [3], developed by the Emergency Care Research Institute (ECRI).

D. Installation of a web-based medical equipment management system

For the purposes of the project, a customized medical equipment management system developed by INBIT is used. The system, called WEB-PRAXIS, is designed and implemented on the basis of PRAXIS and is its successor. It features several improvements and advantages; the most important for the current work are:

• Web-based application and service.
• Centralized database management.
• Easy support of application upgrades and data updates from a distance.
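The UMDNS-based classification performed during data entry amounts to mapping free-text device names onto standard group codes and terms via the preloaded catalogs. A minimal lookup sketch follows; the codes and names are invented for illustration and are not real UMDNS entries.

```python
# Hypothetical catalog: local free-text name -> (group code, preferred term).
CATALOG = {
    "ecg machine": ("10-001", "Electrocardiographs"),
    "ekg": ("10-001", "Electrocardiographs"),
    "infusion pump": ("20-002", "Pumps, Infusion"),
}

def classify(free_text_name):
    """Return (group_code, preferred_term), or None if the name is not cataloged."""
    return CATALOG.get(free_text_name.strip().lower())

print(classify("  EKG "))           # whitespace and case are normalized first
print(classify("unknown gadget"))   # None: flagged for manual coding
```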
At present, WEB-PRAXIS is developed using open-source PHP code and is able to work with ORACLE or MySQL databases. The WEB-PRAXIS work environment allows the user to manage the information contained in the database in a structured, effective and user-friendly way. There are six main areas, as shown in Figure 1.
Fig. 1 Basic structure of WEB-PRAXIS workspace

The Data tables area lists all the records related to the specific form in a table format. The most important database fields are included, and the capability of sorting the information according to the desired field is provided. The Main area presents all available information for the selected record and is the area where the record's data is managed (inserted, deleted, updated, searched, etc.). Database fields are either compulsory (blue-colored) or non-compulsory (black-colored). Link buttons open linking screens, where the user connects related information from other screens to the specific record. The Data summary is the area where the user obtains a comprehensive picture of the whole data table shown on the specific form. It contains information such as: the total number of records; the number of the record currently reviewed; the time and user of the last record modification; and the hospital to which the specific record belongs. Linked information is functionality that allows the user to link a specific record with relevant information contained in a file (text, worksheet, picture, video, etc.). A preview feature is available for the majority of file types and allows the user to view the information associated with the specific record at any time. Furthermore, the user can open the linked file to edit or update the information. The System menu and Main toolbar provide access to all the functionalities of the system and facilitate access to the
most common features used throughout the system operation. They provide easy navigation, management of the information in the data tables, and search, print, and export capabilities.

III. RESULTS

The whole procedure leads to the creation of an electronic database of the medical equipment inventory of the Crete Peripheral Healthcare System. A total of 4 958 medical devices are recorded and classified. Table 1 shows the distribution of medical equipment among the 22 healthcare institutions. Furthermore, the total medical equipment inventory is classified by five different categories: device groups, device types, manufacturers, suppliers, and departments. The number of items in each category is shown in Table 2. WEB-PRAXIS is installed on a central web server located at the administration office of the Peripheral Healthcare System of Crete. Each of the 22 healthcare institutions connects as a client to the central application and database by means of a web browser. No additional installations are required at the client sites. Access restrictions are assigned for the users of each hospital in order to retain data security.
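The per-hospital access restriction described above can be thought of as filtering the central database by the user's institution. A minimal sketch follows; the record shape and values are hypothetical, not the WEB-PRAXIS implementation.

```python
def visible_records(user_hospital, records):
    """Return only the inventory records a user from the given hospital may see."""
    return [r for r in records if r["hospital"] == user_hospital]

# Hypothetical central inventory rows.
records = [
    {"device_code": "A1", "hospital": "GH RETHIMNO"},
    {"device_code": "B2", "hospital": "MC SPILI"},
]
# A GH RETHIMNO user sees only that institution's devices.
print(visible_records("GH RETHIMNO", records))
```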
Table 1 Medical device distribution by healthcare institution

No   Healthcare institution           Medical devices
01   GH VENIZELEIO - PANANEIO                    764
02   IRAKLEIO UNIVERSITY HOSPITAL               1675
03   GH AGIOS NIKOLAOS                           457
04   GH-MC IERAPETRA                             212
05   GH-MC NEAPOLI                                48
06   GH-MC SITEIA                                170
07   GH RETHIMNO                                 276
08   GH HANIA - AGIOS GEORGIOS                   855
09   MC KISSAMOS                                  34
10   MC KANDANOS                                  42
11   MC VAMOS                                     27
12   MC PERAMA                                    39
13   MC SPILI                                     37
14   MC AGIA FOTEINI                              27
15   MC ANOGEIA                                   45
16   MC MOIRES                                    39
17   MC KASTELI                                   45
18   MC AGIA VARVARA                              34
19   MC HARAKAS                                   33
20   MC ARKALOHORI                                33
21   MC ANO VIANNOS                               39
22   MC TZERMIADO                                 27
     TOTAL                                      4958
Table 2 Distribution of medical device categories

Categories        Items
Device groups       355
Device types       2050
Manufacturers       715
Suppliers           150
Departments         120
IV. DISCUSSION

The current project reveals and tries to overcome a number of problems and deficiencies present in the field of biomedical technology management in the Peripheral Healthcare System of Crete. Some of the most important are the following. First, there is a rather non-uniform and ineffective practice of keeping local inventory files in the hospitals, due to insufficient information on codification and nomenclature practices and standards, lack of computerized systems and software, and lack of personnel experience. Second, prior to the present work there was no centralized record of the medical equipment in the PHS of Crete. This resulted in poor or limited technology management, assessment and incident reporting, and the lack of a reliable data-based planning and decision-making scheme for the distribution of medical instrumentation.

V. CONCLUSIONS

The core of a biomedical technology management program is the development of a comprehensive equipment inventory of the healthcare system. The creation of an equipment inventory serves the need for identification and control of all medical devices in the PHS of Crete. WEB-PRAXIS addresses the needs associated with the services performed by Clinical Engineering Departments related to all aspects of a medical device's life cycle. The systematic use of WEB-PRAXIS is expected to improve the management of medical equipment, with significant benefits related to cost-efficiency and safety.

ACKNOWLEDGMENT

The authors would like to express their thanks to all the people from INBIT and the BIT unit participating in the current project for their valuable help. The work is financially supported by the Ministry of Health and Social Solidarities, Greece.

REFERENCES

1. Bliznakov Z, Pappous G, Pallikarakis N (2002) Development of a Biomedical Technology Management System. In Proceedings, 3rd European Symposium on Biomedical Engineering and Medical Physics, Patras, Greece, 2002
2. Bliznakov Z, Pappous G, Bliznakova K, Pallikarakis N (2003) Integrated Software System for Improving Medical Equipment Management. Biomed. Instrum. Technol. 37(1):25-33
3. UMDNS at http://www.ecri.org/Products_and_Services/Products/UMDNS/Default.aspx

Author: Zhivko Bliznakov
Institute: University of Patras, Department of Medical Physics, School of Health Sciences
City: Rio – Patras, 26500
Country: Greece
Email: [email protected]
MIDS-project – a National Approach to Increase Patient Safety through Improved Use of Medical Information Data Systems

H. Terio
Department of Clinical Engineering, Karolinska University Hospital, Stockholm, Sweden

Abstract— MIDS, Medical Information Data Systems, is proposed as a term for systems consisting of medical devices and IT-systems. MIDS are used to collect physiological data from patients and to transfer this data through a computer network to servers and databases. The project showed that the manufacturers of IT-systems for health care, and their users, do not take full responsibility for the development and implementation of information safety and functionality. For example, there are no clear routines for securing the accuracy of the information that is transferred between different IT-systems. It is also a problem that the borderline between medical devices and IT-systems is unclear, which makes it difficult to decide which directives or legislation should be applied. This also makes it unclear who should support the systems, and the project proposes guidelines to improve the situation. The project also showed that there is a need for continuing education of the staff handling MIDS.

Keywords— Medical devices, IT-systems, medical information systems, PEMS, MIDS.
I. INTRODUCTION

The Sasser virus caused severe problems for several medical devices connected to a network, for example heart monitoring systems, PACS systems, and CT, where images disappeared. A PACS system crashed and was down for three days after installation of a new anti-virus program and security updates (patches) from Microsoft. These are examples of recent adverse events that occurred in Sweden. An analysis of withdrawals of medical devices during 1992-1998, conducted by the FDA, showed that out of 3140 cases, 8% were due to software problems, and 80% of these were due to changes made after production and distribution of the software. Very often these problems are connected to other software, such as anti-virus software, running in parallel with the application software. The development of medical devices and IT-technology has made it possible to use new technological solutions within health care. Medical information systems are today connected directly to medical devices in order to retrieve physiological data or to control the function of the medical devices. IT-systems and software used in health care have become increasingly crucial for the treatment and care of the individual patient. These systems are often even life
supporting. Shortcomings and defects in the software can constitute a risk of injury or can even harm the patient. The integration of medical devices and IT, and the use of the devices in networks, have made it necessary to find a new notion describing the integration of the two areas. The proposed new acronym is MIDS, which stands for Medical Information Data System: medical devices integrated with IT products/systems that are used to collect physiological data for diagnosis and/or treatment of a patient and to transfer this data through a network to a server/database. The use of MIDS raises the competence requirements for the persons who handle these systems. It is also necessary to develop the co-operation between clinical engineers and IT-engineers. The Swedish Society for Medical Engineering and Medical Physics (MTF) started a project in December 2005 that aims to improve patient safety through clarification of responsibilities for work with MIDS. The project aims to propose national competence requirements for engineers working with MIDS, as well as guidelines for co-operation between Clinical Engineering (CED) and IT departments, in order to fulfill the demands stated by directives and legislation.

II. METHOD

The information and data used for developing the proposals were collected through interviews with different professionals, questionnaires, and a survey of legislation and literature. Discussions and the exchange of experience among the members of the project and reference groups have also been important. One of the first areas mapped during the project was how MIDS are used and handled in the hospitals. The current level of competence, and what continuing education the engineers working in the field felt they needed, were clarified by a questionnaire posted on the MTF website. This special MIDS-portal was designed specifically to inform MTF's members and the public.
The project members had their own web-based project workspace where all the documentation was collected; it was also used for project administration.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1047–1050, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The project group arranged seminars where the progress of the project was reported and where it was possible to exchange information with different professionals. Participation in different national meetings and debates has also been a good forum for information exchange. The project group had members from the largest university hospital in the country as well as from smaller regional hospitals and some of the organizations that work with medical informatics. The board of MTF was the steering group, and the reference group was composed of representatives from national health care organisations, some representatives from county councils, and persons working in higher education.

III. RESULTS

A. MIDS components

The average MIDS consists of the medical device itself, which very often is based on a computer with application software or delivers data to a computer. It can be connected directly to a network, but often the connection is through an external computer. The data is usually analyzed on a client or workstation, which is mostly an ordinary personal computer, even though there are a number of systems where the analysis is performed on the medical device itself. On these computers the application software, classified as a medical product, must work together with other software. The network where MIDS are connected can be a Local Area Network (LAN) functioning within a single clinic for the collection and analysis of data for patient diagnosis. However, in large hospitals, and in cases where several people need to share the information, MIDS are one part of the general IT-infrastructure. In the latter case the manufacturer can demand that the hospital use a segmented physical network or firewalls and Access Control Lists (ACL). According to the directives, the systems must be designed so that a stop in a network will not jeopardize patient safety.
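One consequence of the network requirements above is that the availability of MIDS components must be watched continuously. A minimal TCP reachability probe is sketched below; the host names and ports are placeholders, not real endpoints.

```python
import socket

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder MIDS endpoints to watch (e.g., a PACS server, a database).
ENDPOINTS = [("pacs.example.local", 104), ("mids-db.example.local", 3306)]

for host, port in ENDPOINTS:
    print(f"{host}:{port}", "up" if reachable(host, port) else "DOWN")
```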
Continuous monitoring of the network function is needed when using segmentation and intelligent switches for network traffic control. Quite often the servers and databases used in MIDS are not of the latest design. This means that one has to use operating systems that are not approved by the IT department at the hospital. The manufacturer can also demand that the database be installed on a separate physical server. In all cases it is required that verified and validated anti-virus software is used. Likewise, the patches used must be verified and validated for the application software. MIDS are used in most parts of modern health care. The various imaging systems, such as CT, MRI and ultrasound with their accompanying storage and communication systems, are perhaps the most well known. Laboratories with their sophisticated analysis systems connected to a Laboratory Information System (LIS) must also follow the In Vitro Diagnostic Directive (IVDD). Digital ECG, EEG and EMG equipment are examples of electromedical MIDS. Monitoring systems like the Patient Data Management Systems (PDMS) used, for example, in intensive care are very demanding, since they are connected to a number of medical devices collecting vital data from patients in very serious condition. These systems also communicate with Electronic Health Record (EHR) systems and LIS, which makes them sensitive to disturbances. The systems used in telemedicine and home health care are also classified as MIDS.

B. Regulation and requirements

The Medical Products Agency is the Swedish national authority responsible for regulation and surveillance of the development, manufacturing and sale of drugs and other medicinal products, including medical devices. Together with the National Board of Health and Welfare, it has classified a software-based patient data management system as a medical device. This means that the manufacturer has a much greater obligation to make risk analyses and to test the product according to the Medical Devices Directive (MDD). The quality of a medical device depends mainly on how it is designed, developed and produced. The quality of software, however, depends mainly on its design and development and hardly at all on its production, which is carried out by copying the code and can easily be verified. Most software problems can be traced back to shortcomings in design and mistakes made during development. This new situation has also brought about the need for revision of European directives and international standards.
The revised version of the MDD, 93/42/EEC, states in Article 1, paragraph 2, point (a) that "a 'medical device' means any instrument, apparatus, appliance, software, material or other article, whether used alone or in combination, together with any accessories, including the software necessary for its proper application intended by the manufacturer to be used for medical purposes for human beings…" In Annex I, section II, "Requirements regarding design and construction", item 12.1b states: "For devices which incorporate software or which are medical software in themselves, the software must be validated according to the state of the art taking into account the principles of development lifecycle, risk management, validation and verification."

MIDS-project – a National Approach to Increase Patient Safety through Improved Use of Medical Information Data Systems

The 3rd edition of IEC 60601-1:2005 contains a number of requirements that Programmable Electrical Medical Systems (PEMS) must meet. Medical devices will sometimes be used together to create a system, a situation likely to become more frequent with the increasing use of computers to analyze clinical data and control treatment. Sometimes medical devices are designed by the manufacturer to work with other medical devices; often, however, the separate medical devices are not designed to work with each other. Therefore, the standard also requires that there be someone in the organisation who is responsible for ensuring that all the separate medical devices work together satisfactorily in the integrated system; in other words, someone has to be responsible for designing the integrated system. It is recognized that the system integrator often has to comply with particular regulatory requirements. In order to perform this function, the system integrator needs to know:

• how the integrated system is intended to be used;
• the required performance of the integrated system;
• the intended configuration of the system;
• the constraints on the extendibility of the system;
• the specifications of all medical devices and other equipment to be integrated;
• the performance of each medical device and other equipment; and
• the information flow in and around the system.
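The integrator's information needs listed above amount to a checklist that must be complete before devices are combined. The following Python sketch (a hypothetical structure of our own; IEC 60601-1 prescribes no such data format, and all names here are illustrative) shows one way to track which items are still missing:

```python
from dataclasses import dataclass, field

@dataclass
class IntegrationDossier:
    """Illustrative checklist of the information a system integrator
    needs before combining medical devices (field names are our own,
    not taken from IEC 60601-1)."""
    intended_use: str = ""
    required_performance: str = ""
    intended_configuration: str = ""
    extendibility_constraints: str = ""
    device_specifications: dict = field(default_factory=dict)  # device -> specification
    device_performance: dict = field(default_factory=dict)     # device -> performance data
    information_flow: str = ""

    def missing_items(self) -> list:
        """Return the names of checklist fields that are still empty."""
        return [name for name, value in vars(self).items() if not value]

# Example: only the intended use has been documented so far.
dossier = IntegrationDossier(intended_use="ICU patient monitoring")
print(dossier.missing_items())  # prints the six still-unfilled items
```

In practice such a dossier would of course be a set of controlled documents rather than a data structure; the sketch only makes explicit that integration should not proceed while `missing_items()` is non-empty.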
The IEC 62304 standard specifies how the software for medical devices must be developed, documented, validated and supported. It also specifies how the software must be classified, depending on the type of injury that a possible malfunction could cause.

C. Problems in connection with MIDS

The specification of the functional and technical requirements of large, complex MIDS is sometimes difficult, especially if the needed competence is not available at the hospital. Purchase of the system will then be done with imperfect documentation, and this can lead to severe problems later on. It became apparent during the project that manufacturers and vendors sometimes have poor knowledge of the regulations and directives that govern the usage of the systems. For example, in one case the vendor had a "Declaration of Conformity" for a system that was supposed to be connected to medical devices and to the EHR. After the system installation it turned out that the development of the system was not finished and it was not possible to connect the system to the EHR. Handling of anti-virus software and patching of the operating system can lead to a re-definition of the system. If the manufacturer has not validated the installed software or the patches, they can decline liability on the grounds that the system has not been used in the intended way. The system will then be classified as an in-house product and the user is liable for the whole system. The introduction of new MIDS, or changes made to them, should be accompanied by risk analysis. However, this is very often overlooked because the competence to carry out such analyses is lacking.

D. Competence requirements

A questionnaire study was carried out to survey the current competence of the engineers working with MIDS at the hospitals and their need for continuing education. The questionnaire was sent out to personnel working in this area in every county in Sweden. Moreover, a number of persons with a good general view of the area were interviewed. The results showed that both the clinical engineers and the IT engineers were interested in enhancing their knowledge of each other's areas. However, the clinical engineers were more willing to take a supplementary examination in computer science and data communication than the IT engineers were to do so in biomedical engineering. The results also showed that only half of the respondents considered that they had sufficient knowledge to use the MIDS they are working with. It can be concluded that there is a lack of knowledge within the MIDS area and that there is a need to define what competence is really required. This will also help the hospitals and county councils to plan for continuing education.

E. Proposal for classification

As a result of the project, a proposal for the classification of computers used in MIDS was presented. This classification would make it easier to point out who is responsible for support. All computers, whether thin clients, servers or workstations, have been divided into three main classes:

• W is an ordinary computer with a standard configuration,
• WM is an ordinary computer with a standard configuration but with a medical application, and
• M is a medical device configured in accordance with the regulations.
The computers are then divided into three groups depending on where they are placed:

• Group 0 is outside the patient environment;
• Group 1 is inside the patient environment, with safety requirements according to IEC 60601;
• Group 2 is inside the patient environment, with safety requirements according to IEC 60601 and IEC 529.

Table 1 summarizes the classification of the computers.
Table 1  Classification of computers

            Group 0   Group 1   Group 2
Class W     W0        W1        W2
Class WM    WM0       WM1       WM2
Class M     M0        M1        M2
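The class/group scheme of Table 1 amounts to a simple two-dimensional lookup. As an illustration only (the function and its names are our own, not part of the project's proposal), the designation for a given computer could be derived as:

```python
# Illustrative sketch of the proposed MIDS computer classification (Table 1).
CLASSES = ("W", "WM", "M")   # ordinary / ordinary + medical application / medical device
GROUPS = (0, 1, 2)           # outside / inside (IEC 60601) / inside (IEC 60601 + IEC 529)

def classify(computer_class: str, group: int) -> str:
    """Combine class and placement group into a designation such as 'WM1'."""
    if computer_class not in CLASSES or group not in GROUPS:
        raise ValueError("unknown class or group")
    return f"{computer_class}{group}"

print(classify("WM", 1))  # → WM1
```

The point of the scheme is precisely this mechanical combinability: every computer in the hospital gets exactly one of the nine designations, which in turn determines who supports it.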
The IT department is responsible for the W, WM and WP computers and CED is responsible for the M and MP computers. CED and IT co-operate on the support of the WM and WP computers. In some cases it is difficult to decide to which class a computer belongs, and in such cases CED and IT must discuss the matter to clarify the support responsibility.

F. Proposal for technical solutions

Special MIDS domains were proposed to be created for the clients and servers at larger hospitals. In smaller hospitals, Organizational Units in a Microsoft Active Directory environment can be used. CED is proposed to be responsible for the MIDS domain or the AD environment for MIDS, whereas the IT organization would be responsible for the general IT infrastructure in the hospital. To increase safety when using MIDS, segmented networks with Access Control Lists should be introduced. Labeling of the cables and workstations/clients should also be used in order to separate MIDS from ordinary IT systems. The cables used for MIDS are proposed to be green and those for the IT systems white. The labeling of the computers should follow the same convention. Anti-virus software with regular updating should be used for MIDS connected to a network. The function of the MIDS should be verified and validated together with the user and the manufacturer. It is also recommended that there be a general anti-virus administration with representation from CED and the IT department. They should supervise that the systems are managed according to the regulations.
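As a rough illustration of what an Access Control List for a segmented MIDS network expresses (the rule format, segment names and function are invented for this sketch, not taken from the project report), a default-deny check might look like:

```python
# Hypothetical ACL for a segmented hospital network; segment names
# and rules are illustrative only.
ACL = [
    # (source segment, destination segment, allowed?)
    ("MIDS", "MIDS", True),     # traffic within the MIDS segment
    ("MIDS", "EHR", True),      # MIDS may deliver results to the EHR segment
    ("OFFICE", "MIDS", False),  # ordinary office clients are explicitly blocked
]

def is_allowed(src: str, dst: str) -> bool:
    """Return True only if an explicit permit rule exists (default deny)."""
    for rule_src, rule_dst, allowed in ACL:
        if (rule_src, rule_dst) == (src, dst):
            return allowed
    return False  # no matching rule: deny by default

print(is_allowed("OFFICE", "MIDS"))  # → False
```

The default-deny behaviour is the essential property: a MIDS segment only exchanges traffic that has been explicitly permitted, which is what keeps unvalidated software on the general network away from the medical systems.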
IV. CONCLUSIONS

Technical development and the political decisions that direct health care will in the future require that new interdisciplinary organisations be created within CED and IT. Medical informatics, CED, IT and biotechnology will be integrated in the new organisations to support the development of health care. Ensuring a positive development towards increased patient safety when using different MIDS requires that the CED and IT organizations develop their quality systems and work towards certification. The certification can be accomplished according to ISO 9000, ISO 13485, ISO 17025 and ISO 20000, or a combination of these standards. The two organisations can pursue certification separately in the beginning, but in the future they should do it together. Common product and system administration for CED and IT is recommended. In this way the routines, processes, co-operation and responsibilities will be regulated in a natural way. New functions like the System Integrator, described in the 3rd edition of IEC 60601-1, should be introduced as soon as possible in order to handle MIDS properly. This function should be carried not by a single person, but by a group of professionals from CED, IT and the users' system administration.

ACKNOWLEDGMENT

This paper is based on the project report from the national MIDS-project. The leader of this project was Salvatore Capizzello from the County Council of Norrbotten.

REFERENCES

1. Medical Devices Directive (MDD), Directive 93/42/EEC, OJ L 169, 12.7.1993
2. IEC 60601-1:2005, Medical electrical equipment – Part 1: General requirements for basic safety and essential performance
3. IEC 62304, Ed. 1: Medical device software – Software life cycle processes

Author: Heikki Terio
Institute: Department of Clinical Engineering, C2:44
Street: Karolinska University Hospital, Huddinge
City: 141 86 Stockholm
Country: Sweden
Email: [email protected]
Patient safety - a challenge for clinical engineering J.H. Nagel and M. Nagel Department of Biomedical Engineering, University of Stuttgart, Stuttgart, Germany Abstract— Every tenth patient in US and European hospitals suffers from preventable harm and adverse effects related to his or her care. As adverse events also carry a high financial cost, patient safety remains a major global priority. Often forgotten in discussions on the necessary actions to improve patient safety, clinical engineering is one of the major pillars for safe health care. Realizing the huge potential of contributing to the world-wide efforts to provide safer care, the clinical engineering community has accepted the challenge to take a lead in providing a safer environment for patients. Keywords— Patient safety, clinical engineering, Biomedea.
I. INTRODUCTION

Patient safety is a globally important issue in health care of which the public has long been unaware, though an increasing body of research from around the world consistently suggests that in any nation, regardless of the nature and quality of the health care system, a high percentage of hospital admissions may result in disease, injury or even death due to adverse events. These are all incidents and accidents in which unintended harm is caused to the patient by commission or omission rather than by the underlying disease or condition of the patient. Errors in treatment, nosocomial infections, communication problems, human and technical error, mistakes in patient management, as well as inadequate education and training of personnel are only some of the most frequent causes of adverse events. According to WHO, each time a patient is harmed by the health system, it is a betrayal of trust. These so-called adverse events are actually reverse events: instead of advancing people's health and well-being, medical errors send them backwards, causing more harm than good [1]. Studies in a number of countries have shown rates of adverse events ranging from 3.5% to 16.6% among hospital patients in industrialized countries. On average, one in every ten patients admitted to a hospital suffers some form of preventable harm that can result in severe disability, and one in every 300 patients even dies as the result of an adverse event. Added to the considerable human misery is the economic impact of adverse events. Several studies have shown that additional hospitalization, litigation claims, hospital-acquired infections, lost income, disability and medical expenses cost some countries between US$ 6 billion and US$ 29 billion a year. In developing countries and countries in economic transition the situation is even far more serious. WHO reports that 77% of all reported cases of counterfeit and substandard drugs occur in developing countries and that at least 50% of all medical equipment in many of these countries is unsafe or unusable. Quality health care should be safe, effective, patient-centered, timely, efficient and equitable. Safety is a core principle of quality health care provision and a fundamental value of any health system. In 2005, the European Union even proclaimed in its Luxembourg Declaration on Patient Safety that "access to high quality healthcare is a key human right recognized and valued by the European Union, its Institutions and the citizens of Europe. Accordingly, patients have a right to expect that every effort is made to ensure their safety as users of all health services" [2]. In our world of highly complex health technology, where new equipment as well as medical and surgical procedures are developed and employed at an increasing pace, safety in the health care system depends substantially on the achievements and performance of biomedical and clinical engineering as well as medical physics. Clinical engineering is taken to mean the application of medical and biological engineering within the clinical environment for the enhancement of health care. A clinical engineer is a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology. Due to the increasing dependency of clinical medicine on highly sophisticated health technology, and thus on complicated medical equipment, devices and information and communication technologies, the clinical engineer has become an essential connecting link between modern medicine and technology. His work is directly associated with patient safety.
The International Federation for Medical and Biological Engineering (IFMBE), mainly through its Clinical Engineering Division, and the International Union for Physical and Engineering Sciences in Medicine (IUPESM), together representing more than 140,000 professional biomedical/clinical engineers and medical physicists in virtually all WHO member countries, thus constituting a unique pool of expertise on the subject matter of health technologies, have established close cooperation with the World Health Organization (WHO) for the purpose of advancing patient safety. The Federation is closely cooperating with WHO in
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1043–1046, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
the areas of health technologies, specifically policy and planning, quality and safety, norms and standards, technology assessment and management, and education and capacity building, within the broad context of improving health service delivery and health systems performance. IUPESM and IFMBE are assuming leadership and coordination of the global WHO activities for the improvement of patient safety in the areas of health technologies and education with regard to health technologies.

II. WHO WORLD ALLIANCE FOR PATIENT SAFETY

Primum non nocere – first, do no harm! Under this motto, attributed to Hippocrates (ca. 460–370 BC), the World Health Organization (WHO) and its partners, including IFMBE and IUPESM, launched the World Alliance for Patient Safety on October 27, 2004, in Washington, D.C. The purpose of the Alliance is to advance patient safety by implementing a series of key actions to reduce the number of illnesses, injuries and deaths suffered by patients during medical treatment. The creation of the World Alliance took place two years after the Fifty-Fifth World Health Assembly Resolution on Patient Safety in 2002 called on Member States to pay the closest possible attention to the problem of patient safety and to establish and strengthen the science-based systems necessary for improving patient safety and the quality of health care, including the monitoring of drugs, medical equipment and technology. The resolution urged WHO to take the lead in developing global norms and standards, encouraging research, and supporting efforts by Member States in developing patient safety policy and practice. It is expected that its work will eventually lead to much greater long-term safety in health care. The impact of well-developed and well-applied strategies on patient safety is expected to include a dramatic decrease in adverse events in health care and a decline in expenditure on the order of billions of dollars of saved costs annually.
To focus its activities, the Alliance is defining key initiatives, the so-called Global Patient Safety Challenges, each of which identifies a specific patient safety topic for a two-year program of action addressing a significant area of risk relevant to all countries. The first challenge, "Clean Care is Safer Care" (2005 and 2006), aimed at reducing the burden of health care associated infections through good hygienic practice, demonstrating at the same time that much can be achieved and lives can be saved by simple, inexpensive measures. The goal of the second challenge, "Safe Surgery Saves Lives", which is currently being implemented, is to improve the safety of surgical care around the world. Clinical engineering will contribute with regard to all aspects of technologies, technology assessment and management, as well as the training of medical personnel. IFMBE and IUPESM are aiming at developing the third challenge, focusing on health technologies and education. IFMBE and IUPESM are supporting the World Alliance for Patient Safety through their participation in Alliance initiatives and events as well as through their own patient safety activities, which include the organization of patient safety symposia, participation in the World Standards Cooperation (WSC) Healthcare Technology Task Force, promotion of biomedical and clinical engineering research related to patient safety, health technology assessment and management, and educational activities including quality assurance measures with regard to patient safety.

III. IUPESM HEALTH TECHNOLOGY AND TRAINING TASK GROUP
In parallel, and as part of the activities of the World Alliance for Patient Safety, the newly founded Health Technology and Training Task Group (HTTTG) of the International Union for Physical and Engineering Sciences in Medicine (IUPESM) is dealing with the issue of patient safety as well. Health technologies, from the simplest health care systems to the most sophisticated, are viewed as the backbone of each country's health services, a strong mesh that is one of the most fundamental prerequisites for the sustainability and self-reliance of health systems. According to the definition commonly used by WHO, these include drugs, devices, equipment, technical, medical and surgical procedures, the knowledge associated with them in the prevention, diagnosis and treatment of disease as well as in rehabilitation, and the organizational and supportive systems within which care is provided. Drugs, although a special subset of health technologies, are not included in the work of the HTTTG. Medical and surgical procedures are within the scope of this program only with regard to devices, technical support and education. Information and communication technologies are included among the devices and technical procedures. Steadily increasing health care costs have reached crisis proportions in many countries and are coming under close scrutiny from governments, health care providers, insurers and consumers. Efforts to contain these costs, or at least to slow their growth, have been largely unsuccessful, as the costs continue to outpace growth in gross domestic product. Though often blamed for the cost explosion in the health care systems, the cost of medical devices accounts for only about 28% of health care expenses in most countries, and properly selected technologies can substantially increase the quality of health care and at the same time reduce the overall burden and cost of sickness and health care. Realizing these
opportunities is one of the goals of modern health technologies. In spite of all efforts to make all health technologies available to all countries, an increasing number of countries still cannot shoulder the financial burden of acquiring and maintaining all the technologies that would be desirable and beneficial for the health care of their people. It is therefore necessary to establish priorities based on the available resources and the burden of disease, a rather complex task for which the World Health Organization, together with the IFMBE, has already developed methodologies and tools such as the WHO Integrated Health Technology Package (IHTP). One of the prerequisites for making proper use of health technologies is the existence of an appropriate, reliable infrastructure. In order to set up and/or maintain this infrastructure, centers for health technologies should be established as part of the health ministries, or at least strongly linked to them. These centers should implement the national strategies and plans for health technologies, oversee and guide the national health care systems and, where appropriate, regional health care centers with regard to health technologies, as well as collaborate and build partnerships with health care providers, industry, patients' associations and professional, scientific and technical organizations (WHO Resolution passed by EB120, 22-30 January 2007). Another important step in improving the quality of health care and patient safety through health technologies is to build up the necessary health workforce, i.e. the medical physicists, clinical engineers and technicians who are able to manage, maintain and operate the technologies and to educate the users, i.e. physicians and nurses, in the safe and competent use of equipment and devices. The industrialized countries should help those countries that cannot afford to provide education and training for a sufficient health care workforce by offering educational support.
The role of the IUPESM HTTTG, which is borne by IFMBE and IOMP, is to help identify the needs in health technologies and training of each cooperating country, to make recommendations for actions to satisfy these needs and, as far as appropriate and possible, to support the countries in the necessary actions. The Task Group will cooperate and coordinate its activities with the World Health Organization and participate in the maintenance and further development of the WHO Integrated Health Technology Package. The HTTTG will collaborate with other relevant national and international organizations, academic institutions and professional bodies which provide support to developing countries in the prioritization, selection, acquisition and use of appropriate health technologies, which are, according to the WHO Health for All Series' Glossary of Terms, methods, procedures, techniques and equipment that are scientifically valid, adapted to local needs, and acceptable to those who
use them and to those for whom they are used, and that can be maintained and utilized with resources the community or the country can afford. The HTTTG will, in cooperation with WHO, organize workshops in participating countries together with the local clinical engineers and medical physicists as well as all other relevant professional groups, the health ministries, health care providers and political decision makers, to evaluate the health technologies in the countries and to develop plans for the realization of appropriate infrastructures for health technologies. IFMBE and IUPESM are in the position to contribute substantially to the initiative to improve patient safety, specifically, but not limited to, the areas of medical devices and equipment, appropriate health technologies, management and maintenance of healthcare technology, access to medical devices, and norms and standards, all areas where especially IFMBE has demonstrated significant involvement in patient safety matters in previous and ongoing cooperative projects with WHO, both in developing and developed countries. Medical device and equipment safety is dealt with comprehensively in a set of guidelines for improved management of physical resources in health care, including a software-based resource-planning methodology and management tool, the Integrated Healthcare Technology Package. The two organizations have the expertise, the resources, the research capabilities and the delivery potential to tackle all patient safety issues related to health technologies, including assessment and management, as well as the means to facilitate access to medical devices and to aid the transfer of technology to developing countries. IFMBE is also active in the development and application of international norms and standards as essential tools to ensure the quality and safety of medical devices.

IV. CLINICAL ENGINEERING ENHANCING PATIENT SAFETY

Dyro lists safety as one of nine components of clinical engineering practice.
The clinical engineer is well-versed in the following issues bearing directly upon patient safety [3]: systems analysis; hospital safety programs; accident/incident investigation; root cause analysis; healthcare failure mode and effect analysis; user error identification and reduction; risk analysis and management; hazard and recall reporting systems; vigilance and post-market device surveillance; device-device adverse interaction awareness; electromagnetic compatibility and interference; and disaster preparedness. The other components of clinical engineering practice, while not directly addressing safety, do secondarily affect the safety of the patient. They are as follows: health technology management, medical device service,
technology application, information technology, standards and practices, education and training, research and development, and clinical facilities engineering. Adequate, high-quality education and training within the hospital, with subsequent certification, are essential prerequisites for the ability of clinical engineers to contribute significantly to patient safety. Only certification can assure that clinical engineers have the necessary knowledge, abilities and experience. It is therefore hard to believe that even in the US and almost all European countries any engineer, in many cases even anybody without an engineering degree, can call him/herself a clinical engineer and assume responsibility for the health technology in a hospital, without any proof of the required qualifications and competencies. Pointing out this problem at the European Union Patient Safety Summit in London in November 2005, Prof. Nagel asked the 543 delegates from 56 countries, who were European and national political decision makers, representatives of the medical and nursing professions, health care providers and hospital managers, to vote on the question of whether certification of clinical engineers should become mandatory in Europe. He received an overwhelming 82% rate of approval; only 8% of the votes were against mandatory certification. The biomedical/clinical engineering community thus has a political mandate to move on towards implementing the programs and management structures for the training and certification of clinical engineers. Hand in hand with the regulation of the CE profession there must be an obligation for hospitals to employ sufficient numbers of CEs. Such an obligation would make it necessary to substantially increase the capacity of clinical engineering programs and would give higher education institutions the perspective for the introduction of new programs.
The Bologna Process and the Europe-wide introduction of Bachelor/Master programs offer a good opportunity to do so, at least in Europe. For the Europe-wide implementation of high-quality, harmonized biomedical engineering programs, J.H. Nagel is currently coordinating the BIOMEDEA project, which he initiated in 2005 and in which 83 European universities and 30 national and international societies are participating. The project aims at developing European guidelines for the harmonization and accreditation of biomedical engineering programs, reaching a Europe-wide consensus on these guidelines and thus allowing for the mutual recognition of degrees. It also aims at setting up European protocols for the training, certification and continuing education of clinical engineers. The project has proven to be very successful. The established guidelines for accreditation have already been accepted throughout Europe and have been implemented in a number of countries as national regulations. The protocols are close to being finished.
J.H. Nagel and M. Nagel
Due to the global importance of clinical engineering education for patient safety, the WHO and the IFMBE are cooperating with BIOMEDEA, including in the organization of patient safety symposia with global participation. The results of the BIOMEDEA meetings have found international recognition and consent, and as a result, certification of clinical engineers on the basis of the BIOMEDEA protocols is currently being implemented around the world under the coordination of the IFMBE.

V. CONCLUSION
Now that awareness of the need for improved patient safety has been raised and the dimensions of the problem have been recognized, numerous steps are being taken all around the world to make health care safer. Biomedical and Clinical Engineering have joined forces with the WHO to help improve the safety of health care technology. Much has been done already, but still more remains to be done to make health care in all its facets safer for all patients in the world. A poll taken by the authors at the World Health Care Congress Europe 2007 showed that while 78% of some 600 participants thought that we have a patient safety problem in Europe (20% were not sure), and 91% agreed that all relevant stakeholders should work together to make patient safety a top priority in Europe, only 43% of those attendees associated with a hospital confirmed that their hospital has implemented a medical error reduction or other patient safety program.
REFERENCES
1. World Alliance for Patient Safety (2006) A Year Living Less Dangerously. Progress Report 2005. WHO, 2006
2. The Luxembourg Declaration on Patient Safety, http://ec.europa.eu/health/ph_overview/Documents/ev_20050405_rd01_en.pdf
3. Dyro JF (2004) The Clinical Engineering Handbook. Elsevier, Burlington, MA

Corresponding author: Joachim H. Nagel, Department of Biomedical Engineering, University of Stuttgart, Seidenstrasse 36, Stuttgart, Germany. Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
System for Tracing of Blood Transfusions and RFID
P. Di Giacomo1 and L. Bocchi2
1 Center for Biomedical Research, University of Rome La Sapienza, Rome, Italy
2 Dept. of Electronics and Telecommunications, University of Florence, Florence, Italy
Abstract— In a medical center for blood transfusions, the blood bags are prepared and identified by a bar code label which identifies the blood material, by type and destination, according to the required standards. Traditional information systems use rigorous methodologies to verify and track the bags until they are delivered to the nurses. At this stage, however, the tracing of the blood material is no longer rigorous, particularly with regard to the mode and timing of its administration to the patient, because there are no automatic systems to verify them. This paper describes an automatic system for tracking the delivery process from the transfusion center to the patient transfusion. The proposed tracking system is based on RFID tags in order to offer a basic subset of functionalities even in the absence of a network connection. Keywords— RFID, quality, blood-transfusion, automatic clinical tracing.
I. INTRODUCTION
Quality management in a blood transfusion service is concerned with every aspect of transfusion practice and applies to all activities of the service. It involves identification and selection of prospective blood donors, adequate collection of blood and preparation of blood components, quality laboratory testing, and ensuring the safest and most appropriate use of blood and blood components [1]. A simple definition of quality is ‘fitness for a purpose’. In a blood transfusion service, the primary goal of quality is ‘transfusion of a safe unit of blood’. The objective is to ensure availability of a sufficient supply of high quality blood and blood components for transfusion with maximum efficacy and minimum risk to both donors and recipients [2]. Quality management can be achieved by adopting good manufacturing practice, good laboratory practice, good hospital practice and a good clinical approach, establishing a comprehensive and coordinated approach of total quality management [3]. All those who are involved in blood transfusion-related activities must be aware of the importance of quality management for its successful implementation. To maintain a high level of performance in most of the laboratory techniques, it is essential to monitor the functioning of reagents, equipment, techniques and procedures in the laboratory, and finally to manage the administration to the patient in a correct and safe way [4]. In fact, Medicare regulations and the guidelines of the Joint Commission on Accreditation of Healthcare Organizations require assessment of the appropriateness of transfusions by a hospital committee. A set of criteria maps for component transfusion review by nurses or technical personnel was designed, tested, and modified. The intent of this paper is to present a system, described in the following, for the automatic tracing of the blood units administered to patients, allowing health care professionals to familiarize themselves with the concept of effective quality assurance in regard to blood use. Although evaluation of the appropriateness of transfusion therapy is now required by the Joint Commission on Accreditation of Health Organizations, health care facilities have little experience with this aspect of professional quality assurance. To this end, for example, the Committee on Transfusion Practices of the American Association of Blood Banks, in Arlington, has provided examples of indications and audit criteria for individual blood components and products, and commented on areas of controversy surrounding their use. Audit criteria from different institutions may vary because of differences in local interpretation of the indications, different patient populations, and, in some instances, the availability of blood and laboratory services. Moreover, several approaches to the review of transfusion practices are discussed in relation to clinical settings and particular blood components [5].

II. SYSTEM DESCRIPTION
The functional principle of the system is based on the coupling of each blood unit with a specific patient. Moreover, the system requires the identification of the health operators involved in each intermediate phase, from the delivery of the unit to the transfusion to the patient.
All the phases of the operations are carried out by means of bar code readers (fixed or mobile) and RFID tags that identify the patient and the prescriptions. RFID tags are increasingly used in several sectors of the health care system [6,7,8], as standard bar code labeling suffers from some drawbacks (read-only, need for optical contact, limited amount of data).

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1062–1065, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

The major requisites which guided the design of the system are:
• interoperability: the system must interface with any existing Hospital Information System (HIS) with minimal effort;
• safety first: transfusion centers often operate in emergency situations, so the system must accept that some checks may be skipped in an emergency.
Given these requisites, the system has been designed to minimize the data transfer between the different phases, by using RFID tags in which essential data may be stored, and by designing the software to maximize the usability and flexibility of the check points. The resulting system is composed of almost independent modules, which may be connected to the current HIS without the need to create a parallel and overlapping infrastructure. The modular architecture makes it possible to configure and adapt the system to the health care context. The structure of the system can be represented through four functionally distinct phases: two of them (admission and prescription) are executed with a computer, while the other two, the monitoring and the tracking of the operations during the transfusion, use a mobile device. A major issue which arises in the design of a system based on mobile devices is the selection of a connection method which allows transmission of data to the system. An emerging possibility is the use of wireless connections, which may provide a continuous network connection while the device is in use. However, the use of a wireless network in a health care system is still controversial. Interference of the wireless network with medical instruments is the major concern in several situations, and many hospitals do not yet have a complete wireless network available. For this reason, we chose to design a system which does not rely on a continuous connection of the devices with the information system: the memory embedded in the tags is used to store the essential information required to perform real-time checks, while transmission of data to the main storage is performed off-line when the mobile devices are connected to the wired network.
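The paper does not specify the on-tag data format, so the following is only an illustrative sketch of the offline-first idea: the essential data needed for real-time checks is serialized into the tag's user memory and read back without any network connection. All field and function names are hypothetical, not taken from the actual system.

```python
import json

# Hypothetical payload written to the RFID tag's user memory so that
# checks can run with no network connection, matching the offline-first
# design described in the text. Field names are illustrative only.
def encode_tag_payload(patient_id: str, unit_barcode: str,
                       operator_id: str, max_storage_min: int) -> bytes:
    payload = {
        "pid": patient_id,           # unique patient ID assigned by the HIS
        "unit": unit_barcode,        # bar code of the blood unit
        "op": operator_id,           # operator currently in charge of the unit
        "max_min": max_storage_min,  # constraint: maximum storage time (minutes)
    }
    # compact JSON, since tag user memory is small
    return json.dumps(payload, separators=(",", ":")).encode("utf-8")

def decode_tag_payload(raw: bytes) -> dict:
    """Parse the payload read back from a tag."""
    return json.loads(raw.decode("utf-8"))
```

In a deployment the payload would likely be a fixed binary layout rather than JSON, to fit the limited tag memory; JSON is used here only to keep the sketch readable.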
A. Patient admission

Each patient, at the check-in phase, is identified by an RFID tag on a bracelet. At the moment of reception, the tag is coupled with the code of the patient. The coupling remains valid until the patient is dismissed. This procedure has been realized (Fig. 1) by using a tag writing module, which receives from the HIS the essential data about the patient (the name, used for visualization purposes, and any unique ID used in the HIS to identify the patient). The module can be realized in different configurations, for example as a WEB application or as a connection library. The simpler configuration uses a WEB application based on the software installed on the server of the system. The acceptance of the patients is made through a WEB page which allows the operator to insert the personal data related to the patient (name, surname, etc.) and to assign an unambiguous ID. When the data are stored, the system activates the RFID reader to write the data on the RFID bracelet. The WEB interface also provides the possibility to store the data, allowing searches for patients and the optional printing of the form for the transfusion. Alternatively, the same interface can be used to allow connection from a custom HIS, which may activate the module using a single HTTP call. The library version is used when it is necessary to interface directly with the HIS. The operator uses the existing system to perform all the operations necessary for the acceptance of the patient. However, the existing program has to be modified in order to associate the patient with the RFID bracelet.

B. Prescription

Each blood unit is prepared in the laboratory according to the universal standards and is identified by a bar code to guarantee the contained material. The proposed system does not affect the clinical procedures involving the transfusion center until a unit is prescribed to a patient. At the moment
Fig 1: Prescription and association of a blood unit to a patient
of assigning the unit to the patient for the transfusion, the system associates the bar code of the blood unit with the code of the patient until the end of the transfusion. This is performed by labeling the unit with a removable RFID tag. The system receives from the HIS the unique identification number of the patient and asks the operator to read the bar code which identifies the unit. Both data are then written on the RFID tag, which is applied to the unit. The system may also be configured to require the identification of the operator who performs the association. This identification is made with a badge, if it has a bar code, or with a self-adhesive RFID tag. The module for the management of the prescriptions is analogous to the module for the acceptance of the patients, and it can be used both in the WEB version and in the library version. The WEB version does not require the installation of the software on the PC of the transfusion center, but it requires the presence of a server. It presents a search interface with which it is possible to identify the patient who has to receive the unit and to assign an unambiguous code. The requirements of the system are the same as those of the acceptance module, with the addition of a bar code reader. When the system is connected by means of a communication library, the association is made at the level of the existing information system. At the end of the operation, the library will activate the tracking system to write the tag, providing to it the necessary data (the bar code of the unit and the identification of the patient).
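The association step above, with its optional operator identification, can be sketched as follows. This is not the system's actual code; the function and field names are hypothetical, and whether an operator must be identified is modeled as a configuration flag, as the text describes.

```python
# Sketch of the prescription/association step: the patient ID comes from
# the HIS, the operator scans the unit's bar code, and both are written
# to the removable RFID tag applied to the unit. Operator identification
# is a configurable requirement. Names are illustrative assumptions.
def associate_unit(patient_id, unit_barcode, operator_id=None,
                   require_operator=False):
    if require_operator and operator_id is None:
        raise ValueError("operator identification required for association")
    tag_record = {"patient": patient_id, "unit": unit_barcode}
    if operator_id is not None:
        tag_record["operator"] = operator_id
    # in the real system this record would be written to the unit's RFID tag
    return tag_record
```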
Fig 3: Prescription of the unit
C. Tracking

The system may optionally be configured to require a complete tracking of each unit during transport and temporary storage in the ward, until the delivery to the patient. Tracking is performed by means of a mobile device, a palm computer, equipped with an RFID reader and a bar code scanner (see Fig. 3). When the unit is transferred from one operator to another, or placed into a storage device, the tracking module requires the identification of both operators, or of the operator and of the storage device, by reading their badges or a tag attached to the storage device. All information concerning the operation is stored in the palm device, while the identity of the operator currently in charge of the unit and any constraints on the treatment of the unit (e.g. maximum storage time) are also stored in the memory of the RFID tag. When the palm device is placed in its charging cradle, the data are uploaded to a central database that stores the whole history of the movements of the unit. At the same time, the information required to validate the next operation is available in the RFID tag without the need for any network connection. If the tracking system identifies any problem in the handling of the unit (e.g. the storage time has exceeded the maximum duration, or the operator does not correspond to the one in charge of the unit), it does not block the process, but signals the problem to the operator, who must acknowledge the warning and decide how to respond.
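The warn-don't-block policy described above can be sketched as a pure check function: handover problems (storage time exceeded, operator mismatch) are collected as warnings to be acknowledged, and the process is never stopped. This is an assumed illustration of the two example checks named in the text, not the system's actual logic.

```python
from datetime import datetime, timedelta

# Sketch of the handover check: problems are reported as warnings that the
# operator must acknowledge, but the transfer itself is never blocked.
# The tag fields ("stored_at", "max_min", "op") are hypothetical names for
# the constraints the paper says are kept in the tag memory.
def check_handover(tag, presenting_operator_id, now):
    warnings = []
    deadline = tag["stored_at"] + timedelta(minutes=tag["max_min"])
    if now > deadline:
        warnings.append("storage time has exceeded the maximum duration")
    if presenting_operator_id != tag["op"]:
        warnings.append("operator does not match the one in charge of the unit")
    # warnings are displayed and acknowledged; the handover proceeds regardless
    return warnings
```

This keeps the safety-first requisite from the design section: in an emergency, a mismatched operator generates a traceable warning instead of halting the delivery of the unit.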
Fig 2: Tracking of the unit
D. Transfusion

The most important step is the transfusion of the blood unit to the patient. This step is mandatory, and the system requires the identification of the unit, of the operator (or operators) and of the patient. The system may be configured to require a certain category of operators (nurse, physician, or both) to be present during the operation (see Fig. 4). In this situation, in addition to the warning messages which may occur, as described in the previous step, a mandatory check is performed on the correct matching of the patient and the blood unit. After the operation is completed, the system may also record any adverse effect which may occur. As described before, all data are stored in the mobile device and are transferred to the database when the palm device is connected to its charging base.

Fig 4: Transfusion

III. CONCLUSIONS
The proposed system is a cost-effective solution to the tracking problem in the health care system. Indeed, the quality assurance which can be achieved by means of a tracking system needs to be combined with some degree of flexibility of the tracking method, in order to comply with the major requisite of the health care process, which is the safety of the patient, especially in emergency situations. A tracking system which proves effective and which is well accepted by the medical staff needs to tune its requisites to each situation, ranging from a minimal tracking that maintains the essential functions in emergency situations, to a more complete tracking during routine procedures, without excessive interference with ordinary activities.

REFERENCES
1. Dzik WH (2003) Emily Cooley lecture 2002: transfusion safety in the hospital. Transfusion 43:1190-1199
2. Food and Drug Administration, Center for Biologics Evaluation and Research (2004) Bar code label requirement for human drug products and biologics. Rockville, MD, February 25, 2004. Available at: http://www.fda.gov/OHRMS/DOCKETS/98fr/04-4249.htm
3. Dzik WH (2003) Emily Cooley lecture 2002: transfusion safety in the hospital. Transfusion 43:1190-1199
4. Fridey JL (2003) Standards for blood banks and transfusion services. American Association of Blood Banks, Bethesda, MD
5. Rossi ED, Simon TL (2002) Transfusion in the new millennium. In: Simon TL, Dzik WH, Snyder EL, et al (eds) Rossi's Principles of Transfusion Medicine, 3rd ed. Lippincott Williams and Wilkins, Baltimore, MD, pp 1-12
6. Glabman M (2004) Room for tracking. RFID technology finds the way. Mater Manag Health Care 13(5):26-28, 31-34
7. Becker C (2004) A new game of leapfrog? RFID is rapidly changing the product-tracking process. Mod Healthc 34(28):38, 40
8. James JS (2005) FDA, companies test RFID tracking to prevent drug counterfeiting. AIDS Treat News (417):5-8

Authors: Paola Di Giacomo, Leonardo Bocchi
Institute: Dept. of Electronics and Telecommunications
Street: Via S. Marta 3
City: 50139, Florence
Country: Italy
Email: [email protected], [email protected]
BIOMEDEA
Joachim H. Nagel
Department of Biomedical Engineering, University of Stuttgart, Stuttgart, Germany

Abstract— There is widespread recognition of the need for high quality Biomedical Engineering education, training, accreditation and certification throughout Europe. Many schemes are being developed or are awaiting implementation, but there has been little harmonization. The continuing national differences in the educational systems are a serious problem that can hinder and limit trans-national education, training, employment and cooperation. The BIOMEDEA project aims at changing this situation by establishing Europe-wide consensus on guidelines for the harmonization, not standardization, of high quality MBES programs and their accreditation, and for the training, continuing education and certification or even registration of professionals working in the health care systems. Adherence to these guidelines, which ultimately should be recognized in all 45 Bologna signatory countries, will ensure mobility in education and employment as well as proper management of health care technologies, an important aspect with regard to the necessary safety of patients. Targets for the dissemination of results will be the European universities, political decision makers at European and national levels, the European Accreditation Council as well as the accreditation councils of all European countries, European quality assurance and accreditation agencies, health care providers and students. Keywords— BME education, BME accreditation, CE certification, CE continuing education.
I. INTRODUCTION
Though harmonizing the European education systems and making European education policies more dynamic is high on the list of European political priorities, there are strict regulations and limitations on what is possible and on who can decide which way to go within the EU. The 1997 Amsterdam Treaty [1] clarifies which activities of the European Commission in the area of education are allowed, in cooperation with the member countries, in order to reach the common goal of high quality educational systems in all regions of the EU. The treaty emphasizes the European dimension of education, but nevertheless insists on subsidiarity, clearly limiting the power of the Union and leaving full and unrestricted responsibility for the structuring of educational systems, as well as for curricula, with the individual member states. The responsibility of the Union is to support and supplement the activities of the member states in the area of education. The treaty explicitly does not allow the harmonization of national laws and administrative procedures by unilateral decisions of any European entities. Thus, the implementation of the European Higher Education Area cannot be decided or dictated by the European Commission; it can only be achieved by European bodies that include all member states and that are able to reach unanimous decisions. Therefore, the Bologna process, i.e. the realization of the European Higher Education Area through the consensus of all 45 Bologna signatory states, is very important and needs to be fully supported by the Medical and Biological Engineering and Sciences (MBES) community [2, 3, 4]. For this purpose, a Europe-wide participation project, BIOMEDEA, was launched in 2004 by Joachim Nagel in cooperation with Dick Slaaf (University of Utrecht) and Jan Wojcicki (International Centre of Biocybernetics of the Polish Academy of Sciences) as well as colleagues from 32 European countries, aiming at contributing to the realization of the European Higher Education Area in MBES [5]. The project coordinates previously started initiatives, using the available synergies to facilitate the implementation of the European Higher Education Area in the field of Medical and Biological Engineering and Sciences for the benefit of the universities, the students and, last but not least, the European people. The project aims at establishing Europe-wide consensus on guidelines for the harmonization of high quality MBES programs, their accreditation, and for the certification or even registration and continuing education of professionals working in the health care systems. Improved quality assurance of MBES education and training is a vital component and is also directly related to the issues of health care quality.
It offers the advantages of providing confidence for the employer that the employee has the necessary education, training and responsible experience, and reassurance for the users of the service, meaning the patients, that those providing the service are effective and competent. Adherence to these guidelines will ensure mobility in education and employment, and improved competitiveness of the European biomedical industries. Thinking about how to realize the requests for employability, mobility, compatibility, and quality assurance, it becomes obvious that the most urgent issues in this context are to harmonize and to generate agreement on the recognition and transparency of qualifications, specifically on the accreditation of educational programs, training, continuing education, the certification of individuals, and the regulation of safety-critical professions.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1118–1121, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

II. ACHIEVEMENTS, TRENDS AND DEVELOPMENTS
Several studies have been published on the recent changes in the European national educational systems in general, which indicate mostly positive influences on the quality of education. Information on the post-Bologna developments of education, training and accreditation in the area of Biomedical Engineering has been gathered in the IFMBE White Paper on the status of MBES in Europe, organized and edited by Joachim Nagel [6]. Information has been obtained about the situation and practice in 28 European countries; the White Paper also contains an overview, written by Joe Barbenel (University of Strathclyde, Glasgow, UK), attempting to compare and contrast the different national models. When looking at all the information it is necessary to bear in mind two important constraints. The field of Biomedical Engineering is changing and growing rapidly, which means that some of the information was out of date almost as soon as it was written. The sections on the different countries also show the enormous national variability in both educational practice and nomenclature that makes comparison difficult. It is to be hoped that the implementation of the ideas and aims of the Bologna Declaration will lead to more consistency and simplicity in the future. BIOMEDEA, the European participation project preparing Medical and Biological Engineering and Sciences for the EHEA, is moving ahead very successfully with its goal to harmonize MBES education and training in Europe.
There have been three meetings so far, in Eindhoven (2004, http://www.bmt.tue.nl/biomedea), Warsaw (2005, http://hrabia.ibib.waw.pl/Biomedea) and Stuttgart (2005, http://www.biomedea.org), which dealt with Biomedical Engineering (BME) curricula, the training, certification and continuing education of clinical engineers, and the accreditation of BME programs in Europe.

The Eindhoven meeting consisted of 4 workshops:
1. The Undergraduate Biomedical Engineering Curriculum, with the goals to delineate the core topics in biomedical engineering science that all BME students should understand, the biomedical engineering science topics underpinning areas of BME specialization, and the critical skills expected of all undergraduate biomedical engineers.
2. The Biomedical Engineering Master Curriculum. The goals were to delineate, at the graduate level, intellectual underpinnings for the future of biomedical engineering, integration of the engineering sciences and modern biology, engineering opportunities in the hospital, and critical skills.
3. Educational methods and best practices. The goals of the workshop were to discuss educational methods and to illustrate best practices adapted to teaching biomedical engineers how to solve clinical and biological problems.
4. Training. The goal of this part of BIOMEDEA was to gather the information necessary to write a survey on BME/CE training in Europe and to establish guidelines for the minimum requirements for the training of clinical engineers in Europe.

The Warsaw meeting included workshops on:
1. Guidelines for the accreditation of BME programs in Europe: why do we need them and what should they specify? The goal of the workshop was to specify the general requirements of the guidelines.
2. BME/CE training – a European training scheme, with the goal to establish a European Protocol for the formation and training of biomedical or clinical engineers working in a hospital environment.
3. BME core competencies and specializations that should be recommended in the guidelines for the accreditation of BME programs in Europe.
4. Guidelines for curricula, specifying a flexible framework of BME curricula as a guide for the accreditation of BME programs.
5. Basic competencies in engineering/science, biology and medicine, and general competencies including “soft skills”, as minimum output requirements for accredited BME programs.

The Stuttgart meeting included workshops on:
1. Criteria and Guidelines for the Accreditation of Biomedical Engineering Programs in Europe. Agreement was achieved with regard to Bachelor and Master programs. It was discussed whether there should be an accreditation of PhD programs as requested by the Bologna countries.
2. European Protocol for the Training of Clinical Engineers.
3. European Protocol for the Certification of Clinical Engineers.
4. European Protocol for the Continuing Education of Clinical Engineers.
5. IFMBE International Register of Clinical Engineers.
6. Patient Safety – Biomedical/Clinical/Hospital Engineering Providing a Safe Health Care Environment.
The third BIOMEDEA meeting in Stuttgart in September 2005 featured an international symposium on an important issue of quality assurance in biomedical/clinical engineering: patient safety. The symposium was co-sponsored by the University of Stuttgart and the International Federation for Medical and Biological Engineering (IFMBE). It was organized in cooperation with the World Health Organization (WHO) and endorsed by the European Alliance for Medical and Biological Engineering and Science (EAMBES). The meeting was dedicated mainly to the development of a European scheme for the certification and continuing education of clinical engineers and sought cooperation with the responsible bodies in other parts of the world, including the American College of Clinical Engineering (ACCE), with the goal of achieving global harmonization of the education and certification of biomedical/clinical engineers. 81 European academic institutions participated in the first three meetings and, as a result, there has been agreement on the Criteria and Guidelines for the Accreditation of Biomedical Engineering Programs in Europe [7] and a European Protocol for the Training of Clinical Engineers [8]. European Protocols for the Certification and Continuing Education of Clinical Engineers have been discussed and are currently being written. In order to realize the principles of the European Protocol for the Training of Clinical Engineers, adequate structures for the management of the training scheme, as shown in Fig. 1, must be put into place. At the general request of the participants in the first three workshops, three additional meetings are being planned for 2007/2008.

III. FUTURE DEVELOPMENTS
The expected results of BIOMEDEA will be a white paper on BME education, educational methods and best practices in Europe, protocols for the formation, training, certification and continuing education of clinical engineers in Europe, and guidelines for the accreditation of BME programs in Europe. The International Federation for Medical and Biological Engineering (IFMBE), the main sponsor of BIOMEDEA, will, in cooperation with WHO and as a part of the initiatives of the World Alliance for Patient Safety and the Global Alliance for the Health Workforce, set up a global registry of certified clinical engineers with the goal of international mutual recognition of certification, and will strive towards making certification and/or registration of clinical engineers, based on the same criteria, mandatory everywhere in the world. This will substantially improve the mobility of clinical engineers and will also contribute to increasing patient safety. The primary goal of BIOMEDEA remains, however, to prepare the BME European Higher Education Area and to find
Fig. 1. Structure for the management of clinical engineering training and certification in Europe.
recognition by the national governments throughout Europe, the European Union and the European bodies that are the main players in engineering education and accreditation.

IV. GLOBAL ASPECTS
In the European Higher Education Area, MBES is not isolated from the rest of the world. The BIOMEDEA meetings have attracted international interest and participation. The global exchange of experiences and the harmonization of MBES education and training, specifically in the field of clinical engineering, not only contribute to the mobility of students, teachers and those employed in the various MBES professions, but also to the improvement of the health care systems and specifically patient safety. The experience gathered in these international activities will in the future make it possible to further develop the guidelines and protocols for biomedical and clinical engineering education and training, for the benefit of the discipline and the well-being of people not only in Europe.
V. CONCLUSION

The evolving European Higher Education Area will substantially influence the development of medical and biological engineering and sciences. These developments will be beneficial to the biomedical engineering profession and to society as a whole. The biomedical engineering community must grasp this opportunity through focused national and European actions and cooperation with the relevant bodies.

ACKNOWLEDGMENT

The BIOMEDEA project has been made possible by the valuable contributions of all participants and organizers of the workshops, and by the generous support of many universities and societies/organizations.

REFERENCES
1. Treaty of Amsterdam, http://www.eurotreaties.com/amsterdamtext.html
2. The Bologna Declaration of 19 June 1999, http://www.bolognaberlin2003.de/pdf/bologna_declaration.pdf
3. From Bologna to Bergen, http://www.bologna-bergen2005.no/
4. ECTS User's Guide, http://www.hrk.de/de/download/dateien/ECTSUsersGuide(1).pdf
5. Nagel J.H. (2002) Biomedical Engineering in a European Higher Education and Research Area. Lecture Notes of the ICB Seminars, International Centre of Biocybernetics of the Polish Academy of Sciences, Warsaw, pp. 11-35.
6. Nagel J.H. (Ed), Biomedical Engineering Education in Europe – Status Reports, http://www.biomedea.org/Status%20Reports%20on%20BME%20in%20Europe.pdf
7. Criteria and Guidelines for the Accreditation of Biomedical Engineering Programs in Europe, http://www.biomedea.org/Documents/Criteria%20for%20Accreditation%20Biomedea.pdf
8. European Protocol for the Training of Clinical Engineers, http://www.biomedea.org/Documents/European%20CE%20Protocol%20Stuttgart.pdf

Corresponding author: Joachim H. Nagel, Department of Biomedical Engineering, University of Stuttgart, Seidenstrasse 36, D-70174 Stuttgart, Germany. Email: [email protected]
Biomedical Engineering Education, Virtual Campuses and the Bologna Process
E.G. Salerud and Michail Ilias
Department of Biomedical Engineering, Linköping University, Linköping, Sweden

Abstract— Higher education in Europe can be divided into before and after the Bologna Declaration, the most revolutionary process in modern education. Biomedical engineering, a field that has emerged over the last 40 years, strongly interdisciplinary, fragmented and lacking international coordination, may benefit from this harmonization process. An early initiative, BIOMEDEA, has contributed by proposing biomedical engineering foundations for building a common curriculum among higher education institutions. A common curriculum would presumably contribute to student and teacher mobility, certification and accreditation, and as a consequence promote international employability. The virtual campus action extends and adds value to already existing educational exchange networks such as Erasmus, which are important for student mobility and for educational harmonization and recognition. A virtual education dimension is added to European co-operation, encouraged through the development of new organisational models for European institutions that promote virtual mobility and recognition. Virtual campuses may be able to bridge gaps in national BME curricula, in line with the effort towards a consensus on European guidelines for harmonization. The evaluation of the e-curricula conforms with the roadmap of BME courses as defined by BIOMEDEA. Most courses are classified as second-cycle courses at Master level, supporting the view that studies in BME can be a continuation from cycle one. The learning environment and the students' learning outcomes point towards a strong teacher-centred approach to learning.
Transparency at all levels is low, a factor that might hinder the recruitment of potential students to a programme, especially students with working experience and an international background. To fulfil the Bologna Declaration and other steering documents for higher education in an expanding Europe, there are still tasks to be solved regarding recognition, legalisation, pedagogical issues and employability that await a harmonized solution.
Keywords— Biomedical engineering, Bologna, harmonization, virtual campus
I. INTRODUCTION

In 1999, the most far-reaching reform in higher education in Europe took off with the Bologna Declaration. It was signed by 29 European countries in an action programme with a clearly defined goal: "to create a European space for higher education in order to enhance the employability and mobility of citizens and to increase the international competitiveness of European higher education". To reach this goal, a number of objectives [1] were specified:
• the adoption of a common framework of readable and comparable degrees
• the introduction of undergraduate and postgraduate levels in all countries, with first degrees no shorter than 3 years and relevant to the labour market
• ECTS-compatible credit systems also covering lifelong learning activities
• a European dimension in quality assurance, with comparable criteria and methods
• the elimination of remaining obstacles to the free mobility of students and teachers
Further, a European Higher Education Area (EHEA) should be established by 2010, now involving more signatories and focusing on curricular reforms and quality assurance. The aim of the EHEA is to provide citizens with choices from a wide and transparent range of high-quality courses and to let them benefit from smooth recognition procedures. Biomedical Engineering (BME) constitutes a field where the need for harmonisation and comparability is readily seen. Although BME has been established in Europe for more than 40 years, it still has not gained recognition across European countries or internationally. Shortages of funding opportunities, fragmentation of educational and research programmes, and a lack of international coordination between programmes are some of the unfortunate features characterising the field.

II. BIOMEDICAL ENGINEERING EDUCATION

A. Biomedical engineering as a subject

Undergraduate degrees in BME have been granted for many years. As an emerging field, biomedical engineering has been interdisciplinary, with specialization occurring after completion of an undergraduate degree in a more traditional discipline of engineering or science. Biomedical engineers are expected to be equally knowledgeable in engineering and the biological sciences. Comparing already established programmes reveals no defined common core, no set of universally required fundamental courses, and no agreed curriculum,
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1122–1125, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
although in many programmes traditional courses in biomechanics and systems physiology are quite common. Full recognition of BME as a "subject" of its own is indispensable; it does not exist today and has to be established in the near future [2, 3]. However, the lack of agreement at the course level does not mean that there is no similarity at the content level. In countries such as the US, recognition or accreditation is achieved through a thorough evaluation by the Accreditation Board for Engineering and Technology, Inc. (ABET), a very demanding procedure necessary for a BME programme to exist and be esteemed. The Bologna process aims at creating convergence and is thus not a path towards the "standardisation" or "uniformisation" but towards the "harmonization" of European higher education. The fundamental principles of autonomy and diversity are respected, preserved and recognized. The field of BME is progressing rapidly into new areas, quite often fusing technologies and methods from many different domains. The BME domain demands that students develop multidisciplinary skills and knowledge and a capacity for life-long learning. Embracing pedagogical renewal as part of new or revised curricula in biomedical engineering education has thus been demanded. Harmonization with the Dublin descriptors at programme level is therefore inevitable [4].

B. BIOMEDEA – defining the curriculum

BME, an emerging and evolving field, received special attention in the harmonization with the Bologna Declaration early on because of its diversity and lack of a common curriculum. More than 200 higher education institutions in Europe already offer educational programmes in Biomedical Engineering at all academic levels, but without any international coordination of contents and required qualifications.
A harmonization and accreditation project, BIOMEDEA [5], was initiated by Joachim Nagel in 2001 and strongly supported by the IFMBE. The aim of the project was to establish Europe-wide consensus on guidelines:
• for the harmonization of high-quality BME programmes, their accreditation and recognition
• for the certification or even registration and continuing education of professionals working in the health care systems
Recognition among higher education institutions is an important factor in ensuring student and teacher mobility, while accreditation mostly has an impact on transnational employability. To improve human health and quality of life, it is of vital interest that the employer is confident that the
employee has the necessary education, skills, training and responsible experience, and is capable of managing the technology in service. In 2005 the BIOMEDEA project published guidelines recommending programme modules spanning the BME curriculum. The modules were attributed to:
• BME foundations
• BME in-depth topics
• Mathematics
• Natural Sciences
• Engineering
• Medical and Biological foundations
• General and social competencies
With the recommendations of BIOMEDEA, the IFMBE, its national member societies, higher education institutions and stakeholders are able to comply with the international harmonization of higher education, to show transparency and recognition, and to support mobility for education, training and employment.

C. Virtual campuses - EVICAB

Virtual campuses: Mobility of students and teachers does not always imply physical transfer. Information and communication technologies (ICT) may contribute to the quality of education and training and to Europe's progress towards a knowledge-based society. They may also have an impact on the harmonization of curricula and joint degrees. The eLearning Initiative and Action Plan, proposed by the EU, encourages co-operation, networking and exchange of good practice at a European level. It also has the potential to realize the vision of technology serving lifelong learning. One action is the European virtual campuses, and in particular the European Virtual Campus for Biomedical Engineering (EVICAB). A virtual education dimension is added to European cooperation, encouraged through the development of new organisational models for European higher education institutions, the virtual campuses, creating virtual mobility and recognition. These will add value to already existing exchange programmes like Erasmus, Comenius, etc. The objective of EVICAB is to develop, build up and evaluate sustainable, dynamic solutions for virtual mobility and e-learning that, according to the Bologna process [6],
• mutually support the harmonization of the European higher education programmes
• improve the quality of and comparability between the programmes
• advance the post-graduate studies, qualification and certification
A virtual European curriculum in BME cannot be created without first evaluating the existing curricula and comparing them with new or existing e-curricula that offer good possibilities to strengthen harmonization, recognition and quality assurance. The survey is based on the BIOMEDEA curriculum proposal, extending it into the virtual domain. The status report produced within the BIOMEDEA framework [7] showed that BME programmes are often offered by single institutes, so the programmes cannot cover all the subfields of BME needed in the education of highly specialized engineers and physicists. Virtual campuses may therefore be able to bridge gaps in national BME curricula, in line with the effort towards a consensus on European guidelines for harmonization.

Evaluation: The evaluation was conducted through a manageable survey involving as many higher education institutions as possible, identified in Biomedical Engineering Education in Europe – Status Reports [7] and through a wider search on the World Wide Web (WWW). Basic course information was collected on the number of existing courses and of courses planned to run within the next couple of years. The major part of the survey was designed to elucidate the degree of course compatibility with the Bologna Declaration. The following general trends could be discerned:
• the language of instruction is often the native language of the responsible university
• the courses most often belong to the 2nd cycle of qualification
• the ECTS credit system is widely adopted
The BIOMEDEA definitions seem to apply to the contents of the surveyed courses to a certain degree. The results showed that:
• the majority of the foundations and modules of BME as defined by BIOMEDEA are covered by the pooled courses
• there is still a need to define other topics and descriptions not covered by BIOMEDEA
The survey also asked which resources best support student learning outcomes, as judged by those offering the courses. The following resources were pointed out, in order of priority:
• in most courses the tutors were stated to be the most important resource
• students' own work, laboratories and demonstrations were highly valued
A majority of courses were, according to the respondents, subject to measures assuring course quality in practice. The most common measures are:
• feedback from students
• internal quality controls at university level
• peer review and internal work at institutional level
• controls by external bodies
• use of field expertise in an educational context, assuring the quality of teaching or tutoring

Transparency issues were also addressed, revealing that:
• course outcomes are most often publicly available, the most frequent means of publication being the WWW
• outcomes are as a rule delivered directly to the students, most often by means of handouts
Finally, the survey tried to shed light on the benefits of distance courses for lifelong learning and student mobility between countries. The results showed that:
• few BME professionals take advantage of distance education to support their continuing education
• only a limited number of foreign students attend distance courses

III. CONCLUSIONS
In the EUA document "Trends IV: European Universities Implementing Bologna" [8], evidence is found that the two-cycle implementation has been achieved at national level in most countries. Positive reports are also available regarding the curricular reform focusing on learning outcomes. The evaluation of the e-curricula is consistent with the report, since existing and planned courses seem to cover the proposed roadmap of BME courses as defined by BIOMEDEA. Most of the courses are classified as second-cycle courses at Master level, supporting the existence of a first cycle and the view that studies in BME can be a continuation from cycle one. Regarding pedagogical viewpoints and students' learning outcomes, at both programme and course level the teacher-centred approach to learning still dominates. However, students' own work, labs and demonstrations could in this context support a more student-centred approach. All educational centres reported working with quality assurance issues at a local, department level. The routines for external quality assurance are not clear; some centres showed willingness to comply with the European Network for Quality Assurance in Higher Education (ENQA) directives, but without a declared roadmap and with a lack of transparency.
Transparency at all levels is low, a factor that might hinder the recruitment of potential students to a programme, especially students with working experience and an international background. Virtual campuses add value to existing exchange networks for the mobility of students and teachers. To fulfil the Bologna Declaration and other steering documents for higher education in an expanding Europe, there are still tasks to be solved regarding recognition, legalisation, pedagogical issues and employability that await harmonized solutions.
ACKNOWLEDGMENT

The project is funded by the European Commission under the programme Education and Training.

REFERENCES
1. The Bologna Process, http://ec.europa.eu/education/policies/educ/bologna/bologna_en.html
2. Linsenmeier R.A. (2003) What makes a biomedical engineer? IEEE Eng. Med. Biol. Mag. 22(4):32-38.
3. MBES position paper, http://www.eambes.org/docs/MBES-position-paper-final.pdf
4. Dublin descriptors, http://www.jointquality.nl/
5. Criteria for the Accreditation of Biomedical Engineering Programmes in Europe, http://www.biomedea.org/documents.htm
6. EVICAB, http://www.evicab.eu/
7. Biomedical Engineering Education in Europe – Status Reports, http://www.biomedea.org/documents.htm
8. Trends IV: European Universities Implementing Bologna, http://www.eua.be/fileadmin/user_upload/files/EUA1_documents/TrendsIV_FINAL.1117012084971.pdf

Author: E. Göran Salerud
Institute: Department of Biomedical Engineering, Linköping University
City: Linköping
Country: Sweden
Email: [email protected]
European Virtual Campus for Biomedical Engineering EVICAB
J.A. Malmivuo and J.O. Nousiainen
Ragnar Granit Institute, Tampere University of Technology, Tampere, Finland

Abstract— A curriculum on Biomedical Engineering is being established on the Internet for European universities under the EVICAB project. The curriculum will be open access and available free of charge, and therefore available worldwide. EVICAB will make high-quality education available to everyone and facilitate the development of the discipline of Biomedical Engineering.
Keywords— e-learning, biomedical engineering
I. INTRODUCTION

Biomedical Engineering is a multidisciplinary field of science covering a large number of sub-specialties, all of which are developing very fast. It is therefore difficult for any university, especially a smaller one, to produce and update high-quality teaching material in all aspects of the field. Globalization encourages student mobility between universities; the BIOMEDEA project facilitates this by harmonizing the study programs of European universities. The Internet is increasingly used as a platform for educational material and student administration, and its use makes geographical distances disappear. All this gives strong reasons to develop an education program on the Internet for the use of all European universities. This is the basis for the project European Virtual Campus for Biomedical Engineering – EVICAB.

II. EVICAB PROJECT

The EVICAB project is funded by the European Commission, Education and Training. The objective of the project is to develop, build up and evaluate sustainable, dynamic solutions for virtual mobility and e-learning that, according to the Bologna process,
(i) mutually support the harmonization of the European higher education programs,
(ii) improve the quality of and comparability between the programs, and
(iii) advance the post-graduate studies, qualification and certification.

These practices will be developed, piloted and evaluated in the field of biomedical engineering and medical physics. An important goal is that these approaches and mechanisms for virtual e-learning can be extended and transferred from this project to other disciplines, to promote virtual student and teacher mobility and credit transfer between European universities.
III. EVICAB CONSORTIUM

EVICAB is coordinated by the Ragnar Granit Institute of Tampere University of Technology. Professor Jaakko Malmivuo serves as Director of the project and Assistant Professor Juha Nousiainen as Coordinator. The other partners are:
- Mediamaisteri Group Ltd, Tampere, Finland
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
- Biomedical Engineering Center, Tallinn University of Technology, Tallinn, Estonia
- Institute of Biomedical Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Department of Biomedical Engineering, Brno University of Technology, Brno, Czech Republic
EVICAB welcomes interested institutes to join as associate partners. Associate partners may participate in the main meetings, receive all relevant information, and use the EVICAB material even before it is in public use. We also hope that the associate partners will actively participate in producing teaching material for EVICAB.

IV. THE IDEA OF EVICAB

The fundamental idea of EVICAB is that it offers a platform for a Biomedical Engineering curriculum on the Internet. Teachers who are experienced and recognized experts in their field are encouraged to submit full e-courses, course modules and other teaching material to EVICAB. The material may come in many different formats
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1115–1117, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Fig. 1. EVICAB is built from the BME programs of the partner universities A-E. Other universities (X) may also contribute.

Fig. 2. Universities take one or more BME courses or the whole curriculum from EVICAB to complete their study programs.
like video lectures, PowerPoint slides, PDF files, Word files, etc. EVICAB is not a university: course and student administration continue in the universities as usual. The teacher responsible for a course or study program may select courses from EVICAB for the BME curriculum of the university. Students study a course either as an ordinary lecture course, with the EVICAB material supporting the lectures, or partially or entirely from EVICAB. Students, and indeed anyone outside the university, may study EVICAB courses to extend their competence in Biomedical Engineering. EVICAB has an Administrative Board which administers the EVICAB curriculum. The board accepts courses of sufficient scientific, pedagogical and technical quality, and may also invite experts to provide course material. Courses of low quality (out of date, of lower quality than competing courses, or not appreciated by EVICAB users) will be deleted. Active feedback from EVICAB users, both teachers and students, is essential. All this will be realized by means of a dynamic quality assurance system.
V. THE ROLE OF EVICAB IN E-LEARNING

In its completed form, EVICAB will have an impact on all main levels of the education process:

For students:
- virtual mobility
- e-courses

For teachers:
- teaching material and other resources
- e-learning methods
- support for e-course development

For the study programs:
- improved quality
- harmonization of the degree studies

General:
- a model applicable also to other disciplines

VI. MOODLE PLATFORM

In EVICAB, the Moodle program has been selected to serve as the platform for the learning environment and learning management. Moodle is an open-source program and therefore suits the EVICAB philosophy of free access. Moodle is also a very versatile program, offering a wide variety of tools for various pedagogical and administrative tasks. However, other open-source platforms may also be used.

VII. INTERNET EXAMINATION

Another successful innovation and application in our e-learning activities has been the Internet examination. In the Internet examination, the students take the exam in a computer class. This may be performed simultaneously
in several universities, so the students do not need to travel to the location where the course was given. The students open the Moodle program at the time of the examination and find the examination questions there. We usually allow the students to use all the material available on the Internet. This requires that, instead of asking "What is ...", the questions be formulated so that answering them shows that the student has understood the topic and is able to apply the information. The only thing not allowed is communication with another person, via e-mail etc., during the examination.

VIII. WHY PROVIDE COURSES TO EVICAB?

EVICAB will become an important teaching and learning medium only if it is available free of charge and worldwide. As a consequence, the learning material should be provided free of charge. Why should experienced and competent teachers provide such material without payment or royalties? Acceptance of a course by EVICAB will be a certificate of quality. Worldwide distribution to university students will give exceptional publicity to the author and his or her university. All this will facilitate the sales of traditional teaching material produced by the course author, and it will attract international students from all over the world to apply to the author's home university. Our experience so far has shown these expectations to be realistic. The Internet has dramatically changed the distribution of information: distribution is worldwide, real-time and free of delivery costs. The technology also supports a wide variety of attractive presentation modalities. All this ensures a wide audience and publicity for material on the Internet. For
instance, Wikipedia serves as a successful example of this new era of information delivery. On the basis of this publicity it is possible to create markets also for traditional printed educational material.

IX. CONCLUSION

In the future, teaching and learning will mainly be based on the Internet. The ideas and technology of EVICAB are not limited to Biomedical Engineering but may be applied to all fields and levels of education. EVICAB will be a forerunner and show the way to more efficient, high-quality education.
ACKNOWLEDGMENT Financial support from the European Commission and the Ragnar Granit Foundation is acknowledged.
REFERENCES
1. EVICAB, http://www.evicab.eu
2. Moodle, http://www.moodle.fi/evicab/moodle/
3. Malmivuo J, Plonsey R (1995) Bioelectromagnetism, http://www.tut.fi/~malmivuo/bem/bembook/

Author: Jaakko Malmivuo
Institute: Ragnar Granit Institute
Street: Korkeakoulunkatu 3
City: Tampere
Country: Finland
Email: [email protected]
How New and Evolving Biomedical Engineering Programs Benefit from EVICAB project
A. Lukosevicius, V. Marozas
Kaunas University of Technology / Biomedical Engineering Institute, research associates, Kaunas, Lithuania

Abstract— Problems of new and evolving Biomedical Engineering (BME) programs in European universities are collated with the opportunities and benefits offered by the EVICAB project. Benefits of the European Virtual Campus for Biomedical Engineering (EVICAB) for new course developers, administrators, teachers and students are presented and illustrated by examples.

Keywords— Biomedical engineering; programs; teaching; learning; virtual environment.

I. INTRODUCTION

Although biomedical engineering educational systems have been under development for 40 years, interest in these programs and the pace of their development have accelerated in recent years [1]. This acceleration is a natural consequence of the rapid evolution of biomedical engineering science and technologies and of the rising sophistication of the equipment used today in medicine and biology. The pace of development poses specific challenges both for programs that are just starting (especially in new European Union (EU) member states) and for established, long-running programs that seek better quality, modernization and international harmonization within the EU. Today more than 100 universities and colleges offer BME education programs in the EU. The wide scope of educational goals and the multidisciplinarity of BME as a field of science and technology make it difficult to consolidate and harmonize education programs under common international criteria. Therefore activities towards EU accreditation of BME programs have been undertaken [2, 11]. An additional challenge for education programs is the demand for unity of research and education, outlined in the form of national science education standards (for example, in the USA [3]). Universities experience significant problems in keeping multidisciplinary BME programs at an appropriate level, especially when establishing new programs and in their initial stages. Since one of the missions of the EVICAB project is to create a favorable virtual environment for program start-up and modernization, the present paper deals first with the problems that new and evolving BME programs face, second with the means and tools that the EVICAB project offers for solving them, and third with the benefits of the virtual environment that EVICAB offers, illustrated by case examples. These three parts define the structure of the presentation below.

II. PROBLEMS OF NEW AND EVOLVING BME PROGRAMS

Problems of new BME education programs begin with the choice of proper course composition, curricula and course content. On the one hand, a program should preferably meet the high quality criteria for EU accreditation [2]; on the other, it should include good courses delivered by high-level specialists. Since BME education is at once multidisciplinary and high-technology based, covering the program with sufficiently high-level courses from all the necessary disciplines is difficult. Small countries, especially those that have newly entered the EU, usually have little experience in the BME field and lack long-term traditions and developed industries. Programs therefore often suffer from top-down fitting of courses to the limited possibilities of the teachers and the teaching environment, which tends to produce a wide variety of one-sided or over-specialized programs whose quality is determined by local qualifications and facilities. The lack of good teaching materials, including demos, interactive practical work, textbooks and software, is also critical for new programs. Library holdings do not cover the needs; a specific problem exists in post-Soviet countries, where libraries hold books in Russian that are practically unreadable for new-generation students. Collaboration, mobility and a virtual environment for program development are therefore of vital importance. Apart from technological problems, conceptual teaching and learning problems arise in new program development [9, 10]. BME is a field where special teaching and learning methods are a necessity, since it covers complicated issues of physiology, anatomy, tissue engineering, bioelectromagnetism, biophysics, sensors and transducers, signal and image processing, visualization, modeling of complex systems, direct and inverse problems of 3D systems and so on. Constructionism, constructivism and social constructionism, concepts that promote self-construction of knowledge and the sharing and refinement of knowledge within an appropriate environment, are conceptual challenges to cope with [4, 7, 8].
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1126–1129, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
How New and Evolving Biomedical Engineering Programs Benefit from EVICAB Project

Table 1. Classification and ranking of main problems for new and evolving BME education programs (rating: new program / evolving program)

1. Curricula composition in accordance with EU criteria for BME programs (5 / 3)
2. Re-engineering of programs in accordance with the Bologna process (2 / 2)
3. Covering the wide, multidisciplinary scope of the BME program at a sufficiently high level (4 / 3)
4. Lack of teaching materials, textbooks, demos and interactive labs (5 / 3)
5. Lagging behind the rapid development of BME technologies in the world (4 / 4)
6. Entrance requirements for the BME master program: need for appropriate, flexible equalization courses (4 / 2)
7. Keeping research and education unity: translating emerging technologies into the studies (3 / 3)
8. Sharing efforts and resources in the preparation of courses, especially advanced ones (5 / 3)
9. Balancing the core/fundamental courses with application- and emerging-technology-oriented ones (3 / 2)
10. Internationalization and mobility of students and teachers, recognition of credits (5 / 3)
11. Adaptation of the program for lifelong learning and part-time studies (4 / 2)
12. Introduction of modern teaching and learning paradigms: constructionism, problem orientation, self-evaluation, etc. (3 / 2)
13. Need for advice, discussion and collaboration in course and program development (5 / 2)
14. Involvement of the best lecturers worldwide (4 / 3)
Problem ratings used in Table 1: 1 – practically no importance; 2 – little importance; 3 – significant importance; 4 – high importance; 5 – very high importance.

To a large extent, the problems typical for new programs are also valid for existing programs that are seeking updates, modernization and accreditation in the EU (in short, evolving programs). Among them one can point to the transition to the Bologna process, led by the Sorbonne Declaration (including its objectives and statements), the Salamanca resolution and the 2001 Prague follow-up meeting, which defined the two-cycle program [5]. This problem is
particularly important for well-established BME programs in German-speaking countries. The list of main problems is presented in Table 1, together with an approximate ranking of problem importance for new and evolving programs. Evidently, new programs experience the greatest difficulties, since in addition to the listed problems, practical questions of management, rooms and other facilities, the legal frame of the program, and the motivation and involvement of staff must be solved simultaneously. This entirety of problems calls for an appropriate environment to be solved successfully.

III. METHODS AND TECHNOLOGIES FOR NEW PROGRAM SUPPORT BY EVICAB
The contribution possibilities of the EVICAB project for new and evolving BME programs lie in the mission and philosophy of the project. The openness and inherent evolvement of the project itself respond to the evolvement of the dynamic BME discipline and of the corresponding study programs, especially new ones. The project applies the open coordination method used widely in the EU (one evident example is the implementation of the EU Lisbon strategy) together with concrete assistance. The main methods and technologies used by the EVICAB project to support new and evolving BME programs are as follows (the corresponding problems from Table 1 are listed in brackets):

The EVICAB project takes into account the accreditation criteria for EU BME programs developed by the BIOMEDEA project [2] concerning program course composition, curriculum and course content. New programs are oriented towards these criteria from the beginning, and the further path of evolvement towards future EU accreditation is supported. (Response to problems No. 1, 2, 10, 13.)

The project integrates multidisciplinary BME courses, creating a virtual environment that enables the choice of necessary high-level, especially advanced, courses which are usually not affordable for new program organizers due to a lack of specialists, experience, facilities and other practical reasons. In many cases this is a chance to fill painful gaps that affect the overall quality of the program. (Response to problems No. 3, 4, 7, 14.)

Advanced teaching and learning concepts and technologies available in the MOODLE environment [4, 6] and beyond [7], including problem orientation, self-creation of knowledge structures, interactivity, self-assessment and Internet examination, are offered together with implementation examples in particular EVICAB courses. (Response to problems No. 5, 7, 12.)
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
A. Lukosevicius, V. Marozas
Teaching and learning materials, including e-books, interactive models, demos, textbooks and other information, are offered, thus covering the painful lack of teaching resources for new BME programs and for programs seeking updates. (Response to problems No. 4, 5, 8.)

The flexibility of the virtual learning environment (VLE) used by EVICAB eases the translation of research advancements and emerging technologies into the teaching and learning process, due to its openness, effective technologies for course updates, and the involvement of the best competences in both research and education. (Response to problems No. 3, 7, 9.)

EVICAB creates an environment for sharing efforts in the development of programs and courses. Since advanced course development is expensive and in many cases not affordable for national universities, especially in the new EU member states, the sharing of efforts and competences offered by EVICAB is of vital importance. The environment creates the possibility for all participants to contribute and to use the best competences wherever they are in the EU. (Response to problems No. 3, 7, 10, 13.)

The virtual European campus for BME is a favorable environment for the internationalization of program and course development as well as of teaching and learning. Open discussions, encouragement of contributions to course development for teachers, mobility and course choice possibilities for students, and an environment for communication are supported by EVICAB. New programs, especially in small countries, accept only small numbers of students (10–20); therefore communication is vital. (Response to problems No. 10, 13, 14.)

Generally speaking, new program developers and students are supported by EVICAB in several ways simultaneously: conceptually, methodically, technologically, and with the supply of teaching material. Its complexity and openness make EVICAB an evolving environment favorable for the absorption of new findings and developments whenever they occur.

IV. RESULTS, BENEFITS AND CASE EXAMPLES

The results and benefits of the EVICAB project can be classified for developers, lecturers/teachers, students and administrators. The benefits gradually become more evident as the project develops. Below, those benefits are presented which can already be illustrated by case examples.

Benefits for developers: harmonization of the program and curriculum with EU accreditation criteria and the direction of the Bologna process; filling the gaps caused by the lack of national competences with the best internationally recognized courses provided by outstanding lecturers; improving local
course content through collaboration within the project framework; and getting support in the form of teaching materials, methodical and conceptual advice, and good-practice examples. The project promotes the use of modern technologies and tools for course development. A new BME master program was started at Kaunas University of Technology (KTU) in 2003; it was the first BME program in Lithuania. The developers experienced many difficulties and problems, because specific experience in the BME field was very limited in the country. EVICAB support here was essential. The project made it possible to gain experience in program shaping and modernization and in the application of modern teaching and learning methods, and it also made valuable teaching materials available: the e-book on Bioelectromagnetism opened for free use by Prof. Jaakko Malmivuo, valuable support and supervision of Lithuanian students by Prof. Goran Salerud, and contributions by other participants of the project. Without this EVICAB support, the successful start and running of the program would hardly have been possible.

Benefits for teachers/lecturers: gaining experience from colleagues and good examples; use of open teaching materials; the possibility to concentrate on and improve one's own competence in a favorite field of BME while relying on EVICAB courses when other competences are needed; the possibility to contribute one's own course and materials to EVICAB; sharing efforts in the update and development of new courses; participation in discussions; self-evaluation of course quality; and gradual adaptation of a course to the BME accreditation criteria. Teachers at KTU have started to use the MOODLE virtual learning environment, aligned new courses on adaptive biosignal processing, biomedical engineering methodology and clinical engineering with the accreditation requirements, prepared computer-aided interactive laboratory works and practices, and developed a virtual instrument laboratory.
New teaching concepts, such as the problem-based approach and the encouragement of self-construction of knowledge systems by students, are under implementation. In 2006 a self-evaluation report on the KTU BME master program was submitted to the national quality evaluation committee.

Benefits for students: increased possibility to choose the best courses; participation in distance lectures and webinars; access to advanced learning materials, textbooks, demos, illustrations, models and interactive practices; the possibility to take a course and pass the exam remotely, thus saving travel money; easier contacts with foreign fellow students and teachers; better conditions for mobility; better knowledge of and adaptation to the European labor market in BME; and possible recognition of qualifications in the EU. A quiz organized at KTU for BME master students showed that students are in general in favor of using the virtual European campus offered by EVICAB; they like the new teaching and learning materials delivered remotely. The interactivity and friendliness of the learning environment were welcomed, and better career opportunities were stressed. Students participated in remote pilot intensive courses and lectures delivered by Finnish and Swedish lecturers and enjoyed the good technical quality (sound, image, demos, slides) of the lectures and seminars.

Benefits for administrators: saving financial and other resources in organizing and running a BME program; the ability to motivate staff to raise their qualifications and to self-assess the quality of courses and delivery methods; more objective evaluation of programs in an international context; approaching the possibility of accrediting the program in the EU; and the possibility to involve the best EU competences with minimal expenditure. The KTU administration welcomes the project support and encourages the Biomedical Engineering Institute to start a new bachelor program in BME, using the opportunity to benefit from the EVICAB project.

V. CONCLUSION

The EVICAB project responds well to the needs and problems of new and developing BME programs. Its openness for access and contribution, its open coordination concept, and its use of modern teaching and learning technologies make it useful for BME education in Europe.
ACKNOWLEDGMENT This work is supported by the EU EVICAB project. The authors appreciate the support of the project leader, Prof. J. Malmivuo, and of all project participants from Finland, Sweden, Estonia, Slovenia and the Czech Republic.
REFERENCES

1. Harris TR, Bransford JD, Brophy SP (2002) Roles for learning sciences and learning technologies in biomedical engineering education: a review of recent advances. Annu Rev Biomed Eng 4:29–48
2. Nagel JH (2001) Accreditation of biomedical engineering programs in Europe – challenge and opportunity. Proc 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vol 4, pp 3898–3900
3. National Science Education Standards: research and education unity. http://books.nap.edu/readingroom/books/nses/html/
4. MOODLE philosophy. http://docs.moodle.org/en/Philosophy
5. Hutten H. Two-phase biomedical engineering education program: engineering followed by biomedical training. http://www.bmt.tue.nl/archive/BMEcongress011101/Hutten.htm
6. EVICAB project web page: http://www.moodle.fi/evicab/moodle/
7. Anderson T, Elloumi F (eds) Theory and practice of online learning. Online: http://cde.athabascau.ca/online_book/index.html
8. Challenge-based instruction in biomedical engineering: a scalable method to increase the efficiency and effectiveness of teaching and learning in biomedical engineering. Med Eng Phys (2005) 27(7):617–624
9. Wood WB (2003) Inquiry-based undergraduate teaching in the life sciences at large research universities: a perspective on the Boyer Commission Report. Cell Biol Educ 2:112–116
10. Howard L (2003) Adaptive learning technologies for bioengineering education. IEEE Eng Med Biol Mag 22(4):58–65
11. Biomedical engineering education in Europe – status reports at BIOMEDEA. http://www.bmt.unistutgart.de/biomedea/Status%20Reports%20on%20BME%20in%20Europe.pdf

Author: Arunas Lukosevicius
Institute: Biomedical Engineering Institute of Kaunas University of Technology
Street: Studentu str. 65
City: Kaunas, LT-51369
Country: Lithuania
Email: [email protected]
Learning Management System as a Basis for Virtual Campus Project

K.V. Lindroos1, M. Rajalakso2 and T. Väliharju2

1 Ragnar Granit Institute, Tampere University of Technology, Tampere, Finland
2 Mediamaisteri Group, Tampere, Finland
Abstract— The learning management system has an important role in web-based education. Mediamaisteri Group is an expert company in web-based solutions for e-learning and provides the platform for the EVICAB (European Virtual Campus for Biomedical Engineering) project. Openness and free availability have been fundamental ideas of the project since the beginning, and the Moodle learning management system supports this idea: Moodle is an open-source platform. The learning management system has been modified to provide the tools needed for the virtual campus project.

Keywords— Learning management system, ICT, Moodle, Virtual learning environment
I. INTRODUCTION The learning management system (LMS) has an important role in web-based learning and especially in virtual campus projects. The activities and modules provided by the LMS can be chosen so that they support the learning process. In the European Virtual Campus for Biomedical Engineering (EVICAB) project, the driving idea has been open access, open content and free-of-charge use of all the material in the system. Based on this idea, the open-source Moodle platform was chosen [4]. The Moodle platform has been used as the basis for the courses and all education. In addition, the platform has provided a beneficial tool for project management and communication between partners. In this report some aspects concerning the usability of such a learning management system in virtual education and in project management are discussed.

II. WEB-BASED MANAGEMENT

A. Course Management

The benefit of the open-source platform in the EVICAB project has been the possibility to modify the platform based on the needs of the learning process and content. The education in EVICAB is based both on totally distant education and on a combination of contact teaching and virtual lessons. The learning process has several aspects which have to be supported when there is no direct contact with the teacher. The first problem is to provide easy access and
an appealing environment for the students to study in. The layout and color schemes have been designed so that the platform is as easy to use as possible. To ensure a successful learning experience, the main focus of the student has to be on the content and not on the platform itself; this can be achieved by careful design of the platform. The second issue is to provide inspiring and interesting lecture material. In this context usability is once again the key issue: the content of the lectures has to be easily accessible in order to let the student focus on the content. Content providers and teachers are advised to use media that are not based on any particular proprietary format and can be opened in a web browser. For instance, in video lecture production the Flash format is supported by the learning management system through a Flash module. Other, text-based material should be implemented with the tools provided by the platform where possible, to ensure the functionality of the resource. Tutoring and communication with fellow students are also supported by the platform. An internal message system and e-mail lists are used for asynchronous communication, and the chat provided by the platform can be used for synchronous communication. A very popular communication channel in EVICAB has been the discussion forums: forums have been opened on the course pages for various topics, and students may add their comments and questions related to the given topics [2]. The LMS provides various tools for the teacher to create different activities for the students. These activities support the exercises, assignments, quizzes and other tasks given to the students during the course. Moodle has a built-in student administration and enrolment system, which has been used in EVICAB for keeping records on the visitors, students, teachers and other personnel [2].

B. Project Management

The learning management system has not only been used for supporting web-based education but also in EVICAB project management. Various tools, such as the communication tools, have been beneficial for the project. Several sites have been created to support the project work and the work packages. Project files can be shared and modified in
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1130–1131, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
cooperation between different institutes by using the tools provided in the platform. Without a learning management system, creating all the aforementioned tools for the project would not have been possible. The role of the custom-made platform has been very important in terms of the whole project. The approach in the project is that "The system will guarantee a sustainable learning environment and content, of which development is based on continuous dynamic peer and self evaluation and effective exploitation of information and computer technology" [1]. For this approach the Moodle platform was considered the best option: its open-source, free software supported the idea of the EVICAB project, and the continuous worldwide development of the platform ensures a dynamically evolving and up-to-date system. It is important to have a dynamic and developing management system for a dynamic virtual campus such as EVICAB.
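As a rough illustration of the kind of record-keeping an LMS performs for course enrolment and activities (forums, quizzes, assignments), the sketch below models it in Python. All class and field names are hypothetical; this is not Moodle's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """A course activity such as a forum, quiz or assignment."""
    name: str
    kind: str

@dataclass
class Course:
    """A course with enrolled users and a list of activities."""
    title: str
    enrolled: set = field(default_factory=set)
    activities: list = field(default_factory=list)

    def enrol(self, username: str) -> None:
        # Enrolment is idempotent: re-enrolling the same user has no effect.
        self.enrolled.add(username)

    def add_activity(self, name: str, kind: str) -> None:
        self.activities.append(Activity(name, kind))

course = Course("Bioelectromagnetism")
course.enrol("student_a")
course.enrol("student_a")                 # duplicate enrolment is ignored
course.add_activity("Week 1 topics", "forum")
course.add_activity("Self-assessment", "quiz")
print(len(course.enrolled), [a.kind for a in course.activities])
# 1 ['forum', 'quiz']
```

A real LMS backs these records with a database and per-role permissions; the sketch only shows the bookkeeping shape.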
III. CONCLUSIONS

The learning management system is a key factor in web-based education. In the EVICAB project the Moodle platform has been successfully used not only as an educational tool but also as a project management tool. The selection of the platform for educational purposes may be discussed further, but the experiences in the EVICAB project have been very positive.

ACKNOWLEDGMENT

EVICAB, the European Virtual Campus for Biomedical Engineering, is funded by the European Union. Mediamaisteri Group is a private company; it is an expert company in e-learning and aims to support the processes of web-based learning [3].

REFERENCES

1. EVICAB home page at www.evicab.eu
2. EVICAB Moodle at www.moodle.fi/evicab
3. Mediamaisteri Group at www.mediamaisteri.com
4. Moodle organization at www.moodle.org
Author: Kari Lindroos
Institute: Ragnar Granit Institute
Street: PL 692
City: Tampere
Country: Finland
Email: [email protected]
The E-HECE e-Learning Experience in BME Education

P. Inchingolo, F. Londero and F. Vatta

Higher Education in Clinical Engineering, University of Trieste, Trieste, Italy
Abstract— This paper focuses on the e-learning experience in BME education of E-HECE (E-Higher Education in Clinical Engineering), an integrated distance learning system for education in Clinical Engineering at the University of Trieste (Italy). E-HECE is oriented toward providing remote students with many of the valuable aspects of the live classroom experience that are essential for learning. E-HECE has proven successful in providing convenience to students, who can actively participate in a class whether they attend physically, by videoconference or by video streaming, and in also making available to students on-demand recordings of classes synchronized with the lectures' didactic material on the E-HECE e-learning platform. The E-HECE system made its debut in its current, final version in September 2005, and since then it has been extensively used by the 340 registered E-HECE users for all 150 courses in the Clinical Engineering program delivered up to now. Its use has grown beyond Clinical Engineering to include courses in the health management and medical fields. The expansion of E-HECE's capabilities continues to extend its utility and power as a distance education system.

Keywords— BME education, e-learning, Moodle, videoconference, clinical engineering.
I. INTRODUCTION Education in the engineering and technology fields is dominated by traditional classroom lecture presentations [1]. This setting requires students to be physically present on campus at the time of the lecture, but they benefit from interaction with each other and with the lecturer. While the overall presentation of most courses remains lecture oriented, many organizations make increasingly productive use of the Internet as an instructional tool to make supplemental and/or complementary course-related material available to students with asynchronous access [2]. With the aim of combining the benefits of these two fundamental trends, the SSIC-HECE (Studi Superiori in Ingegneria Clinica – Higher Education in Clinical Engineering) of the University of Trieste (Italy), within its educational Program in Biomedical-Clinical Engineering, has proposed and designed an integrated distance learning system named E-HECE (E-Higher Education in Clinical Engineering), able to provide students with the means to participate actively, either synchronously in live classes or asynchronously through recordings of the classes. As a matter of fact, postsecondary
educational institutions offering engineering and technology programs face increasing demand from a student demographic characterized by working professionals, often seeking further training (to remain current in their field or to expand their expertise) or additional certification (certificates, degrees) [4]. This holds particularly in the field of Clinical Engineering [5]. One demand of this demographic is access to courses outside normal teaching hours and, increasingly, without requiring the student to be physically present, participating in a class instead from their home or place of work using a computer. This demand is also felt keenly in urban areas, where traffic congestion can make classes difficult to reach. E-HECE has been conceived to meet this demand from a growing student constituency at a cost comparable to classroom instruction. The challenge posed by this growing demand is how to provide these students, over the Internet, with the essential qualities of being present in a live classroom lecture. Which features of the live classroom experience are (or are not) critical to learning? The simultaneous teaching of a set of students in separate locations, some together in a classroom with the instructor, proves to be a strong requirement. Irrespective of their location, students should have a strong sense of participation in the ongoing class, and critical to this feeling is the ability to interact with the instructor and their fellow students. As the ability to receive spoken and graphical content (e.g., slides and any annotations made on them), and to originate audio and video, can be provided by any modern personal computer, this same ability, combined with a connection to the Internet, can offer students a strong sense of participation, requiring only infrastructure that is widely and cheaply available.
Hardware and software are hence required that allow remote students to do the following: 1) receive the live classroom content at any location while being assured that they do not lag behind the originating class; 2) ask and respond to questions in a natural way, e.g., by speaking; 3) interact with each other during class, the equivalent of a student asking a neighbor a question. E-HECE was designed and developed to fulfill these needs and to offer the following benefits: 1) being convenient for students, providing them with the ability to participate synchronously in a live class experience, not simply hear and see the classroom presentation, and also providing a recording of the class for future playback; 2) being convenient for instructors, minimizing change in the lecturers' presentation style; 3) being conservative in capital and operating costs, as E-HECE requires no special hardware and minimal specialty software and can operate comfortably on configurations that are, by today's standards, quite modest.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1107–1110, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

These goals, and how E-HECE meets them, are discussed in more detail below. Technical details of the E-HECE components and their operation appear in the next section, followed by a discussion of the experience of using E-HECE over the last years.

II. THE E-HECE SYSTEM

This section is organized as follows. First, the SSIC-HECE background history is briefly outlined to give the rationale for the choices made in the E-HECE system's design. Then, the E-HECE system structure is described, with its videoconferencing and streaming facilities and the e-learning platform, including the systems implemented for the management of student activities, courses, exams and the related safety procedures.

A. The SSIC-HECE background

The University of Trieste has a long tradition in the development of data and multimedia communication networks, begun in 1988 with the First Biomedical Network of Trieste (RBT). From this starting point a strong activity in distance education, health telematics and telemedicine has been developed. Since Academic Year 2003-2004, an extensive e-learning experience has been under way at the University of Trieste for the complete online delivery of the first-level "Master in Clinical Engineering" (MIC-MCE) and of the International "Specialist Master of Management in Clinical Engineering" (SMMCE), activated within SSIC-HECE. The MIC-MCE and SMMCE Masters have been formally activated within the Central European Initiative (CEI) University Network and were instituted as a transformation of the post-graduate Specialization School in Clinical Engineering, which had been active for 12 years as the unique point of reference for education in clinical engineering in Italy and is also widely recognized in Europe.
Given the growing student constituency and demand, since Academic Year 2004-2005 the Magistral Laurea Degree in Clinical Engineering (LSIC), a two-year graduate program, has also been included in this e-learning experience. SSIC-HECE students are very often personnel already working in hospitals or in healthcare service companies in Italy or elsewhere in Europe, and they therefore need distance-learning cooperative instruments.
B. E-HECE videoconferencing and streaming

The SSIC-HECE lessons are held in Trieste and, simultaneously by means of multi-videoconference, in many other distributed classrooms located at a number of peripheral sites (University Roma Tre, Polytechnic of Turin, IRCCS San Matteo of Pavia, Institute of Biomedical Engineering-CNR in Padova, IRCCS Casa Sollievo della Sofferenza in San Giovanni Rotondo (FG), and the Universities of Graz, Maribor, Rijeka and Zagreb). The multi-videoconference creates a multiple virtual classroom in which students from the different sites can fully interact with the teacher holding his/her lesson from any of these distributed sites or from another one: asking questions, requesting clarifications, holding debates, discussing practical experiences, etc. The classroom at the University of Trieste has been provided with the fundamental videoconferencing facilities: a videoconference terminal, a projector/TV and an audio diffusion system with an annexed mixer and microphone. The E-HECE system has also been provided with a server installed for video distribution (streaming), in addition to a recording system integrated with the videoconference system in the classroom. This facility allows students connected to the Internet with a PC to attend the lesson at the moment it takes place. A tool for the production, synchronization and publication of multimedia contents has been designed and implemented to automatically obtain an electronic lesson, complete with the teacher's audio and video and the lesson's slides, synchronized one by one, to be distributed on the Internet both live and on demand, with the graphical output shown in Fig. 1.
Fig. 1 Snapshot of the system's interface, showing the synchronization of the lesson's slides with the video recorded during the lesson
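The slide-to-video synchronization described above amounts to a lookup: given the times at which each slide appeared during the lecture, find the slide to show at the current playback position. A minimal sketch in Python follows; the data layout and function name are hypothetical, not E-HECE's actual implementation.

```python
import bisect

def slide_at(sync_points, t):
    """Return the slide active at playback time t (in seconds).

    sync_points is a list of (timestamp, slide_id) pairs, sorted by
    timestamp, recorded when each slide was shown during the lecture.
    """
    times = [ts for ts, _ in sync_points]
    # Index of the last sync point at or before time t.
    i = bisect.bisect_right(times, t) - 1
    return sync_points[i][1] if i >= 0 else None

sync = [(0.0, "slide01"), (95.0, "slide02"), (240.0, "slide03")]
print(slide_at(sync, 120.0))  # slide02 (shown two minutes into playback)
```

The same lookup serves both live streaming (t is the current wall-clock offset) and on-demand playback (t is the seek position), which is why one recorded timeline suffices for both modes.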
C. E-HECE e-learning platform

The analysis of the characteristics of a suitable e-learning platform for E-HECE was performed by identifying the required characteristics in both the administration and the course areas. The Moodle suite showed up as the best solution for its ease of management and of system configuration [5, 6]. An appropriate table for the user list has been created in Moodle's database with the users' main personal data. After certification on the platform, students can insert other personal information into their profile, such as their photo, personal home page, phone numbers, address, and nickname for the messaging systems used. For the purpose of personalizing each student's access to the platform, Moodle's registration procedure has been used, which consists of creating a registration table to enable access to the courses chosen by the individual user. At the user's first access to the platform, the system checks for the existence of the user in the users table, and the student's data are loaded with the student's registration profile for access to the courses according to the registration table. Fig. 2 shows a snapshot of the E-HECE homepage, with links to the courses, links to lesson timetables and exams, contacts to the secretariat, the course calendar, and information about special events and general news. In addition, for each course, students can register for examinations in a given exam session thanks to Moodle's "choice" function. Fig. 3 shows an example of a course module in the platform, with links to the available lessons, identified by date, time and teacher, and to the lessons' didactic material. The "Forum" section constitutes an information exchange channel between the students attending a course and the teacher of that course.
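The first-access control described above, checking the users table and then loading the courses granted by the registration table, can be sketched with an in-memory SQLite database as follows. The table and column names here are illustrative placeholders, not the actual Moodle or E-HECE schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users(username TEXT PRIMARY KEY, fullname TEXT);
    CREATE TABLE registration(username TEXT, course TEXT);
    INSERT INTO users VALUES ('anna', 'Anna K.');
    INSERT INTO registration VALUES ('anna', 'Clinical Engineering 1');
""")

def first_access(con, username):
    """On first login, verify the user exists in the users table,
    then load the courses the registration table grants them."""
    row = con.execute("SELECT fullname FROM users WHERE username = ?",
                      (username,)).fetchone()
    if row is None:
        return None  # unknown user: no access to the platform
    courses = [c for (c,) in con.execute(
        "SELECT course FROM registration WHERE username = ?", (username,))]
    return {"fullname": row[0], "courses": courses}

print(first_access(con, "anna"))
print(first_access(con, "unknown"))  # None
```

The point of the two-table split, as in the text, is that identity (users) and course entitlement (registration) are maintained independently, so a student's course list can change without touching their account.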
Fig. 3 Example of a course in the platform

E-HECE administrators can access each student's profile and recall an activity report, visualizing the student's complete activity on the platform, the activity on a specific day, or the activity within a specific course. Teachers can monitor only the activity of the students enrolled in their own courses. Given the data managed by the E-HECE platform, a twofold backup system has also been implemented: a first weekly backup procedure using Moodle's built-in functionality, and a second safety procedure of daily backups executed at the operating-system level.

III. EXPERIENCE WITH E-HECE
Fig. 2 Snapshot of the E-HECE homepage.
The E-HECE system made its debut in its current version in September 2005. Since then it has been used extensively for all the SSIC-HECE courses of the Biomedical-Clinical Engineering Program of the University of Trieste, serving a total population of about 340 students in 150 courses. The actual set of students attending the courses has been observed to vary through the semesters. Some students attended live classes and then preferred to enroll in the E-HECE section of a course, but the converse also occurred, since the nature of the classes is such that a student who attends one class online can easily attend the next one in the physical classroom. Students consistently rate the lesson recordings highly; reasons include the ability to time-shift a lecture to fit their schedule, to handle interruptions when the entire lecture time cannot be allocated, and to replay part of a lecture to review a particular topic. Fig. 4 shows some statistics on the E-HECE system, from which the intensive use of the courses can be appreciated.
P. Inchingolo, F. Londero and F. Vatta
Fig. 4 Example of a report of the monthly history of visits to the E-HECE system for January 2007 (top) and statistics by country (bottom)

IV. CONCLUSION

This paper has described E-HECE, an integrated distance learning system for education in Clinical Engineering. E-HECE succeeds in providing to distance students many of the valuable aspects of the live classroom experience that are essential for learning and participating in a class, whether they attend in person, by videoconference or by video-streaming, and in also making available to students on-demand recordings of classes synchronized with the lectures' didactic material on the E-HECE e-learning platform. It imposes little or no change in the way class presentation material is prepared, and mastery of the technology for teaching is easily accomplished. E-HECE has been used extensively in its final version since 2005, growing to a student community of over 340. Its use has grown beyond the clinical engineering courses, where it was first used, to include courses in the management and medical fields. The continuing expansion of E-HECE's capabilities extends its utility and power as a distance education system.
ACKNOWLEDGMENT Work supported by SSIC-HECE, University of Trieste and by the CEI University Network.
REFERENCES

1. Pence HE (1997) What is the role of lecture in high-tech education? J Educ Technol Syst 25:91-96
2. Waits T, Lewis L (2003) Distance education at degree-granting postsecondary institutions 2001-2002. National Center for Education Statistics, Washington, DC, Tech. Rep. NCES 2003-017
3. Wilson J (2003) After the fall: the lessons of an indulgent era. Ann Conf Distance Teaching Learning, Madison, WI, Aug. 13-15
4. Inchingolo P et al. (2004) Integrated distance learning in biomedical sciences and engineering: the experience of the Higher Education in Clinical Engineering. In: Inchingolo P, Pozzi Mucelli R (eds) EuroPACS-MIR 2004 in the Enlarged Europe, EUT:435-438
5. Graf S, List B (2005) An evaluation of open source e-learning platforms stressing adaptation issues. Proc Fifth IEEE Int Conf Adv Learning Technologies, 3 pp
6. Colace F, DeSanto M, Vento M (2003) Evaluating on-line learning platforms: a case study. Proc 36th Hawaii International Conference on System Sciences, Hawaii, USA, IEEE Press

Author: Paolo Inchingolo
Institute: SSIC-HECE
Street: Via Valerio, 10
City: Trieste
Country: Italy
Email: [email protected]
Web-based Supporting Material for Biomedical Engineering Education

K. Lindroos, J. Malmivuo, J. Nousiainen
Ragnar Granit Institute, Tampere University of Technology, Tampere, Finland

Abstract— The European Commission funded virtual campus project EVICAB (European Virtual Campus for Biomedical Engineering) was launched in January 2006. The idea is to develop a virtual environment in which students study biomedical engineering by means of e-courses. The transfer from contact teaching to e-courses gave rise to a need for web-based learning material. To face this challenge, a new project was launched at the Ragnar Granit Institute to produce video lectures and other supporting material for the Internet. The produced material has been evaluated and implemented as part of the e-courses in EVICAB.

Keywords— Biomedical engineering, EVICAB, e-learning material, video lectures
I. INTRODUCTION

The virtual campus project EVICAB (European Virtual Campus for Biomedical Engineering) was started in January 2006. The goal of the virtual campus is to establish a virtual curriculum on the Internet for students of biomedical engineering. EVICAB has been built on a learning management system and will provide web-based applications for study material, communication, supporting material, and assessment tools. The Ragnar Granit Institute at Tampere University of Technology is one of the contributors to this project [1].

The virtual campus will provide a variety of e-courses in the field of biomedical engineering. The transfer from classroom education to Internet-based education requires extensive study of the applications available to support the process. The institute has provided supporting learning material on the Internet for several years and has used a learning management system (LMS) for three years as an important part of its education. This experience with the aforementioned web applications has been vital in the process of providing all the course material in electronic form. To provide the needed material to EVICAB, a project was started to produce web-based learning material supporting the students' learning process. Education in the virtual campus is based on courses with no, or at most very little, contact teaching. This fact has given rise to a specific need for web-based supporting learning material. The different phases of the process will be discussed and the different production methods presented. Throughout the process, feedback and acceptance from students
have been the driving force: every method and new approach has been evaluated and changes made accordingly. Biomedical engineering is a very technical discipline, so the transfer from contact teaching to e-teaching is not as simple as providing text files or online books for students; it needs a variety of tools to support their studying. Lecture material, supporting study material, online quizzes and exercises, peer communication, and online tutoring have now been implemented in the education. The combination of lecture material and activation of the student during the study process has been considered especially carefully. The example course used in this study is Bioelectromagnetism by Professor Jaakko Malmivuo.

II. MATERIALS AND METHODS

A. Modeling Phase

The process of creating e-learning material was started by evaluating the existing study material and comparing it to a student-centered model of the learning process (Fig. 1). The model illustrates the learning process in a very simple way. As input to the process there are a number of sources of information. In Internet education, text formats are the major source of information: Internet books and lecture material in electronic form are the basis of e-courses. The role of the teacher is different in e-courses than in contact teaching; teachers act more as instructors or tutors, providing guidance on how to study all the essential parts of the material. The role of supporting material is emphasized in an environment with no face-to-face contact with the teacher. Supporting material in this study is considered to be all the material provided in addition to the primary study material. All the sources of information are inputs to the study process. The most important role of supporting material in this process is to activate and instruct the student in such a way that the course outcomes are achieved.
Supporting material should also enable the student to perform self-assessment during the process. Finally, the outcomes of the study process are evaluated. These outcomes can be an exam, exercise answers, a written final report, a learning diary, etc.

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1111–1114, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007

Fig. 1 Model for learning process (the student's study process takes input from the book and lecture material, the teacher/tutor, and the supporting material; its outcomes are assessed and feedback is returned to the student)

This is the way for the teacher to ensure that the objectives of the course have been reached. The model can be applied at various stages of a course: one particular section or chapter may be analyzed and the objectives for that particular entity monitored, or the whole course, containing all the study material and applications, may be mapped onto the model. Once a student has studied all the material and generated the required outcomes, the study process can be evaluated against the course objectives.

The study process may also be considered an outcome of course design. In the course design, all the sections of the course are represented. In the example course, Bioelectromagnetism, the sections are divided into four: 1. Preparation, 2. Literature, 3. Video lectures, and 4. Internet exam. In the preparation section, all the information on the course is provided for the students: contents, prerequisites, learning material, contact information, and learning outcomes are all presented at the beginning of the course. The course literature is the primary study material: an Internet book and lecture slides are provided. In addition to the text-based material, video lectures are produced; combined with quizzes and self-evaluation tests, they support the learning process. Exercises have also been added to guide the students to focus on the facts that the teacher considers essential. At the end, the Internet exam tests whether the objectives of the course have been achieved.
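The four-section course design just described can be captured as a simple data structure. This is a sketch only; the material names inside each section are illustrative assumptions, not the course's actual item list.

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    """One section of an e-course, with its associated materials."""
    name: str
    materials: list[str] = field(default_factory=list)

# The four sections of the example course, Bioelectromagnetism, as described
# in the text; individual material names are invented placeholders.
bioelectromagnetism = [
    Section("Preparation", ["contents", "prerequisites", "learning material",
                            "contact information", "learning outcomes"]),
    Section("Literature", ["internet book", "lecture slides"]),
    Section("Video lectures", ["videos", "quizzes", "self-evaluation tests",
                               "exercises"]),
    Section("Internet exam", ["final exam"]),
]
```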
B. Implementation Phase

The implementation of the study process model is a challenge for all the partners providing e-courses to the EVICAB platform. A study has been launched to create and evaluate the different methods and applications that can be used in the implementation.

The different parts of the model in Fig. 1 are supported in the learning environment. Communication between teachers and students is supported by the discussion forums in the learning management system and via e-mail. The lecture material is also available on the Internet in the form of an online book and written files in the LMS. The most problematic part of the students' learning process in a virtual environment was, in our case, how to support the learning process and how to provide material interesting enough that students feel comfortable studying without face-to-face contact with the teacher. The idea of the supporting material is to activate and stimulate students by providing interactive study material, quizzes for self-assessment, and exercises. The production of supporting material started with the production of lecture videos in different formats. Lecture videos were considered a good format for mostly theoretical courses. The first and perhaps easiest way to provide lectures on the Internet is to combine PowerPoint slides with narration or an audio file; this method was considered first because of its simple and fast production. At the same time, the shooting of lectures was started. The captured video was combined with a screen capture of the PowerPoint slides in order to provide a more convenient way for students to follow the lectures (Fig. 2). In this production type, the lecture video and the screen capture were recorded separately, then edited and combined using SMIL, the Synchronized Multimedia Integration Language. The Hypermedia Laboratory at Tampere University of Technology provided the SMIL code. The benefit of the custom-made code was the possibility to change the layout, the table of contents, and the size of any window in the video; these features are commonly quite limited in commercial versions. The next step was to embed interaction in the videos in order to maintain students' interest and activate them during the process. The need for interactivity was met by adding

Fig. 2 Screen capture from Bioelectromagnetism lecture video http://butler.cc.tut.fi/~malmivuo/bem/bembook/in/vi.htm
quizzes and surveys to the videos. The video is paused until the student answers the quiz; in this way we give the student a pause to process the information and time to think about the key issues of the particular section.

III. RESULTS

All the methods introduced in the previous section were analyzed and evaluated. The method with only narrated PowerPoint files was not very well accepted by the students because of its lack of interactivity, and was even considered boring. The combination of screen capture and video, on the other hand, received very positive feedback: in the very first survey on the matter, the feedback was 100 per cent (19/19) positive, varying from average to excellent. The video lectures were also evaluated in relation to contact teaching, and the result was surprising: in some cases the videos were preferred because of the possibility to rewind and pause. These features were especially valuable for students who are not fluent in English or are easily distracted in classroom education. The negative feedback was related to the usability of the videos: to watch a video, students needed RealPlayer and one plug-in, which was considered a somewhat awkward procedure. Due to the negative feedback on the file format, the production was rearranged so that the output is in Flash format (.swf). Another feature that led the production to Flash was the possibility to add quizzes and surveys into the video, which was considered very important for activating students while they watch (Fig. 3). Production of Flash videos is currently in progress at the Ragnar Granit Institute; no evaluation data has been analyzed yet, but the experience has been promising.
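The SMIL-based production described in the previous section, a lecture-video region and a slide screen-capture region played side by side in parallel, can be sketched roughly as below. Region names, sizes, and file names are invented for illustration; this is not the Hypermedia Laboratory's actual code.

```xml
<!-- Hypothetical sketch: two synchronized streams in adjacent regions. -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <root-layout width="800" height="400"/>
      <region id="lecturer" left="0"   top="0" width="320" height="240"/>
      <region id="slides"   left="320" top="0" width="480" height="360"/>
    </layout>
  </head>
  <body>
    <!-- par plays both streams in parallel, i.e. synchronized -->
    <par>
      <video src="lecture.rm" region="lecturer"/>
      <video src="slides.rm"  region="slides"/>
    </par>
  </body>
</smil>
```

The flexibility noted in the text, changing the layout, table of contents, or the size of any window, amounts to editing the region elements in the head section.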
Due to the absence of contact teaching in e-learning, the role of supporting material, and especially of self-assessment methods, is important. In the example EVICAB course, Bioelectromagnetism, a set of quizzes has been created for every topic for this purpose. Once students have finished reading a chapter, they have an opportunity to test how well they understood its concepts. This gives the student the opportunity to perform self-assessment and get instant feedback on whether some topics need more careful study.

IV. DISCUSSION

The challenge set by the model of the study process has been taken seriously in the EVICAB project. The format of all material is based on students' feedback from the start. The experience and feedback on the lecture videos have been very positive. The role of supporting material will be even more important in e-courses, but its role in contact teaching should not be neglected either. In the EVICAB course Bioelectromagnetism, the students are advised to follow the lecture videos and the e-book simultaneously, and even to try to find the answers to the quizzes and exercises while reading (Fig. 4). By combining the provided materials into a single entity, the student becomes an active participant in the lecture and focuses on the essential parts of the information flow [2]. The produced lecture videos are only one part of the e-course. During the process, many other aspects were taken into consideration. Both online and offline tutoring have been considered: in EVICAB courses, students can discuss with the tutor or with peer students via discussion groups on specified topics, or use chat for online communication. This is a way to provide guidance for students
Fig. 3 Screen capture from Bioelectromagnetism lecture video with quiz http://butler.cc.tut.fi/~malmivuo/bem/bembook/in/vi.htm
Fig. 4 Screen capture from Bioelectromagnetism lecture in EVICAB
through the Internet. In addition, peer communication strengthens the feeling of solidarity.

V. CONCLUSIONS

The European Virtual Campus for Biomedical Engineering project takes a challenging approach to e-learning: to "develop a framework for a sustainable Internet-based virtual biomedical engineering curriculum". This task is being met by developing and producing e-courses with up-to-date tools and content [1]. The Ragnar Granit Institute is one of the contributors to the EVICAB curriculum, and for this reason an extensive study and production of e-learning material has been started. Lecture materials are now available in electronic form, and the importance of supporting material has been recognized. The Internet-based tools are not only tools for e-courses; they have also proven to be a valuable asset in contact teaching. The current work at the Ragnar Granit Institute has focused on video lectures and the development of interactive study material. The videos have been well accepted by the students. Their main benefits are the possibility to rewind and pause if some concepts are not fully understood; videos are also preferred when students do not have time to attend the classroom lectures, since the student can watch the lecture later at home. Quizzes and surveys are embedded in the videos to add interaction, so that the student is a more active participant in the lecture and not only a passive listener.
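The self-assessment loop described above (answer a chapter quiz, get instant feedback on which topics need more careful study) can be sketched as a small scoring function. The quiz structure, question ids, and topic names are invented for illustration.

```python
def topics_to_review(answers: dict[str, str],
                     key: dict[str, tuple[str, str]]) -> list[str]:
    """Compare a student's answers against the answer key and return,
    sorted, the topics where at least one question was answered wrong.

    `key` maps question id -> (correct answer, topic covered)."""
    wrong = {topic for q, (correct, topic) in key.items()
             if answers.get(q) != correct}
    return sorted(wrong)
```

A missing answer counts as wrong, so the feedback also flags topics the student skipped entirely.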
Video lecture production is a challenging task and has to be designed well before shooting. The teacher has to be motivated and well prepared, because editing and re-producing a video is time consuming. If the topic of the course is developing fast, other production models should be considered, since changing the content of a particular period of time in a video is inconvenient. The production methods presented here have worked well in the Bioelectromagnetism course: the course is very well prepared, the professor has years of experience lecturing the material, and the well-prepared lecture slides ensured a successful video production, with no need to change the content afterwards. The products of this study have been implemented in the EVICAB learning management system (www.moodle.fi/evicab).
REFERENCES

1. European Virtual Campus for Biomedical Engineering, http://www.evicab.eu/
2. Malmivuo J, Plonsey R (1995) Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields. Oxford University Press, New York

Author: Kari Lindroos
Institute: Ragnar Granit Institute
Street: P.O. Box 692
City: Tampere
Country: Finland
Email: [email protected]
Biomedical Engineering Clinical Innovations: Is the Past Prologue to the Future?

P. Citron
St. Paul, Minnesota, USA

Abstract— The medical device industry, defined here as implanted therapeutic or restorative technologies, is roughly 50 years old. The first commercially available cardiac pacemaker to treat complete heart block was implanted in 1958 in Sweden, followed by the first ball-and-cage prosthetic heart valve in 1960. From these tentative beginnings, medical device industry sales by U.S. companies were estimated to be $77 billion in 2003. Significant technology sectors now include mechanical and tissue prosthetic heart valves, cardiac pacemakers, implanted cardioverter-defibrillators to convert chaotic heart rhythms, cardiac resynchronization devices to manage heart failure, vascular stents to treat occluded coronary and peripheral arteries, neurostimulation devices for certain central nervous system disorders, artificial joints and spinal implants for degenerative conditions, and intraocular implants for cataracts, to cite representative examples. While financial metrics provide an indication of the direct economic impact of medical devices, a more relevant measure is the effect medical technologies have on reducing patient morbidity and mortality, improving well-being, and increasing quality of life. Illustrative of this, data compiled by the CDC show that mortality from heart disease decreased by about 50% between 1970 and 2003. A number of factors are collectively responsible, including lifestyle changes, the availability of novel pharmaceutical agents, and more rapid initiation of treatment during the window of opportunity following the onset of myocardial infarction. Unquestionably, the availability of improved heart valves, progressively more "physiological" rhythm management devices, and coronary stents has also played an important role in this dramatic reduction in mortality.
Technological innovations have made substantial inroads in treating the effects of diseases of the heart’s electrical conduction system which is responsible for setting appropriately the heart’s rhythm and contraction pattern, resolving valvular abnormalities that compromise the heart’s ability to meet the hemodynamic needs of the body, and reversing insufficient blood circulation to the heart muscle itself caused by atherosclerosis. Historically, medical devices have been mechanical or electro-mechanical. Although effective in the presence of serious disease, they lack the elegance of natural tissue and normal physiology. The emerging field of tissue engineering has already had a twofold effect on the advancement of medical devices. It has expanded knowledge of local biological phenomena in the presence of synthetic materials and influences. This has led to improved therapeutic device reliability, effectiveness, and performance. The second aspect has led to the development of so-called combination devices that consist of a drug plus device, or biological agent plus device to produce a desired tissue response. Arguably the first widely used example is the steroid-eluting pacemaker lead that became commercially available in the mid-1980s. This
combination of a device and drug reduced the inflammatory response at the site of tissue/electrode contact and resulted in sharply lower short- and long-term stimulation energy required to reliably stimulate the heart. This in turn led to extended device longevity. Other more recent examples are drug-eluting stents, which improve long-term artery patency compared to bare metal stents, and the combined use of bone morphogenic protein inserted in a metal "cage" to encourage and improve desired bone in-growth in spinal procedures. Tissue engineered products hold enormous promise. It is entirely reasonable to expect future products to begin clinical life as a "device" and then remodel into what is indistinguishable from normal tissue. This is the long held vision for a viable small-caliber vascular graft and also for tissue repair that fully restores normal function in the presence of disease or trauma. Prognostications of future trends in biomedical engineering innovation would be incomplete without an examination of the environment in which the device industry operates. A number of factors adversely affect the climate for innovation. Although some progress has been made in certain respects, the regulatory and reimbursement processes continue to lack transparency, consistency, predictability, and timeliness. Of particular concern is the notion of "good science creep," which poses seemingly reasonable questions regarding a proposed technology but requires overly, and arguably unnecessarily, complex clinical trials in order to secure regulatory or reimbursement approval. These seemingly reasonable clinical requirements can bias development of new technologies toward those that carry low risk, serve predictably large markets, and encourage investments in me-too next-generation technologies that incrementally improve current offerings, often at the expense of important breakthroughs.
Associated costs and extended timelines have dampened the enthusiasm of the venture capital community to invest in device industry start-ups – a major source of innovations. Although the industry invests heavily in R&D (11.4% of revenue in 2002), the escalating cost of clinical trials and other mandated requirements are straining, if not reducing, investments in new technologies and innovation. The industry, clinical community, and also academia have not done enough to set expectations for medical implants. As a consequence, many patients have an over-expectation of the capabilities of technology. When performance falls short, the industry may experience a political, regulatory, and investor backlash. Related to this, patients may not possess the information and skills to properly assess risk. They may reject potentially life-saving technologies because of the remote possibility of device malfunction in the face of much greater disease-related risk. These factors, and others, must be more effectively balanced so the latent promise of biomedical engineering innovations can become reality. There
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1140–1141, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
is reason to be optimistic about the future. Patients and their families have come to appreciate how medical technologies have improved lives by restoring health and reducing suffering. There is now an expectation that new breakthroughs will emerge and that the future will be brighter than the past; that long-standing unmet clinical needs will be met.
Author: Paul Citron
Institute: University of California, San Diego
Street: 9500 Gilman Drive
City: San Diego
Country: USA
Email: [email protected]

Keywords— biomedical engineering advancement, patients, trials
Computer Aided Surgery in The 21st Century

T. Dohi, K. Matsumiya and K. Masamune
Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan

Abstract— To realize new surgical treatments in the 21st century, it is necessary to use various advanced technologies: surgical robots, three-dimensional medical images, computer graphics, computer simulation technology and others. Three-dimensional medical images for surgical operations provide surgeons with advanced vision. A surgical robot provides surgeons with an advanced hand, but it is not a machine that performs the same actions as a surgeon using scissors or a scalpel. The advanced vision and hands available to surgeons are creating new surgical fields for the 21st century: minimally invasive surgery, non-invasive surgery, virtual reality microsurgery, tele-surgery, fetal surgery, neuro-informatics surgery and others.

Keywords— computer aided surgery, surgical robot, three-dimensional image, true three-dimensional display, surgical navigation.
I. INTRODUCTION

Surgical operations have developed around the skillful use of the surgeon's hands and eyes. It is therefore very difficult to apply advanced technologies to surgical operations. To develop the new surgical fields of minimally invasive surgery, non-invasive surgery, virtual reality microsurgery, telesurgery, fetal surgery and others, it is necessary to use various advanced technologies based on computer technology: surgical robots, three-dimensional medical images, etc. This new surgical field is therefore called Computer Aided Surgery (CAS) [1]. Three-dimensional (3-D) medical images provide the most recognizable information for medical doctors and advanced visualization for surgeons. Surgical robots function as advanced hands for surgeons. The advanced vision and hands available to surgeons are creating a new surgical environment.

II. ADVANCED VISION

Usually, medical images in the surgical field are used mainly for diagnosis before and after the operation. Computer graphics technology visualizes the 3-D structure of organs, vessels and tumors from X-ray CT, MRI, echography and other data. The 3-D medical image as advanced vision in CAS is not limited to diagnosis; it is also important for surgical navigation in robotic
surgery. Such surgical navigation can bring out a surgical robot's full capacity. There are three kinds of 3-D display methods: 1) pseudo 3-D display, 2) binocular stereoscopic display, and 3) true 3-D display. The true 3-D display produces a 3-D image in real 3-D space; displays of this kind include holography, integral photography (IP), and the volume graph based on the principle of IP. As observation with this method is physiological, it does not cause visual fatigue; absolute 3-D positions and motion parallax are given. IP projects 3-D models using a 2-D lens array called a "fly's eye lens" (FEL) and photographic film. Recently, computer-generated IP has been developed using an FEL and a color liquid crystal display; it is named "Integral Videography" (IV) [2] and can display full-color video. The volume graph and IV give absolute 3-D positions, they are much simpler than holography with its interference of laser light, and they can project the 3-D internal structure of a patient into the patient's body exactly and easily. They are therefore very suitable 3-D displays for surgical navigation. Three-dimensional medical images during an operation by surgical robots are very important; in particular, the true 3-D display is, together with robot technology, the most important technology for CAS in the 21st century.

III. SURGICAL ROBOT [3]

An advanced hand for a surgeon is a medical instrument called a surgical robot or a therapeutic robot. There are two kinds of surgical robots for CAS: navigation robots and treatment robots. Suppose there were no electric washing machines in the world, and robotics researchers were asked to develop a washing robot. They would probably build a robot that imitates the washing work a human does. However, everybody knows that such a robot would be the wrong solution, and that the electric washing machine is the right one: the purpose of washing is to remove the dirt from clothes.
The same can be said of the surgical robot.
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1132–1133, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Surgical operations have developed around the skillful hands and eyes of the surgeon, but surgical robots do not perform the same actions as surgeons. In addition, the following can be said:

• Many surgical operations are not suitable for performance by a machine.
• A machine that performs the same actions as a surgeon cannot treat better than the surgeon.
Therefore, a surgical robot should not simply imitate the surgeon's actions; it must be designed with the following points in mind:

• It should be designed according to the purpose of the treatment.
• Its mechanical mechanism must be suitable for treatment by a machine.
• It should provide better treatment than is currently provided by the surgeon's eyes and hands.
• It should make the most of the surgeon's current knowledge and experience.
REFERENCES

1. Dohi T et al. (1990) Computer Aided Surgery System (CAS): Development of Surgical Simulation and Planning System with Three Dimensional Graphic Reconstruction. 1st Conference on Visualization in Biomedical Computing, IEEE, pp 458
2. Nakajima S et al. (1999) Development of a 3-D display system to project a 3-D image in a real 3-D space. Proc. of 3D Image Conference '99, pp 49-54
3. Dohi T (2004) Surgical Robots and Three-Dimensional Displays for Computer Aided Surgery. Recent Advances in Endourology 6, Endourooncology, Springer, pp 15-26

Author: Takeyoshi Dohi
Institute: University of Tokyo
Street: 7-3-1 Hongo
City: Tokyo
Country: Japan
Email: [email protected]
__________________________________________ IFMBE Proceedings Vol. 16 ___________________________________________
Innovations in Bioengineering Education for the 21st Century

J.H. Linehan
Department of Bioengineering, Stanford University, Stanford, USA

Abstract— The National Academy of Engineering published a report titled The Engineer of 2020 [1]. That report suggests that, to meet the future head-on, we train our students not only to possess strong analytical skills but also to have practical ingenuity, be creative, have excellent communication skills, understand leadership, have high ethical standards, and be lifelong learners. Cognitive scientists have been suggesting for the past 25 years that we change the educational process. They argue that effective learning methods have shifted from concentrating only on developing skills and expertise to focusing on students' understanding and application of knowledge [2]. Bioengineering curricula are being created worldwide as new departments and programs are formed [3]. There are around 50 new undergraduate programs in the US alone, and new curricula are largely being developed independently. As expected, bioengineering curricula have focused on developing deep skills through a biology-infused engineering curriculum. In 2005, The Whitaker Foundation convened an international summit meeting to explicate ideas on the new discipline, bioengineering [3]. Diversity is good for the educational "ecosystem": we can learn what works and what doesn't by sharing information amongst programs. In the US, an informal organization, BME-IDEA [4], has emerged to promote the teaching of design, innovation, and entrepreneurship in the bioengineering curriculum. Design and problem-based learning are two examples of experiential learning processes meant to train students to be innovative in their approach to problem solving, that is, to become "adaptive experts" [1]. Biomedical engineering applications are particularly engaging to students because they are "problems that matter". My talk will focus on two examples of learning methods that help students develop adaptive expertise: problem-based learning [5] and medical device design [6].

Keywords— diversity, teaching design, innovation, entrepreneurship
REFERENCES

1. The Engineer of 2020 – Visions of Engineering in the New Century. National Academy of Engineering, 2004
2. Bransford J (2007) Preparing People for Rapidly Changing Environments. J of Eng Edu
3. The Whitaker Foundation Biomedical Engineering Educational Summit Meeting, Lansdowne, VA, 2005, at http://bmes.seas.wustl.edu/WhitakerArchives/academic/
4. BME-IDEA.org, at http://www.stanford.edu/group/biodesign/bmeidea/
5. http://www.bme.gatech.edu/pbl/about.php
6. http://innovation.stanford.edu/jsp/program/about.jsp

Author: John H. Linehan
Institute: Stanford University
Street: 318 Campus Drive
City: Stanford
Country: USA
Email: [email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 1142, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Multi-dimensional fluorescence imaging

P.M.W. French
Photonics Group, Physics Department, Imperial College London

Abstract— Fluorescence offers many opportunities for optical molecular imaging and can provide information beyond simply the localisation of fluorescent labels. At Imperial we are developing technology to analyse and image fluorescence radiation with respect to wavelength, polarisation and, particularly, fluorescence lifetime, in order to maximise the information content. This talk will review our recent progress in applying fluorescence lifetime imaging (FLIM) and multidimensional fluorescence imaging (MDFI) to tissue imaging and in vitro cell microscopy. Applying FLIM to the autofluorescence of biological tissue can provide label-free contrast for non-invasive diagnostic imaging, as we have demonstrated in various tissues including atherosclerotic plaques, cartilage, pancreas and cervical tissue. FLIM and MDFI are also applicable to imaging intracellular structure and function for cell biology and drug discovery: hyperspectral imaging and FLIM can provide (quantitative) information concerning the local fluorophore environment and facilitate robust fluorescence resonance energy transfer (FRET) experiments, while information concerning structure and rotational mobility may be obtained by applying polarisation resolution. Our most recent work includes high-speed and optically-sectioned FLIM for automated imaging and live-cell studies, hyperspectral FLIM for acquiring excitation-emission-lifetime matrices to distinguish different fluorophores and microenvironments, and imaging of rotational correlation time, particularly applied to microfluidic devices. Excitation sources are a particular challenge for confocal microscopy and other FLIM modalities including endoscopy, owing to the complexity and limited spectral coverage of available technology. Increasingly we are exploiting ultrafast fibre lasers and continuously tunable ultrafast sources based on continuum generation in photonic crystal fibres for wide-field and confocal FLIM applications.

Keywords— fluorescence, imaging, excitation.
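As a schematic aside (not the group's actual analysis software), the core idea of fluorescence lifetime estimation can be illustrated by fitting a mono-exponential decay I(t) = A exp(-t/tau) with a log-linear least-squares fit; all values below are synthetic:

```python
import numpy as np

def fit_lifetime(t, intensity):
    """Estimate lifetime tau from a mono-exponential decay
    I(t) = A * exp(-t / tau) via a log-linear least-squares fit."""
    # log I = log A - t / tau, i.e. a straight line in t
    slope, log_a = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope, np.exp(log_a)

# Synthetic decay: tau = 2.5 ns, sampled over 10 ns
t = np.linspace(0.0, 10.0, 200)        # time in ns
signal = 1000.0 * np.exp(-t / 2.5)     # ideal (noise-free) photon counts

tau, amplitude = fit_lifetime(t, signal)
print(f"tau = {tau:.2f} ns, A = {amplitude:.0f}")  # tau = 2.50 ns, A = 1000
```

Real FLIM data are noisy and often multi-exponential, so practical pipelines use iterative or phasor-based fits rather than this minimal sketch.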
Author: Paul M.W. French
Institute: Imperial College
Street: Exhibition Road
City: London
Country: UK
Email: [email protected]
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 1134, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Nanomedicine: Developing Nanotechnology for Applications in Medicine

Gang Bao
Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA

Abstract— In this presentation I will discuss the recent development of nanomedicine as an emerging field in the United States. In particular, I will give a brief summary of the US National Institutes of Health (NIH) nanotechnology/nanomedicine centers established over the last few years, and present the bionanotechnologies being developed at the NIH nanomedicine centers at Georgia Tech and Emory University. The opportunities and challenges in developing nanomedicine will be discussed.
Keywords— nanomedicine, nanotechnology, molecular imaging, targeted therapy

I. INTRODUCTION

Nanomedicine is broadly defined as the development of engineered nano-scale (1-100 nm) structures and devices for better diagnostics and highly specific medical intervention in curing disease or repairing damaged tissues. Integrating nanotechnology, biomolecular engineering, biology and medicine, nanomedicine is expected to produce major breakthroughs in medical diagnostics and therapeutics. Owing to the size compatibility of nano-scale structures with proteins and nucleic acids, the development and application of nanostructured probes and devices provide unprecedented opportunities for achieving better control of biological processes and drastic improvements in disease detection, therapy, and prevention. Recognizing the great potential of nanomedicine, over the last two years the US National Institutes of Health has established 24 national centers in nanomedicine, with a total budget of about $300 M over a 5-year period. These centers include four Programs of Excellence in Nanotechnology (PEN), funded by the National Heart, Lung, and Blood Institute of NIH (NHLBI/NIH); eight Centers for Cancer Nanotechnology Excellence (CCNE), funded by the National Cancer Institute of NIH (NCI/NIH); and eight Nanomedicine Development Centers (NDC), funded by the NIH Roadmap Initiative in Nanomedicine. Each of these centers involves a multi-institutional collaboration and represents the state-of-the-art in the development of bionanotechnologies and their application to medicine. Together, these centers form the cutting edge of nanomedicine in the US. The title, lead institution(s), and PI(s) of each center are listed in Table 1.

II. MAJOR RESEARCH AREAS

Table 1 NIH-funded nanomedicine centers in the US

Center Name | Lead Institution(s) | PI(s)

NHLBI Programs of Excellence in Nanotechnology (PEN)
Nanotechnology: Detection and Analysis of Plaque Formation | Emory University and Georgia Tech | Gang Bao
Integrated Nanosystem for Diagnosis and Therapy | Washington University in St. Louis | Karen Wooley
Nanotherapy for Vulnerable Plaques | The Burnham Institute, San Diego | Jeff Smith
Translational Program of Excellence in Nanotechnology | Harvard University Medical School | Ralph Weissleder

NCI Centers for Cancer Nanotechnology Excellence (CCNE)
Carolina Center of Cancer Nanotechnology Excellence | University of North Carolina | Rudolph Juliano
Center for Cancer Nanotechnology Excellence Focused on Therapy Response | Stanford University | Sanjiv Sam Gambhir
Center of Nanotechnology for Treatment, Understanding, and Monitoring of Cancer | University of California San Diego | Sadik Esener
Emory-Georgia Tech Nanotechnology Center for Personalized and Predictive Oncology | Emory University and Georgia Tech | Shuming Nie and Jonathan Simons
MIT-Harvard Center of Cancer Nanotechnology Excellence | MIT and Harvard Medical School | Robert Langer and Ralph Weissleder
Nanomaterials for Cancer Diagnostics and Therapeutics | Northwestern University | Chad Mirkin
Nanosystems Biology Cancer Center | California Institute of Technology | James Heath
The Siteman Center of Cancer Nanotechnology Excellence | Washington University in St. Louis | Samuel Wickline
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1135–1136, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
NIH Nanomedicine Development Centers (NDC)
Center for Protein Folding Machinery | Baylor College of Medicine | Wah Chiu
National Center for the Design of Biomimetic Nanoconductors | University of Illinois Urbana-Champaign | Eric Jakobsson
Engineering Cellular Control: Synthetic Signaling and Motility Systems | University of California, San Francisco | Wendell Lim
Nanomedicine Center for Mechanical Biology | Columbia University | Michael Sheetz
Nanomedicine Center for Nucleoprotein Machines | Georgia Institute of Technology | Gang Bao
The Center for Systemic Control of Cyto-Networks | University of California Los Angeles | Chih-Ming Ho
NDC for the Optical Control of Biological Function | Lawrence Berkeley National Laboratory | Ehud Isacoff
Phi29 DNA-Packaging Motor for Nanomedicine | Purdue University | Peixuan Guo
In my presentation I will focus on the development and application of nanostructured probes. I will review recent advances in developing nanostructured probes for molecular imaging, including the design, synthesis, characterization and validation of molecular beacons. I will also discuss the novel properties and functions of nanoparticle probes such as quantum-dot bioconjugates and magnetic nanoparticles. These probes provide a new platform for molecular targeting and imaging in living cells and animals. Examples will be given of the application of these novel imaging probes to basic biological studies, such as gene expression in living cells, to disease studies including cancer and cardiovascular research, and to the detection of viral infection. New challenges in nanomedicine, such as deciphering the engineering design principles and fundamental biology of protein nanomachines, will also be discussed.

Author: Gang Bao
Institute: Georgia Tech
Street: 313 Ferst Drive
City: Atlanta
Country: USA
Email: [email protected]
Synthetic Biology – Engineering Biologically-based Devices and Systems

R.I. Kitney
Biomedical Systems Engineering, Department of Bioengineering, Imperial College London, London, Great Britain
Synthetic Biology is an emerging field that aims to design and manufacture biologically-based devices and systems that do not already exist in the natural world, including the re-design and fabrication of existing biological systems. The foundations of Synthetic Biology are the increasing availability of complete genetic information for many organisms, including humans, and the ability to manipulate this information in living organisms to produce novel outcomes. More specifically, engineering principles, including systems and signal theory, are used to define biological systems in terms of functional modules, creating an inventory of 'bioparts' [1] whose function is expressed in terms of accurate input/output characteristics. These 'bioparts' can then be reassembled into novel devices acting as components for new systems in future applications. Systems Biology aims to study natural biological systems as a whole, often with a biomedical focus, and uses simulation and modeling tools in comparison with experimental information. Synthetic Biology aims to build novel and artificial biological systems using many of the same tools, but it is the engineering application of biological science rather than an extension of bioscience research. There is thus quite a close relationship between Synthetic Biology and Systems Biology. The basis of quantitative Systems Biology lies in the application of engineering systems and signal theory to the analysis of biological systems. This allows the definition of systems in terms of mathematical equations and complex models, often as individual functional blocks (i.e. transfer functions). Once a system, or part of a system, has been described in this way, Synthetic Biology allows the reduction of the system to parts (bioparts) whose function is expressed in terms of input/output characteristics.
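To make the input/output characterisation of a biopart concrete, here is a minimal sketch of a hypothetical activating part described by a steady-state Hill-type transfer function; the function name and parameter values are illustrative assumptions, not entries from any real parts repository:

```python
def biopart_response(inducer, v_max=1.0, k_half=0.5, hill_n=2.0):
    """Steady-state input/output characteristic of a hypothetical
    activating biopart: output = v_max * x^n / (K^n + x^n)."""
    xn = inducer ** hill_n
    return v_max * xn / (k_half ** hill_n + xn)

# A crude "specification sheet": output level at a few input levels
for x in (0.0, 0.5, 2.0):
    print(f"input {x:4.1f} -> output {biopart_response(x):.3f}")
```

A datasheet of this kind (half-maximal input K, maximum output, cooperativity n) is the sort of input/output summary a system designer could use when composing parts into devices.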
These characteristics are then presented on a standard specification sheet so that a system designer can understand the functional characteristics of the part. The parts are then entered into a repository. The parts defined in the repository can then be combined into devices and, finally, into systems. For example, in the same way that standard engineering devices, such as an oscillator, can be realized in terms of fluidics, pneumatics and electronics, biologically based oscillators can now be realized in terms of protein concentrations [2]. Tolerances are built into the design of any engineering part, device or system to compensate for imperfections in manufacturing. Bioparts tend to have wider tolerances than standard engineering parts, so biologically-based devices are designed to accommodate this.

Synthetic Biology uses the classic engineering reductionist method, whereby complex systems are built from defined parts and devices. In addition, the approach to the design of such devices and systems which has been successfully implemented is that of the Engineering Cycle, illustrated in Figure 1 below. The Engineering Cycle starts with defining the specification for the device or system which is to be designed and built. The next step is to design the device or system on the basis of the specification. Frequently in engineering the design is then tested by extensive modeling; in the case of Synthetic Biology this is almost always an important step. As can be seen from Figure 1, modeling is followed by implementation, testing and validation.

Fig. 1 The engineering cycle: Specification -> Design -> Modelling -> Implementation -> Testing/Validation

Synthetic Biology could revolutionize a number of fields of engineering. Materials are one example of a potentially important area. Here, Synthetic Biology involves harnessing biological processes (on an industrial scale) to produce new materials. In many areas of industry, for example the aeronautical industry, there is a pressing need for materials that are very strong but, simultaneously, extremely light. In aircraft design, if it were possible to significantly reduce the weight of the aircraft, there would be immediate and major improvements in fuel consumption. The understanding and manipulation, via Synthetic Biology, of the biological processes that control the production of such materials could result in the synthesis of a whole range of new materials. This would significantly change and invigorate several industrial sectors, such as civil engineering, aeronautical engineering and the automotive industry.

Biologically based electronics and computing are another important area. Biologically synthesized devices may be operationally many thousands of times slower than their electronic equivalents, but this may be an advantage if such devices are used to monitor biological processes (i.e. the time constants of the devices match the environment in which they operate).

We may well be at a similar point today to where the great industries of the twentieth century (mechanical, electrical, aeronautical engineering, etc.) were at the end of the nineteenth century, i.e. at the dawn of a new era of engineering. The biologically based engineering industries of the future will arise from the Cellular and Molecular Biology revolution of the last fifty years. The engineering application of this new knowledge via Synthetic Biology will result in new industries with similar, if not greater, potential for enormous wealth generation.

Author: Richard I. Kitney
Institute: Imperial College London
Street: Exhibition Road
City: London
Country: United Kingdom
Email: [email protected]

T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, pp. 1138–1139, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
The Physiome Project: A View of Integrative Biological Function

C.F. Dewey
Department of Mechanical Engineering, Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, USA

Abstract— The Physiome Project comprises a worldwide effort to provide a computational framework for understanding human and other eukaryotic physiology. The aim is to develop integrative models at all levels of biological organization, from genes to the whole organism, via gene regulatory networks, protein pathways, integrative cell function, and tissue and whole-organ structure/function relations. A key hallmark of the Physiome is that it covers many physical scales of description, from molecule-molecule interactions to whole-cell behaviour to whole-organ descriptions. This talk will stress the computational and semantic layers of the Physiome, the mathematical and logical "glue" that allows the various physiological scales to communicate and work with one another. The first knowledge domain is Ontologies. An ontology is a specific expression of known facts about the real world. Work on ontologies is being undertaken in order to organize biological data and knowledge at the different levels of the biological continuum. An additional and important component of this work is to facilitate easy and effective access to a range of databases, and to facilitate automated reasoning that can simultaneously extract information from many databases. We will illustrate how ontologies can be used to create and manage databases in an intelligent manner. The second knowledge domain is Databases, with which biology and medicine are replete; the Physiome spans information from the smallest scales, such as genes and proteins, to whole organs, such as the beating heart. Databases have been designed to hold experimental data such as those from medical images and microarrays, and they have also been constructed to hold consensus information such as curated scientific "truth" about genes and proteins.
A considerable amount of work has been undertaken to integrate the meaning of different facts that appear in these different databases, but unfortunately that process today is very time-consuming and difficult. We will discuss how working first with the ontologies and then deriving the databases from them makes this problem much easier. The third leg of the Physiome has to do with quantitative prediction using explicit Biologically-Based Models. A number of markup languages based on the XML standard have been developed to describe these models, e.g. SBML, CellML and TissueML. The languages are designed to facilitate the encoding of models of biological structure and function in a standard format. The markup languages represent common semantic understandings which have been developed to greatly enhance the ability to share data and models. It is also possible to re-use parts of the more comprehensive models in new models, often being developed by other groups of workers. An example of this reuse is the Cytosolve system of shared solutions to biological pathways.

Keywords— Physiome project, databases, markup languages

Author: C. Forbes Dewey
Institute: Massachusetts Institute of Technology
Street: 77 Massachusetts Avenue
City: Cambridge
Country: USA
Email: [email protected]
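To suggest what such XML model encodings look like, here is a deliberately simplified sketch built with Python's standard library; the element names are illustrative only and do not follow the actual CellML or SBML schemas:

```python
import xml.etree.ElementTree as ET

# Build a simplified, CellML-inspired model fragment (illustrative only;
# real CellML/SBML use different element names, namespaces and MathML).
model = ET.Element("model", name="simple_decay")
comp = ET.SubElement(model, "component", name="membrane")
ET.SubElement(comp, "variable", name="V", units="millivolt", initial_value="-80")
ET.SubElement(comp, "variable", name="tau", units="millisecond", initial_value="10")
# The rate law dV/dt = -V/tau, stored here as a plain-text annotation
ET.SubElement(comp, "math").text = "d(V)/d(t) = -V / tau"

print(ET.tostring(model, encoding="unicode"))
```

The point of such encodings is that variables, units and equations are machine-readable, so models from different groups can be validated, exchanged and recombined.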
T. Jarm, P. Kramar, A. Županič (Eds.): Medicon 2007, IFMBE Proceedings 16, p. 1137, 2007 www.springerlink.com © Springer-Verlag Berlin Heidelberg 2007
Index Authors A Abbas, Abbas K. 385 Abbott, D. 26 Abramovic, Z. 453 Acampora, F. 762 Acampora, S. 762 Accardo, A. 78, 445 Accetto, R. 354 Akerman, S. 810 Alcaraz, R. 54, 90, 94 Alesch, F. 651 Alizad, A. 1021 Alvarenga, A.V. 1025 Amon, S. 178 Andersen, O.K. 669 André, F.M. 623 Anier, A. 554 Argenziano, L. 758, 1066 Arifin, A. 647 Armas, J. H. 313 Arora, V. 685 Arredondo, M.T. 558 Auersperg, M. 469 Azevedo-Coste, C. 654
B Bachmann, M. 210 Bajd, B. 365 Bajd, T. 262, 950, 954, 982 Bajic, D. 773 Bakker, J.M.T. de 42 Bao, Gang 1135 Baranov, V. 911 Bardorfer, A 965 Barea, R. 1038 Baretich, M. 1051 Barison, S. 194 Barlic, A. 249 Baroni, M. 847 Barraco, R. 919 Barrella, M. 932 Barthel, P. 38 Batiuskaite, Danute 606 Bauer, A. 38 Bauld, T. 1051 Baumert, M. 26 Belic, A. 478, 501
Belkov, S.A. 856 Bellazzi, R. 14 Bellomonte, L. 919 Beltrame, M. 723, 732 Benazzo, F. 238 Benca, Maja 357 Berdondini, L. 525 Berner, N.J. 190 Bertolini, Giovanni 977 Bester, J. 323, 332, 737 Bevilacqua, M. 932 Bianchi, A.M. 30, 473 Biffi Gentili, G. 752 Bifulco, P. 369, 426, 777, 789, 990, 1096 Bijak, M. 658 Bijelic, G. 654 Birkfellner, W. 834 Bisaccia, L. 758 Blas, J.M. 54 Blazun, H. 716 Blinc, A. 859 Bliznakov, Z. 928 Bliznakov, Z.B. 1092 Bliznakova, K. 923, 928 Bocchi, L. 1062 Boccuzzi, D. 426 Bocquet, B. 170 Bohanec, Marko 708 Bohinc, K. 903 Bojic, T. 482 Bojkovski, J. 338, 361 Bollmann, A 82 Bondarenko, S.V. 856 Boquete, L. 1038 Borean, M. 445 Bosazzi, P. 723, 732 Botelho, M.L.A. 319 Bottacci, D. 1102 Bouatmane, S. 843 Bouridane, A. 843 Bracale, M. 758, 762, 1066 Brai, M. 919 Bravar, L. 445 Brennan, T.P. 50 Brezan, S. 478, 501 Brezovar, D. 965 Brguljan-Hitij, J. 354 Bruno, P. 509, 513
Buchgeister, M. 313 Buchvald, P. 270 Bunce, S. 473 Burdo, S. 390 Burger, H. 943, 965 Burger, J. 875 Busayarat, S. 822 Butolin, L. 943 Butti, M. 473 Buyko, S. 346
C Cabral, J. 505 Cadossi, R. 10 Cagy, M. 492 Calani, G. 752 Calil, S.J. 319, 1085 Campbell, R.I. 969 Capek, L. 270 Carrozzi, M. 445 Casar, B. 887 Castrichella, L. 745, 749 Cavallini, Anna 152 Cemazar, M. 465, 469, 582, 586, 589, 602, 618 Chen, C.H. 242 Cerutti, S. 30, 473 Cervigon, R. 54 Cesarelli, M. 369, 426, 777, 789, 990, 1096 Cestnik, Bojan 708 Cha, H. 274 Chiap, A. 445 Chiappalone, M. 525 Cho, S.W. 1030, 1034 Choi, Y.K. 1034 Christodoulou, C. 118 Christofides, S. 313, 899 Cifrek, M. 529 Cigale, B. 1013, 1017 Cikajlo, I. 936 Cimerman, M. 327, 915 Ciprian, S. 954 Citron, P. 1140 Ciupa, R.V. 665, 895 Coer, A. 465, 589 Collins, C.G. 628 Constantinou, C.E. 286
Conti, S. 194 Corbi, G. 78 Corino, V.D.A. 82 Coronel, R. 42 Corovic, S. 323 Corvi, Andrea 296 Couderc, B. 630 Cret, L. 665 Cugelj, R. 958 Cukjati, David 606 Cunha, D.F. 319 Cunningham, V.J. 457 Curone, D. 986 Cusella De Angelis, M.G. 238 Cvetkov, A. 1 Cvikl, M. 66
D D’Addio, G. 78 Daniel, M. 246, 282, 915 Dapena, M. A. 1038 Darowski, M. 413, 416, 871 David, Y. 1051 De Berardinis, T . 426 Debevec, H. 915 Del Guerra, A. 313 Delp, Scott 685 Dessel, P.F.H.M. van 42 Dewey, C.F. 1137 Di Giacomo, P. 1062 Di Salle, F. 509 Díaz-Zuccarini, V. 895 Diciotti, S. 847 Dickey, D. 1051 Dinevski, D. 719, 723 Docampo, M. 558 Dohi, T. 1132 Dolenc, L. 864 Dolenc, Primoz 357 Dolinar, D. 282 Dori, F. 752, 1102 Dosen, Strahinja 661 Dössel, O. 541 Dragin, A. 482 Drnovsek, J. 338, 342 Drobnič, M. 249, 253
E Efstathopoulos, E. 899 Eljon, M. 332
Emborg, J. 669 Ergovic, V. 677 Eroshenko, V. 911 Escoffre, J.M. 624 Eudaldo, T. 313 Evangelisti, A. 847, 932
F Fajdiga, I. 864 Farina, D. 109, 541 Fassina, L. 238 Fatemi, M. 1021 Faustini, G. 723, 727 Fazekas, G. 266 Feng, J. 278 Fernandes, H. 505 Ferrara, N. 78 Ferrario, M. 781 Fidler Mis, Natasa 166 Filligoi, Gian Carlo 124 Fink, M. 50 Fischer, R. 22 Forjaz Secca, M. 505 Fortunato, P. 847 Frank, M. 566 Fratini, A. 369, 426, 789, 990, 1096 French, P.M.W. 1134 Fridolin, I. 350 Frigo, C.A. 292 Fritzson, P. 685 Fujii, T. 170 Furuse, Norio 689
G Gados, D. 802 Gajsek, P. 218, 222, 234 Gamberger, D. 157 Garfield, R. E. 128 Gargiulo, G. 369 Gazzoni, M. 109 Gerogiannis, I. 879, 899 Gersak, G. 342 Ghandour, H. 170 Giacomini, M. 693 Giansanti, D. 745, 749, 1006 Gieras, I. 1051 Gilly, H. 1070 Giovagnoli, M.R. 745, 749
Glapinski, J. 413 Glaser, V. 105 Goldmann, T. 300, 304 Goljar, N. 936 Golzio, M. 610, 624, 630 Gopalsami, N. 346, 911 Gorisek-Humar, M. 681 Goszczynska, Hanna 793 Gouliamos, A. 899 Grabec, D. 867 Grabljevec, K. 393 Grasser, S. 1058 Greenleaf, J.F. 1021 Grmec, Š. 716 Grobelnik, B. 859 Grosel, A. 582, 589 Gryaznova, V.A. 257 Gudmundsson, V. 139 Guelaz, R. 377 Güler, G. 230, 214
H Hafner, C. 1058 Halasz, G. 430 Hamar, G. 818 Hamid, Azman 1089 Han, K.W. 839, 1030, 1034 Hana, K. 86 Hart, F.X. 190 Hasan, Muhammad Kamrul 405 Hatakeyama, Y. 286 Heblakova, E. 86 Heida, T. 521 Heyman, J. 1051 Hidalgo, M. A. 1038 Himmlova, L. 300 Hinrikus, H. 210 Hocevar, F. 958 Hofer, C. 658 Hofstra, W.A. 487 Holcik, J. 62 Holder, D.S. 798 Holobar, A. 105, 109, 114 Hong, J. 274 Horvath, G. 802, 818 Hose, D.R. 895 Hudej, R. 875, 883 Husser, D 82 Hyman, W. 1051 Hyndman, B. 1051
I Iadanza, E. 752, 1102 Ide, A.N. 525 Iglic, A. 246, 282, 903 Ihan Hren, N. 943 Ilias, Michail 1122 Inchingolo, P. 509, 513, 719, 723, 727, 732, 1107 Inchingolo, P. 1077 Infantosi, A.F.C. 492, 1025 Innocenti, Bernardo 296 Isgum, V. 529 Istenic, R. 114 Ivanova, T. 923 Ivanovski, M. 282 Izzetoglu, M. 473
J Jafari, Ayyoub 99 Jager, F. 34 Jagomägi, K. 562 Jan, J. 174 Jancar, J. 234 Japundzic-Zigon, N. 773 Jarm, T. 148, 469, 1002, 1009 Javorka, K. 766, 769 Javorka, M. 766, 769 Javorkova, J. 766, 769 Jelovsek, A. 704 Jenko, M. 737 Jerotskaja, J. 350 Jobbagy, A. 266 Johnson, J.H. 190 Jovic, Alan 549 Jovic, S. 482 Jung, Y.C. 1034
K Kaik, J. 554 Kalaitzis, A. 879 Kamarianakis, Z. 826 Kamnik, R. 288, 950, 954 Kantelhardt, J.W. 38 Kaplanis, P. 118 Kaplanis, P.A. 879, 899 Kapus, J. 994 Kapus, V. 994, 1002 Karas, S. 86 Karba, R. 478, 501
Kardamakis, D. 923 Karlsson, B. 135, 139 Kasovic, M. 677 Kasthuri, U. 856 Katrasnik, J. 947 Kawahara, K. 537 Keil, O. 1051 Keller, J. 1051 Kern, H. 658 Kervina, D. 737 Khang, G. 274, 835 Khir, A.W. 278 Kim, D.Y. 1030 Kim, I.Y. 839, 1030, 1034 Kim, J.J. 839, 1034 Kim, S.I. 839, 1030, 1034 Kim, Y. 274 Kinsella, R. 1055 Kitney, R.I. 814, 1138 Klemen, A. 681 Knaduser, Masa 570 Kneppo, P. 86 Kochemasov, G. 856, 911 Koder, J. 253 Kohn, A.F. 419 Kokol, P. 716, 719 Konno, D. 442 Konvickova, S. 300 Koren, A. 864 Koritnik, B. 478, 501 Koritnik, T. 262 Kos, A. 323 Kostka, P.S. 70 Kotnik, S. 965 Kotnik, T. 639 Kourtiche, D. 377 Kozarski, M. 416 Kozelek, P. 62 Kozelj, A. 716 Krajnc, I. 719 Krajnik, J. 681 Kralj, P. 157 Kralj-Iglic, V. 246, 282, 566, 903, 915 Kramar, P. 574, 578 Kranjc, M. 574 Kranjc, S. 469, 582, 589, 602 Krbot, M. 529 Krcevski-Skvarc, N. 716 Krečič-Stres, H. 253 Kregar-Velikonja, N. 249, 253 Krevs, Luka 381
Kristan, A. 915 Kristl, J. 453 Krizaj, D. 174, 178, 182, 393 Krizmaric, M. 716 Krkovič, M. 253 Krstacic, A. 157 Krstacic, Goran 549 Krzan, M. 566 Ku, J.H. 839, 1030, 1034 Kulikov, S. 346 Kuraszkiewicz, B. 871 Kurillo, G. 478 Kybartaite, A. 329
L La Gatta, A. 990 La Torre, A. 847 Lackovic, I. 631 Laguna, P. 74 Landa, M. 304 Lanmüller, H. 651 Lass, J. 210, 434 Lauri, K. 350 Lavrac, N. 157, 708 Lawford, P.V. 895 Lazarevic, I. 741 Leal, A. 505 Lebar, A. Macek 578 Lee, H.R. 839 Lee, S. 274 Lee, W.H. 242, 839 Legan, M. 465 Legrand, D. 170 Lendyak, A.A. 700 Lenic, M. 1013 Lennon, E. 170 Leskosek, B. 131 Levin, O. 643 Lewis, C. A. 1 Lindroos, K. 329, 336, 1111 Lindroos, K.V. 1130 Linehan, J.H. 1142 Linnenbank, A.C. 42 Lipschultz, A. 1051 List, I. 282 Livint, Gh. 954 Ljesevic, B. 482 Lo Sapio, M. 969 Loffredo, L. 426 Logar, V. 478, 501 Lokar, M. 566
Loncar-Turukalo, T. 773 Londero, F. 1107 Lorandi, F. 693 Lorens, A. 940 Lu, S.K. 242 Lucache, D. 954 Lukosevicius, A. 1126 Luman, M. 350 Lunghi, F. 781, 986 Lyubynskaya, T. 438, 911
M Maccioni, G. 1006 Macek-Lebar, A. 144, 148, 178, 332 Macellari, V. 1006 Macrini, J.L.R. 1025 Magenes, G. 238, 781, 986 Magjarevic, R. 46, 58, 397, 631 Magli, A. 426 Mahady, J. 1055 Mahmudi, Hedyeh 696 Mainardi, LT 82 Malatara, G. 923 Malataras, P.G. 1092 Maličev, E. 253 Malik, M. 74 Malmivuo, J. 329, 1111 Malmivuo, J.A. 336, 1115 Maner, W. L. 128 Manis, G. 785 Marani, E. 521 Mareš, T. 246 Margo, C. 186 Marini, E. 752 Marjanovic, T. 206 Marolt, D. 253 Marozas, V. 1126 Marque, C. 135, 139 Martinoia, S. 525 Martorelli, M. 969 Martynov, A. 346 Marwala, T. 806 Masamune, K. 1132 Maslov, N.V. 856 Masuko, T. 647 Mateo, J. 54, 90 Matjacic, Z. 673, 681, 936 Matko, D. 501 Matsumiya, K. 1132 Matsuoka, T. 442 Mattei, S. 1102
Mavcic, B. 282, 915 Maver, T. 943 Mayr, W. 658 Mazurier, J. 170 Mazzolini, L. 624 McEwan, A.L. 798 Medved, V. 677 Meesen, R.L.J. 643 Meigas, K. 434, 554 Melillo, P. 758 Melinscak, M. 198 Melnyk, O.O. 257 Mendez, M.O. 30 Mendonca, F.B. 319 Meneghini, F. 509, 513 Merletti, R. 109, 114 Merzagora, A. C. 473 Mesojednik, S. 589 Micetic-Turk, D. 716 Michnikowski, M. 413 Micieli, Giuseppe 152 Mihel, J. 58 Mikac, U. 859 Miklavčič, D. 178, 218, 226, 323, 332, 381, 570, 574, 578, 593, 597, 602, 606, 631, 635, 639, 851 Milicic, P. 1100 Millet, J. 54 Milutinovic, S. 773 Mininel, S. 509, 513, 723 Miodownik, S. 1051 Mir, L.M. 606, 622, 623 Miri, R. 541 Mödlin, M. 658 Molan, G. 162 Molan, M. 162 Molnar, F. T. 430 Moon, H.J. 835 Morradi, M.H. 99 Morrissey, A. 628 Morse, W. 1051 Munih, M. 262, 288, 950, 965, 973, 982 Murayama, Y. 286 Mutapcic, A. 423
N Nadi, M. 186, 377 Nagel, J.H. 1043, 1118 Nagel, M. 1043
Nakayama, Y. 537 Nalivaiko, E. 26 Nam Koong, K. 1034 Narracott, A.J. 895 Nascimento, L.N. 1085 Nekhoul, B. 843 Neumann, Eberhard 18 Ng, S.C. 517 Niknam, Kaiser 696 Niknam, Sahar 696 Nikolopoulos, S. 785 Noaman, Noaman M. 385 Norgia, M. 390 Nousiainen, J. 329, 1111 Nousiainen, J.O. 336, 1115 Novak, D. 148 Nuzhny, A. 438
O O’Sullivan, G.C. 628 Oblak, J. 178 Obreza, P. 950 Obrycka, A. 940 Ogawa, M. 442 Olensek, A. 673, 681 Olsen, K. J. 313 Omata, S. 286 Omejec, G. 998 Onaral, B. 473 Oostendorp, T.F. 42 Oosterom, A. van 42 Osorio, I. 346, 911 Osswald, B. 541 Ott, J. 1051 Ottaviano, M. 558 Ozgur, E. 214
P Pacini, G. 194 Padovani, R. 313 Paganin-Gioanni, A. 624 Paglialonga, A. 390 Painter, F.R. 1085 Palko, K.J. 416 Palko, T. 940 Pallikarakis, N. 826, 923, 928 Pallikarakis, N.E. 373, 1092 Panzarasa, Silvia 152 Papic, M. 323 Parazzini, M. 390
Park, J.S. 839, 1030, 1034 Park, K.S. 835 Park, Y. 274 Pasquariello, G. 426, 990, 1096 Patail, B. 1051 Pattichis, C.S. 118 Pauly, J.M. 423 Pavan, E.E. 292 Paver-Erzen, Vesna 327 Pavlic, J. 903 Pavliha, Denis 381 Pavlin, D. 589 Pavlin, M. 593, 635 Pavlovic, I. 741 Pavlycheva, I.Yu. 856 Pavselj, N. 597 Pecchia, L. 758, 762, 1066 Pedreira, C.E. 1025 Peer, P. 947 Peinado, I. 558 Pentony, P.J.C. 1055 Perdan, J. 950 Pereira, W.C.A. 1025 Pérez, J. F. 1038 Pesatori, A. 390 Pessina, Mauro 152 Petersson, G. 685 Petric, P. 875 Piggott, J. 628 Pilt, K. 434 Plesa, M. 665 Poboroniuc, M.S. 954 Podobnik, J. 973 Podsiadly-Marczykowska, T. 871 Poh, C.-L. 814 Poli, A. 723, 732 Pone, A. 1066 Poola, G. 202 Popovic, B. 661 Popovic, D.Lj. 310 Popovic, D.B. 3, 654 Popovic, Mirjana B. 3 Prado, J. 186 Prado, Manuel 712 Praprotnik, L. 365 Praznikar, A. 681 Prcela, Marin 549 Preat, V. 597 Pucihar, G. 639 Pueyo, E. 74 Pur, Aleksander 708 Pusnik, I. 338, 401
Pustisek, M. 737 Putten, M.J.A.M. van 487, 497, 756
Q Quaglini, Silvana 152
R Raamat, R. 562 Radosavljevic, D. 249 Rafiroiu, D. 895 Rajalakso, M. 1130 Rajsman, G. 46 Rakos, M. 658 Ramat, S. 977, 986 Ramos, P. 1038 Raptis, A.C. 346 Ravazzani, P. 390 Raveendran, P. 517 Rawicz, M. 413 Rebersek, M. 381, 574 Rener, K. 354 Rengo, F. 78 Reumann, M. 541 Reyes-Aldasoro, C.C. 810 Rhee, K. 835 Riener, Robert 7 Rieta, J.J. 90, 94 Roa, Laura M. 712 Rodriguez, B. 50 Rols, M.P. 610, 624, 630 Romano, M. 369, 426, 777, 789, 990, 1096 Rose, J. 685 Rosik, V. 86 Rosmann, M. 434 Rossum, A.C. van 42 Rouane, A 186 Rozman, B. 246, 566 Rubenchik, A. 856 Rubin, D.M. 806 Rubinsky, Boris 629 Rudel, D. 131, 144, 148 Rudolf, M. 936 Ruggiero, C. 693 Rutar, V. 478, 501
S Sadadcharam, M. 628 Saino, E. 238
Sakhaeimanesh, A.A. 545 Sakuma, H. 286 Salerud, E.G. 1122 Salobir, B. 354 Salvi, D. 558 Samini, Mahdi Ghorbani 696 Sanchez, C. 54, 90 Sanders, P.M.H. 756 Sandholm, A. 685 Sanguineti, V. 525 Sarabon, N. 998 Sarker, Md. Atiqur Rahman 405 Sarvari, A. 887 Sayadat, Md. Nazmus 405 Sbrignadello, S. 194 Scabar, A. 445 Scherbakov, A. 350 Schlegel, W. 313 Schmidt, G. 38 Schneider, R. 38 Schwirtlich, L. 482, 654 Secerov, A. 469 Sefer, A.B. 529 Seiner, H. 304 Senez, V. 170 Sentjurc, M. 453, 570 Sersa, G. 465, 469, 582, 589, 602, 614 Sersa, I. 859 Seyhan, N. 214, 230 Shakhova, N.M. 856 Sharp, P. F. 313 Shepherd, M. 1051 Shrestha, R.B.K. 814 Shumsky, S. 438 Sibella, F. 390 Signorini, M.G. 781 Silva, L.B. Da 856 Simunic, B. 393 Skannavis, S. 879 Skarzynski, H. 940 Sladoievich, E. 752 Smrcka, P. 86 Smrdel, A. 34 Soden, D.M. 628 Soimu, D. 826, 928 Sovilj, S. 46 Spaich, E. 669 Spasic-Jokic, V.M. 310 Spyrou, S.P. 879 Stankiewicz, B. 413 Stankovic, S. 310
Stankovski, Vlado 166 Stare, Z. 206 Starfield, D.M. 806 Stefanovic, A. 482 Stefanoyiannis, A.P. 879, 899 Steingrimsdottir, T. 135, 139 Stern, M. 704 Stet, D. 665 Stiblar-Martincic, D. 465 Stikov, N. 423 Stimec, Matevz 166 Stirn, I. 1002, 1009 Stoeva, M. 1 Strojan, P. 867 Strojnik, A. 891 Strojnik, V. 1002, 1009 Strumbelj, B. 994 Stublar, J. 288 Surace, A. 752 Svelto, C. 390 Swain, Martin 166 Swinnen, S.P. 643
T Tabakov, S 1 Takenoshita, S. 286 Talts, J. 562 Tamzali, Y. 610, 630 Tan, H.L. 42 Tanougast, C. 843 Taran, E.Yu. 257 Tarassenko, L.T. 50 Tarjan, Zs. 818 Teissié, J. 610, 618, 624, 630 Terio, H. 1047, 1074 Terrien, J. 135, 139 Tevz, G. 589 Thierry, J. 1058 Tian, T.Y. 242 Tkacz, E.J. 70 Tobey Clark, J. 1051 Tognola, G. 390 Tomruk, A. 230
Tomsic, I. 681 Tomsic, M. 282, 961 Tomson, R. 210 Tonhajzerova, I. 766, 769 Tonin, M. 915 Tonkovic, S. 677 Toomessoo, J. 202 Torkar, Drago 851 Toscano, R. 932 Tozer, G.M. 457, 810 Tozon, Natasa 586 Tranfaglia, R. 1066 Tratar, G. 859 Trcek, T. 222 Treizebré, A. 170 Trunkvalterova, Z. 766, 769 Tura, A. 194 Turk, Z. 716 Tuulik, V. 210 Tysler, M. 86
U Urbanija, J. 246, 566 Usaj, A. 461, 994 Usaj, Marko 851 Usenik, P. 288 Usunoff, K.G. 521
V Valchinov, E.S. 373 Valic, B. 218, 222, 226, 234 Väliharju, T. 1130 Varga, D. 773 Vatta, F. 509, 513, 723, 1077, 1107 Vaya, C. 90 Veber, M. 947, 982 Verdenik, I. 131 Vidmar, G. 131 Vidmar, J. 859 Vilimek, M. 308 Villalba, E. 558 Villantieri, O.P. 30 Virag, T. 818 Visai, L. 238 Vizintin, T. 1002 Vrhovec, J. 144, 332
W Walkowiak, A. 940 Wasowski, A. 940 Watanabe, T. 647, 689 Wear, James O. 1081 Weingartner, J. 943 Wernisch, J. 651 Wheeler, B.C. 477 Woloszczuk-Gebicka, B. 413 Wong, J. 830
X Xyda, M.G. 118
Y Yajima, T. 286 Yeh, H.I. 242 Yoshizawa, M. 647
Z Zaeyen, E.J.B. 492 Zagar, T. 174, 182 Zazula, D. 105, 109, 114, 118, 1013, 1017 Zemva, A. 66, 357 Zidar, J. 478, 501 Zielinski, K. 416 Zoia, S. 445 Zrimec, T. 822, 830 Zupan, A. 681, 958 Zupanic, A. 226 Zupunski, I. Z. 310 Zuzek, A. 943 Zywietz, T.K. 22
Index Subjects
2 2D/3D 834
3 3-axes accelerometer 369 3D Motion Analysis 300 3D reconstruction 826 3D ultrasound images 1017 3-dimensional scaffold 249
A a model of lung and airway 871 absorbance 350 Accessible Web Design 737 accuracy 338, 361 acetabular fracture 915 Acetylcholine 521 action potential duration 42 activation time imaging 42 active medical implanted devices 390 adaptive filtering 135, 990 Adaptive stuttering therapy 712 Afferent stimulation 643 Aggregated autocorrelogram 442 Agreement detector 442 AH-graph 162 AH-model 162 AH-semantic 162 airway segments 871 alcohol craving 1034 ambient assisted living 397, 723 ambulatory blood pressure monitoring 357 amiodarone 82 amplifier 373 analog 879 anisotropy 509 Antennessa DSP 090 222 Anthropomorphic robotics 986 antioxidant 214 Antiphospholipid antibodies 566 antitumor treatment 622 Apolipoprotein 246 applanation tonometry 354 approximate entropy 144
approximate entropy algorithm 785 approximation methods 677 Arm Therapy 7 ARMAX 385 arrhythmic death 74 arterial catheter 430 artificial neural networks 438, 501 Artificial Neural Networks (ANN) 478 Artificial vision 977 assessment 1070 Assistive Technology 737 atherosclerosis 835 Atrial Activity 94 atrial fibrillation 82 Atrial Fibrillation 54, 94 atrio-ventricular block 541 attention 473 attenuation measurements 923 Austria 1070 Automated cell counting 851 automatic clinical tracing 1062 autonomic nervous system 38, 82 Autonomic nervous system 932 Avascular necrosis 282 avatar 1034 a-wave 919
B balance 998 baroreceptor reflex 773 Baseline 90 battery powered 798 BEMS 1089 bending stiffness 270 Berg balance scale 936 Beta-2-glycoprotein I 566 Beta-2-glycoprotein-I 246 between-step control 673 biceps brachii muscle 109 binocular 977 bioelectric phenomena 631 Biofeedback 961 bioimpedance 182, 202 bioimpedance spectroscopy 186 biomechanics 915 Biomechanics 282
Biomedea 1043 biomedical engineering 1089, 1115 Biomedical engineering 1122, 1126 Biomedical Engineering 329, 1111 biomedical engineering advancement 1140 Biomedical Engineering Technicians 1081 Biomedical informatics 319 biomedical instrumentation 369 biomedical technology management 1092 Biomems 170 biomimetic artifacts 986 biomimetics 238 biopotential electrode 373 biosignal measurement 86 Bishop Score 131 biventricular pacing 541 bleomycin 602, 614, 622 blood 186 blood clots 859 blood flow rate 457 Blood pressure 434 blood pressure measurement 342 blood pressure variability 357, 773 blood pulsation 839 Blood Velocity 810 blood volume 461 blood-transfusion 1062 Bluetooth 369, 798 BME 1077 BME accreditation 1118 BME education 1107, 1118 BMET 1081 BMSC 253 body area network 397 body surface mapping 42 BOLD 505 Bologna 1122 Bolus Processing 300 bone healing 253 bone marrow 253 botulinum toxin 393 brain 461 brain activity mapping 513 brain ischaemia 157 brain lesion 509
brain microcooler 911 Brain symmetry index 487 brain trauma 482 breast 1025 breast cancer 438, 856 Breathing retraining 961 BSI 487
C CAD-system 802 CAHTMA 1081 calcium oscillation 537 calculation algorithms 562 calibration bath 338 Cancellation Noise 90 Cancellous Bone Tissue 274 cancer 629 Cancer 628 cancer therapy 457 cardiotocography 777 Carotid endarterectomy 487 Cartilage Wear 814 catechin 230 cats 586 Cavitation 895 CE certification 1118 CE continuing education 1118 cell clusters 639 cell electroporation 593 cell membrane fluidity 570 Cell Signaling 170 center of pressure 936 Central ECG analysis and interpretation 22 Certification 1081 cervical spine 270 cervix 131 charge density 903 Chest 802 chest electrode 554 children 445 choice reaction time 529 chondrocyte 249 Chronic Heart Failure 758 cisplatin 582, 586, 602, 610, 614, 622 citizen-centric health-care 723, 727 classification 66, 822 Classification 843 clinical data 741 clinical engineering 310, 1043, 1089, 1107
Clinical engineering 1074 Clinical Engineering 1051, 1055, 1070, 1077, 1085, 1096 Clinical engineering activities 1085 clinical engineering profile 1085 Clinical Engineers 1081 Clinical Practice Guideline 152 clinical thermometer 338, 361, 401 clinical training 327 clinical trial 741 clinical trial solution 22 Cliniporator 10 closed-loop FES 950 closed-loop system 911 CMRR 798 cochlear implant 332, 940 cochlear implants 390 Coded apertures 806 coil design 665 Collaboration 329 combretastatin 457 Communication Protocol 405 complexity 26 Compound signal decomposition 105 Computer Aided Diagnosis 830 computer aided surgery 1132 Computer Simulation 50 computer vision 947 Congenital nystagmus 426 Congestive heart failure 541 conical collimators 887 Continuous EEG 756 Continuous monitoring 58 continuous wavelet transform 1017 contractility 296 contractions 148 contrast set mining 157 control 478 convolution kernel compensation 109, 114 COPD 961 COPD patients 78 coronary flow rate measurements 793 Cortical Bone 304 cortical network 525 cost 756 cross-correlation 525 cross-interval histogram 109 CT 864 culture 477 cultured cardiac myocytes 537
cutaneous tumours 614 cycle-to-cycle control 647
D data mining 166, 677 Data presentation methods 708 data visualization techniques 708 database 166 databases 1137 DC component 434 Decision Aid 696 Decision Analysis 696 decision support 14, 762 Decomposition 118 Deep Brain Stimulation 651 delay of cardiac repolarization 22 delivery 618 denervated degenerated muscles 658 dense system 635 desmin 242 detection 66 Device failure 651 Dexterity Assessment 982 diabetes 194 Diabetes Management 14 diagnosis 128 diagnostic imaging 292 diagnostics 818 dialysis dose 350 dialysis monitoring 350 dialysis quality 350 Dielectric & Vibrational Spectroscopy 170 dielectric properties 186 dielectrophoresis 178 digital 879 digital mammography 899 digital pathology 745, 749 Digitising 943 Diodes 891 dipole rearrangements 18 Disability 737 discretization 246 Disease Pattern Recognition 830 disorder classification 677 dissipation 278 Distance education 319 distributed 166 distributed health care 723, 732 diversity 1142
DNA electrotransfer 18, 606 DNA injections 623 Doctor.UA 700 dogs 586 dose distribution 923 dose verification 883 dosimetry 218, 887 Dosimetry 222, 226, 234 drop-foot 654 drug delivery systems 602 DSP based measuring system 86 Dysautonomic neuropathy 932 dysgraphia 445 Dysport 393
E ear 332 ECG 66, 94, 369, 373 ECG acquisition 58 ECG analyzer 405 EDC 741 Edge detection 818 education 716 Education 313, 737 Education and Training 1 EEG 487, 501, 505, 509, 513 EEG analysis 210 effective conductivity 635 effective medium 635 EGCG 214, 230 e-health 745, 749 EHG 135, 139 EITS 798 elastic model 246 Elasticity 286 e-learning 323, 336, 749, 1107, 1115 eLearning 329 e-Learning 1 E-learning 319 E-learning material 1111 electric field distribution 323 Electrical Cardioversion 54 electrical model 182 electrical stimulation 521 Electrical stimulation 3 Electrocardiogram 74 electrochemotherapy 323, 582, 610, 614, 618, 631 Electrochemotherapy 628 electrode configurations 390 electrode displacement sensitivity 206
electrode tissue interface 198 electrodes 477 Electroencephalography (EEG) 478 electrogenotherapy 618 electrolytes in endodontic 206 Electromagnetic fields 222, 234 electromagnetic interferences 1066 Electromagnetic stimulation 238 electromyogram 148 electromyography 128, 1002 electronic apex locator 206 electronic data capturing 22 electronic data collection 741 Electronic Patient Record 152 electropermeabilization 323, 597, 606, 618 Electropermeabilization 622 electroporation 178, 570, 586, 589, 597, 602, 606, 614, 618, 622, 631, 639, 851 Electroporation 624, 628 electrorelease 18 electrotransfer 582 ELF-E Field 230 Embedded System 381 emergency medical system 704 Emergency Patient Care Report Form 1058 Emergency Systems 1058 EMF effect 210 EMG 118, 128, 131, 1009 EMG force relation 114 Employment 737 Encoding 152 endocytosis 623 Endoscope 628 endotracheal tube 413 endurance level 1009 entrepreneurship 1142 epilepsy 346, 482, 911 Epilepsy 505 Epitheses 943 EPR in vivo 453 equalization 899 Equipment Alarm Systems 1051 equipment inventory 1092 ERG 919 estimation 839 estimator 385 Ethernet connectivity 86 evaluation 899 EVICAB 1111
excitation 1134 exercise testing 82 Experiment in Vivo 300 experimental tumors 469 expert system 549 Exploratory data analysis 157 extracellular matrix 238 extracellular matrix components 589 eye movement 426
F fatigue 288, 1002 feature extraction 70 feature selection 70 Femoral head 282 FES 658, 689 fetal cardiac arrhythmias 789 Fetal monitoring 781 film dosimetry 883 Finapres 562 finger pulse pressure 562 finger-tapping 266 finite element analysis 270 Finite element method 226 finite elements 218, 597 finite-element method 631 fixed-point 361 flecainide 82 flow 859 flow resistance 413 flow-volume curves 871 fluid structure interaction 895 fluorescence 1134 fMRI 505, 839 fNIRS 473 Force tracking task 950 foveation 426 FPGA 66 FPGA, Ethernet 202 fractal analysis 78 free radical 214 FreeForm 969 frequency response 430 function assessment 1096 functional connectivity 525 Functional electrical stimulation 661 functional electrical stimulation (FES) 647 Functional electrical stimulation (FES) 654
Functional electrical therapy (FET) 654 fuzzy controller 647 Fuzzy Logic 696
G Gait 685 gait phase 689 Gait Therapy 7 gait training 1030 gap junctions 537 gene electrotransfer 574, 623 gene therapy 606 Gene Therapy 628 gene transfer 597 Giant phospholipid vesicles 566 global outliers 789 glucose transporter-1 465 glycaemia 194 Graphical User Interface (GUI) 381 grid 166 grip force 973 grip strength 954 Ground reaction force 669 gyroscope 689
H half power region 665 half-beams 883 hand closing 950 hand opening 950 hand rehabilitation 954 Handwriting 445 haptic feedback 1038 haptic interface 965, 973 Haptic modeling 969 Hardware 1058 harmonization 1122 Head model 509 Health Care Integration 719 Health Care System 708 health care technology 1070 Health Information Systems 719 health monitoring 558 health process 1102 Health System 752 Health Technology Assessment 758 healthy tissue protection 928 Healthcare system 1100
healthcare systems 1077 Heart rate analysis 781 heart rate variability 26, 773, 785 Heart rate variability 38, 766, 769 Heart valves 545 hemiplegia 654 hemiplegic gait disorder 1030 hemodialysis 350 Hepatitis 286 HERG 50 Hierarchical hybrid control 3 hierarchical SOM 977 high speed power MOSFET driver 574 higher education 1077 Hill-type model 308 hip contact stress 915 Hip stress 282 HL7 693, 704 Honeycombing 830 horses 586, 610 hospital engineering 1070 hospital support services 1089 HPC 513 HRCT 822 HRV 78 HTA 762, 1100 human factors 162 Human Femoral Head 274 human gait 677 human hand 365 Human hand 982 Human locomotion 669 human patient simulator 716 human smooth muscle cells 242 hydroporation 623 hypoxic marker 465
I ICT 1077, 1130 identification 385 IHE 732 Image Guided Therapy 834 Image Modulated Radiation Therapy 834 Image Registration 834 Image segmentation 847 images registration 793 imaging 624, 798, 879, 1134 Imaging 1021 imaging monitored 629
immunohistochemistry 465 impedance 190 impedance method 174 impedance spectroscopy 174 Impedance spectroscopy 194 impedance transformation 416 impedance-ratio measuring method 206 implant 911 Implantable 346 implants 218 IMRT techniques 867 in vitro 570 In vivo dosimetry 891 incremental bicycle exercise 994 index 1102 Induced electric current 226 information and communication technologies 14 information exchange 704 information graphics 708 information system 704 innovation 1142 inotropism 296 instrumentation 798 instrumented gait analysis 681 Integrate-and-Fire unit 442 intellectual property 10 intelligent matrix electrode 654 Intensive care unit (ICU) 58 Interactive Notebook 685 intercellular synchronization 537 international cooperation communities 723 internationalization 727, 732 Internet 319, 336 internet site 332 Internet-systems 700 interoperability 727 Interoperability 719 intravital microscopy 457 Inverse Problem 304 inverse procedure 42 ion diffusion 593 IRIS Home 958 irregular volume shielding 928 irreversible electroporation 629 isometric conditions 950 isovolume-pressure-flow curves (IVPF) 871 ITCN algorithm 851 IT-systems 1047
K kinematic analysis 445 kinematics 288 Kinematics 300 kinetics 288 knee control 1030 knee joint 647 KNN 843 knowledge acquisition 549 KPI 752
L labor 128, 131, 135, 139 labour intensity 756 LDA technique 545 learning 1126 Learning management system 1130 left bundle branch block 541 left ventricle 895 leg 461 linear accelerator 887 linear rising signal 578 liver 230, 629 Liver cirrhosis 286 Living Cell 170 Local arterial compliance 562 local outliers 789 localization 665 lock-in technique 182 long bone defects 253 low frequency exposure 218 lower extremities 658 lower extremities training 262 luciferase 589 lumped parameter model 895 Lung 802 Lung HRCT 830 lung surface 822 lungs mechanics 416 lungs model 416
M machine learning 822 magnesium alloys 242 magnetic additions 257 magnetic stimulation 665 maintenance 1096 Management 752 mapping 139
marketing 10 Markov model 50 markup languages 1137 Mastication 300 Matrix of Elastic Coefficients 304 mean frequency 1002 measurement system 182 mechanical heart valve 895 mechanical ventilation 413 median filter 789 medical and pharmaceutical information 700 Medical Decision Making 696 Medical Device Safety 1051 Medical devices 1047 Medical Diagnosis 696 medical image processing 818 medical informatics 741 medical information systems 1047 medical physics 310 Medical Physics 313 medical plans 549 medical reporting workstation 727 medicine 166 medulloblastoma 867 membrane permittivity 178 membrane surface 903 MEMS sensors 661 mesenchymal stem cells 253 method of impedance 430 methodological tool 1102 Mexican-Hat 919 mice 469 Micro Tactile Sensor 286 Microcontroller 405 microelectrode arrays 525 microelectrodes 186 microfluidic study 257 Microprocessor 381 Microscopic Multi-Directional Property Measurement 274 MIDS 1047 minimally invasive surgery 629 Missing fundamental 442 MLAEP 492 MNF 1009 Mobile and Wearable Devices 1058 mobile phone 214 model semantic 162 Model-based control 3 modeling 631 Modeling 118, 685
Modelling 377 molecular imaging 835, 1135 monitoring 194 Monitoring 234, 1051 monopolar recording 135 Monte Carlo 928 Monte Carlo simulation 923 Monte-Carlo Simulation 304 Moodle 1107, 1130 morphometric parameters 1025 motor development 365 motor unit synchronization 109 movement analysis 266 movement planning 986 movement related evoked potentials 529 moving object tracking 793 MR microscopy 859 MRI 423 MUAPs 118 multimodal imaging 509 Multimodal monitoring 712 multiple electrodes 574 Multitier architecture 712 muscle 288 muscle fatigue 517, 1009 muscle force estimation 114 Muscle mechanics 308 Muscle oxygenation 124 musculoskeletal model 661 musculoskeletal modeling 292 Mutual information 1025 MVCBCT 928 myocyte 296 Myoelectric signal 124
N Nano particles 903 Nano-Electrode and wire 170 nanomedicine 1135 nanoparticle 835 nanotechnology 1135 Natural Language Generation 152 Near Infrared Spectroscopy 124 neonatal incubator 1096 networks 477 Neural 477 neural network 1025 Neural networks 847 Neural Networks and Madeline 90 neural noise 419
neurology 497 neuromuscular disorders 681 neuroplasticity 643 newt 190 NIBP 342, 385 NIRS 461 NIRS Slow-Fast phase 124 Nociceptive withdrawal reflex 669 noise 798 non invasive inspection 651 non-invasive 194 noninvasive ischemia identification 86 non-invasive measurement 434 nonlinear 377 nonlinear dynamics 769 nonlinear dynamics, complexity 766 nonlinear lungs models 416 Non-linear methods 781 nonthermal effect 210 Non-Viral Gene Therapy 623 Nuclear medicine imaging 806 Numeric model 296 numerical methods 635 numerical model 639 numerical modeling 218, 323, 597
O objective methods 940 Objective Response Detection 492 obstructive sleep apnea 864 Obstructive Sleep Apnea 30 odontoid fracture 270 body surface potential mapping 86 office blood pressure 357 Online Education 329 Open Access 329 Open Source 719 Open Standards 719 open-source 723, 727, 732 Optical Coherence Tomography 847 optical scattering spectrum 438 optimal tracking 661 optimization 851 order parameter 570 Organization 94 orthopaedic surgery 292 orthostatic test 461 oscillation 545 oscillometry 342
Osteoarthritis 814 osteoblast 238 outcome measurement 965 outer volume suppression 423 ovarian follicles 1017 oximetry 453
P p53 582 Pacemaker 1066 PACS 732 PAM 266 Partial volume effect 806 Patient Care Management 1051 patient groups 756 Patient safety 1043 Patient Safety 1055 Patient-Cooperative 7 patients 1140 pattern classification 473 pattern recognition 70 peak alpha frequency 517 pedometers 1006 PEMS 1047 Performance Indicators 752 Performance Measures 752 Personal Area Network 405 Personal Computer (PC) 381 Personal monitoring device 369 personalized applications 558 Personalized healthcare 397 personalized medicine 693 persons with disabilities 958 Pharmacogenomics 693 phase contrast images 851 phase demodulation 478 phase-rectified signal averaging 38 phenotype 249 photoplethysmography 434 physical hybrid lungs models 416 Physiological 1051 Physiome project 1137 pimonidazole 465 planar lipid bilayer 578 polar fluid 257 polarisation effect 186 Policy Statement 313 pore stabilization 593 postural control model 419 postural oscillations 419 postural response 936
potential distribution 390 power absorption within-step control 673 Power density spectra 545 power grip 365 PPG 839 precision grip 365, 973 prediction 128 Pregnancy 226 pre-operative planning 292 principal component analysis 70 privatization 1089 programs 1126 propagation 139, 377 Prostate Cancer Diagnosis 843 Prostheses 943 prosthesis 965 protein carbonyl level (PCO) 230 Psycho-acoustic Threshold 492 Pulmonary rehabilitation 961 Pulse wave 354 pulse wave transit time 434 purinoceptors 537 push-off 673
Q qEEG 482 QFD 1102 QT/QTc-study 22 QT/RR 74 quadratic phase 423 quality 1062 quality measurement 1102 quantitative EEG 497, 756
R Radiation force 1021 Radio Frequency 214 Radiographs 802 radiology 879 radiotherapy 867 Radiotherapy 891, 928 Rapid prototyping 943 rat 26 rating stroke patients 266 ratio method 174 reaction time 529 real-time PCR 249 Real-time signal processing 105 recurrence plot 769
recurrence quantification analysis 769 Red Blood Cell Tracking 810 reduced breathing frequency 994 Reflex modulation 669 regeneration 190 rehabilitation 915, 965, 1030 Rehabilitation 982 Rehabilitation Robotics 7 rehabilitation technology 958 repeatability 998, 1009 repolarization 74 Resolution enhancement 806 Respiratory muscles 961 response prediction 501 RF saturation pulses 423 RFID 1062, 1066 ripeness 131 ripple-down rules 822 RNA interference 624 root canal 174 root canal length 206 rotational radiotherapy 923 Rule-based control 3
S Safety 1066 sample entropy 144 Sample Entropy 54, 94 Sampling considerations 806 sarcoma experimental – drug therapy – blood supply 602 sarcomere 296 SAW 346 SBS 843 SD 777 segmentation 1017 seizure blockage 911 self paced movement 529 sEMG 124 sensitivity 998 separation 178 Sequential convolution kernel compensation 105 Setup 1058 SFEAPs 118 SFS 843 shape memory alloy springs 969 shear stresses 545 Sherman-Morrison matrix inversion 105 short term variability 777
ShRNA 624 signal processing 148 similarity measures 793 Simplified Ray Method 304 simulation 327, 377 Simulation 685, 716 simulator 342 single-leg jump 288 siRNA 624 skeleton 919 skin 597 skin application 453 small animal imaging 826 SMC a-Actin 242 snoring 864 social pressure situation 1034 soft palate 864 Soft tissue 308 solid tumors 582, 589 Sotalol 50 spasticity 393 specialist studies 310 spectral estimation 419 spectrometry of scattered radiation 856 Speech Recognition 1058 Spinal deformities 969 spinal implants 969 Spontaneous discharge 442 stance phase 689 standardization 401 stapedius muscle reflex 940 step counting 1006 stepping-in-place 262 stereotactic radiosurgery 887 Sternberg task 501 stimulus generation 202 STN 521 stroke 936, 954 structure-continual study 257 subgroup discovery 157 Suction wave 278 sudden cardiac death 22 supporting factors 157 surface electromyogram 109 Surface electromyogram 105 surface electromyography 990 surface modification 238 surgical navigation 1132 surgical robot 1132 surgical simulator 1038 Survey 1085
suspensions in blood 257 swimming 1002 swing phase 689 SWOT 762 symbolic dynamics 766, 773 Sympatho-vagal balance 30 Synchronous data acquisition 202 system dynamics 385 systemic electroporation 18
T Tactile Mapping 286 target function 762 target specificity 835 targeted therapy 1135 teaching 1126 teaching design 1142 tele-cytology 749 telediagnostics 397 Telehealthcare 712 Telemedicine 758 telemedicine 745, 749, 1006 telemetry 373 telemonitoring 397, 1006 telepathology 745, 749 temperature 570 temperature sensor 346 Tendon 308 tensiomyography 393 testing 998 Texture 843 Texture analysis 847 theory 593 three dimensional image 1132 threshold current 554 thrombolysis 859 Thrombosis 566 time domain 278 Time-Frequency Distribution 30 time-frequency plots 497 tissue engineering 249, 253 tissue resistance variation 198 TMG 393 tomosynthesis 826 Trabeculae 274 training 879 Training 313, 982 Training program 1074 transcranial magnetic stimulation (TMS) 643
Transcutaneous Nerve Electrical Stimulation 932 transesophageal pacing 554 transformation 497 Translational research 10 transmembrane potential 635 transmural pressure 871 Treadmill Training 7 trials 1140 true three dimensional display 1132 tumor blood flow 469 tumors 453 twitch 114
U Ukraine 700 ultrasonography 1025 ultrasound 377, 629 Ultrasound 1021 uncertainty 338, 361 Universal Serial Bus (USB) 381 upper limb amputation 965 usability test 558 USB 58 user interaction 558 uterine electrohysterogram 144 uterus 135, 139 Uterus 128, 148
V vagal blockade 26 vascular disruption 457 vascular tone 562 vasoactive compound 453 Vasomotion 932 veterinary medicine 310, 586 VHDL-AMS 377 Vibro-acoustography 1021 Video lectures 1111 videoconference 1107 vimentin 242 vinblastine 469 virtual campus 1122 virtual environment 973, 1126 Virtual environment 982 Virtual learning environment 1130 virtual mirror 262 virtual reality 1030, 1034 Virtual reality 1038 virtual rehabilitation 262 Viscoelastic properties 308 visual acuity 426 visualization 513 Visualization Interface 814 VLP-Fuzzy Clustering-HRECG 99 voltage breakdown 578 voltage commutator 574 voltage pulse plethysmography 198
W wave intensity 278 wavelet 899 wavelet analysis 919 wavelet transform 70 wavelet variance 919 wax filter compensators 867 Wearable intelligent device 712 wearable systems 558 Web Service 758 Web services 693 Web-based 814 web-based management system 1092 weighted least squares 423 WHO Project 1100 Whole body vibration 990 wireless 798 Wireless Data Transmission 1058 wireless monitoring 373 working memory 501
X XML 704 X-Ray 802
Y young adults 357
µ µCT 826